Get off Oracle without breaking the application that pays the bills.
We move production workloads from Oracle Database to PostgreSQL — PL/SQL conversion, sequence semantics, query plan regressions, and DMS-based cutover included. No big-bang weekends.
Most Oracle-to-Postgres projects stall on PL/SQL, not on data.
Teams budget for a schema conversion and a data copy. Then they hit 40,000 lines of PL/SQL packages with autonomous transactions, a reporting layer built on CONNECT BY PRIOR, and a billing job that depends on Oracle's MERGE returning row counts in a specific way. ora2pg gets the tables across in a weekend. The procedural code, the query plans, and the operational behavior take months. Meanwhile the Oracle license renewal is six weeks out and someone has to make a decision.
- ▸ PL/SQL packages with autonomous transactions and global state that don't translate to Postgres function semantics.
- ▸ Hierarchical queries using CONNECT BY PRIOR that need full rewrites as recursive CTEs with different performance profiles.
- ▸ Sequences with CACHE/NOCACHE/ORDER behavior that silently change ID generation patterns after migration.
- ▸ Application code using OracleCommand, named parameters with colons, and TNS-style connection strings hardcoded across services.
Treat it like a port, not a copy. The schema is the easy part.
- STEP-01
Inventory with ora2pg first
Run ora2pg in assessment mode against every schema. The migration cost report gives you object counts, PL/SQL line counts, and a complexity score per package. That number — not vendor optimism — drives the timeline. Expect the report to flag 15-30% of PL/SQL as non-trivial.
- STEP-02
Port PL/SQL to PL/pgSQL deliberately
Autonomous transactions, hierarchical CONNECT BY, MERGE semantics, and packages don't map cleanly. Rewrite packages as schemas with functions, convert CONNECT BY to recursive CTEs, replace autonomous transactions with dblink or background workers. Don't trust ora2pg's auto-conversion of anything over ~50 lines without a human read.
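A concrete example of the autonomous-transaction rewrite: a dblink call back into the same database runs in its own session, so its INSERT commits even when the caller rolls back — which is the behavior PRAGMA AUTONOMOUS_TRANSACTION provided. A minimal sketch; the audit_log table and function name are illustrative, not from any real engagement:

```sql
-- In Oracle, PRAGMA AUTONOMOUS_TRANSACTION let a logging procedure
-- COMMIT its row independently of the calling transaction.
-- Postgres sketch: run the INSERT through a loopback dblink session.
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION log_event(p_msg text)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
  PERFORM dblink_exec(
    'dbname=' || current_database(),   -- loopback connection (sketch;
                                       -- production wants a foreign
                                       -- server with credentials)
    format('INSERT INTO audit_log(msg, logged_at) VALUES (%L, now())',
           p_msg)
  );
END;
$$;
```

The caller's ROLLBACK won't undo the row, because the dblink session already committed it. The tradeoff is a connection per call, so high-volume logging usually moves to a background worker or a queue instead.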
- STEP-03
Fix sequences and identity columns
Oracle sequence CACHE/NOCACHE/ORDER options don't map one-to-one onto Postgres sequences. Convert to GENERATED BY DEFAULT AS IDENTITY where possible — it survives pg_dump cleanly and avoids the ownership headaches of bare sequences. Reset sequence high-water marks after every data load or you'll hit duplicate key errors on first insert.
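A sketch of both halves of this step, on a hypothetical invoices table (names are illustrative):

```sql
-- 1. Convert a bare-sequence default (as ora2pg might emit it)
--    into a proper identity column:
ALTER TABLE invoices
  ALTER COLUMN id DROP DEFAULT,
  ALTER COLUMN id ADD GENERATED BY DEFAULT AS IDENTITY;

-- 2. After every data load, push the sequence past the loaded rows,
--    otherwise the first application INSERT hits a duplicate key:
SELECT setval(
  pg_get_serial_sequence('invoices', 'id'),
  COALESCE(max(id), 1)
) FROM invoices;
```

pg_get_serial_sequence resolves the implicit sequence behind either a serial column or an identity column, so the same reset works for both.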
- STEP-04
Benchmark the hot paths early
The Postgres planner handles correlated subqueries and OR-heavy predicates differently from Oracle's optimizer. Pull your 20 worst Oracle queries from AWR, run them on a loaded Postgres instance, and tune before cutover. Common fixes: rewrite to lateral joins, add partial indexes, turn JIT off for short queries.
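The three fixes, sketched against hypothetical customers/orders tables:

```sql
-- Correlated subquery that Oracle unnests well but Postgres may not:
--   SELECT c.id,
--          (SELECT max(o.total) FROM orders o WHERE o.cust_id = c.id)
--   FROM customers c;
--
-- Lateral-join rewrite, usually a friendlier shape for the planner:
SELECT c.id, o.max_total
FROM customers c
LEFT JOIN LATERAL (
  SELECT max(total) AS max_total
  FROM orders
  WHERE cust_id = c.id
) o ON true;

-- Partial index for a hot predicate that only touches open orders:
CREATE INDEX CONCURRENTLY idx_orders_open
  ON orders (cust_id) WHERE status = 'OPEN';

-- JIT compilation adds startup latency to short OLTP queries;
-- disable it for the OLTP role rather than cluster-wide:
ALTER ROLE app_oltp SET jit = off;
```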
- STEP-05
Cutover with logical replication
For systems that can't take downtime, use AWS DMS or GoldenGate to stream Oracle → Postgres while you validate. Run both in parallel for 2-4 weeks with shadow reads. Cut writes over only after row counts, checksums, and a representative query suite all match within tolerance.
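The validation gate can be as simple as exact counts plus a cheap per-table checksum on each side. A sketch for the Postgres side, assuming the schema is named billing; the Oracle side needs the equivalent query in its own dialect:

```sql
-- Quick estimate across the schema (n_live_tup is approximate):
SELECT relname, n_live_tup
FROM pg_stat_user_tables
WHERE schemaname = 'billing'
ORDER BY relname;

-- For sign-off, use exact counts and an order-independent content
-- checksum per table. hashtext() is weak but fast; it catches the
-- bulk-load mistakes that matter here (dropped or duplicated rows):
SELECT count(*)              AS row_count,
       sum(hashtext(t::text)) AS checksum
FROM billing.invoices t;
```

Run the checksum query per table on both sides at a quiesced moment; any mismatch means diffing that table before cutover, not after.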
# ora2pg.conf — assessment + targeted export config we actually ship
ORACLE_DSN dbi:Oracle:host=oracle-prod;sid=ORCL;port=1521
ORACLE_USER migration_ro
SCHEMA BILLING
# Assessment first. Read the report before writing any code.
TYPE SHOW_REPORT
ESTIMATE_COST 1
COST_UNIT_VALUE 5
# When exporting, keep these honest:
# - identity columns instead of bare sequences where possible
PG_SUPPORTS_IDENTITY 1
# - don't auto-convert PL/SQL we haven't reviewed
PLSQL_PGSQL 0
# - preserve precision; NUMBER -> numeric, not float
PG_NUMERIC_TYPE 0
PG_INTEGER_TYPE 1
# - don't let ora2pg guess booleans; NUMBER(1,0) is mapped explicitly below
REPLACE_AS_BOOLEAN 0
# - Oracle DATE carries time; map to timestamp(0), not date
DATA_TYPE DATE:timestamp(0),NUMBER(1,0):smallint
# Parallelize data copy but cap it — Oracle redo can't keep up otherwise
JOBS 4
ORACLE_COPIES 4
DATA_LIMIT 10000
The ora2pg config we start every engagement with — assessment report first, then a controlled export with explicit type mappings rather than defaults.
Field FAQ.
→ How accurate is ora2pg's automatic PL/SQL to PL/pgSQL conversion?
For straight CRUD procedures and simple functions, ora2pg gets 70-85% of the syntax right. For anything involving packages, autonomous transactions, %ROWTYPE chains, or Oracle-specific built-ins like DBMS_OUTPUT and UTL_FILE, the output compiles but behaves differently or doesn't compile at all. We treat ora2pg's PL/SQL output as a first draft. Every package over a few hundred lines gets a human rewrite with unit tests against captured Oracle inputs and outputs.
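The mechanical substitutions really are mechanical. A few, sketched in PL/pgSQL — package state and UTL_FILE are the cases that need design work rather than find-and-replace:

```sql
DO $$
DECLARE
  v_id integer := 42;   -- illustrative value
BEGIN
  -- DBMS_OUTPUT.PUT_LINE('processing ' || v_id) becomes:
  RAISE NOTICE 'processing %', v_id;
END;
$$;

-- Common scalar built-ins have direct equivalents:
--   SYSDATE              -> localtimestamp or now()  (time zone caveat)
--   NVL(x, y)            -> COALESCE(x, y)
--   DECODE(c, 1,'a','b') -> CASE c WHEN 1 THEN 'a' ELSE 'b' END
```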
→ When should we use AWS SCT instead of ora2pg?
AWS SCT is the better choice when you're already committed to RDS or Aurora Postgres and want the conversion report tightly integrated with DMS for the data movement phase. It also handles application code scanning for embedded SQL, which ora2pg doesn't. ora2pg wins on schema and PL/SQL fidelity and on running anywhere. We commonly use SCT for the assessment and DMS handoff, and ora2pg for the actual schema and procedural conversion.
→ What happens to our Oracle sequences and identity columns?
Oracle sequences map to Postgres sequences, but the semantics differ. Oracle's CACHE with NOORDER can produce non-monotonic IDs across RAC nodes; Postgres sequences are monotonic per session. If your application depends on ID ordering for anything, audit it. We convert most sequence-backed columns to GENERATED BY DEFAULT AS IDENTITY, which is cleaner to dump and restore. After every data load you must reset the sequence to MAX(id)+1 or first inserts will collide.
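The post-load reset doesn't have to be done table by table: one catalog query can emit a setval() for every serial or identity column in a schema. A sketch, assuming the schema is named billing and table names don't need quoting:

```sql
-- Emits one 'SELECT setval(...)' statement per sequence-backed column;
-- run the generated statements after each bulk load.
SELECT format(
  'SELECT setval(%L, COALESCE((SELECT max(%I) FROM %I.%I), 1));',
  pg_get_serial_sequence(c.table_schema || '.' || c.table_name,
                         c.column_name),
  c.column_name, c.table_schema, c.table_name)
FROM information_schema.columns c
WHERE c.table_schema = 'billing'
  AND pg_get_serial_sequence(c.table_schema || '.' || c.table_name,
                             c.column_name) IS NOT NULL;
```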
→ How do we handle CONNECT BY PRIOR hierarchical queries?
Rewrite them as recursive CTEs using WITH RECURSIVE. The translation is mechanical but the performance profile changes. Oracle's CONNECT BY has decades of optimization; Postgres recursive CTEs materialize at each step. For deep hierarchies, expect to add indexes on the parent_id column and possibly a path-based denormalization (ltree or a materialized closure table) for read-heavy workloads. We benchmark these specifically before cutover because they're a frequent regression point.
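The mechanical part of the translation looks like this, on a hypothetical employees table:

```sql
-- Oracle:
--   SELECT emp_id, name, LEVEL
--   FROM employees
--   START WITH manager_id IS NULL
--   CONNECT BY PRIOR emp_id = manager_id;

-- Postgres:
WITH RECURSIVE org AS (
  SELECT emp_id, name, manager_id, 1 AS level
  FROM employees
  WHERE manager_id IS NULL                 -- START WITH
  UNION ALL
  SELECT e.emp_id, e.name, e.manager_id, o.level + 1
  FROM employees e
  JOIN org o ON e.manager_id = o.emp_id    -- CONNECT BY PRIOR
)
SELECT emp_id, name, level FROM org;

-- The index that makes this viable on deep hierarchies:
CREATE INDEX idx_employees_manager ON employees (manager_id);
```

What the rewrite does not carry over is Oracle's pseudo-columns and ordering tricks (SYS_CONNECT_BY_PATH, ORDER SIBLINGS BY); those need an explicit path column built up in the CTE.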
→ What's the realistic license cost reduction?
For a database running Oracle Enterprise Edition with options like Partitioning, Advanced Security, or Diagnostics Pack, customers typically eliminate 60-80% of database licensing spend after migration. The remaining cost shifts to Postgres infrastructure (RDS, Aurora, or self-managed) and ongoing engineering. Standard Edition migrations show smaller savings. The ROI also depends on whether you can decommission Oracle entirely or need to run both during a transition — running both for a year erases most year-one savings.
→ Can you do this without downtime?
Yes, for most OLTP systems. We use AWS DMS or Oracle GoldenGate to replicate changes from Oracle to Postgres in near-real-time while you validate. Application traffic stays on Oracle until you're ready to flip. The actual cutover is a brief read-only window — typically 5-30 minutes — to drain in-flight transactions and confirm replication lag is zero. Reporting workloads and data warehouses often tolerate longer windows and don't need DMS.
→ Does VooStack's SDVOSB status matter for this work?
It matters if you're a federal agency or a prime contractor with small business or SDVOSB set-aside requirements. We're certified through SBA's VetCert program and can be awarded directly under SDVOSB sole-source authority up to the applicable thresholds, or compete on set-aside vehicles. For commercial customers it's irrelevant to the technical work — same engineers, same approach. The certification just opens contracting paths that other firms can't access.
→ What about Oracle features with no Postgres equivalent?
Real Application Clusters, Flashback, and Data Guard don't have one-to-one replacements. RAC's shared-storage active-active model maps to Postgres patterns like Patroni with streaming replication plus a connection pooler — different architecture, similar availability outcomes. Flashback queries can be approximated with temporal tables or a CDC stream into a separate audit store. We document each gap during assessment with the proposed Postgres pattern and the operational tradeoff, so nothing surprises you at cutover.
→ How long does a typical migration take?
A mid-size OLTP system — say 500GB of data, 30-50 packages, a few hundred tables — runs 8-12 weeks from kickoff to cutover when the team is focused. Larger systems with heavy PL/SQL or strict compliance review can run 6-9 months. The data copy is rarely the bottleneck. The bottleneck is application regression testing, query plan tuning, and the operational runbooks for the new platform. We sequence work so you see a running Postgres replica within the first 2-3 weeks.
Continue recon.
REL-01 Modernization Services
Full scope of legacy-to-cloud and database migration work we deliver.
REL-02 Migration Case Studies
Real cutovers we've shipped — timelines, gotchas, and what we'd do again.
REL-03 Assessment Packages
Fixed-scope Oracle assessment with cost report and migration plan in 3 weeks.
REL-04 Talk to an Engineer
Skip the sales call. Get a senior engineer on the first conversation.
Bring us your worst PL/SQL package. We'll tell you what the migration actually costs.
Talk to a VooStack operator. We respond within one business day.