Off Oracle, off SQL Server, off the licensing treadmill — without a downtime weekend.
We move Oracle, SQL Server, DB2, and aging MySQL workloads to Postgres with CDC-based dual-running, PL/pgSQL procedure parity, and performance validation before cutover. SDVOSB-certified for federal work.
Most legacy-to-Postgres migrations fail at procedures, not at data.
The pitch is simple: stop paying Oracle, move to Postgres, save seven figures. The execution is where teams stall. Schema converts in an afternoon with ora2pg. Then someone notices the application has 800 PL/SQL procedures, three of them generate 60% of revenue, and nobody on staff has touched them since 2014. Migration plans collapse into 18-month rewrites or get abandoned after a botched weekend cutover where Postgres performance regressed on a single critical query and nobody had time to debug it under pressure.
- Stored procedures and packages converted by automated tools but never tested against production query patterns or edge cases.
- Cutover scheduled as a single weekend window with no rollback path — one bad index choice triggers an emergency revert.
- Performance validated on averages instead of p95/p99, hiding the one query that locks up at month-end close.
- HA, backup, and observability treated as afterthoughts — the team realizes post-cutover they have no equivalent of Data Guard or Query Store.
Cutover with both databases live, not a weekend prayer.
- STEP-01
Schema and PL/SQL inventory
We pull the full DDL: every stored procedure, trigger, package, sequence, and synonym. Tools like ora2pg or pgloader generate first-pass conversions, but we hand-audit anything that uses CONNECT BY hierarchical queries, T-SQL MERGE with an OUTPUT clause, or table-valued parameters. You get a line-item report of what converts cleanly and what needs a rewrite.
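The most common hand-audit item is hierarchical SQL. As a sketch of the conversion (the `employees` table and its columns are illustrative, not from any specific engagement), an Oracle CONNECT BY walk becomes a recursive CTE in Postgres:

```sql
-- Oracle original:
--   SELECT employee_id, manager_id, LEVEL
--   FROM employees
--   START WITH manager_id IS NULL
--   CONNECT BY PRIOR employee_id = manager_id;

-- Postgres equivalent:
WITH RECURSIVE org AS (
    SELECT employee_id, manager_id, 1 AS depth      -- START WITH
    FROM employees
    WHERE manager_id IS NULL
    UNION ALL
    SELECT e.employee_id, e.manager_id, o.depth + 1 -- CONNECT BY PRIOR
    FROM employees e
    JOIN org o ON e.manager_id = o.employee_id
)
SELECT * FROM org;
```

The `depth` column stands in for Oracle's pseudo-column LEVEL; ORDER SIBLINGS BY and CONNECT_BY_PATH need additional rewriting and are flagged separately in the inventory.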
- STEP-02
Procedure rewrite to PL/pgSQL
Oracle PL/SQL and T-SQL don't map 1:1. We rewrite procedures to PL/pgSQL (or push logic into the app where it belongs), preserving exact return shapes. Each procedure ships with pgTAP tests that compare output row-for-row against the legacy system on a frozen dataset.
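A pgTAP test for one converted procedure might look like the following sketch, where `app.get_order_totals` is a hypothetical rewritten function and `legacy_expected.order_totals_2024_01` is a frozen capture of the legacy system's output for the same inputs:

```sql
-- pgTAP: assert the rewritten function matches the legacy snapshot row-for-row
BEGIN;
SELECT plan(1);

SELECT results_eq(
    $$ SELECT * FROM app.get_order_totals('2024-01') ORDER BY order_id $$,
    $$ SELECT * FROM legacy_expected.order_totals_2024_01 ORDER BY order_id $$,
    'get_order_totals matches legacy output row-for-row'
);

SELECT * FROM finish();
ROLLBACK;
```

`results_eq` compares result sets in order, so both queries carry an explicit ORDER BY; the ROLLBACK keeps the test run side-effect free.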
- STEP-03
Dual-running with CDC
We stand up Debezium or AWS DMS to stream changes from the legacy DB into Postgres in near real time. The legacy database stays authoritative while CDC mirrors every change for a window — typically 2 to 6 weeks — and we diff row counts, checksums, and query latencies nightly. Cutover is a config flip, not a migration weekend.
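The nightly reconciliation boils down to running the same aggregate on both sides and diffing the results. A minimal Postgres-side sketch (the Oracle side would compute the equivalent with STANDARD_HASH; `app.orders` is an illustrative table):

```sql
-- Order-independent row count and checksum for one table.
-- Summing a 32-bit slice of each row's md5 makes the total
-- insensitive to row order, so both sides can scan in any order.
SELECT
    count(*) AS row_count,
    sum(('x' || left(md5(t::text), 8))::bit(32)::int) AS checksum
FROM app.orders t;
```

Matching `row_count` and `checksum` on both databases is a strong (not cryptographic) signal that the CDC pipeline has not dropped or mangled rows.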
- STEP-04
Performance parity validation
We replay captured production query traffic against Postgres and measure it with pg_stat_statements and auto_explain. Anything slower than the legacy baseline gets indexed, rewritten, or partitioned before cutover. We verify p50, p95, and p99 latencies — not averages — and we publish the comparison to your team.
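After a replay run, a first cut at the regression list comes straight from pg_stat_statements. A sketch, assuming Postgres 13 or later (where the columns are named `mean_exec_time` / `max_exec_time`); percentiles themselves come from the replay harness, since pg_stat_statements tracks only mean, stddev, and max:

```sql
-- Slowest statements by mean execution time: the tuning worklist
SELECT queryid,
       calls,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       round(max_exec_time::numeric, 2)  AS max_ms,
       left(query, 60)                   AS query_head
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
```

Each `queryid` on this list gets compared against its legacy baseline before cutover is approved.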
- STEP-05
Operational handoff
Postgres HA looks different from Always On or Data Guard. We set up Patroni or RDS/Aurora multi-AZ, configure pgBackRest or managed snapshots, wire pg_stat_statements into your observability stack, and train your DBAs on VACUUM, autovacuum tuning, and bloat monitoring before we leave.
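Autovacuum tuning is mostly per-table storage parameters plus a bloat watch. A sketch, with `app.orders` standing in for any hot, high-churn table:

```sql
-- Vacuum at ~1% dead tuples instead of the 20% default, and
-- analyze more aggressively so planner statistics stay fresh
ALTER TABLE app.orders SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.02
);

-- Bloat watch: dead vs live tuples per table, worst first
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

This is the kind of dashboard query we leave wired into the observability stack at handoff.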
# Debezium connector: stream Oracle -> Kafka -> Postgres during dual-run
name: oracle-to-postgres-cdc
config:
  connector.class: io.debezium.connector.oracle.OracleConnector
  database.hostname: legacy-oracle.internal
  database.port: 1521
  database.user: c##dbz
  database.dbname: ORCLCDB
  database.pdb.name: ORCLPDB1
  log.mining.strategy: online_catalog
  snapshot.mode: initial
  table.include.list: APP.ORDERS,APP.CUSTOMERS,APP.LINE_ITEMS
  tombstones.on.delete: false
  # Route to a sink that applies into Postgres with idempotent upserts
  transforms: unwrap,route
  transforms.unwrap.type: io.debezium.transforms.ExtractNewRecordState
  transforms.unwrap.delete.handling.mode: rewrite
  transforms.route.type: org.apache.kafka.connect.transforms.RegexRouter
  transforms.route.regex: ([^.]+)\.APP\.(.+)
  transforms.route.replacement: pg.public.$2
  # Validate: row counts and checksums diffed nightly via a reconciliation job
  heartbeat.interval.ms: 10000
  heartbeat.action.query: "INSERT INTO dbz_heartbeat (ts) VALUES (SYSTIMESTAMP)"

CDC-based dual-running lets you cut over with a config flip instead of a downtime window — and gives you a reversible path if Postgres misbehaves on day one.
Field FAQ.
→ How long does a typical Oracle or SQL Server to Postgres migration take?
For a single application database in the 200GB–2TB range with a few hundred stored procedures, plan on 4 to 7 months end-to-end. The schema and data move is rarely the long pole — procedure rewrites, application query compatibility, and performance tuning are. We front-load the inventory in week one so you get a defensible timeline before committing to the full engagement.
→ What about Oracle features with no direct Postgres equivalent — like packages, AQ, or hierarchical queries?
Packages map to schemas with grouped functions. Oracle Advanced Queuing typically gets replaced by pgmq, a real broker like SQS or RabbitMQ, or LISTEN/NOTIFY for low-volume cases. CONNECT BY rewrites cleanly to recursive CTEs. The honest answer is some features require app-side changes, and we flag those in the inventory phase before you commit to a cutover date.
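For the low-volume LISTEN/NOTIFY cases, the core-Postgres mechanics are simple. A sketch (the `order_events` channel name is illustrative); note that NOTIFY is at-most-once and non-durable, which is exactly why anything that must survive a restart moves to pgmq or a real broker instead:

```sql
-- Consumer side: subscribe in a session or app connection
LISTEN order_events;

-- Producer side: publish an event with a small JSON payload
NOTIFY order_events, '{"order_id": 42, "status": "created"}';

-- pg_notify() is the function form, usable inside triggers
SELECT pg_notify('order_events',
                 json_build_object('order_id', 42)::text);
```

Payloads are capped at roughly 8 KB, so the usual pattern is to notify with a key and have the consumer fetch the row.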
→ How do you validate that Postgres is as fast as the legacy database?
We capture production query patterns from the source — V$SQL on Oracle, Query Store on SQL Server — and replay them against Postgres with realistic data volume. We compare p50, p95, and p99 latencies per query, not just averages. Anything regressing gets fixed before cutover: better indexes, partitioning, query rewrites, or occasionally a JIT or parallel-worker tweak in postgresql.conf.
→ What's the realistic license cost savings?
Oracle Enterprise Edition with options like Partitioning, Diagnostics, and RAC commonly runs $47K+ per processor list, often six to seven figures annually for a mid-size workload. SQL Server Enterprise is similar. Postgres license cost is zero. Your real ongoing spend shifts to managed hosting (RDS, Aurora, Cloud SQL) or self-managed infrastructure plus DBA time. Most clients see 60–85% reduction in database TCO over three years.
→ Can you do this for federal or DoD systems?
Yes. VooStack is SDVOSB-certified and we've done modernization work in environments with FedRAMP, IL4, and ATO requirements. Postgres has strong precedent in federal — it runs in GovCloud, Azure Government, and on-prem in classified enclaves. We can subcontract under prime vehicles or work directly through SDVOSB set-aside contracts. Compliance documentation and STIG alignment are part of the deliverable, not an afterthought.
→ What happens to our HA and DR setup? We currently use Always On / Data Guard.
Postgres HA is different but mature. For self-managed, Patroni with etcd handles automatic failover and is what most serious shops run. On managed services, RDS Multi-AZ or Aurora handle failover natively with sub-30-second RTO. For DR, pgBackRest gives you point-in-time recovery with parallel restore. We document RTO and RPO targets up front and prove them with game-day exercises before handoff.
→ Do you migrate the application code too, or just the database?
Both, when needed. Applications using ORM layers (Hibernate, Entity Framework, SQLAlchemy) often need only connection string and dialect changes. Apps with raw SQL — especially T-SQL or PL/SQL embedded in code — need real work: NVL becomes COALESCE, SYSDATE becomes NOW(), TOP becomes LIMIT, and so on. We can do the application-side migration as part of the engagement or hand a detailed change list to your team.
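The dialect changes are mechanical but pervasive. A representative before/after, with `orders` and `ship_date` as illustrative names:

```sql
-- Oracle:   SELECT NVL(ship_date, SYSDATE) FROM orders WHERE ROWNUM <= 10;
-- T-SQL:    SELECT TOP 10 ISNULL(ship_date, GETDATE()) FROM orders;

-- Postgres:
SELECT COALESCE(ship_date, NOW())
FROM orders
LIMIT 10;
```

The change list we hand over groups these by pattern, so a single find-and-fix pass per pattern covers most of the application's raw SQL.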
→ How do you handle the cutover itself? We can't take 12 hours of downtime.
You shouldn't have to. With CDC-based dual-running via Debezium or AWS DMS, both databases stay in sync continuously for weeks. Cutover becomes a sequence of: stop writes briefly, drain the CDC lag (usually seconds), flip the connection string, resume writes. Total downtime is typically under 5 minutes. If something goes wrong, you flip back — the legacy DB is still authoritative and current.
→ What observability changes when we move to Postgres?
You lose Oracle Enterprise Manager and SQL Server Management Studio dashboards. You gain pg_stat_statements, auto_explain, pg_stat_activity, and a strong ecosystem of open tooling — pganalyze, Datadog DBM, or self-hosted with Prometheus and the postgres_exporter. We wire this into your existing observability stack during the engagement so your on-call team isn't flying blind on day one of cutover.
Continue recon.
REL-01 Modernization services
How we scope and execute legacy-to-cloud database and application migrations.
REL-02 Migration case studies
Real cutover timelines, license savings, and lessons from prior database migrations.
REL-03 Migration assessment
Fixed-scope two-week inventory: schema, procedures, risk, and timeline.
REL-04 Talk to an engineer
Skip the discovery call carousel — get a senior engineer on the first call.
Get a fixed-scope migration assessment before you sign another Oracle renewal.
Talk to a VooStack operator. We respond within one business day.