[ MODERNIZATION ] // ON-PREM TO CLOUD

Most cloud migrations fail the cost test in month four. Yours doesn't have to.

We move VMware, IIS, SQL Server, and legacy .NET/Java estates to AWS, Azure, or GCP — with FinOps guardrails, identity bridging, and FedRAMP-aligned landing zones in place before the first workload cuts over.

Veteran-Owned SDVOSB
[001 / 005] Field Conditions

Lift-and-shift gets you off the datacenter floor — and onto a bill that's 2x worse.

// SITUATION

The standard pattern: leadership signs a datacenter exit, a partner rehosts 200 VMs into EC2 or Azure VMs at parity sizing, and twelve weeks later the cloud bill is double the colo invoice. Nothing was replatformed, nobody tagged anything, NAT Gateway egress is eating $40K/month, and the SQL Server licensing got worse, not better. Identity is half-federated, the AD trust is flaky, and three apps still phone home to an on-prem file share that nobody owns. Now finance wants a repatriation plan and engineering wants to quit.

  • VMs sized for physical-server peaks run 24/7 in the cloud at retail rates — no rightsizing, no Savings Plans, no auto-scaling.
  • NAT Gateway, cross-AZ, and inter-region egress charges that nobody modeled show up as 18-25% of the monthly bill.
  • Active Directory trusts get bolted on after cutover, breaking Kerberos auth for apps that worked fine on-prem.
  • FedRAMP or CMMC scope wasn't defined up front, so the agency ATO package becomes a six-month documentation retrofit.
  • 30-45% typical run-rate reduction after replatforming
  • < 8 wks to a live landing zone with guardrails enforced
  • FedRAMP Moderate and High boundaries shipped on GovCloud
[002 / 005] Operational Approach

Re-platform what matters, retire what doesn't, and instrument cost from the first VPC.

  1. STEP-01

    Inventory and disposition decisions

    We catalog every workload — OS, runtime, data gravity, latency budget, compliance posture — and assign one of six dispositions (retire, retain, rehost, replatform, refactor, repurchase). Most portfolios shed 15-30% of workloads before a single VM moves.

  2. STEP-02

    Landing zone with guardrails

    AWS Control Tower or Azure Landing Zones with SCPs, Config rules, and a hub-and-spoke network. Identity federates to Entra ID or Okta on day one. No workload lands until tagging, logging, and budget alarms are enforced by policy.

  3. STEP-03

    Network and identity bridging

    Direct Connect or ExpressRoute with redundant BGP peers, Transit Gateway for east-west, and DNS resolver endpoints so on-prem and cloud resolve each other cleanly. AD trusts or AWS Managed Microsoft AD eliminate the auth seams that break Kerberos-dependent apps.

  4. STEP-04

    Replatform the load-bearing pieces

    SQL Server moves to RDS or Azure SQL MI, file shares to FSx or Azure Files, batch jobs to ECS/Fargate or Container Apps. We rewrite the 10% of code that blocks managed services rather than lifting VMs that will cost 2x forever.

  5. STEP-05

    FinOps and FedRAMP from day zero

    Cost allocation tags, Savings Plans modeling, and per-team budgets ship with the landing zone. For federal workloads we map controls to FedRAMP Moderate or IL4 baselines using GovCloud or Azure Government, with SSP artifacts generated as we build.

// YAML PATTERN
# wave-02/disposition.yaml
workload: claims-intake-api
owner: claims-platform
current_state:
  host: vmw-prd-iis-07
  runtime: .NET Framework 4.7.2 on IIS 10
  db: SQL Server 2016 (Always On, 1.2 TB)
  auth: Windows Auth via on-prem AD
  rps_p95: 240
  data_gravity: high  # PII + PHI

disposition: replatform   # not rehost — IIS lift would cost 2.1x
target:
  compute: ECS Fargate (Linux containers, .NET 8 after port)
  db: RDS for SQL Server, Multi-AZ, gp3 1.5 TB
  auth: AWS Managed Microsoft AD, one-way trust to corp.local
  network: private subnets, NLB, Direct Connect VIF
  region: us-gov-west-1   # FedRAMP High boundary

blockers:
  - dotnet_framework_to_core_port  # ~3 sprints
  - hardcoded_unc_paths            # replace with FSx for Windows

finops:
  cost_center: 4412
  budget_monthly_usd: 8500
  savings_plan_coverage_pct: 70
  tags: [env, app, owner, data_class, cost_center]

A disposition matrix entry — this is what a real migration wave plan looks like before a VM ever moves.
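
The guardrail enforcement in STEP-02 can be sketched the same way. An illustrative CloudFormation fragment (tag keys mirror the matrix above; nothing here is from a real engagement) using AWS Config's managed REQUIRED_TAGS rule:

```yaml
# landing-zone/required-tags.yaml (illustrative sketch)
Resources:
  RequiredTagsRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: required-tags
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS   # AWS managed rule
      InputParameters:
        tag1Key: env
        tag2Key: app
        tag3Key: owner
        tag4Key: data_class
        tag5Key: cost_center
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Instance
          - AWS::RDS::DBInstance
          - AWS::S3::Bucket
```

Anything that lands without the five tags goes noncompliant immediately, which is what makes "no workload lands until tagging is enforced" auditable rather than aspirational.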

[003 / 005] Common Questions

Field FAQ.

Why does lift-and-shift usually cost more than the on-prem footprint it replaced?

Because you're paying retail hourly rates for VMs sized like physical servers, with EBS provisioned for peak IOPS that never materialize, plus NAT Gateway egress, cross-AZ traffic, and snapshot sprawl nobody owns. A typical rehosted portfolio runs 1.6-2.2x the on-prem TCO for the first year. Replatforming the top 20% of workloads onto managed services — RDS, Fargate, S3 — is what flips the math. Lift-and-shift is a valid tactic for hitting a datacenter exit deadline, not a destination.
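
One way to see the math flip, in the same disposition-matrix format used above. The numbers are hypothetical, for illustration only:

```yaml
# Illustrative rehost-vs-replatform comparison for one workload.
# All dollar figures are made-up placeholders, not a quote.
workload: reports-batch
rehost:
  compute: m5.2xlarge x 3, on-demand, 24/7   # sized for physical peak
  est_monthly_usd: 2500                       # hypothetical retail rate
replatform:
  compute: ECS Fargate, scheduled nightly run
  est_monthly_usd: 400                        # hypothetical
notes: the batch job runs 4 hours a night; 24/7 VMs pay for the other 20
```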

How do you handle Active Directory and Kerberos-dependent apps during migration?

We stand up AWS Managed Microsoft AD or Microsoft Entra Domain Services in the landing zone and establish a one-way or two-way forest trust to your on-prem AD. Domain-joined EC2 or Azure VMs authenticate against the cloud directory, but users and group policies still flow from corp. For apps that hardcode SPNs or rely on constrained delegation, we map those out in the wave plan before cutover — surprises here are what cause 3 a.m. rollbacks.
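
A sketch of the directory half of that setup, assuming CloudFormation. IDs, names, and the secret reference are placeholders; the trust itself is established after deploy via the Directory Service API, since CloudFormation doesn't model forest trusts:

```yaml
# identity/managed-ad.yaml (illustrative sketch)
Resources:
  ManagedAD:
    Type: AWS::DirectoryService::MicrosoftAD
    Properties:
      Name: cloud.corp.local                # hypothetical directory FQDN
      Edition: Enterprise
      Password: '{{resolve:secretsmanager:managed-ad-admin}}'
      VpcSettings:
        VpcId: vpc-0abc1234                 # placeholder
        SubnetIds:
          - subnet-0aaa1111
          - subnet-0bbb2222
# Trust to on-prem corp.local is created post-deploy, e.g.:
#   aws ds create-trust --directory-id <directory-id> \
#     --remote-domain-name corp.local \
#     --trust-direction 'One-Way: Outgoing' --trust-type Forest
```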

What does FinOps from day one actually look like in practice?

Mandatory tagging enforced by SCP or Azure Policy (env, app, owner, cost_center, data_class), per-team budgets with anomaly alerts wired to Slack or Teams, Savings Plan and Reserved Instance modeling refreshed monthly, and a weekly unit-cost review — cost per transaction, per tenant, per claim processed. Dashboards in CUR + Athena or Azure Cost Management. The point is to make engineers see cost in the same place they see latency, not three weeks later in a finance report.
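
What a per-team budget with alerting looks like as config — an illustrative sketch, with the Slack/Teams hop assumed to run through an SNS topic (the ARN below is a placeholder):

```yaml
# finops/team-budget.yaml (illustrative sketch)
Resources:
  ClaimsPlatformBudget:
    Type: AWS::Budgets::Budget
    Properties:
      Budget:
        BudgetName: claims-platform-monthly
        BudgetType: COST
        TimeUnit: MONTHLY
        BudgetLimit:
          Amount: 8500          # matches the disposition entry above
          Unit: USD
        CostFilters:
          TagKeyValue:
            - user:cost_center$4412
      NotificationsWithSubscribers:
        - Notification:
            NotificationType: ACTUAL
            ComparisonOperator: GREATER_THAN
            Threshold: 80       # percent of budget
          Subscribers:
            - SubscriptionType: SNS
              Address: arn:aws:sns:us-east-1:111111111111:budget-alerts  # placeholder
```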

Does SDVOSB certification matter for a commercial cloud migration?

For commercial work, no — you're hiring us for the engineering. For federal agencies and prime contractors with small-business subcontracting goals, our SDVOSB status allows sole-source awards up to the SDVOSB threshold and counts toward Section 15 goals. We also carry the operating muscle for FedRAMP Moderate and High boundaries, GovCloud and Azure Government landing zones, and the SSP and POA&M artifacts that come with them.

How long does a realistic migration take for a mid-size portfolio?

For 80-150 applications, plan on 9-15 months end to end. The first 6-8 weeks is discovery, disposition, and landing zone build — no workloads move. Waves of 8-15 apps then ship every 3-4 weeks, with the hardest 10% (mainframe-adjacent, COTS with vendor lock, anything with regulatory data flow questions) saved for the back half. Anyone promising a 90-day cutover for a portfolio that size is selling a rehost that you'll pay for twice.

AWS, Azure, or GCP — how do you decide?

Workload fit and existing licensing usually decide it before preference does. Heavy Microsoft estates with EA discounts, Power Platform, and Entra integration trend Azure. Data and ML-heavy workloads, BigQuery dependencies, or Anthos footprints trend GCP. Broadest service catalog, deepest FedRAMP coverage, and the largest managed-service surface area trend AWS. We've shipped on all three and will tell you when a multi-cloud posture is justified versus when it just doubles your operational burden.

What about FedRAMP — can we inherit controls instead of building from scratch?

Yes, and you should. AWS GovCloud and Azure Government provide inheritable controls at the FedRAMP High and IL4/5 levels — we map your SSP to the CSP's responsibility matrix so your team only authors the controls you actually own (typically 30-40% of the total). We generate SSP narratives, POA&M entries, and continuous monitoring evidence as part of the build, not as a documentation sprint after go-live.
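
The responsibility split is easier to keep honest when it's tracked as data rather than prose. An illustrative sketch of one control entry, not an excerpt from a real SSP:

```yaml
# ssp/controls/ac-2.yaml (illustrative sketch)
control: AC-2                  # Account Management, NIST 800-53
baseline: FedRAMP High
responsibility: shared          # csp | customer | shared
inherited_from: AWS GovCloud FedRAMP High package
customer_scope:
  - account lifecycle in the cloud directory
  - quarterly access reviews, evidence filed in the ticket queue
status: implemented
poam: null                      # no open findings for this control
```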

How do you avoid the 'two datacenters' trap during a long migration?

By treating the hybrid period as a designed state, not an accident. Direct Connect or ExpressRoute with redundant VIFs, a routable IP plan that doesn't overlap, DNS resolver endpoints both directions, and a single identity plane. We also enforce that no new workload gets built on-prem after the landing zone is live — every team has to justify why they're not deploying to the target. Otherwise you carry both bills for years.
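
The "DNS resolver endpoints both directions" piece, sketched in CloudFormation. Subnet, security group, and target IPs are placeholders:

```yaml
# network/dns-bridging.yaml (illustrative sketch)
Resources:
  InboundResolver:                    # on-prem DNS forwards cloud zones here
    Type: AWS::Route53Resolver::ResolverEndpoint
    Properties:
      Direction: INBOUND
      SecurityGroupIds: [sg-0dns1111]
      IpAddresses:
        - SubnetId: subnet-0aaa1111
        - SubnetId: subnet-0bbb2222
  OutboundResolver:                   # cloud VPCs forward on-prem zones out
    Type: AWS::Route53Resolver::ResolverEndpoint
    Properties:
      Direction: OUTBOUND
      SecurityGroupIds: [sg-0dns1111]
      IpAddresses:
        - SubnetId: subnet-0aaa1111
        - SubnetId: subnet-0bbb2222
  ForwardCorpLocal:
    Type: AWS::Route53Resolver::ResolverRule
    Properties:
      RuleType: FORWARD
      DomainName: corp.local          # on-prem zone
      ResolverEndpointId: !Ref OutboundResolver
      TargetIps:
        - Ip: 10.20.0.53              # placeholder on-prem DNS server
```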

What happens to our SQL Server licensing when we move?

Three options, and the math differs by workload. License-included RDS or Azure SQL MI is simplest but most expensive at scale. BYOL via Microsoft's Azure Hybrid Benefit or AWS dedicated hosts can cut SQL costs 40-55% if you have Software Assurance. Or you migrate to PostgreSQL on Aurora or Azure Database for PostgreSQL — viable for maybe 30% of SQL Server workloads, painful for the rest. We model all three before cutover so the decision is financial, not accidental.
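
Modeling the three paths as data keeps the decision explicit and reviewable. Illustrative structure with hypothetical numbers:

```yaml
# finops/sql-licensing-model.yaml (illustrative sketch; all costs hypothetical)
workload: claims-intake-db
options:
  license_included:
    target: RDS for SQL Server, Multi-AZ
    est_monthly_usd: 6900          # hypothetical
    notes: simplest; license baked into the instance rate
  byol_hybrid_benefit:
    target: Azure SQL MI with Azure Hybrid Benefit
    requires: active Software Assurance
    est_monthly_usd: 3800          # hypothetical ~45% reduction
  engine_swap:
    target: Aurora PostgreSQL
    est_monthly_usd: 2600          # hypothetical
    blockers: [tsql_stored_procs, ssrs_reports]
decision: byol_hybrid_benefit
```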

[ NEXT ACTION ]

Stop paying VMware prices in AWS. Let's plan a migration that actually lowers your run rate.

Talk to a VooStack operator. We respond within one business day.