HIPAA software built by engineers who've survived an OCR audit.
We design, build, and modernize systems for covered entities and business associates — including Epic and Cerner integrations, BAA-backed AI workflows, and audit logging that holds up under investigation.
Most HIPAA failures aren't hacks — they're undocumented PHI flows.
The pattern repeats: a team builds a working healthcare app, passes an internal review, and ships. Two years later a support engineer pastes a patient record into a Slack channel, or a Datadog log captures an SSN in a stack trace, or an intern's laptop with a database export gets stolen. None of those are sophisticated attacks. They're consequences of architecture decisions made before anyone mapped where PHI actually lives. By the time OCR sends a letter, the team is reverse-engineering data flows under a 30-day deadline instead of explaining a system they actually understand.
- ▸ Free-text fields containing PHI flow into Sentry, Datadog, or CloudWatch logs with no scrubbing or BAA in place.
- ▸ Vendors integrated during a hackathon — Twilio, SendGrid, an AI API — never had a BAA executed before going to production.
- ▸ Audit logs exist but capture HTTP requests, not PHI access events with record IDs, actor IDs, and reason codes.
- ▸ EHR integrations built against Epic or Cerner sandboxes get promoted to production without re-validation against the live tenant's interface rules.
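The first failure mode above, PHI leaking into log aggregators, is cheap to block at the source. A minimal sketch of a scrubbing wrapper; the regex patterns here are illustrative, not an exhaustive PHI detector, and a production system would pair this with a BAA-covered sink:

```typescript
// Hypothetical scrubber: redacts common PHI patterns before a line
// ever reaches a log transport. Patterns are illustrative, not exhaustive.
const PHI_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN-REDACTED]"],         // SSN
  [/\bMRN[:\s]*\d{6,10}\b/gi, "[MRN-REDACTED]"],        // medical record number
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL-REDACTED]"], // email address
];

export function scrubPhi(message: string): string {
  return PHI_PATTERNS.reduce(
    (msg, [pattern, replacement]) => msg.replace(pattern, replacement),
    message,
  );
}

// Wrap the logger so unscrubbed text cannot reach the transport.
export function safeLog(message: string): void {
  console.log(scrubPhi(message));
}
```

The same scrub function can be attached to an error tracker's pre-send hook so stack traces and breadcrumbs pass through it before leaving the process.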
Build the controls into the system, not the policy binder.
- STEP-01
Map PHI flow before code
Before any sprint, we diagram every place PHI enters, rests, or leaves the system — intake forms, S3 buckets, RDS, Lambda logs, third-party APIs, support tickets. Anything not on the diagram gets blocked at the network layer.
- STEP-02
Lock down the BAA boundary
We inventory every vendor touching PHI and confirm executed BAAs before integration. AWS, Azure, Twilio, SendGrid, Datadog, OpenAI — each has a specific BAA process. No BAA, no PHI, no exceptions, even for prototypes.
- STEP-03
Encrypt with documented keys
TLS 1.2+ in transit, AES-256 at rest via KMS or HSM, with key rotation policies and access logs. Field-level encryption for SSN, diagnoses, and free-text notes. Document the key custodian and rotation cadence in the SSP.
- STEP-04
Audit log everything PHI-adjacent
Every read, write, export, and failed access attempt against PHI hits an append-only log with user, timestamp, record ID, and reason code. Logs ship to a separate account so a compromised app can't erase them. Retain six years minimum.
- STEP-05
EHR integration via FHIR or HL7v2
Epic and Cerner integrations go through documented FHIR R4 or HL7v2 interfaces with SMART-on-FHIR auth where possible. We test against sandbox tenants, not production, and coordinate with the hospital's interface team on message validation and error queues.
```typescript
// PHI access wrapper — every read goes through here.
// No direct repository calls from controllers.
import { auditLog } from "./audit";
import { decryptField } from "./kms";
import { db } from "./db"; // repository layer
import { NotFoundError } from "./errors";
import { Patient, AccessReason } from "./types";

export async function readPatient(
  actorId: string,
  patientId: string,
  reason: AccessReason,
): Promise<Patient> {
  if (!reason) {
    throw new Error("PHI access requires a documented reason code");
  }
  const row = await db.patients.findOne({ id: patientId });
  if (!row) throw new NotFoundError();

  // Decrypt sensitive fields with per-field KMS keys
  const ssn = await decryptField(row.ssn_ct, "phi-ssn-key");
  const dx = await decryptField(row.dx_ct, "phi-dx-key");

  // Append-only audit log to separate AWS account
  await auditLog({
    actor: actorId,
    action: "PHI_READ",
    resource: `patient/${patientId}`,
    reason,
    ts: new Date().toISOString(),
  });

  return { id: row.id, ssn, diagnoses: dx, ...row.nonPhi };
}
```

Every PHI read funnels through one wrapper that enforces a reason code, decrypts per-field, and writes an immutable audit entry — the pattern OCR investigators look for first.
Field FAQ.
→ What's the actual difference between the HIPAA Privacy Rule and the Security Rule?
The Privacy Rule governs who can see PHI and under what circumstances — it applies to all forms, including paper. The Security Rule is narrower: it covers electronic PHI specifically and mandates administrative, physical, and technical safeguards like access controls, audit logging, and encryption. In practice, Privacy drives your Notice of Privacy Practices, minimum-necessary policies, and patient rights workflows. Security drives your architecture, encryption choices, and incident response. You need both, and auditors will ask about both separately.
→ Do we need a BAA with every vendor, or only ones that obviously store PHI?
Any vendor that creates, receives, maintains, or transmits PHI on your behalf needs a signed BAA before they touch the data. That includes the obvious ones — cloud hosts, EHR vendors, billing clearinghouses — but also error trackers, log aggregators, email senders, and AI APIs. Sentry, Datadog, SendGrid, OpenAI, and Anthropic all have BAA processes. If you can't get a BAA, you cannot route PHI through them. Period. We audit this list as the first step on every engagement.
→ Is encryption-at-rest with AWS RDS default encryption enough?
It's a starting point, not the finish line. Default RDS encryption protects against someone walking off with a physical disk, which is not a realistic threat. What auditors and breach scenarios actually care about is field-level encryption for the most sensitive elements, documented key management with rotation, separation between the application's IAM role and the KMS key custodian, and encrypted backups with the same controls. We typically layer envelope encryption on top of platform encryption for SSN, diagnoses, and free-text clinical notes.
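The envelope layer described above looks roughly like this. A sketch under stated assumptions: in production the data key comes from KMS `GenerateDataKey` and only its encrypted copy is stored alongside the row; here a locally generated key stands in for that step so the shape of the field format is visible:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Field-level envelope encryption sketch using AES-256-GCM.
// Stored format: iv || auth tag || ciphertext, base64-encoded for a text column.
export function encryptField(plaintext: string, dataKey: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", dataKey, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

export function decryptField(blob: string, dataKey: Buffer): string {
  const buf = Buffer.from(blob, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", dataKey, iv);
  decipher.setAuthTag(tag); // GCM tag makes tampering detectable
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's authentication tag means a modified ciphertext fails to decrypt rather than silently returning garbage, which matters when the column holds an SSN.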
→ How do we integrate with Epic or Cerner without a six-month procurement cycle?
You don't bypass the cycle, but you can shorten it. Epic's App Orchard (now Showroom) and Cerner's CODE program (Cerner Open Developer Experience) have defined onboarding paths using FHIR R4 and SMART-on-FHIR. For internal hospital integrations, the interface team will usually give you HL7v2 over MLLP or a FHIR endpoint behind their gateway. We've shipped both. The realistic timeline from kickoff to production is eight to sixteen weeks, mostly waiting on the hospital's interface and security teams, not engineering.
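For a sense of what the FHIR side looks like, here is a sketch of a FHIR R4 `Patient` read with a SMART-on-FHIR bearer token. The base URL and token are placeholders; real values come from the EHR's SMART discovery document and OAuth authorization flow:

```typescript
// Build a FHIR R4 Patient read request. Pure request construction,
// separated from transport so it is easy to test.
export function buildFhirPatientRequest(
  fhirBase: string,
  patientId: string,
  accessToken: string,
): { url: string; headers: Record<string, string> } {
  return {
    url: `${fhirBase}/Patient/${encodeURIComponent(patientId)}`,
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: "application/fhir+json", // FHIR JSON media type
    },
  };
}

// Usage, once a token is obtained via the SMART authorization_code flow:
// const { url, headers } = buildFhirPatientRequest("https://fhir.example.org/r4", "abc123", token);
// const res = await fetch(url, { headers });
```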
→ What audit logging do we actually need to satisfy HIPAA?
The Security Rule requires you to record and examine activity in systems containing ePHI. In practice that means an append-only log capturing who accessed what record, when, from where, and ideally why. Logs must be tamper-evident — we ship them to a separate AWS account with write-only IAM policies — and retained six years. Failed access attempts matter as much as successful ones, because they're the leading indicator of credential abuse. We also log administrative actions on the logging system itself.
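Shipping logs to a separate account keeps an attacker from deleting them; hash-chaining makes any rewrite detectable. A minimal sketch of the tamper-evidence layer, with our own entry shape:

```typescript
import { createHash } from "node:crypto";

// Tamper-evident audit chain: each entry's hash covers the previous
// entry's hash, so altering history invalidates every later entry.
interface AuditEntry {
  actor: string;
  action: string;
  resource: string;
  ts: string;
  prevHash: string;
  hash: string;
}

export function appendEntry(
  chain: AuditEntry[],
  e: Omit<AuditEntry, "prevHash" | "hash">,
): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(JSON.stringify({ ...e, prevHash }))
    .digest("hex");
  return [...chain, { ...e, prevHash, hash }];
}

export function verifyChain(chain: AuditEntry[]): boolean {
  return chain.every((entry, i) => {
    const prevHash = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const { hash, prevHash: _p, ...body } = entry;
    const expected = createHash("sha256")
      .update(JSON.stringify({ ...body, prevHash }))
      .digest("hex");
    return entry.prevHash === prevHash && entry.hash === expected;
  });
}
```

Run `verifyChain` on a schedule from the logging account, not the application account, so the verifier survives an application compromise.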
→ What turns a routine HIPAA review into an OCR investigation?
Almost always one of three things: a reportable breach affecting 500+ individuals, a patient complaint that gets escalated, or a state AG referral. Once OCR is involved, they request your risk analysis, policies, audit logs, and BAAs. The investigation usually finds problems unrelated to the original incident — missing BAAs, no documented risk analysis, audit logs that don't actually capture PHI access. The penalties scale with willful neglect, so the documentation gap costs more than the breach itself.
→ What are our breach notification obligations and timelines?
For breaches affecting fewer than 500 individuals, you notify affected patients within 60 days of discovery and report to HHS annually, no later than 60 days after the end of the calendar year in which the breach was discovered. For breaches of 500 or more, you notify patients and HHS within 60 days and notify prominent media outlets serving the affected state or jurisdiction. Business associates must notify the covered entity within 60 days of discovery, though most BAAs we draft tighten that to 10 or 15 days. The clock starts at discovery, not confirmation, which trips up a lot of teams.
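Because the clock runs from discovery, it helps to compute the deadline mechanically the moment an incident ticket is opened. A trivial sketch, a reminder rather than legal advice; the 60-day default reflects the Breach Notification Rule's patient-notice ceiling:

```typescript
// Compute a notification deadline from the discovery date.
// UTC date math avoids daylight-saving drift on the boundary.
export function notificationDeadline(discovered: Date, days = 60): Date {
  const d = new Date(discovered.getTime());
  d.setUTCDate(d.getUTCDate() + days);
  return d;
}
```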
→ Can we use OpenAI or Claude with PHI in our application?
Yes, but only under a BAA. Anthropic offers a BAA for Claude through specific channels including AWS Bedrock and Google Vertex with appropriate configuration. OpenAI offers BAAs for enterprise and certain API tiers. The consumer ChatGPT product is not BAA-eligible. Beyond the BAA, you need to think about prompt logging, training data exclusion, and de-identification where possible. We typically de-identify before the prompt and re-identify on the response side to minimize PHI surface area.
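The de-identify-then-re-identify pattern described above can be sketched as token substitution around the model call. The token format and single SSN pattern here are ours for illustration; a production system would use a vetted de-identification pipeline, not one regex:

```typescript
// Replace PHI with opaque tokens before the prompt, restore after.
export function deidentify(text: string): {
  clean: string;
  map: Map<string, string>;
} {
  const map = new Map<string, string>();
  let i = 0;
  const clean = text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, (ssn) => {
    const token = `[PHI_${i++}]`;
    map.set(token, ssn); // mapping stays inside our boundary, never sent
    return token;
  });
  return { clean, map };
}

export function reidentify(text: string, map: Map<string, string>): string {
  let out = text;
  for (const [token, value] of map) out = out.split(token).join(value);
  return out;
}

// Usage: send deidentify(input).clean to the model under a BAA,
// then reidentify(modelResponse, map) before showing the result.
```

The token map never leaves your boundary, so even if the model provider logs prompts, the logged text contains placeholders rather than identifiers.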
→ Does SDVOSB status matter for HIPAA work in federal health contexts?
It matters significantly for VA, IHS, MHS Genesis, and certain HHS contracts where SDVOSB set-asides apply. Being SDVOSB-certified means we're eligible for sole-source awards up to $7M for services and can compete on set-aside vehicles without teaming. For commercial covered entities the certification is irrelevant — what matters there is the engineering track record. We work both sides, but the federal health space is where veteran-owned status genuinely changes the procurement path.
Continue recon.
Compliance Engineering
How we build HIPAA, FedRAMP, and CMMC controls directly into application architecture.
REL-02 Healthcare Builds
Past engagements covering EHR integration, claims processing, and PHI-adjacent AI workflows.
REL-03 Reference Architectures
Patterns for PHI-safe data stores, audit logging, and field-level encryption on AWS.
REL-04 Scope an Engagement
Tell us your timeline, EHR, and BAA boundary — we'll come back with a real plan.
Shipping a HIPAA system and want it built right the first time? Let's scope it.
Talk to a VooStack operator. We respond within one business day.