Threat-aware.
Architecture-first.
My path into confidential computing started with a specific problem: legal AI platforms need to process attorney-client privileged data in the cloud, but standard cloud deployments give the provider (and anyone who compromises the provider) full visibility into instance memory. That gap led me to hardware-enforced isolation, and I've been building at that boundary ever since.
Two production TEE deployments later, I have hands-on experience with the full attack surface: host-level isolation, attestation flows, supply chain integrity, and the residual risks no enclave can fully close. I build the defenses, then stress-test them: the security assessment of Mizan below is my own work product. CompTIA Security+ certified. Open to engineer and analyst roles where security is a first-class constraint.
AppSec runs through every project I've shipped: input validation and injection prevention on all form surfaces, rate limiting and DDoS mitigation at the infrastructure layer, session hijacking controls and encrypted session isolation, supply chain integrity on build pipelines, and AI guardrails to prevent prompt injection at the inference layer. Security wasn't bolted on after the fact; it was in the architecture from day one.
What I've shipped.
Legal AI sits at one of the highest-risk intersections in modern software: attorney-client privileged data, cloud infrastructure with shared-responsibility models, and AI inference, an attack surface that barely existed five years ago.
The core problem is structural. Default cloud deployments allow the provider to inspect instance memory. For legal data, this isn't a compliance checkbox issue; it's a categorical breach of attorney ethics obligations. The threat model had to account for adversaries who aren't external hackers. The most dangerous actors are insiders: cloud operators, Mizan employees, or a compromised build pipeline.
Trust boundaries mapped before any code was written:
- Host OS → Enclave: primary isolation boundary
- Client → Inference layer: data in transit
- Build pipeline → Enclave binary: supply chain integrity
- Mizan operators → Production environment: insider threat
- Client inputs → AI model: prompt injection surface
AWS Nitro Enclaves: Hardware Memory Isolation. The host OS has zero read/write access to enclave memory. Enforced at the hypervisor level, not in software. A root-level host compromise cannot extract enclave data; the isolation boundary doesn't run through the OS at all.
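A concrete consequence of that boundary: a Nitro enclave has no network interface and no persistent storage, so the only way data moves in or out is a vsock channel to the parent instance. Here's a minimal sketch of that listener, assuming a placeholder port; real deployments also run forwarding proxies on the parent side.

```python
import socket

VSOCK_PORT = 5000  # placeholder; any unprivileged port works

# The enclave's only I/O path: a vsock socket. There is no NIC to bind to.
srv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
srv.bind((socket.VMADDR_CID_ANY, VSOCK_PORT))
srv.listen(1)

conn, (remote_cid, remote_port) = srv.accept()
try:
    payload = conn.recv(4096)   # ciphertext forwarded by the parent instance
    conn.sendall(payload)       # echo back; real code would decrypt and run inference
finally:
    conn.close()
    srv.close()
```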
Cryptographic Attestation via AWS KMS. The enclave binary is measured at boot. PCR values are embedded in the attestation document and validated by the KMS key policy before any key material is released. The exact, unmodified binary is the only path to decryption; a tampered build produces different PCR values and receives nothing.
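As a sketch of how the KMS side enforces that: boto3's kms.decrypt accepts a Recipient block carrying the enclave's attestation document, and AWS documents a kms:RecipientAttestation:* condition key family for key policies. The ARN, digest, and function below are placeholders, not Mizan's actual policy.

```python
# Illustrative key-policy statement: release key material only when the
# caller's attestation document carries the expected PCR0 measurement.
key_policy_statement = {
    "Sid": "DecryptOnlyFromAttestedEnclave",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/EnclaveParentRole"},  # placeholder
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            "kms:RecipientAttestation:PCR0": "<expected-pcr0-sha384-hex>"  # placeholder
        }
    },
}

import boto3

def release_data_key(encrypted_data_key: bytes, attestation_doc: bytes) -> bytes:
    """Ask KMS to decrypt, re-wrapping the plaintext to the enclave's key.

    attestation_doc is the NSM-signed document produced inside the enclave;
    KMS validates it against the key policy before releasing anything.
    """
    kms = boto3.client("kms")
    response = kms.decrypt(
        CiphertextBlob=encrypted_data_key,
        Recipient={
            "KeyEncryptionAlgorithm": "RSAES_OAEP_SHA_256",
            "AttestationDocument": attestation_doc,
        },
    )
    # Plaintext never appears in the clear here: it comes back encrypted to
    # the ephemeral public key embedded in the attestation document.
    return response["CiphertextForRecipient"]
```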
Attestation-Bound TLS. The enclave generates its own TLS keypair on boot. That public key is included in the attestation document, proving to clients they're communicating with a genuine, unmodified enclave, not an intercepted proxy. MITM is cryptographically infeasible by design.
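A sketch of that binding, under stated assumptions: the official NSM bindings are Rust/C, so nsm_attest() below is a hypothetical stand-in for the real /dev/nsm request (which does accept an optional public_key field that gets signed into the document).

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

def nsm_attest(public_key: bytes) -> bytes:
    """Hypothetical stand-in for a Nitro Security Module attestation request.

    The real request goes to /dev/nsm (aws-nitro-enclaves-nsm-api) and
    signs the supplied public_key into the attestation document.
    """
    raise NotImplementedError("replace with a real NSM binding")

# 1. Generate the enclave's TLS keypair in memory at boot; it never
#    leaves the enclave and never touches disk.
tls_key = ec.generate_private_key(ec.SECP384R1())
spki = tls_key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# 2. Have the NSM sign the public key into the attestation document.
attestation_doc = nsm_attest(public_key=spki)

# 3. Clients verify the document against the AWS Nitro root certificate,
#    then check the TLS session is keyed by the attested public key.
#    A proxy terminating TLS with any other key fails that check.
```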
No Persistent Storage. All computation is in-memory. Data is discarded on session close. There is no database write path, no log file, no disk surface inside the enclave. Nothing to exfiltrate at rest.
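The pattern is simple enough to sketch (illustrative, not Mizan's actual code). One honest caveat: a garbage-collected runtime can leave interim copies behind, which is why the enclave boundary, not zeroisation, is the real control.

```python
import secrets

class InMemorySession:
    """Session-scoped, memory-only document handling: no DB, no log, no disk."""

    def __init__(self) -> None:
        self.session_id = secrets.token_hex(16)
        self._buffers: list[bytearray] = []

    def hold(self, data: bytes) -> None:
        # bytearray is mutable, so it can be overwritten on close.
        self._buffers.append(bytearray(data))

    def close(self) -> None:
        # Best-effort wipe before releasing references; Python may have made
        # interim copies, so this is hygiene, not the security boundary.
        for buf in self._buffers:
            buf[:] = bytes(len(buf))
        self._buffers.clear()
```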
A related project applies the same discipline at smaller scale: log files are restricted to chmod 600 and encrypted at rest, with a separate decryption utility for log review. It's designed to demonstrate the architectural difference between a malicious keylogger and a consented audit tool.
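A minimal sketch of that pattern, assuming a placeholder log path and the cryptography package's Fernet recipe (the actual tool's on-disk format may differ):

```python
import os
from cryptography.fernet import Fernet

LOG_PATH = "audit.log.enc"  # placeholder path

def append_entry(key: bytes, entry: str) -> None:
    # Encrypt before anything touches disk; create the file 0o600 so only
    # the owning user can read it, independent of the encryption layer.
    token = Fernet(key).encrypt(entry.encode())
    fd = os.open(LOG_PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    with os.fdopen(fd, "ab") as f:
        f.write(token + b"\n")

def read_entries(key: bytes) -> list[str]:
    # The separate review utility: holds the key, decrypts line by line.
    f = Fernet(key)
    with open(LOG_PATH, "rb") as fh:
        return [f.decrypt(line.strip()).decode() for line in fh if line.strip()]
```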
Writing it down.
Tools of the trade.
Education & training.
My certifications don't fully reflect where my skills are. Security+ was accessible and I earned it, but OSCP, CISSP, and the certs that match my actual experience level cost real money, and I'm working toward them sustainably. I'm not going to pretend the gap isn't there. What I can tell you is that the work on this page is real, the threat model is my own, and the production systems are live. Judge me on that.