
• Privacy Engineering
• MPC language
• Privacy by Design
Write the logic you need: analytics, ML, matching, whatever. Mark what's allowed to come out the other end. The raw data never moves. Never gets pooled. Can't leak.
Stoffel Lang: What is this?
A language for computing over data you don't actually want to hold. You need the insights: the aggregate scores, the match results. You don't need the raw records sitting in your database waiting to leak.
Type-level guarantees
If it compiles, you didn't accidentally leak something. No code review debates about "is this safe?"—the compiler already checked.
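Here's the shape of that guarantee, sketched in TypeScript rather than Stoffel Lang itself (`Secret`, `add`, and `reveal` below are illustrative stand-ins, not the real API): sensitive values get their own type, operations on them stay in that type, and one explicit function is the only way out.

```typescript
// Illustrative TypeScript sketch, not Stoffel Lang syntax: model secrets as a
// branded type so the checker tracks sensitivity through every call.
declare const SECRET: unique symbol;
type Secret<T> = { readonly [SECRET]: T };

// Operations on secrets return secrets: sensitivity propagates automatically.
declare function add(a: Secret<number>, b: Secret<number>): Secret<number>;

// The single, deliberate exit from the secret domain.
declare function reveal<T>(x: Secret<T>): T;

declare const salary: Secret<number>;
declare const bonus: Secret<number>;

const total: Secret<number> = add(salary, bonus); // still secret
const answer: number = reveal(total);             // explicit, auditable
```

The point isn't the TypeScript; it's that sensitivity rides along in the types, so the checker catches the slip instead of a reviewer.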
Normal developer workflow
Local sim. Unit tests. Deterministic builds. This isn't a research project—it's infrastructure that works like infrastructure.
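As a rough picture of what that workflow can look like, here's a hypothetical local-sim harness in TypeScript (`secret`, `reveal`, and `sum` are invented stand-ins): in simulation, secrets are just wrapped values, so plain unit tests and CI work unchanged.

```typescript
// Hypothetical local-sim sketch: in simulation, a "secret" is just a wrapped
// plain value, so ordinary deterministic unit tests apply.
import { strict as assert } from "node:assert";

type Secret<T> = { readonly value: T };
const secret = <T>(value: T): Secret<T> => ({ value });
const reveal = <T>(s: Secret<T>): T => s.value;
const sum = (xs: Secret<number>[]): Secret<number> =>
  secret(xs.reduce((acc, x) => acc + x.value, 0));

// Runs in CI like any other test.
assert.equal(reveal(sum([secret(2), secret(3), secret(5)])), 10);
```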
Patterns you can copy
Threshold checks. Overlap detection. Risk scoring. Federated aggregation. The gnarly stuff, already written.
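For flavor, here's a threshold check sketched in TypeScript with made-up helpers (`sum`, `gte`, and `reveal` are stand-ins, not Stoffel Lang's API):

```typescript
// Threshold-check pattern, sketched with hypothetical helpers: parties learn
// whether the aggregate crosses the line, never the individual contributions.
declare const SECRET: unique symbol;
type Secret<T> = { readonly [SECRET]: T };
declare function sum(xs: Secret<number>[]): Secret<number>;
declare function gte(a: Secret<number>, threshold: number): Secret<boolean>;
declare function reveal<T>(x: Secret<T>): T;

// "Did combined exposure cross $10M?" A yes/no comes out; positions don't.
function exposureAlert(positions: Secret<number>[]): boolean {
  return reveal(gte(sum(positions), 10_000_000));
}
```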
What makes this different from "we promise not to look"?
Answers-only outputs
You get the count. The match status. The risk score. The system can't give you the plaintext even if you ask for it. Architectural enforcement, not policy.
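A miniature of the idea, again in illustrative TypeScript (`equals` and `reveal` are hypothetical stand-ins): the only thing the signature can hand back is the verdict.

```typescript
// Answers-only in miniature (hypothetical types): the function can return a
// match verdict, but there is no function that returns the raw identifiers.
declare const SECRET: unique symbol;
type Secret<T> = { readonly [SECRET]: T };
declare function equals(a: Secret<string>, b: Secret<string>): Secret<boolean>;
declare function reveal<T>(x: Secret<T>): T;

// Two institutions check whether they share a customer. Boolean out; IDs never leave.
function isSharedCustomer(ours: Secret<string>, theirs: Secret<string>): boolean {
  return reveal(equals(ours, theirs));
}
```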
Building blocks that compose
Aggregates. Thresholds. Comparisons. Key operations. Write your logic the way you'd write any function—just with secrets that stay secret.
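Sketched with the same kind of stand-in helpers (`scale` and `add` below are hypothetical, not the real building blocks):

```typescript
// Composition sketch: small secret-domain functions snap together like
// ordinary ones, and the result is still secret until someone reveals it.
declare const SECRET: unique symbol;
type Secret<T> = { readonly [SECRET]: T };
declare function scale(a: Secret<number>, k: number): Secret<number>;
declare function add(a: Secret<number>, b: Secret<number>): Secret<number>;

// A weighted risk score: inputs and every intermediate value stay secret.
function riskScore(utilization: Secret<number>, delinquencies: Secret<number>): Secret<number> {
  return add(scale(utilization, 0.3), scale(delinquencies, 0.7));
}
```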
Compile-time leak prevention
That thing where you accidentally log sensitive data? The build fails instead. Catches it before standup, not after the incident report.
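Modeled in TypeScript for illustration (the real enforcement lives in Stoffel Lang's compiler; `log` and `Secret` here are stand-ins): the output channel only accepts public values, so the leaky line simply doesn't build.

```typescript
// Hypothetical model of compile-time leak prevention: sinks are typed to
// accept public values only, so handing them a secret is a type error.
declare const SECRET: unique symbol;
type Secret<T> = { readonly [SECRET]: T };
declare function log(message: string): void; // public-only sink
declare const ssn: Secret<string>;

log("request handled");
// log(ssn); // does not compile: Secret<string> is not assignable to string
```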
How does this actually work?
Mark what's sensitive
`secret` types for the stuff you don't want in logs, databases, or Slack screenshots. `public` for everything else.
Write your computation
Analytics. ML pipelines. Matching logic. Whatever you need. It looks like normal code because it is normal code.
Explicit reveals
Want to output the aggregate count? Fine. The individual records? Compiler says no.
MPC happens under the hood. You get answers. The raw data never decrypts, never centralizes, never becomes your problem.
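Putting the three steps together in one hypothetical TypeScript sketch (`sum` and `reveal` are stand-ins; Stoffel Lang expresses this in its own syntax):

```typescript
// End-to-end sketch with stand-in helpers:
// 1. inputs arrive already marked secret, one per contributing party;
// 2. the computation runs entirely in the secret domain;
// 3. only the aggregate is explicitly revealed.
declare const SECRET: unique symbol;
type Secret<T> = { readonly [SECRET]: T };
declare function sum(xs: Secret<number>[]): Secret<number>; // step 2: compute
declare function reveal<T>(x: Secret<T>): T;                // step 3: explicit reveal

function averageSpend(spends: Secret<number>[]): number {
  // The aggregate comes out; the individual records never decrypt.
  return reveal(sum(spends)) / spends.length;
}
```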