
You shouldn't have to choose between useful and creepy

Write the logic you need—analytics, ML, matching, whatever. Mark what comes out the other end. The raw data never moves. Never gets pooled. Can't leak.
  # Secret Value Example - Stoffel MPC Program
  # Takes user age and salary as private inputs

  # Calculate eligibility score based on age and salary
  proc calculate_eligibility(age: secret int64, salary: secret int64): secret int64 =
    # Age factor: weight age linearly into the score
    let age_score = age * 2

    # Salary factor: higher salary increases eligibility
    let salary_score = salary / 1000

    # Combined eligibility score
    let total_score = age_score + salary_score
    return total_score

  # Determine risk category based on inputs
  proc assess_risk_category(age: secret int64, salary: secret int64): secret int64 =
    let base_risk = 100
    let age_adjustment = age / 2
    let salary_adjustment = salary / 10000

    let final_risk = base_risk - age_adjustment + salary_adjustment
    return final_risk

  # Main computation function
  proc main() =
    # These would be secret inputs from different parties in real MPC
    let user_age: secret int64 = 35      # Age input
    let salary: secret int64 = 75000     # Salary input

    # Perform secure computations
    let eligibility_score = calculate_eligibility(user_age, salary)
    let risk_score = assess_risk_category(user_age, salary)

    # Results computed without revealing individual age or salary
    discard eligibility_score
    discard risk_score
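To build intuition for what runs underneath a program like the one above, here is a minimal Python simulation of additive secret sharing, the basic trick many MPC runtimes use. All names here are mine, not Stoffel's runtime API, and the sketch is restricted to a linear score (2 * age + salary): non-linear steps like the integer division `salary / 1000` require a full MPC protocol that this toy model omits.

```python
import random

P = 2**61 - 1  # a large prime; all arithmetic is done modulo P

def share(value, n=3):
    """Split a value into n additive shares that sum to value mod P.
    Any n-1 shares together look uniformly random."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares into the original value."""
    return sum(shares) % P

# Each party holds one share of age and one share of salary.
age_shares = share(35)
salary_shares = share(75000)

# A linear score (2*age + salary) can be computed share-by-share,
# entirely locally, without any party learning age or salary.
score_shares = [(2 * a + s) % P for a, s in zip(age_shares, salary_shares)]

print(reconstruct(score_shares))  # 2*35 + 75000 = 75070
```

Each party only ever sees its own uniformly random shares; the cleartext score exists only after the parties agree to reconstruct it.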

Stoffel Lang: What is this?

A language for computing over data you don't actually want. You need the insights—the aggregate scores, the match results. You don't need the raw records sitting in your database waiting to leak.

Type-level guarantees

If it compiles, you didn't accidentally leak something. No code review debates about "is this safe?"—the compiler already checked.

Normal developer workflow

Local sim. Unit tests. Deterministic builds. This isn't a research project—it's infrastructure that works like infrastructure.

Patterns you can copy

Threshold checks. Overlap detection. Risk scoring. Federated aggregation. The gnarly stuff, already written.
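As a rough illustration of the threshold-check pattern, here is a hedged Python sketch. The helper names (`share`, `meets_threshold`) are mine, not Stoffel's, and the comparison is modeled as an "ideal functionality" that reconstructs internally and releases only the bit; a real MPC deployment would run a secure comparison protocol instead.

```python
import random

P = 2**61 - 1  # a large prime modulus for additive sharing

def share(value, n=3):
    """Split a value into n additive shares that sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def meets_threshold(secret_values, threshold):
    """Ideal-functionality model of a threshold check: every input is
    secret-shared, the share columns are summed party-by-party, and
    only the yes/no answer leaves the computation. (This model
    reconstructs the total internally; a real protocol would not.)"""
    all_shares = [share(v) for v in secret_values]
    total_shares = [sum(col) % P for col in zip(*all_shares)]
    return (sum(total_shares) % P) >= threshold

print(meets_threshold([40, 25, 80], 100))  # True: 145 >= 100
print(meets_threshold([10, 20], 100))      # False: 30 < 100
```

The callers learn whether the combined total cleared the bar, but not the total itself and not any individual contribution.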

What makes this different from "we promise not to look"?

Answers-only outputs

You get the count. The match status. The risk score. The system can't give you the plaintext even if you ask for it. Architectural enforcement, not policy.
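A runtime analogue of the answers-only boundary can be sketched in a few lines of Python. This is illustrative only: Python cannot truly seal a value away, whereas the document's claim is that Stoffel enforces the boundary architecturally. The class and query names are mine.

```python
class Secret:
    """Runtime analogue of an answers-only boundary: the wrapped value
    is hidden from printing and logging, and is reachable only through
    approved aggregate queries. (Illustrative; not a real guarantee.)"""
    def __init__(self, value):
        self.__value = value        # name-mangled, conventionally private

    def __repr__(self):
        return "<secret>"

    def __str__(self):
        return "<secret>"

    def _use(self):                 # only approved aggregators call this
        return self.__value

def reveal_count_above(secrets, cutoff):
    """An approved query: returns only a count, never the inputs."""
    return sum(1 for s in secrets if s._use() > cutoff)

salaries = [Secret(75000), Secret(52000), Secret(91000)]
print(reveal_count_above(salaries, 60000))  # 2
print(salaries[0])                          # <secret>
```

You get the count out; printing, logging, or string-formatting an individual record yields only `<secret>`.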

Building blocks that compose

Aggregates. Thresholds. Comparisons. Key operations. Write your logic the way you'd write any function—just with secrets that stay secret.

Compile-time leak prevention

That thing where you accidentally log sensitive data? The build fails instead. Catches it before standup, not after the incident report.

How does this actually work?

  1. Mark what's sensitive

`secret` types for the stuff you don't want in logs, databases, or Slack screenshots. `public` for everything else.

  2. Write your computation

Analytics. ML pipelines. Matching logic. Whatever you need. It looks like normal code because it is normal code.

  3. Explicit reveals

Want to output the aggregate count? Fine. The individual records? Compiler says no.

MPC happens under the hood. You get answers. The raw data never decrypts, never centralizes, never becomes your problem.
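The three steps above can be sketched end to end in Python: inputs exist only as shares, the computation runs share-by-share, and a reveal gate releases only outputs that were explicitly declared. All names (`share`, `mark_output`, `reveal`) are hypothetical stand-ins for illustration; in Stoffel the equivalent check is claimed to happen at compile time rather than at runtime.

```python
import random

P = 2**61 - 1  # a large prime modulus for additive sharing

def share(value, n=3):
    """Split a value into n additive shares that sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

REVEALABLE = set()  # outputs explicitly marked for reveal

def mark_output(name):
    REVEALABLE.add(name)

def reveal(name, shares):
    """Reconstruct a value only if it was declared as an output."""
    if name not in REVEALABLE:
        raise PermissionError(f"{name} was never marked for reveal")
    return sum(shares) % P

# 1. Mark what's sensitive: inputs exist only as shares.
ages = [share(a) for a in (35, 42, 29)]

# 2. Write the computation: parties locally sum their share columns.
total_shares = [sum(col) % P for col in zip(*ages)]

# 3. Explicit reveals: the aggregate is declared; raw inputs are not.
mark_output("age_total")
print(reveal("age_total", total_shares))  # 35 + 42 + 29 = 106

try:
    reveal("age_0", ages[0])              # a raw record: refused
except PermissionError as e:
    print(e)
```

The aggregate comes out; any attempt to reconstruct an undeclared input is rejected before a single plaintext byte exists.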

© 2025 Stoffel. All rights reserved.