Privacy-First ML

Train together without trusting each other

You know federated learning could unlock the signal you need. But you also know what Legal will say about a central aggregator seeing everyone's updates. Stoffel removes the aggregator from the trust boundary. No one can see individual updates, just the combined model.

Built for teams stuck between innovation and compliance

ML Teams at Multi-Party Organizations

Train on signal you can't centralize

Get model lift without exposing raw updates

Ship cross-org experiments without six-month review cycles

Privacy-Focused Product Teams

Build features that require distributed training

Meet privacy requirements structurally, not procedurally

Avoid accumulating sensitive gradient data

Infrastructure Teams Managing Consortium Data

Remove the central aggregator from the trust boundary

Run federated workflows without "trusted third party" assumptions

Deploy with existing Flower pipelines

Privacy guarantees you can actually explain in a review meeting

Stop arguing about who to trust with the aggregation. The system architecture prevents anyone from seeing individual updates—including us.

No One Sees Individual Updates

Not even the aggregator can read any individual participant's gradients

Parties jointly compute the aggregate using MPC

Individual updates remain cryptographically hidden throughout

Only the combined model is revealed to participants
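
To make the intuition concrete, here is a toy Python sketch of additive secret sharing, the basic building block behind this kind of MPC aggregation. It is illustrative only, not Stoffel's actual protocol.

# Toy illustration of additive secret sharing; not Stoffel's protocol.
# Each party splits its update into random shares. Any single share,
# or any partial sum of shares, reveals nothing on its own.
import secrets

PRIME = 2**61 - 1  # all arithmetic is modulo a large prime

def share(value, n_parties):
    # n-1 random shares, plus one balancing share so they sum to value
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

updates = [12, 7, 23]  # each party's (quantized) local update
all_shares = [share(u, len(updates)) for u in updates]

# Each aggregator node sums the shares it received; meaningless alone
partials = [sum(col) % PRIME for col in zip(*all_shares)]

# Only the recombined result, the aggregate, is ever revealed
assert sum(partials) % PRIME == sum(updates)  # 42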

Control What Leaves Each Round

Policy enforcement at the protocol level, not the promise level

Only global model and approved metrics are revealed

No raw updates, no plaintext gradients, no CSV exports

Define output policy once; the system enforces it
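
The snippet below sketches what such a policy could look like. The syntax is hypothetical, invented for illustration; it is not Stoffel's actual configuration API.

# Hypothetical policy sketch; names and shape are illustrative only.
# The idea: an explicit allowlist of what each round may reveal.
output_policy = {
    "reveal": ["global_model", "round_loss", "participant_count"],
    # Anything not listed stays inside the MPC: no raw updates,
    # no plaintext gradients, no exports.
}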

Keep Your Existing Pipeline

Drop-in integration with Flower, not a framework rewrite

Use your current Python and Flower code

Configure clipping and differential privacy as usual

No data lake merges, no new agents on analyst machines
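
On the client side, that means your existing Flower code stays as it is. The minimal sketch below uses a stand-in model; the training logic in fit() is wherever your current code already lives.

# client.py: a plain Flower NumPyClient, nothing Stoffel-specific.
import flwr as fl
import numpy as np

class ExistingClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [np.zeros(4)]  # stand-in for your model's weights

    def fit(self, parameters, config):
        # ... your existing local training step runs here ...
        return parameters, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {}

fl.client.start_numpy_client(
    server_address="0.0.0.0:8080",  # the Stoffel-backed aggregator endpoint
    client=ExistingClient(),
)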

Everything you need to run private federated aggregation

FedAvg Module in Stoffel Lang

Drop-in replacement for your Flower aggregation strategy with MPC privacy guarantees built in.

Orchestration Layer

Handles round management, timeouts, and partial participation so your training runs don't break when someone drops.
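
In standard Flower terms, partial participation looks roughly like the sketch below: cap each round's wait with a timeout and proceed with whoever responded. round_timeout and the min-client thresholds are standard Flower parameters; the StoffelFedAvg usage mirrors the server example further down this page.

# Rounds proceed with available parties; stragglers past the timeout
# are skipped rather than stalling training.
import flwr as fl
from stoffel_flower.strategy import StoffelFedAvg

fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(
        num_rounds=5,
        round_timeout=600.0,  # seconds to wait for clients each round
    ),
    strategy=StoffelFedAvg(min_fit_clients=2, min_available_clients=2),
)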

Python/Flower Adapter SDK

Call the Stoffel aggregator exactly like you'd call a standard FedAvg server—same interface, private backend.

Local Development Tools

Test your aggregation logic locally before deploying to multi-party environments.
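
A hedged sketch of what local testing can look like with Flower's built-in simulation engine (the flwr[simulation] extra). Exact signatures vary across Flower versions; this follows the 1.x API.

# Run several virtual clients in one process before any multi-party deploy.
import flwr as fl
import numpy as np
from stoffel_flower.strategy import StoffelFedAvg

class ToyClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [np.zeros(4)]

    def fit(self, parameters, config):
        return parameters, 1, {}  # stand-in for real local training

    def evaluate(self, parameters, config):
        return 0.0, 1, {}

fl.simulation.start_simulation(
    client_fn=lambda cid: ToyClient().to_client(),
    num_clients=4,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=StoffelFedAvg(min_fit_clients=2, min_available_clients=2),
)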

Output Policy Configuration

Specify which metrics and artifacts can leave each round—everything else stays encrypted.

Documentation & Integration Support

Step-by-step guides for migrating existing Flower deployments to Stoffel's private aggregation.

For Cautious Builders

Ship like a normal developer
Privacy happens in the background

We built this for teams who want structural guarantees, not policy promises. You shouldn't have to become a cryptographer to stop accumulating liability.

Familiar Code Patterns

Same Flower interface you're already using—just point to our aggregator endpoint.

Local Simulation

Test your aggregation logic on your machine before running multi-party.

Clear Error Messages

No cryptic protocol failures—actual debugging information when something goes wrong.

Flexible Privacy Controls

Add differential privacy and gradient clipping where you need it; skip it where you don't.
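
Recent Flower releases ship differential-privacy wrapper strategies that clip and noise updates around any inner strategy. Composing one with StoffelFedAvg, as sketched below, is our assumption about how the pieces fit, not a documented guarantee.

# DP wrapper around the Stoffel strategy (Flower >= 1.7 API).
import flwr as fl
from flwr.server.strategy import DifferentialPrivacyServerSideFixedClipping
from stoffel_flower.strategy import StoffelFedAvg

dp_strategy = DifferentialPrivacyServerSideFixedClipping(
    strategy=StoffelFedAvg(min_fit_clients=2, min_available_clients=2),
    noise_multiplier=1.0,   # Gaussian noise scale
    clipping_norm=10.0,     # L2 clip applied to each client update
    num_sampled_clients=2,  # expected participants per round
)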

Works With Your Stack

Python 3.8+, compatible with standard ML frameworks (PyTorch, TensorFlow, etc.).

Honest About Tradeoffs

MPC adds computation time. We're upfront about performance characteristics so you can decide if the tradeoff works.

Minimal changes

# server.py
import flwr as fl

# drop-in FedAvg replacement
from stoffel_flower.strategy import StoffelFedAvg


def fit_config(server_round):
    # Per-round config sent to each client
    return {"server_round": server_round}


# Instantiate Stoffel’s FedAvg
strategy = StoffelFedAvg(
    # Standard FedAvg parameters (unchanged)
    fraction_fit=0.5,
    min_fit_clients=2,
    min_available_clients=2,
    on_fit_config_fn=fit_config,
)

fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=5),
    strategy=strategy,
)

Three steps. No data movement.

  1. Local Training

Each party trains on their local data as usual. Nothing leaves their environment at this stage.

  2. Private Aggregation

Stoffel's MPC protocol combines the updates without any party being able to read another's contribution. The computation happens jointly; the inputs stay hidden.

  3. Distribution

Everyone receives the new global model. That's all they learn—the aggregate result, nothing about individual contributions.
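
For concreteness, this is what "combining" means in plain FedAvg terms: a weighted average of client models, weighted by local example counts. Under Stoffel, the same average is computed over secret shares instead of plaintext values.

# Plaintext FedAvg math, shown only to make the aggregate concrete.
import numpy as np

client_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
num_examples = [100, 300]

total = sum(num_examples)
global_model = sum(m * (n / total) for m, n in zip(client_models, num_examples))
print(global_model)  # [2.5 3.5]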

FAQ

Have more questions? Contact our team.

Does any raw data move between parties?

No. Each party trains locally. Only the aggregate model is computed and shared. Individual updates never leave in plaintext.

How do you handle stragglers or parties dropping out?

Built-in support for timeouts and partial participation. If someone doesn't respond, the round continues with available parties.

Can I use differential privacy and gradient clipping?

Yes. Hook points exist in the protocol for both. Configure them like you would in standard Flower.

What's the performance impact compared to plaintext aggregation?

MPC adds computation overhead. Exact impact depends on number of parties and network conditions. We recommend piloting with your actual setup to evaluate the tradeoff.

© 2025 Stoffel. All rights reserved.
