Starknet Re{define} Hackathon · Privacy track
Private DeFi.
Verifiable execution.
Our zkML coprocessor: off-chain inference and proofs, consumed by the chain. Delegate once with session keys; every action is verified on-chain.
No proof, no execution.
DeFi forces a choice: full transparency or opaque custody. zkde.fi gives you verifiable privacy — prove the rules were followed without revealing your strategy.
Our zkML coprocessor drives two core concepts
We build a zkML coprocessor: off-chain inference and proofs, consumed by the chain. It drives trustless execution and verifiable AI. We verify intent (your constraints) and inference (model outputs); part of the logic runs on-chain (e.g. allocation risk in Cairo).
What is the coprocessor? Off-chain inference + proofs; the chain verifies and executes.
Trustless execution
Definition
Execution that requires no trust in the operator: the contract acts only on a valid proof. No proof, no execution. Verification is deterministic. Part of the logic (e.g. allocation risk) runs on-chain in Cairo; the rest is proof-gated.
How zkML drives it
The coprocessor produces proofs of inference (risk, anomaly) and intent (constraints). The contract verifies those proofs (Garaga + Integrity), then executes. Where we have on-chain logic (e.g. AllocationRouter in Cairo), the contract computes and enforces it. So zkML (proofs) + on-chain slice = trustless execution.
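A minimal sketch of that proof gate, in TypeScript for readability (the real gate is the Cairo contract; the verifier stubs, type names, and the basis-point risk check below are illustrative assumptions, not the production interfaces):

```typescript
// Illustrative control flow for "no proof, no execution".
// All types and verifier functions here are hypothetical stand-ins;
// on Starknet this gate lives in the Cairo contract, with Garaga verifying
// the Groth16 inference proof and Integrity verifying the STARK intent proof.

interface Action {
  protocol: string;       // target protocol of the delegated action
  allocationBps: number;  // requested allocation, in basis points
}

interface Proofs {
  inferenceProof: Uint8Array; // Groth16 proof over model outputs (Garaga side)
  intentProof: Uint8Array;    // STARK proof that user constraints hold (Integrity side)
}

// Hypothetical verifier stubs: reject empty proofs, accept anything else.
const verifyInferenceProof = (p: Uint8Array): boolean => p.length > 0;
const verifyIntentProof = (p: Uint8Array): boolean => p.length > 0;

// Hypothetical on-chain slice: the allocation-risk check the contract
// computes itself (the AllocationRouter logic), rather than trusting a proof.
const allocationRiskOk = (action: Action, maxAllocationBps: number): boolean =>
  action.allocationBps <= maxAllocationBps;

function executeIfProven(action: Action, proofs: Proofs, maxAllocationBps: number): string {
  if (!verifyInferenceProof(proofs.inferenceProof)) return "rejected: invalid inference proof";
  if (!verifyIntentProof(proofs.intentProof)) return "rejected: invalid intent proof";
  if (!allocationRiskOk(action, maxAllocationBps)) return "rejected: allocation risk too high";
  return `executed on ${action.protocol}`; // only reachable when every check passes
}

console.log(
  executeIfProven(
    { protocol: "ekubo", allocationBps: 1500 },
    { inferenceProof: new Uint8Array([1]), intentProof: new Uint8Array([1]) },
    2000,
  ),
);
```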
Verifiable AI
Definition
Every decision the AI makes can be checked: we prove model outputs (or predicates on them) without revealing inputs, model details, or raw outputs. Only compliance (e.g. risk below threshold) is proven on-chain.
How zkML drives it
We run risk and anomaly models (inference); we prove predicates on the outputs via Groth16 (Garaga). Raw scores and analysis stay private. Today: Groth16 (risk, anomaly). Roadmap: RISC Zero. zkML = inference + proof; that's what makes the AI verifiable.
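A sketch of the public/private split behind a "risk below threshold" predicate proof, assuming a hash-based commitment (the field names and sha256 commitment are stand-ins for whatever the real Groth16 circuit commits to):

```typescript
import { createHash } from "node:crypto";

// Illustrative split between what stays private (witness) and what goes
// on-chain (public inputs) for a "risk below threshold" predicate proof.

interface PrivateWitness {
  riskScore: number; // raw model output, never revealed
  salt: string;      // blinding value so the commitment hides the score
}

interface PublicInputs {
  threshold: number;       // the rule being enforced, e.g. risk < 0.30
  scoreCommitment: string; // binds the proof to one specific (hidden) score
}

const commit = (w: PrivateWitness): string =>
  createHash("sha256").update(`${w.riskScore}:${w.salt}`).digest("hex");

// What the circuit asserts, expressed as a plain predicate: the committed
// score is the one actually used, and it sits below the public threshold.
const predicateHolds = (w: PrivateWitness, pub: PublicInputs): boolean =>
  commit(w) === pub.scoreCommitment && w.riskScore < pub.threshold;

const witness: PrivateWitness = { riskScore: 0.12, salt: "f3a1" };
const publicInputs: PublicInputs = { threshold: 0.3, scoreCommitment: commit(witness) };

// The chain only ever sees publicInputs plus a proof; the 0.12 stays off-chain.
console.log("predicate holds:", predicateHolds(witness, publicInputs));
```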
How it works
zkDE is the engine. GATE is the standard. Proof-gated execution unlocks trustless delegation.
Set constraints
Define max position, allowed protocols, risk limits. Grant a session key once.
Inference + proof
We run risk and anomaly models (inference), prove the result (e.g. risk below threshold), then verify on-chain. Where applicable, the contract also runs the allocation risk check in Cairo.
Verified execution
The contract checks the proof on-chain. No proof, no execution. Receipt stored for audit.
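A compact end-to-end sketch of these three steps (the constraint fields, session-key shape, proof placeholder, and receipt layout are illustrative assumptions, not the on-chain types):

```typescript
// Illustrative end-to-end flow: set constraints once, run inference and
// prove off-chain, execute on-chain only against a verified proof.
// Every name below is a stand-in; the real constraint and receipt types
// live in the Cairo contracts and the session-key standard.

interface Constraints {
  maxPositionUsd: number;     // max position size the agent may take
  allowedProtocols: string[]; // whitelist of protocols
  maxRisk: number;            // risk threshold the model output must satisfy
}

interface SessionKey { publicKey: string; expiresAt: number; }

interface ProvenAction {
  protocol: string;
  positionUsd: number;
  proof: string; // opaque proof blob; verified, never interpreted, on-chain
}

interface Receipt { action: ProvenAction; verified: boolean; timestamp: number; }

// Step 1: the user defines constraints and grants a session key once.
const constraints: Constraints = {
  maxPositionUsd: 50_000,
  allowedProtocols: ["ekubo", "nostra"],
  maxRisk: 0.3,
};
const sessionKey: SessionKey = { publicKey: "0xabc123", expiresAt: Date.now() + 86_400_000 };

// Step 2 (off-chain): run inference, build a proof that the action satisfies
// the constraints and that risk is below threshold. Faked here.
function proveAction(protocol: string, positionUsd: number): ProvenAction {
  return { protocol, positionUsd, proof: "groth16+stark-proof-bytes" };
}

// Step 3 (on-chain): verify, enforce, execute, store a receipt for audit.
function verifiedExecute(a: ProvenAction, c: Constraints, k: SessionKey): Receipt {
  const verified =
    a.proof.length > 0 &&                      // placeholder for Garaga/Integrity checks
    k.expiresAt > Date.now() &&                // session key still valid
    c.allowedProtocols.includes(a.protocol) && // protocol whitelist
    a.positionUsd <= c.maxPositionUsd;         // position limit
  return { action: a, verified, timestamp: Date.now() };
}

const receipt = verifiedExecute(proveAction("ekubo", 12_000), constraints, sessionKey);
console.log(receipt.verified ? "executed, receipt stored" : "rejected: no proof, no execution");
```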
Privacy by design
Three pillars of privacy in the zkDE engine. GATE defines how agents run — governed by proof.
Intent hiding
Trade intent stays hidden until execution. No broadcast until proven valid. MEV and front-running protection built in.
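A sketch of the commit-then-execute idea behind intent hiding (the sha256 commitment and field names are illustrative; in production the validity proof, not a bare hash, gates the reveal):

```typescript
import { createHash } from "node:crypto";

// Illustrative intent hiding: only a commitment to the trade intent is
// broadcast; the intent itself surfaces at execution, once proven valid,
// leaving nothing for searchers to front-run in the meantime.

interface TradeIntent { sellToken: string; buyToken: string; amount: number; }

const commitIntent = (intent: TradeIntent, salt: string): string =>
  createHash("sha256").update(JSON.stringify(intent) + salt).digest("hex");

const intent: TradeIntent = { sellToken: "ETH", buyToken: "USDC", amount: 2.5 };
const salt = "b7e2c0";

// Broadcast phase: the chain (and the mempool) sees only this opaque value.
const commitment = commitIntent(intent, salt);
console.log("broadcast:", commitment);

// Execution phase: the revealed intent must match the earlier commitment.
console.log("matches commitment:", commitIntent(intent, salt) === commitment);
```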
Confidential transactions
Amount-hiding transfers using Garaga Groth16. Only commitments visible on-chain. Your balance stays private.
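A sketch of what "only commitments visible on-chain" means for amounts, with a hash commitment and blinding factor standing in for the circuit's actual commitment scheme:

```typescript
import { createHash } from "node:crypto";

// Illustrative amount hiding: the chain stores a commitment to the amount,
// not the amount. A blinding factor keeps equal amounts from producing
// equal commitments, so balances cannot be inferred by matching values.

const commitAmount = (amount: number, blinding: string): string =>
  createHash("sha256").update(`${amount}:${blinding}`).digest("hex");

// Two transfers of the same amount look unrelated on-chain.
const a = commitAmount(1000, "r1-9f83");
const b = commitAmount(1000, "r2-55c0");

console.log("commitment A:", a);
console.log("commitment B:", b);
console.log("distinguishable despite equal amounts:", a !== b);
```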
Selective disclosure
Prove compliance without revealing strategy. Show you followed the rules without exposing your full history.
Hybrid proof system
Inference proofs (Garaga) plus execution proofs (Integrity) together enable verifiable execution. Allocation risk is computed on-chain (Cairo) for the rebalance flow. Groth16 for privacy, STARK for execution; the routing is sketched after the breakdown below.
Garaga (Groth16 / SNARK)
Proof-gated risk and anomaly checks; confidential transfers. Hides model outputs and amounts.
Integrity (STARK)
Constraint proofs, receipts. Native Starknet verification, no trusted setup.
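A sketch of how claims route across the hybrid system (the claim names and routing table are illustrative, not the contracts' actual dispatch):

```typescript
// Illustrative routing of claims in the hybrid design: privacy-sensitive
// claims go to Groth16 (verified via Garaga), execution and constraint
// claims go to STARK (verified via Integrity), and the allocation-risk
// check for the rebalance flow is plain Cairo, computed in-contract.

type ProofSystem = "groth16-garaga" | "stark-integrity" | "onchain-cairo";

const routing: Record<string, ProofSystem> = {
  riskBelowThreshold: "groth16-garaga",   // hides the raw model score
  anomalyCheck: "groth16-garaga",         // hides the analysis
  confidentialTransfer: "groth16-garaga", // hides the amount
  constraintProof: "stark-integrity",     // intent/constraint satisfaction
  executionReceipt: "stark-integrity",    // auditable execution record
  allocationRisk: "onchain-cairo",        // rebalance flow, computed in-contract
};

for (const [claim, system] of Object.entries(routing)) {
  console.log(`${claim} -> ${system}`);
}
```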