Verifiable AI Risk Engine
The AI Risk Engine analyzes DeFi protocols in real time and generates allocation recommendations. Every recommendation is committed on-chain before execution, creating an immutable audit trail.
Why Verifiable AI?
AI systems managing capital face a trust problem. Users must trust that:

- Correct model: the AI used the model it claims to use, not a manipulated version.
- Real data: the AI based its decision on actual market data, not fabricated inputs.
- Policy adherence: the AI followed DAO-defined constraints, not its own preferences.
Obsqra.fi solves this with a commit-reveal pattern: the AI commits to its recommendation on-chain before execution. This creates cryptographic proof that the AI's decision was made at a specific time with specific reasoning.
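The pattern can be illustrated with a minimal off-chain sketch. Here sha256 stands in for the on-chain keccak256 and simple string packing stands in for Solidity ABI encoding; both are simplifications for illustration, not the contract's actual encoding.

```python
import hashlib

def commitment(aave_bps: int, lido_bps: int, compound_bps: int,
               reason: str, timestamp: int) -> bytes:
    """Hash all decision parameters into a single commitment."""
    packed = f"{aave_bps}|{lido_bps}|{compound_bps}|{reason}|{timestamp}"
    return hashlib.sha256(packed.encode()).digest()

def verify(stored: bytes, aave_bps: int, lido_bps: int, compound_bps: int,
           reason: str, timestamp: int) -> None:
    """Mirror the contract's check: the recomputed hash must match the stored one."""
    if commitment(aave_bps, lido_bps, compound_bps, reason, timestamp) != stored:
        raise ValueError("CommitmentMismatch")
```

The hash is stored before execution; at execution time the parameters are revealed and rehashed, so the recommendation cannot be altered after the fact without the mismatch being detected.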
How It Works
1. Data collection: DefiLlama + CoinGecko APIs
2. Risk scoring: multi-factor analysis
3. Commitment: hash commitment on-chain
4. Execution: verify and apply changes
Step 1: Data Collection
The AI service fetches real-time data from trusted sources:
- DefiLlama: Protocol TVL, TVL changes, category data
- CoinGecko: Token prices, 7-day volatility, market cap
- On-chain: Current allocations, pool TVL, recent events
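The collection step can be sketched as follows. The field names (`tvl`, `change_7d`, `category`) and the endpoint path are illustrative assumptions, not a documented schema; check the current DefiLlama API docs for the exact payload shape.

```python
import json
from urllib.request import urlopen

def snapshot(raw: dict) -> dict:
    """Reduce a raw protocol payload to the fields the risk engine uses."""
    return {
        "tvl": float(raw["tvl"]),
        "tvl_change_7d": float(raw.get("change_7d", 0.0)),
        "category": raw.get("category", "unknown"),
    }

def fetch_protocol(slug: str) -> dict:
    # DefiLlama serves per-protocol data from api.llama.fi; the exact
    # endpoint and response shape should be verified against its docs.
    with urlopen(f"https://api.llama.fi/protocol/{slug}") as resp:
        return snapshot(json.load(resp))
```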
Step 2: Risk Scoring
Each protocol receives a risk score (0-100) based on multiple factors:
| Factor | Weight | Description |
|---|---|---|
| TVL | 25% | Larger TVL = lower risk (more battle-tested) |
| Volatility | 25% | 7-day price volatility of underlying tokens |
| Utilization | 20% | For lending protocols, higher = more risk |
| Age | 15% | Older protocols have longer track records |
| Audit Status | 15% | Multiple audits reduce risk score |
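The weighted combination can be sketched directly from the table, assuming each factor has already been normalized to a 0-100 sub-score where higher means riskier (the normalization itself is not specified here):

```python
# Weights mirror the factor table above
WEIGHTS = {
    "tvl": 0.25,
    "volatility": 0.25,
    "utilization": 0.20,
    "age": 0.15,
    "audit": 0.15,
}

def risk_score(factors: dict) -> float:
    """Combine per-factor sub-scores (0-100 each) into a single 0-100 risk score."""
    return sum(weight * factors[name] for name, weight in WEIGHTS.items())
```

Because the weights sum to 1, the output stays in the same 0-100 range as the inputs.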
Step 3: Commitment
Before execution, the AI commits a hash of its recommendation:
```solidity
// Commitment hash includes all decision parameters
commitmentHash = keccak256(abi.encode(
    aaveAllocation,     // e.g., 4000 (40%)
    lidoAllocation,     // e.g., 3500 (35%)
    compoundAllocation, // e.g., 2500 (25%)
    reason,             // "Low correlation..."
    timestamp           // Block timestamp
));
```

Step 4: Execution and Verification
When the allocation is applied, the contract verifies the commitment:
```solidity
// Contract verifies before applying
require(
    keccak256(abi.encode(aave, lido, compound, reason, timestamp))
        == storedCommitment,
    "CommitmentMismatch"
);
```

Risk Scoring in Detail
The risk model produces a score from 0 (lowest risk) to 100 (highest risk) for each protocol. Here's how current protocols typically score:
| Protocol | Typical Score | Risk Level | Key Factors |
|---|---|---|---|
| Aave | 25-35 | Low | High TVL, multiple audits, long history |
| Lido | 30-45 | Low-Medium | Staking risk, validator performance |
| Compound | 25-40 | Low-Medium | Established, utilization-dependent |
| Curve | 35-50 | Medium | Complex pools, impermanent loss |
| Yearn | 40-55 | Medium | Strategy risk, composability |
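A small helper can bucket a numeric score into the labels used above. The cut-offs here are illustrative assumptions inferred from the (overlapping) typical ranges in the table, not protocol constants:

```python
def risk_level(score: float) -> str:
    """Map a 0-100 risk score to a label; thresholds are illustrative."""
    if score < 30:
        return "Low"
    if score < 45:
        return "Low-Medium"
    if score < 60:
        return "Medium"
    return "High"
```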
Allocation Algorithm
The AI optimizes for risk-adjusted returns. The core algorithm:
```python
# Risk-adjusted allocation: higher APY and lower risk -> higher score
for protocol in protocols:  # aave, lido, compound
    protocol.score = protocol.apy / (protocol.riskScore + 1)

totalScore = sum(p.score for p in protocols)

# Normalize to basis points (10000 = 100%)
for protocol in protocols:
    protocol.allocation = protocol.score / totalScore * 10000
```

This means protocols with higher APY and lower risk get larger allocations. The algorithm naturally diversifies because concentrating in one protocol would increase overall risk.
Example Calculation
| Protocol | APY | Risk Score | Adj. Score | Allocation |
|---|---|---|---|---|
| Aave | 3.5% | 30 | 3.5 / 31 = 0.113 | ≈38.5% |
| Lido | 4.2% | 40 | 4.2 / 41 = 0.102 | ≈34.9% |
| Compound | 2.8% | 35 | 2.8 / 36 = 0.078 | ≈26.5% |
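The arithmetic can be checked directly. With these inputs the raw shares come out to roughly 38.5% / 34.9% / 26.5% before any rounding applied to on-chain weights:

```python
# Reproducing the worked example above
protocols = {
    "aave":     {"apy": 3.5, "risk": 30},
    "lido":     {"apy": 4.2, "risk": 40},
    "compound": {"apy": 2.8, "risk": 35},
}

# Adjusted score = APY / (risk score + 1)
adjusted = {name: p["apy"] / (p["risk"] + 1) for name, p in protocols.items()}
total = sum(adjusted.values())

# Normalize to percentages
allocation_pct = {name: score / total * 100 for name, score in adjusted.items()}
```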
On-Chain Audit Trail
Every AI decision emits events that create a permanent, queryable audit trail:
```solidity
event RecommendationCommitted(
    uint256 indexed recommendationId,
    bytes32 commitmentHash,
    uint256 timestamp
);

event AIVerifiedAllocation(
    uint256 indexed recommendationId,
    uint256 aaveWeight,
    uint256 lidoWeight,
    uint256 compoundWeight,
    string reason
);
```

Anyone can query these events to verify that every allocation change was preceded by a valid AI commitment. This provides regulatory-grade auditability without sacrificing user privacy.
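An off-chain audit over decoded event logs could look like the following sketch. Plain dicts stand in for whatever a real log-decoding client returns; the field names mirror the event definitions above:

```python
def audit_trail_valid(commits: list, allocations: list) -> bool:
    """Every applied allocation must reference a previously committed id."""
    committed = {e["recommendationId"] for e in commits}
    return all(a["recommendationId"] in committed for a in allocations)
```

A stricter check could also compare timestamps to confirm each commitment landed before its allocation was applied.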
Roadmap: Full ZK-ML
The current commit-reveal pattern is "Verifiable AI-Lite." The full vision includes ZK proofs of ML model execution:
Current: Commit-Reveal
- Pro: Proves the AI committed before execution
- Pro: Prevents post-hoc justification
- Pro: Low gas cost (~50k gas)
- Con: Doesn't prove model execution
Future: ZK-ML Proofs
- Pro: Proves the exact model was run
- Pro: Proves inputs were valid
- Pro: Proves constraints were checked
- Con: Higher gas cost (~500k gas)