Verifiable AI Risk Engine

The AI Risk Engine analyzes DeFi protocols in real time and generates allocation recommendations. Every recommendation is committed on-chain before execution, creating an immutable audit trail.

Why Verifiable AI?

AI systems managing capital face a trust problem. Users must trust that:

Correct Model

The AI used the model it claims to use, not a manipulated version.

Real Data

The AI based its decision on actual market data, not fabricated inputs.

Policy Adherence

The AI followed DAO-defined constraints, not its own preferences.

Obsqra.fi solves this with a commit-reveal pattern: the AI commits to its recommendation on-chain before execution. This creates cryptographic proof that the AI's decision was made at a specific time with specific reasoning.

How It Works

1. Data Fetch: DefiLlama + CoinGecko APIs
2. Risk Scoring: Multi-factor analysis
3. Commit: Hash commitment on-chain
4. Execute: Verify and apply changes

Step 1: Data Collection

The AI service fetches real-time data from trusted sources:

  • DefiLlama: Protocol TVL, TVL changes, category data
  • CoinGecko: Token prices, 7-day volatility, market cap
  • On-chain: Current allocations, pool TVL, recent events
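As a concrete illustration, here is a minimal Python sketch of the fetch step. It assumes the public DefiLlama and CoinGecko REST endpoints; the function names and the exact fields read are illustrative, not the actual service code.

import requests

def fetch_protocol_tvl(slug: str) -> float:
    # DefiLlama: current TVL for a protocol, e.g. slug="aave"
    resp = requests.get(f"https://api.llama.fi/tvl/{slug}", timeout=10)
    resp.raise_for_status()
    return float(resp.json())

def fetch_prices(ids: list[str]) -> dict:
    # CoinGecko: spot prices in USD for the given coin ids
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": ",".join(ids), "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

tvl = fetch_protocol_tvl("aave")
prices = fetch_prices(["aave", "lido-dao", "compound-governance-token"])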

Step 2: Risk Scoring

Each protocol receives a risk score (0-100) based on multiple factors:

Factor        Weight  Description
TVL           25%     Larger TVL = lower risk (more battle-tested)
Volatility    25%     7-day price volatility of underlying tokens
Utilization   20%     For lending protocols, higher utilization = more risk
Age           15%     Older protocols have longer track records
Audit Status  15%     Multiple audits reduce the risk score
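To make the weighting concrete, here is a minimal sketch of how the factors could be combined. It assumes each factor has already been normalized to a 0-100 sub-score where higher means riskier (so a large TVL yields a low tvl sub-score); the production model's exact normalization is not documented here.

# Weights from the table above (sum to 1.0)
WEIGHTS = {
    "tvl": 0.25,
    "volatility": 0.25,
    "utilization": 0.20,
    "age": 0.15,
    "audit": 0.15,
}

def risk_score(factors: dict[str, float]) -> float:
    """Combine per-factor sub-scores (each 0-100, higher = riskier)
    into a single 0-100 risk score."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# Example: a large, long-lived, well-audited protocol
print(risk_score({
    "tvl": 10,         # very large TVL -> small risk contribution
    "volatility": 40,  # moderate 7-day token volatility
    "utilization": 30,
    "age": 20,         # long track record
    "audit": 15,       # multiple audits
}))  # prints 23.75, i.e. a low-risk protocol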

Step 3: Commitment

Before execution, the AI commits a hash of its recommendation:

// Commitment hash includes all decision parameters
bytes32 commitmentHash = keccak256(abi.encode(
  aaveAllocation,     // e.g., 4000 (40%, in basis points)
  lidoAllocation,     // e.g., 3500 (35%)
  compoundAllocation, // e.g., 2500 (25%)
  reason,             // "Low correlation..."
  timestamp           // Block timestamp
));
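The service can reproduce this hash off-chain before sending the commit transaction. A minimal Python sketch with web3.py and eth-abi (assuming eth-abi v4+ and that the contract ABI-encodes the fields in exactly this order; the allocation values are the example figures above, the timestamp is an arbitrary illustration):

from eth_abi import encode  # eth-abi v4+
from web3 import Web3

def commitment_hash(aave: int, lido: int, compound: int,
                    reason: str, timestamp: int) -> bytes:
    # Must match the contract's abi.encode(...) order and types
    encoded = encode(
        ["uint256", "uint256", "uint256", "string", "uint256"],
        [aave, lido, compound, reason, timestamp],
    )
    return Web3.keccak(encoded)

h = commitment_hash(4000, 3500, 2500, "Low correlation...", 1_700_000_000)
print(h.hex())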

Step 4: Execution and Verification

When the allocation is applied, the contract verifies the commitment:

// Contract verifies before applying
require(
  keccak256(abi.encode(aave, lido, compound, reason, timestamp))
    == storedCommitment,
  "CommitmentMismatch"
);

Risk Scoring in Detail

The risk model produces a score from 0 (lowest risk) to 100 (highest risk) for each protocol. Here's how current protocols typically score:

Protocol  Typical Score  Risk Level  Key Factors
Aave      25-35          Low         High TVL, multiple audits, long history
Lido      30-45          Low-Medium  Staking risk, validator performance
Compound  25-40          Low-Medium  Established, utilization-dependent
Curve     35-50          Medium      Complex pools, impermanent loss
Yearn     40-55          Medium      Strategy risk, composability
ℹ️ Dynamic Scoring: Risk scores update in real time based on market conditions. A protocol's score can spike during high volatility, exploit news, or unusual utilization patterns. The AI service polls data every 5 minutes and recalculates scores.
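A minimal sketch of that polling loop (the 5-minute interval comes from the note above; fetch and rescore are hypothetical hooks standing in for Steps 1 and 2):

import time

POLL_INTERVAL_SECONDS = 5 * 60  # recalculate every 5 minutes

def poll_forever(fetch, rescore):
    # fetch and rescore are placeholders for Steps 1 and 2 above
    while True:
        scores = rescore(fetch())
        print(scores)
        time.sleep(POLL_INTERVAL_SECONDS)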

Allocation Algorithm

The AI optimizes for risk-adjusted returns. The core algorithm:

# Risk-adjusted score: reward APY, penalize risk
for protocol in protocols:
    protocol.score = protocol.apy / (protocol.riskScore + 1)

# Normalize to basis points (10000 = 100%)
totalScore = sum(p.score for p in protocols)
for protocol in protocols:
    protocol.allocation = protocol.score / totalScore * 10000

Protocols with higher APY and lower risk receive larger allocations. Because every protocol with a positive adjusted score gets a proportional share, the allocation is naturally diversified rather than concentrated in the single highest-scoring protocol, which limits concentration risk.

Example Calculation

Protocol  APY   Risk Score  Adj. Score        Allocation
Aave      3.5%  30          3.5 / 31 = 0.113  40%
Lido      4.2%  40          4.2 / 41 = 0.102  35%
Compound  2.8%  35          2.8 / 36 = 0.078  25%
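Running the normalization on these numbers reproduces the table. The raw normalized shares come out to roughly 38.5% / 34.9% / 26.5%; the 40 / 35 / 25 allocations shown are those shares rounded to headline targets. A quick check in Python:

protocols = {"aave": (3.5, 30), "lido": (4.2, 40), "compound": (2.8, 35)}

adj = {name: apy / (risk + 1) for name, (apy, risk) in protocols.items()}
total = sum(adj.values())

for name, score in adj.items():
    print(f"{name}: adj={score:.3f}, share={score / total * 100:.1f}%")
# aave: adj=0.113, share=38.5%
# lido: adj=0.102, share=34.9%
# compound: adj=0.078, share=26.5%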

On-Chain Audit Trail

Every AI decision emits events that create a permanent, queryable audit trail:

event RecommendationCommitted(
    uint256 indexed recommendationId,
    bytes32 commitmentHash,
    uint256 timestamp
);

event AIVerifiedAllocation(
    uint256 indexed recommendationId,
    uint256 aaveWeight,
    uint256 lidoWeight,
    uint256 compoundWeight,
    string reason
);

Anyone can query these events to verify that every allocation change was preceded by a valid AI commitment. This provides regulatory-grade auditability without sacrificing user privacy.
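For example, a minimal web3.py sketch that pulls every RecommendationCommitted event (the RPC URL and contract address are placeholders; the event signature matches the definition above):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder RPC

# keccak256 of the event signature identifies the log topic
topic = Web3.keccak(text="RecommendationCommitted(uint256,bytes32,uint256)")

logs = w3.eth.get_logs({
    "address": "0x0000000000000000000000000000000000000000",  # placeholder
    "topics": [topic],
    "fromBlock": 0,
    "toBlock": "latest",
})

for log in logs:
    rec_id = int.from_bytes(log["topics"][1], "big")  # indexed recommendationId
    print(rec_id, log["transactionHash"].hex())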

Roadmap: Full ZK-ML

The current commit-reveal pattern is "Verifiable AI-Lite." The full vision includes ZK proofs of ML model execution:

Current: Commit-Reveal

  • + Proves AI committed before execution
  • + Prevents post-hoc justification
  • + Low gas cost (~50k gas)
  • - Doesn't prove model execution

Future: ZK-ML Proofs

  • + Proves exact model was run
  • + Proves inputs were valid
  • + Proves constraints were checked
  • - Higher gas cost (~500k gas)

Foundation for the Future: The commit-reveal architecture is designed to upgrade to ZK-ML without breaking changes. The same contract interfaces will accept ZK proofs instead of commitment hashes when the proving infrastructure is ready.