
Dependency Exposure Is Part of the Attack Surface

Web3 systems rarely fail inside a single contract. They fail at the boundaries between contracts, integrations, and external operators.
Dependencies define what you can detect, what you can coordinate, and how quickly you can respond under stress.
What Counts as a Dependency in Web3
Dependencies include any external system or actor your protocol assumes will behave correctly or respond on time.

Examples

Oracles and price feeds
Bridges and cross-chain messaging
Relayers and keepers
RPC providers and indexing services
CEX and market maker workflows
Custody and key management infrastructure
Off-chain services that trigger on-chain actions
Why Dependency Risk Behaves Differently Than Code Risk
You can patch your code. You cannot patch someone else's incentives or their incident timelines.
When a dependency degrades, your system enters a mode you may not have designed for.
Failure properties
  • Asymmetric visibility. You see symptoms, not root causes.
  • Coordination latency. Response requires external alignment.
  • Blame ambiguity. Ownership is unclear during incidents.
  • Cascades. One degraded dependency can invalidate other assumptions.
Common Dependency Failure Patterns
Most dependency incidents appear as valid system behavior until they compound.

01. Oracle Behavior Under Stress

Price feeds can lag, diverge, or reflect manipulated markets.
Contracts may execute correctly while outcomes become economically exploitable.

02. Bridge and Messaging Degradation

Message delays or halts can freeze flows and create partial states.
Users experience stuck funds and uncertainty.
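Partial states surface earlier if every outbound message is tracked against a latency budget and overruns are flagged before users report them. The 30-minute budget and the `PendingMessage` shape below are illustrative assumptions:

```python
from dataclasses import dataclass

# Assumed normal delivery budget for the bridge; tune per integration.
EXPECTED_DELIVERY_S = 1800

@dataclass
class PendingMessage:
    msg_id: str     # identifier of the cross-chain message
    sent_at: float  # unix timestamp when it was submitted

def stuck_messages(pending: list[PendingMessage], now: float) -> list[str]:
    """IDs of messages whose in-flight time exceeds the latency budget."""
    return [m.msg_id for m in pending if now - m.sent_at > EXPECTED_DELIVERY_S]
```

A flagged message does not prove the bridge is down, but it converts "users experience stuck funds" into an alert your team sees first.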

03. Relayer and Keeper Failures

Automations can stop during volatility.
Systems that rely on timely triggers can fail silently.
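Silent failure is detectable with a heartbeat check: if the last successful run is older than a multiple of the expected cadence, presume the automation is down and escalate. The grace factor here is an assumed value:

```python
def keeper_is_healthy(last_run_at: float, expected_interval_s: float,
                      now: float, grace_factor: float = 2.0) -> bool:
    """A keeper that misses grace_factor times its cadence is presumed down.

    grace_factor is an assumed default; too low causes false alarms during
    normal jitter, too high delays detection during volatility.
    """
    return (now - last_run_at) <= expected_interval_s * grace_factor
```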

04. Off Chain Service Drift

Indexers and backend services can diverge from chain reality.
Teams often learn about the divergence only after users have lost trust.
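A basic drift check compares the indexer's head block against the chain head on a schedule; the lag tolerance below is an assumed figure that would need tuning per chain and per indexer:

```python
# Assumed tolerance before indexed data is treated as stale.
MAX_LAG_BLOCKS = 50

def indexer_in_sync(chain_head: int, indexer_head: int) -> bool:
    """True while the indexer trails the chain by an acceptable margin."""
    return chain_head - indexer_head <= MAX_LAG_BLOCKS
```

Head-lag is the cheapest drift signal; deeper reconciliation (spot-checking indexed balances against chain state) catches corruption that lag alone cannot.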

05. CEX and Market Maker Workflows

Listings, deposits, withdrawals, and market-making behaviors shift risk outside your operational loop.
Recovery can depend on parties you do not control.
What Dependency Exposure Locks In
Once integrations are live, dependency changes often require migrations, partner coordination, or user-facing disruptions.
In practice, teams accept dependency assumptions long after those assumptions have stopped being true.

Locked areas

Monitoring coverage and blind spots
Escalation paths and response timelines
Emergency actions that require partner coordination
Degraded mode user experience
Public trust during partial outages
The Dependency Exposure List Teams Need
A useful dependency exposure list makes assumptions explicit.
It clarifies what you depend on, why, and what happens when it degrades.

Elements

Dependency name and role in system behavior
Expected behavior and failure modes
Owner and escalation contact path
Detection signals and monitoring sources
Degraded mode behavior and user impact
Workarounds and intervention constraints
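The elements above map naturally onto a structured record. This is a sketch of one possible shape (the field names and the example values are illustrative, not a standard), useful as a starting schema for a team's own register:

```python
from dataclasses import dataclass, field

@dataclass
class DependencyEntry:
    name: str                          # e.g. "ETH/USD price feed"
    role: str                          # system behavior that relies on it
    expected_behavior: str             # what "healthy" looks like
    failure_modes: list[str]           # lag, halt, divergence, compromise
    owner: str                         # escalation contact path
    detection_signals: list[str]       # monitoring sources that cover it
    degraded_mode: str                 # behavior and user impact when it fails
    workarounds: list[str] = field(default_factory=list)  # interventions and their constraints
```

Keeping the register as data rather than prose lets it drive dashboards and incident runbooks instead of going stale in a wiki.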
Where Teams Usually Look Next
Once dependency exposure is treated as risk surface, teams typically validate security boundaries, intervention paths, and operational readiness for degraded states.