Cross-chain bridge exploits explained: how attackers mint, unlock, or reroute funds
Most bridge losses come from broken verification, stolen signing authority, or compromised user routing to a fake front-end.
Cross-chain bridge exploits happen when an attacker makes a bridge release or mint assets without valid backing, or tricks users into sending funds to attacker-controlled contracts. The common thread is simple: a bridge is an accounting system across chains, and the exploit breaks its proof, its authority, or its instruction path.
Key Takeaways
- Cross-chain bridges typically lock assets on one chain and mint a wrapped claim on another, so an exploit often creates unbacked tokens or drains the locked collateral.
- Bridge hacks cluster into three failures: fake receipts (verification failure), forged approvals (key/validator or upgrade authority failure), and hijacked user routing (front-end or network-layer failure).
- Cross-chain bridges have been hacked for more than $2.8B, almost 40% of all Web3 value hacked, per DefiLlama figures cited by Chainlink.
- Bridge exploits account for more than half of all DeFi hacks, per Presto Research.
What cross-chain bridge exploits are (and why bridges are targeted)
In plain language, cross-chain bridge exploits look less like “coins moving between chains” and more like a broken receipt system. A typical cross-chain bridge flow locks tokens on Chain A and mints a wrapped token on Chain B, or uses liquidity pools to pay out on the destination chain. Either way, the bridge is deciding when it is allowed to mint, unlock, or release value based on some proof that something happened elsewhere.
That decision point is why bridges are targeted. Bridges concentrate two things attackers want in one place: a large pool of collateral (the locked vault or pooled liquidity) and a single “yes/no” control that authorizes minting or unlocking. Chainlink’s education hub cites DefiLlama figures showing bridges have been hacked for more than $2.8B, representing almost 40% of all Web3 value hacked. Presto Research adds that bridge exploits account for more than half of all DeFi hacks, which is why bridge risk belongs in any broader guide to what is defi.
In practice, the damage is not limited to the bridge’s vault. If the locked collateral is stolen, the wrapped asset on the destination chain can become effectively unbacked because redemption through the bridge no longer works. That is why experienced traders treat a wrapped token as credit exposure to the bridge’s collateral and control system, not as the “same asset on a different chain.”
How bridges work: the mechanics attackers try to break
A crypto bridge does two jobs, and every exploit attacks one of them. First, it observes Chain A and proves an event happened there. Second, it executes a matching action on Chain B, such as minting a wrapped token, releasing escrowed funds, or delivering a cross-chain message. ChainUp’s description is blunt: bridges do not teleport assets. They prove an event on the source chain and trigger a corresponding action on the destination chain.
In the common lock-and-mint model, the bridge locks the original token on the source chain and mints an equivalent wrapped token on the destination chain. Bridging back burns the wrapped token and unlocks the original. StartupDefense and Presto Research both describe this pattern, and it is the mental model that explains why “unbacked” is the default failure mode. If the bridge mints without a real lock, or unlocks without a real burn, supply and collateral stop matching.
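The lock-and-mint accounting above can be sketched as a toy model. This is an illustration only, not any real bridge's implementation: the class and method names are hypothetical, and real bridges split this logic across contracts on two chains plus an off-chain or on-chain verifier.

```python
class LockAndMintBridge:
    """Toy model of lock-and-mint accounting across two chains."""

    def __init__(self):
        self.locked_on_source = 0   # collateral vault on Chain A
        self.minted_on_dest = 0     # wrapped-token supply on Chain B

    def bridge_out(self, amount: int) -> None:
        # Lock on Chain A, then mint the wrapped claim on Chain B.
        self.locked_on_source += amount
        self.minted_on_dest += amount

    def bridge_back(self, amount: int) -> None:
        # Burn on Chain B, then unlock the original on Chain A.
        if amount > self.minted_on_dest:
            raise ValueError("cannot burn more than was minted")
        self.minted_on_dest -= amount
        self.locked_on_source -= amount

    def is_fully_backed(self) -> bool:
        # The invariant every exploit class breaks in some way:
        # wrapped supply must never exceed locked collateral.
        return self.minted_on_dest <= self.locked_on_source
```

A mint without a real lock, or an unlock without a real burn, is exactly a violation of the `is_fully_backed` invariant.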
Not every bridge mints wrapped assets. ChainUp also describes liquidity-network bridges that pay users from pools on the destination chain and settle later, plus more trust-minimized designs that verify the other chain’s state on-chain using light clients. The architecture matters because it tells a user what the bridge is using as proof and who can override it. That question is usually more predictive than whether the contracts have an audit badge.
The main exploit paths: what fails in practice
Bridge incidents map cleanly to three promises traders implicitly rely on. Promise one is verification: the bridge correctly verifies a real deposit or message. Promise two is authority: mint and unlock rights cannot be forged through stolen keys, compromised validators, or unsafe upgrades. Promise three is delivery: the user’s instruction reaches the real bridge, not an attacker’s lookalike.
Verification failures create “fake receipts.” Presto Research describes the common pattern as issuing assets on the destination chain and withdrawing them without a legitimate deposit on the source chain. Chainlink’s vulnerability list includes the same class of issue, where smart contract logic or verification steps are exploited to mint or withdraw without proper collateralization. This is why bridge bugs often look like accounting bugs, not like someone “breaking a blockchain.”
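The “fake receipt” pattern can be shown in a hedged sketch. All names here are hypothetical (`DepositClaim`, `verify_proof`, and `known_roots` stand in for whatever real proof a given bridge checks); the point is that the vulnerable handler credits a mint without ever verifying the claimed deposit.

```python
from dataclasses import dataclass

@dataclass
class DepositClaim:
    user: str
    amount: int
    proof: bytes  # should attest a real lock event on the source chain

def verify_proof(claim: DepositClaim, known_roots: set) -> bool:
    # Stand-in for real verification (signature checks, light-client
    # inclusion proofs, etc.). Here: proof must match a committed root.
    return claim.proof in known_roots

def mint_vulnerable(claim: DepositClaim, balances: dict) -> None:
    # BUG: mints on the destination chain without checking the proof —
    # the "fake receipt" pattern: issuance with no real deposit.
    balances[claim.user] = balances.get(claim.user, 0) + claim.amount

def mint_safe(claim: DepositClaim, balances: dict, known_roots: set) -> None:
    # Fix: refuse to mint unless the deposit proof actually verifies.
    if not verify_proof(claim, known_roots):
        raise PermissionError("no verified deposit on the source chain")
    balances[claim.user] = balances.get(claim.user, 0) + claim.amount
```

The bug is an accounting bug, not a broken blockchain: the destination chain happily executes whatever its contract logic allows.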
Authority failures are usually key or validator compromises. Chainlink highlights private key compromise as a major bridge vulnerability and lists incidents like Ronin (March 2022) and Harmony (June 2022) involving compromised multisig keys. Once an attacker controls enough signing power, the bridge can be made to approve withdrawals that look valid to the destination chain. Unsafe upgradability is a close cousin of this problem. If an admin key can change bridge logic without strong controls, the attacker does not need to find a bug. They can rewrite the rules.
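The authority failure reduces to a threshold check. A minimal sketch (validator names are hypothetical; the 5-of-9 threshold mirrors the Ronin figures cited below) shows why a destination chain cannot distinguish stolen keys from honest ones:

```python
def withdrawal_approved(signers: set, validator_set: set, threshold: int) -> bool:
    # A destination-chain check can only count valid signatures; it
    # cannot see whether the keys behind them are still honestly held.
    valid = signers & validator_set
    return len(valid) >= threshold

validators = {f"v{i}" for i in range(9)}  # a 9-validator committee

# Ronin-style failure: compromising 5 of 9 keys is indistinguishable,
# on-chain, from a legitimate 5-of-9 approval.
stolen = {"v0", "v1", "v2", "v3", "v4"}
assert withdrawal_approved(stolen, validators, threshold=5)
```

This is also why unsafe upgradability is the same risk class: an admin key that can rewrite `withdrawal_approved` itself needs no bug at all.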
Delivery failures are the category most explainers skip. Presto Research documents BGP hijacking incidents where users were redirected from a legitimate front-end to a phishing site that sent deposits to attacker-controlled contracts. This does not require breaking the bridge contracts at all. It exploits the fact that most users interact with a bridge through a website and wallet prompts, not by manually verifying contract addresses.
Why do bridges keep getting hacked?
Bridges keep getting hacked because they combine complexity with concentrated value. StartupDefense notes bridges rely on smart contracts, validators, and cross-chain communication components like oracles and relayers. Each component adds attack surface, and the bridge has to be correct on multiple chains at once.
Real-world loss data reinforces the incentive. Chainlink cites DefiLlama figures showing more than $2.8B hacked from bridges, almost 40% of all Web3 value hacked. Presto Research states bridge exploits account for more than half of all DeFi hacks. Attackers follow the payout, and bridges often hold a single vault of collateral or control a single minting authority that can be abused.
The other reason is that “security” is not one layer. A bridge can have solid on-chain code and still lose user funds through compromised keys, unsafe upgrades, or infrastructure attacks like BGP hijacking. That is why the fastest bridge risk assessment starts with two questions: what is used as proof, and who can override it.
What is a single-signer bridge?
A single-signer bridge is a bridge where one private key, or one entity effectively controlling the signing threshold, can authorize cross-chain messages or withdrawals. In practice, it is the extreme end of the same risk spectrum Chainlink describes under private key management. If the signer is compromised, the attacker can approve arbitrary releases or mints.
Some bridges advertise multisigs or validator committees, but operational reality can still collapse into single-signer risk if keys are poorly distributed, if one operator controls enough keys, or if the upgrade admin can bypass normal checks. Chainlink’s examples include incidents where a small number of multisig keys were enough to drain a bridge, and it also notes cases where compromised keys were under centralized control.
For users, the implication is simple. A single-signer bridge turns a cross-chain transfer into unsecured exposure to one key’s operational security. That can be acceptable for small, time-bounded usage, but it is a different risk class than a design where verification is more trust-minimized.
What is a DVN and why does it matter?
What a DVN is, and why it matters, comes down to who verifies cross-chain messages. A DVN (decentralized verifier network) is a design pattern used in some cross-chain messaging systems to validate that a message observed on one chain should be accepted on another. The point is not the acronym. The point is that the verifier layer is the bridge’s “proof engine,” and that is where fake receipts are either stopped or allowed through.
Messaging-focused bridges add flexibility because they can carry intent, not just tokens. ChainUp describes cross-chain messaging layers that relay arbitrary instructions using oracle and relayer pathways with application-level validation. That flexibility is also a risk surface. If the verifier layer is weak or overly centralized, a single compromised pathway can approve messages that cause downstream contracts to mint, unlock, or update balances.
This is why a name like the LayerZero protocol matters operationally. When a bridge is really a messaging layer, the user is trusting the message verification configuration as much as the token contract. The practical check is to identify the verifier set, the signing threshold, and the upgrade controls that can change those rules.
What were the biggest bridge hacks ever?
The biggest bridge hacks ever are dominated by a handful of incidents that show the three failure modes in action. StartupDefense lists several major events and approximate losses: Ronin (2022) at about $620M, Poly Network (2021) at about $610M with funds later returned, Nomad (2022) at about $190M, and the BNB Chain bridge (2022) at about $100M.
Ronin is the cleanest example of authority failure. Presto Research describes a validator takeover where 5 of 9 validators were compromised, enabling a transfer of 173,600 ETH and 25.5M USDC, totaling about $625M in losses. The slight difference versus the ~$620M figure reflects common reporting variance from pricing and accounting, not a different underlying mechanism.
Qubit is a verification failure example. Presto Research describes how a null token address input bypassed validation and minted about $185M of qXETH, with losses around $80M after swaps. Nomad, as listed by StartupDefense, is a reminder that a single smart contract mistake can turn a bridge into a public withdrawal machine.
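The Qubit-style bypass can be illustrated with a simplified sketch. This is not the actual Qubit code; the function names and the null-address branch are illustrative of the reported pattern, in which a special-cased input skips the step that moves real collateral while the deposit is still credited.

```python
NULL_ADDRESS = "0x0000000000000000000000000000000000000000"

def pull_tokens(token: str, user: str, amount: int) -> bool:
    # Stand-in for the on-chain transfer that moves real collateral
    # into the bridge; in this toy model it succeeds for any real token.
    return token != NULL_ADDRESS

def deposit_vulnerable(token: str, user: str, amount: int, credits: dict) -> None:
    # BUG (simplified): the null address falls into a special-cased
    # branch that skips the collateral transfer, yet the deposit is
    # still credited — issuance without any real backing.
    if token != NULL_ADDRESS:
        if not pull_tokens(token, user, amount):
            raise ValueError("transfer failed")
    credits[user] = credits.get(user, 0) + amount

def deposit_safe(token: str, user: str, amount: int, credits: dict) -> None:
    # Fix: reject the null address outright before crediting anything.
    if token == NULL_ADDRESS:
        raise ValueError("null token address rejected")
    if not pull_tokens(token, user, amount):
        raise ValueError("transfer failed")
    credits[user] = credits.get(user, 0) + amount
```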
How does a bridge exploit spread to lending protocols?
A bridge exploit spreads to lending protocols through collateral quality, not through magic “cross-chain infection.” When a bridge is drained, the wrapped token on the destination chain can become effectively unbacked because redemption is broken, as Presto Research explains. If that wrapped token is accepted as collateral in a lending market, the lending market is now pricing a claim that may no longer be redeemable.
The mechanical path is straightforward. Attackers who mint unbacked wrapped assets can sell them for other assets, or deposit them as collateral to borrow real liquidity. Even if the attacker does not touch a lending protocol, other users holding the wrapped asset can rush to exit, pushing the token off peg and forcing liquidations. That is classic defi contagion risk, and it is why bridge incidents often show up as broader liquidity stress across an ecosystem.
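The contagion path can be made concrete with a simplified health-factor calculation. The numbers and the 0.8 liquidation threshold are hypothetical, not from the sources; they only show how a wrapped token falling off peg flips a previously safe position into a liquidatable one.

```python
def health_factor(collateral_amount: float, collateral_price: float,
                  debt_value: float, liquidation_threshold: float = 0.8) -> float:
    # Simplified lending-market check: a position is liquidatable when
    # risk-adjusted collateral value no longer covers the debt (< 1.0).
    return (collateral_amount * collateral_price * liquidation_threshold) / debt_value

# Before the bridge exploit: wrapped token trades at peg ($1.00).
hf_before = health_factor(1000, 1.00, 600)   # 1.33 — comfortably safe
# After: redemption breaks and the wrapped token trades at $0.40.
hf_after = health_factor(1000, 0.40, 600)    # 0.53 — liquidatable
assert hf_before > 1.0 > hf_after
```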
This is also why “TVL is high” is not a safety argument for bridges. Presto Research notes that higher value locked does not strengthen a bridge’s security. It mainly increases the incentive to attack and increases the blast radius when the wrapped asset’s credibility breaks.
How to check if a bridge is safe before using it
Checking whether a bridge is safe before using it starts with classifying the bridge’s promises. First, identify what the bridge uses as proof on the destination chain. ChainUp describes verification mechanisms ranging from trusted validators and multisig committees to more trust-minimized light clients. A user does not need to read code to ask, “Is this proof mostly off-chain signatures, or on-chain verification of the other chain’s state?”
Second, map the authority surface area. StartupDefense emphasizes validator safety and recommends multi-signature controls, decentralization of validators, and monitoring for anomalous activity. Chainlink’s framework adds that private key management and unsafe upgradability are recurring failure points. Practically, the key questions are who can sign, what threshold is required, and who can change the rules through upgrades.
Third, assume the front-end can lie. Presto Research documents BGP hijacking losses at KLAYswap (about $1.9M) and Celer cBridge (about $235k) where users were redirected to phishing sites. Operationally, that means using bookmarked URLs, verifying contract addresses from official documentation, and being extra cautious during outages or sudden announcements when attackers often try to exploit confusion.
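One way to operationalize “assume the front-end can lie” is to keep a locally stored allowlist of contract addresses copied from official documentation and check every wallet prompt against it. A minimal sketch, with placeholder bridge names and addresses (nothing here refers to a real deployment):

```python
# Hypothetical allowlist maintained by the user from official docs.
OFFICIAL_BRIDGE_CONTRACTS = {
    "examplebridge": "0x1111111111111111111111111111111111111111",
}

def safe_to_sign(bridge_name: str, address_in_wallet_prompt: str) -> bool:
    # Compare the address the wallet is about to approve against the
    # address recorded from official documentation. A hijacked front-end
    # can change the web page, but not this locally stored record.
    expected = OFFICIAL_BRIDGE_CONTRACTS.get(bridge_name.lower())
    return expected is not None and address_in_wallet_prompt.lower() == expected
```

The check is deliberately dumb: a BGP hijack or DNS poisoning changes what the page serves, not what the user wrote down beforehand.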
Finally, treat bridged assets as short-duration exposure. The risk is not just the bridge contract. It is the bridge’s collateral, its signing and upgrade controls, and the infrastructure that delivers the transaction. That framing plugs directly back into the main what is defi guide because bridges are one of the fastest ways for a single failure to cascade across multiple protocols.
Common misconceptions
“Bridges move tokens between chains” is the misconception that causes most risk blindness. Most bridges lock or burn on one chain and mint or release on another, as StartupDefense, Presto Research, and ChainUp describe. The user is holding a claim whose value depends on the bridge’s ability to honor redemption.
“If the smart contracts are audited, it’s safe” confuses code quality with system security. Chainlink highlights private key compromise and unsafe upgradability as major vulnerabilities, and those can bypass correct code paths. A bridge can be perfectly coded and still be drained if the signing authority is compromised.
“Bridge exploits are always smart contract hacks” misses the delivery layer. Presto Research’s BGP hijacking examples show user funds can be stolen without breaking the bridge contracts at all. The practical takeaway is to classify every incident as a verification failure, an authority failure, or an instruction-path failure. That lens is usually enough to understand the risk in under a minute, even when the post-mortem is full of jargon.
Frequently Asked Questions
What happens to wrapped tokens after a bridge gets hacked?
If the locked collateral on the source chain is stolen, the wrapped token on the destination chain can become effectively unbacked because redemption through the bridge no longer works. Presto Research describes this as the core risk of lock-and-mint designs. In practice, the wrapped asset can trade like a claim on nothing when the bridge’s vault is gone.
How does a false deposit bridge exploit work?
A false deposit exploit mints or releases assets on the destination chain without a legitimate deposit on the source chain. Presto Research describes this pattern as exploiting a logic flaw to trigger issuance without valid collateral. Chainlink also lists smart contract and verification failures that allow minting or withdrawals without proper backing.
Why are multisig and validator bridges risky?
They are risky because control of cross-chain messages often reduces to control of private keys. Chainlink lists private key compromise as a major bridge vulnerability and points to incidents like Ronin and Harmony involving compromised multisig keys. If enough keys are taken, the bridge can approve withdrawals that look valid on-chain.
Can a bridge be exploited without hacking the smart contracts?
Yes. Presto Research describes BGP hijacking attacks that redirect users to phishing front-ends, causing deposits to go to attacker-controlled contracts even if the bridge contracts are not broken. This is an infrastructure and user-path failure rather than an on-chain exploit.
What is the safest way to bridge between chains?
The provided sources do not rank a single “safest” bridge, but they do show what to evaluate: what the bridge uses as proof, who controls signing authority, and how upgrades are governed. ChainUp explains that verification can rely on trusted validators or more trust-minimized light clients. Chainlink and StartupDefense emphasize that private key management, upgradability controls, and monitoring materially affect real-world risk.