
When the Hackers Are Nation-State Teams and AI: A Security Self-Assessment Checklist for Crypto Projects in 2026

链上启示录
Guest Columnist
2025-12-11 07:05
This article is about 7,351 words; reading it in full takes about 11 minutes.
The five-layer battlefield of crypto security, from AI-assisted on-chain attacks to hackers infiltrating via job applications.
AI Summary
  • Core thesis: crypto security threats are shifting from on-chain vulnerabilities to systemic risk.
  • Key points:
    1. Single-incident losses at CEXs far exceed those in DeFi, with privilege abuse as the main cause.
    2. AI can automate the attack chain at a cost as low as $1.22 per contract.
    3. Nation-state hackers infiltrate organizations through remote recruitment, threatening organizational security.
  • Market impact: forces the industry to build multi-layered, continuously operating security frameworks.
  • Time horizon: medium-term impact

In 2025, the crypto industry appears much more "mature" than it was a few years ago: contract templates are more standardized, auditing firms are lining up, and AI features are gradually appearing in various security tools. On the surface, risk seems to be systematically under control.

What is truly changing is the structure of the attack.

The number of on-chain vulnerabilities is decreasing, but a single incident can still wipe out an entire organization's balance sheet; AI is currently mainly used as a simulation and auditing tool, but it is already quietly changing the iteration speed of attack scripts; and off-chain, state-sponsored hackers are beginning to regard remote recruitment, freelance platforms, and enterprise collaboration software as their new main battleground.

In other words, the security issue is no longer just about "whose code is cleaner," but rather "whose system is more resistant to misuse and penetration." For many teams, the biggest risks are not on the blockchain, but at the layer they don't really consider a security issue: accounts and permissions, people and processes, and how these elements fail under pressure.

This article attempts to provide a simplified map that remains valid until 2026: from on-chain logic to accounts and keys, to teams, supply chains, and post-event response—the security issues facing the crypto industry are transforming from a "vulnerability list" into a set of actionable frameworks that must be implemented.

On-chain attacks: becoming less frequent, but increasingly expensive.

This year's crypto attacks exhibit a new asymmetry: the number of incidents has decreased, but the destructive power of each attack has significantly increased. SlowMist's mid-2025 report shows that the crypto industry experienced 121 security incidents in the first half of the year, a 45% decrease from 223 in the same period last year. This should have been good news, but losses from these attacks surged from $1.43 billion to approximately $2.37 billion, an increase of 66%.

Attackers no longer waste time on low-value targets, but instead focus on high-value assets and entry points with high technological barriers.

Source: SlowMist 2025 Mid-Year Report

DeFi: From Low-Cost Arbitrage to High-Tech Gameplay

Decentralized finance (DeFi) remains a primary battleground for attackers, accounting for 76% of all attacks. However, despite representing a high number of attacks (92 in total), losses to DeFi protocols decreased from $659 million in 2024 to $470 million. This trend indicates that smart contract security is gradually improving, and the widespread adoption of formal verification, bug bounty programs, and runtime protection tools is building a more robust defense for DeFi.

However, this does not mean that DeFi protocols are secure. Attackers have shifted their focus to more sophisticated vulnerabilities, seeking opportunities that could yield greater rewards. Meanwhile, centralized exchanges (CEXs) have become a major source of losses. Despite only 11 attacks, the total losses amounted to $1.883 billion, with one well-known exchange suffering a single loss of $1.46 billion—one of the largest single attacks in crypto history (even surpassing the $625 million Ronin attack). These attacks did not rely on on-chain vulnerabilities but rather stemmed from account hijacking, internal privilege abuse, and social engineering attacks.

The data disclosed in this report tells a clear story: centralized exchanges (CEXs) suffer far fewer attacks than DeFi, yet incur significantly higher total losses. On average, the "reward" of attacking a CEX is more than 30 times that of attacking a DeFi protocol.
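That multiple can be sanity-checked directly from the figures above. A minimal back-of-the-envelope calculation (using the rounded totals quoted in this article, so the exact ratio will vary slightly with how incidents are bucketed):

```python
# Back-of-the-envelope check of the CEX vs. DeFi "reward" gap, H1 2025 figures quoted above.
cex_losses, cex_incidents = 1_883_000_000, 11    # ~$1.883B across 11 CEX attacks
defi_losses, defi_incidents = 470_000_000, 92    # ~$470M across 92 DeFi attacks

avg_cex = cex_losses / cex_incidents             # ~ $171M per CEX incident
avg_defi = defi_losses / defi_incidents          # ~ $5.1M per DeFi incident

print(f"Average CEX loss per incident:  ${avg_cex:,.0f}")
print(f"Average DeFi loss per incident: ${avg_defi:,.0f}")
print(f"Ratio: {avg_cex / avg_defi:.1f}x")       # ~33x, i.e. "more than 30 times"
```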

This "inefficiency" has also led to a polarization of attack targets:

  • The DeFi battlefield is technology-intensive: attackers need a deep understanding of smart contract logic to discover reentrancy vulnerabilities and exploit flaws in AMM pricing mechanisms;
  • The CEX battlefield is permission-intensive: the goal is not to crack code, but to obtain account access, API keys, and signing rights over multisignature wallets.

Meanwhile, attack methods are also evolving. A series of new attacks emerged in the first half of 2025: phishing attacks utilizing the EIP-7702 authorization mechanism, investment scams using deepfake technology to impersonate exchange executives, and malicious browser plugins masquerading as Web3 security tools. A deepfake scam ring busted by Hong Kong police caused losses exceeding HK$34 million—victims believed they were video chatting with real cryptocurrency influencers when in reality the other party was an AI-generated virtual avatar.

Image description: AI-generated virtual avatar

Hackers are no longer casting nets; they're looking for great white sharks.

AI: A tool for defense, a multiplier for offense.

While on-chain attacks are becoming more professional and focused on a few high-value targets, the emergence of cutting-edge AI models gives attackers the technological means to scale up and automate these attacks. A recent study published on December 1 shows that in a blockchain simulation environment, AI agents can now complete the entire attack chain, from analyzing smart contract vulnerabilities and constructing exploits to transferring funds.

In this experiment led by the MATS and Anthropic teams, AI models (such as Claude Opus 4.5 and GPT-5) successfully exploited real-world smart contract vulnerabilities to "steal" approximately $4.6 million in assets in a simulated environment. Even more strikingly, among 2,849 newly deployed contracts with no publicly known vulnerabilities, these models discovered two zero-day vulnerabilities and completed simulated attacks worth $3,694, at a total API cost of approximately $3,476 for scanning all 2,849 contracts.

In other words, the average cost of scanning one contract for vulnerabilities is only about $1.22 ($3,476 divided by 2,849 contracts), less than the price of a subway ticket.

AI is also learning to "do the math." In a vulnerability scenario called FPC, GPT-5 only "stole" $1.12 million, while Claude Opus 4.5 stole $3.5 million—the latter systematically attacked every liquidity pool that reused the same vulnerability pattern rather than settling for a single target; this proactive "profit-maximizing" attack strategy was previously regarded as a distinctly human hacker skill.

Chart: total simulated gains obtained by each AI model exploiting vulnerabilities (based on simulation testing). Source: Anthropic

More importantly, the research team deliberately controlled data contamination: they selected 34 contracts that were attacked in reality after March 2025 (the knowledge cutoff date for these models) as their test set. Even so, the three models successfully exploited 19 of these contracts in the simulation environment, "stealing" a total of approximately $4.6 million. This was not a blank slate simulation, but a prototype of an attack script that could be directly ported to a real blockchain; provided the contracts and on-chain state remained unchanged, they were sufficient to translate into real financial losses.

Exponentially increasing attack capabilities

The same study also yielded a more unsettling conclusion: in the 2025 sample, the "vulnerability discovery gains" of AI roughly doubled every 1.3 months, a pace an order of magnitude faster than Moore's Law. With rapid advances in model inference, tool use, and long-horizon task execution, defenders are clearly losing their time advantage.
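To get a feel for how steep that curve is, a rough comparison helps. The sketch below takes the 1.3-month doubling time from the study and contrasts it with a conventional 24-month Moore's Law doubling period (the 24-month figure is an assumption used purely for illustration):

```python
# Rough, illustrative comparison of growth rates over one year.
ai_doubling_months = 1.3       # doubling time of AI "vulnerability discovery gains" (from the study)
moore_doubling_months = 24.0   # conventional approximation of Moore's Law (assumption)

growth_ai = 2 ** (12 / ai_doubling_months)        # ~600x over 12 months
growth_moore = 2 ** (12 / moore_doubling_months)  # ~1.4x over 12 months

print(f"AI capability after 12 months:            ~{growth_ai:,.0f}x")
print(f"Moore's-Law-style growth after 12 months: ~{growth_moore:.1f}x")
```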

In this environment, the question is no longer whether AI will be used for attacks, but what the blockchain industry stands to lose if it fails to properly address AI-driven security challenges.

Tat Nguyen, founder of VaultMind, a blockchain-based autonomous AI security company, summarized this risk point very directly:

"The most critical risk for blockchain is speed. Microsoft's latest defense report has shown that AI can automate the entire attack lifecycle. If we cannot adapt quickly, the blockchain industry will face attacks at 'machine speed'—exploitation will go from weeks to seconds."

Traditional audits often take weeks, and approximately 42% of compromised protocols had in fact been audited. If point-in-time audits cannot keep pace, what can? The answer is almost certain: a continuous, AI-driven security system.

For the crypto industry, the implication is clear: in 2025, AI is no longer a placebo for defenders; it has become a core force in the attack chain. This also means the security paradigm itself must be upgraded, not just "do more audits."

From on-chain vulnerabilities to resume penetration: The evolution of state hacking

If the emergence of AI represents a technological upgrade, then the infiltration by North Korean hackers has raised the level of risk to a more uncomfortable height. North Korean-related hacking groups (such as Lazarus) have become one of the main threats to the crypto industry, shifting their approach from directly attacking on-chain assets to long-term, covert off-chain infiltration.

North Korean hackers' "job seeker" strategy

This is less a technical problem than a concentrated test of the security baseline of organizations and individuals.

At the Devconnect conference in Buenos Aires, Web3 security expert and Opsek founder Pablo Sabbatella presented a startling estimate: 30%–40% of job applications in the crypto industry may come from North Korean agents. If this estimate is even half correct, the job inboxes of crypto companies are no longer simply a talent market, but a new attack vector.

According to him, the "job seeker strategy" employed by these hackers was simple yet effective: posing as remote engineers, they gained access to the company's internal systems through normal recruitment processes, targeting code repositories, signing permissions, and multi-signature accounts. In many cases, the real perpetrators did not directly reveal their identities but instead recruited "agents" in countries like Ukraine and the Philippines through freelance platforms: these agents rented out their verified accounts or identities, allowing the other party to remotely control devices, with the revenue split according to an agreement—they took about 20%, and the remainder went to North Korea.

When targeting the US market, the deception is often layered. A common arrangement is to recruit an American as the "front": the attacker poses as a "Chinese engineer who doesn't speak English" and needs someone to attend interviews on their behalf. Malware is then planted on the front's computer so the attacker can operate behind a US IP address with broader network access. Once successfully hired, these "employees" work long hours, produce consistent output, and almost never complain, making them unlikely to arouse suspicion in a highly distributed team.

This strategy works not only because of North Korean resources, but also because it exploits structural gaps in the crypto industry's operational security (OPSEC): remote work has become the default, teams are highly dispersed across multiple jurisdictions, and recruitment verification is often lax, with technical skill valued far above background checks.

In such an environment, a seemingly ordinary remote job application resume may be more dangerous than a complex smart contract. When attackers are already sitting in your Slack channel, have access to GitHub, and even participate in multisignature decisions, even the most perfect on-chain audit can only cover a portion of the risks.

This is not an isolated incident, but part of a larger picture. A report released by the U.S. Treasury Department in November 2024 estimated that North Korean hackers stole more than $3 billion in crypto assets over the past three years, some of which were used to support Pyongyang's nuclear weapons and missile programs.

For the crypto industry, this means that a "successful attack" is never just a digital game on the blockchain; it can directly alter the real-world military budget.

From Risk List to Action List: Security Baseline for 2026

The preceding sections outline three major threats facing the crypto industry: AI-driven machine-speed attacks, the systemic failure of point-in-time auditing, and infiltration by state-sponsored actors such as North Korean groups. When these risks overlap, traditional security frameworks reveal fatal weaknesses.

This section will answer the single most important question: What kind of security architecture does a crypto project need in the new threat environment?

A security architecture better suited for 2026 should include at least the following five layers:

  1. Smart contracts and on-chain logic that can be continuously monitored and automatically regression-tested;
  2. An identity system that treats keys, permissions, and accounts as high-value attack surfaces;
  3. An organizational and personnel security system focused on preventing nation-state infiltration and social engineering (covering recruitment and security drills);
  4. AI adversarial capability built into the infrastructure, rather than bolted on as a temporary add-on;
  5. An incident-response system that can trace the source, isolate affected assets, and put protections in place within minutes.

The real change lies in transforming security from a one-time contract acceptance into a continuously operating infrastructure—a significant portion of which will inevitably exist in the form of "AI Security as a Service".

Layer 1: On-chain logic and smart contract security

The applicable targets mainly include DeFi protocols, wallet core contracts, cross-chain bridges, and liquidity protocols. The problem this layer solves is very direct: once the code is on the blockchain, errors are usually irreversible, and the cost of fixing them is far higher than that of traditional software.

In practice, several practices are becoming the basic configuration for this type of project:

  • Introduce AI-assisted pre-deployment auditing, instead of just static checks.

Before formal deployment, use publicly available security benchmarking tools or AI adversarial auditing frameworks to systematically "simulate attacks" on the protocol, focusing on multi-path execution, composed calls, and boundary conditions rather than scanning for single-point vulnerabilities.

  • Perform pattern-based scanning on similar contracts to reduce the probability of batch failures.

Establish pattern-based scanning and baselines over reusable templates, pools, and strategy contracts to identify "same-type vulnerabilities" that recur across instances of the same development pattern, so that a single mistake does not breach multiple pools at once (see the bytecode-fingerprint sketch after this list).

  • Keep upgradeability to a minimum and ensure the multisignature structure is sufficiently transparent.

The upgradeability of admin contracts should be strictly limited, leaving only the room for adjustment that is genuinely needed. At the same time, multisignature and permission structures need to be explainable to the community and key partners, to reduce the governance risk of "technical upgrades being treated as a black box."

  • Continuously monitor for potential manipulation after deployment, rather than just focusing on prices.

On-chain monitoring should not only watch for abnormal fluctuations in prices and oracles, but also cover the following (a minimal detection-rule sketch also follows this list):

  • Abnormal changes in authorization events;
  • Batches of anomalous calls and concentrated privileged operations;
  • Sudden appearances of abnormal fund flows and complex call chains.
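A minimal sketch of what pattern-based scanning can look like in practice: normalize deployed runtime bytecode and group contracts by fingerprint, so that when one instance of a template is exploited, every sibling contract can be flagged immediately. The metadata-stripping heuristic, the input bytecode, and the addresses here are illustrative assumptions, not a production tool.

```python
import hashlib
from collections import defaultdict

def fingerprint(runtime_bytecode: str) -> str:
    """Rough template fingerprint: strip the trailing Solidity metadata blob
    (heuristically, everything after the last 'a264' marker) and hash the rest.
    Real tools normalize far more aggressively (constructor args, immutables, etc.)."""
    code = runtime_bytecode.lower().removeprefix("0x")
    idx = code.rfind("a264")  # common prefix of the CBOR metadata section emitted by solc
    if idx > 0:
        code = code[:idx]
    return hashlib.sha256(code.encode()).hexdigest()[:16]

def cluster_by_template(contracts: dict[str, str]) -> dict[str, list[str]]:
    """Group contract addresses that share the same normalized bytecode."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for address, bytecode in contracts.items():
        clusters[fingerprint(bytecode)].append(address)
    return clusters

# Hypothetical input: address -> runtime bytecode fetched from a node.
contracts = {
    "0xPoolA":  "0x6080604052deadbeefa264697066",
    "0xPoolB":  "0x6080604052deadbeefa264697067",  # same template as PoolA, different metadata
    "0xVaultC": "0x6080604080cafebabea264697068",
}
for fp, addrs in cluster_by_template(contracts).items():
    if len(addrs) > 1:
        print(f"Template {fp}: {addrs} share the same code; review and patch them together")
```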
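And a minimal sketch of a detection rule covering two of the signals just listed: an approval burst toward a single spender, and a spike in privileged calls within one monitoring window. The event format, function names, and thresholds are assumptions for illustration; in practice these would come from decoded logs and tuned baselines.

```python
from collections import Counter

PRIVILEGED = {"setOwner", "upgradeTo", "pause", "withdrawAll"}  # illustrative admin functions
APPROVAL_BURST_THRESHOLD = 25   # approvals granted to one spender per window
PRIVILEGED_CALL_THRESHOLD = 5   # privileged calls per window

def score_window(events):
    """events: list of (event_or_function_name, actor, target) tuples for one block window."""
    alerts = []
    spenders = Counter(target for name, _, target in events if name == "Approval")
    for spender, n in spenders.items():
        if n >= APPROVAL_BURST_THRESHOLD:
            alerts.append(f"approval burst: {n} approvals granted to {spender}")
    privileged = sum(1 for name, _, _ in events if name in PRIVILEGED)
    if privileged >= PRIVILEGED_CALL_THRESHOLD:
        alerts.append(f"privileged-call spike: {privileged} admin calls in one window")
    return alerts

# Toy window: 30 approvals to the same spender plus a handful of admin calls.
window = [("Approval", f"0xVictim{i}", "0xSuspiciousSpender") for i in range(30)]
window += [("upgradeTo", "0xAdmin", "0xProxy")] * 6
print(score_window(window))
```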

If this layer is not handled properly, subsequent access control and post-incident response can only reduce losses amid the wreckage; they can rarely prevent the incident from happening in the first place.

Layer 2: Account, permission, and key system security (the core risk for CEXs and wallets)

For centralized services, the security of account and key systems often determines whether an incident escalates to a "survival-level risk." Recent cases involving exchanges and custodians demonstrate that large-scale asset losses occur more frequently at the level of account, key, and permission systems than simply due to contract vulnerabilities.

At this level, several basic practices are becoming an industry consensus:

  • Eliminate shared accounts and catch-all admin backend accounts, and bind every critical operation to a traceable individual identity;
  • Use MPC or an equivalent multi-party control mechanism for critical operations such as withdrawals and permission changes, instead of a single key or a single approver;
  • Prohibit engineers from handling sensitive permissions and keys on personal devices; restrict such operations to controlled environments;
  • Apply continuous behavioral monitoring and automated risk scoring to key personnel, with timely alerts on signals such as unusual login locations and abnormal operating rhythms (a minimal scoring sketch follows this list).
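A hedged sketch of that last point: a deliberately simple risk score over operator sessions that combines login-location and operation-rhythm signals. The field names, baselines, and weights are invented for illustration; a real system would learn per-user baselines from SIEM/IAM history rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class OperatorSession:
    user: str
    country: str
    hour_utc: int               # hour of day the session started
    withdrawals_initiated: int
    permission_changes: int

# Per-user baselines would normally be learned from history; hard-coded here for illustration.
USUAL_COUNTRIES = {"alice": {"SG"}, "bob": {"DE", "PL"}}
USUAL_HOURS = range(1, 12)      # example working window in UTC

def risk_score(s: OperatorSession) -> int:
    score = 0
    if s.country not in USUAL_COUNTRIES.get(s.user, set()):
        score += 40             # unfamiliar login location
    if s.hour_utc not in USUAL_HOURS:
        score += 20             # off-hours activity
    score += 10 * s.permission_changes        # permission edits are always notable
    if s.withdrawals_initiated > 3:
        score += 30             # unusual operation rhythm
    return score

session = OperatorSession("alice", country="RO", hour_utc=3,
                          withdrawals_initiated=5, permission_changes=1)
score = risk_score(session)
print(score, "-> escalate for manual review" if score >= 60 else "-> within normal range")
```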

For most centralized institutions, this layer is both the main battleground for preventing internal errors and wrongdoing, and the last barrier to prevent a security incident from escalating into a "survival event".

Layer 3: Organizational and personnel security (the barrier against nation-state infiltration)

In many crypto projects, this layer barely exists. But in 2025, organizational and personnel security has become as important as the security of the contracts themselves: attackers do not necessarily need to break the code, and infiltrating the team is often cheaper and more reliable.

Given that state-level actors are increasingly using the strategy of "remote job seeking + agents + long-term infiltration," project teams need to address their shortcomings in at least three areas.

First, restructure the recruitment and identity verification process.

The recruitment process needs to be upgraded from "formal resume review" to "substantive verification" to reduce the probability of long-term infiltration. For example:

  • Require real-time video interviews rather than voice- or text-only communication;
  • Require live screen sharing during technical interviews and coding tests to prevent proxy coding;
  • Cross-check educational background, previous employers, and former colleagues;
  • Verify whether the candidate's long-term technical trajectory is consistent by cross-referencing activity on accounts such as GitHub and Stack Overflow (see the sketch after this list).
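For that last bullet, a minimal sketch of an automated first-pass check against a candidate's public GitHub profile. The thresholds are arbitrary and purely illustrative; the endpoint used is GitHub's public REST API, but a real pipeline would authenticate (unauthenticated requests are heavily rate-limited) and look at much richer signals such as commit history, timing patterns, and co-contributors.

```python
import requests
from datetime import datetime, timezone

def github_profile_flags(username: str) -> list[str]:
    """Cheap heuristics: account age, repository count, follower count.
    A recently created account with a sudden burst of activity deserves a closer look."""
    resp = requests.get(f"https://api.github.com/users/{username}", timeout=10)
    resp.raise_for_status()
    user = resp.json()

    created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days

    flags = []
    if age_days < 365:
        flags.append(f"account is only {age_days} days old")
    if user.get("public_repos", 0) < 3:
        flags.append("almost no public repositories")
    if user.get("followers", 0) == 0:
        flags.append("no followers / no visible community footprint")
    return flags

print(github_profile_flags("octocat"))   # a long-lived account should return []
```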

Second, limit single-point permissions and avoid handing out a full set of "green lights" at once.

New engineers should not gain direct access to key systems, signing infrastructure, or production-grade databases within a short period of time. Permissions should be escalated in stages and tied to specific responsibilities. Internal systems should also be designed with layered isolation so that no single role automatically holds "full-chain permissions."
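A sketch of what "staged" can mean concretely: tie each permission tier to tenure and an explicit number of approvals, so that no single onboarding step hands over signing or production access. The role names, durations, and grants below are illustrative assumptions, not a recommended policy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str
    min_tenure_days: int
    approvals_required: int
    grants: tuple[str, ...]

# Illustrative escalation ladder for a newly hired engineer.
LADDER = (
    Stage("onboarding",  0,   1, ("issue-tracker", "docs")),
    Stage("contributor", 30,  1, ("repo-read", "staging-deploy")),
    Stage("maintainer",  90,  2, ("repo-write", "ci-secrets")),
    Stage("operator",    180, 3, ("prod-db-read", "signer-candidate")),
)

def allowed_stage(tenure_days: int, approvals: int) -> Stage:
    """Return the highest stage this person currently qualifies for."""
    eligible = [s for s in LADDER
                if tenure_days >= s.min_tenure_days and approvals >= s.approvals_required]
    return eligible[-1] if eligible else LADDER[0]

print(allowed_stage(tenure_days=45, approvals=1).grants)   # ('repo-read', 'staging-deploy')
```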

Third, treat core positions as high-priority social engineering targets.

In many attacks, frontline operations and infrastructure personnel are more likely to be targeted than CEOs, including:

  • DevOps / SRE;
  • Signing administrators and key custodians;
  • Wallet and infrastructure engineers;
  • Audit engineers and red team members;
  • Cloud and access-control administrators.

For these positions, regular social engineering drills and security training are no longer a "bonus," but a precondition for maintaining basic defenses.

If this layer is not handled properly, even the most sophisticated technical defenses in the first two layers can be bypassed by a single well-crafted hire or a seemingly normal remote collaboration.

Layer 4: AI security countermeasures (an emerging line of defense)

Once sophisticated attackers begin systematically using AI, defenders who still rely on "human analysis + semi-annual audits" are essentially using human resources to fight automated systems, with limited chances of success.

A more pragmatic approach is to incorporate AI into the security architecture in advance, establishing it as a standard capability in at least the following areas:

  • Use AI for "adversarial audits" before formal deployment, systematically simulating multi-path attacks;
  • Scan similar contracts and development patterns to identify clusters of same-type vulnerabilities;
  • Combine logs, behavioral patterns, and on-chain interaction data to build AI-driven risk scores and priority rankings;
  • Identify and block deepfake interviews and abnormal interview behavior (such as delayed speech, mismatched eye contact and lip movement, and overly scripted answers);
  • Automatically detect and block the download and execution of malicious plugins and compromised development toolchains.

If this trend continues, AI will most likely become the new infrastructure layer of the security industry by 2026.

Security capabilities are no longer proven by an "audit report," but by whether the defender can use AI to complete detection, early warning, and response at near-machine speed.

After all, once attackers start using AI, defenders who stick to a "six-month audit" rhythm will quickly be overwhelmed by the difference in tempo.

Layer 5: Post-incident response and asset freezing

In the realm of on-chain security, the ability to recover funds after an incident has become a battleground of its own. In 2025, asset freezing is one of the most important tools in this arena.

The publicly available data gives a fairly clear signal: SlowMist's "Blockchain Security and Anti-Money Laundering Report for the First Half of 2025" shows that the total amount stolen on-chain in the first half of the year was approximately US$1.73 billion, of which approximately US$270 million (about 11.38%) was frozen or recovered, already a relatively high level for recent years.

The speed at which a project reacts after an attack largely determines the proportion of assets that can be recovered. The industry therefore needs to establish "wartime mechanisms" in advance, rather than trying to catch up after an incident occurs.

  • Establish a rapid-response mechanism with professional on-chain monitoring and security service providers, including technical alert channels (webhooks / emergency groups) and clear handling SLAs (respond within a defined number of minutes, deliver action recommendations within a defined number of hours);
  • Design and rehearse emergency multisignature procedures in advance for quickly enabling or disabling contracts and freezing high-risk features;
  • Set up automatic pause mechanisms for cross-chain bridges and other critical infrastructure: when indicators such as fund outflows and call frequency become abnormal, the system automatically enters a paused or read-only mode (a minimal circuit-breaker sketch follows this list).
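A minimal sketch of the auto-pause logic in the last bullet, written as an off-chain watcher that flips a bridge or protocol into paused/read-only mode when per-window outflow or call-rate metrics breach a baseline. The metric names, thresholds, and the pause hook are placeholders; in practice the hook would call a guarded pauser contract or an internal admin API.

```python
from dataclasses import dataclass

@dataclass
class WindowMetrics:
    outflow_usd: float   # value leaving the bridge during this window
    calls: int           # number of bridge calls during this window

# Illustrative baselines; real systems derive these from rolling history.
MAX_OUTFLOW_USD = 2_000_000
MAX_CALLS = 500

def should_pause(m: WindowMetrics) -> bool:
    return m.outflow_usd > MAX_OUTFLOW_USD or m.calls > MAX_CALLS

def watch(metrics_stream, pause):
    """Flip to paused/read-only mode on the first window that breaches the baseline."""
    for m in metrics_stream:
        if should_pause(m):
            pause(reason=f"outflow=${m.outflow_usd:,.0f}, calls={m.calls}")
            break

# Hypothetical usage with a placeholder pause hook.
watch([WindowMetrics(150_000, 80), WindowMetrics(5_400_000, 120)],
      pause=lambda reason: print("PAUSE triggered:", reason))
```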

As for deeper asset recovery, it often has to extend off-chain, including establishing compliance and legal cooperation channels with stablecoin issuers, custodians, and major centralized platforms.

Defense is no longer just about "avoiding attacks," but about minimizing the final losses and spillover effects after an attack.

Conclusion: Security is the new entry ticket

The current reality is that most crypto projects only cover one or two of the five security levels. This isn't a matter of technical capability, but rather a matter of prioritization: visible short-term gains often outweigh invisible long-term development.

However, after 2025, the threshold for being attacked is shifting from "scale" to "relevance." AI tools are making low-cost reconnaissance commonplace, and the core security question is becoming: can the system still function when a real attack arrives?

Projects that can smoothly navigate 2026 may not be the most technologically advanced, but they will certainly possess a systematic defense across these five dimensions. The next round of systemic events will partially determine whether the crypto industry continues to be seen as a high-risk asset pool or is seriously considered a candidate for financial infrastructure.

Over the past decade, the crypto world has spent considerable time proving it is not a Ponzi scheme; in the next decade, it needs to show the same resolve in proving it is secure enough to support serious capital. For truly long-term funds, this will be one of the watershed factors determining whether to keep participating.

"In the long run, the most dangerous risk is the one you refuse to see."
