As Hackers "More Efficiently" Utilize AI, How Will the "Spear and Shield" Arms Race in Web3 Escalate?
- Core Viewpoint: As AI technology is used by hackers to launch highly customized and automated social engineering attacks, Web3 security threats have entered a new stage of industrialization, while the defense system remains relatively lagging. The integration of AI and Web3 has the potential to reshape the security paradigm by building intelligent protection solutions covering the entire transaction lifecycle, making security a scalable default capability.
- Key Elements:
- Evolution of Attack Methods: AI can analyze user preferences, generate customized phishing content, and simulate social relationships, shifting social engineering attacks from mass emails to "precision targeting."
- Risks Spanning the Entire Transaction Process: From phishing pages before interaction, malicious contracts during interaction, unlimited permission signatures during authorization, to MEV attacks after submission, risks are omnipresent.
- AI Empowering User-Side Protection: Can serve as a 7x24 security assistant, identifying fraudulent rhetoric through NLP, and intuitively presenting the consequences of malicious authorization code to users through transaction simulation.
- AI Empowering Protocols and Products: Enables the shift from static auditing to real-time defense. For example, automated auditing tools can quickly scan code logic and simulate extreme scenarios to identify vulnerabilities in advance.
- Positioning and Boundaries of AI: AI is an auxiliary tool, not a panacea. It aims to reduce the cost of human judgment errors and cannot replace user sovereignty or automatically intercept all attacks.
- Security Evolution Trend: Security is transforming from a static requirement of "safeguarding the seed phrase" into a continuous, dynamic, and intelligent process. AI is making decentralized systems more user-friendly.
Looking back at the year 2025 that just passed, if you feel that on-chain scams are becoming increasingly "tailored to you," it's not an illusion.
With the deep proliferation of LLMs, social engineering attacks launched by hackers have evolved from crude mass emails to "precision baiting": AI can analyze your on-chain/off-chain preferences to automatically generate highly enticing, customized phishing content, even convincingly mimicking a friend's tone and logic on social channels like Telegram.
It can be said that on-chain attacks are entering a truly industrialized stage. In this context, if the shields in our hands remain in the "manual era," security itself will undoubtedly become the biggest bottleneck for Web3's mass adoption.
1. Web3 Security Stalling: When AI Intervenes in On-Chain Attacks
If the Web3 security issues of the past decade stemmed more from code vulnerabilities, then a clear change after entering 2025 is that attacks are becoming "industrialized," while security protections have not been upgraded in sync.
After all, phishing websites can be generated in bulk via scripts, and fake airdrops can be automatically and precisely delivered, making social engineering attacks rely not on a hacker's deceptive talent but on model algorithms and data scale.
To understand the severity of this threat, we can deconstruct a simple on-chain Swap transaction. You will then discover that throughout the entire lifecycle from transaction creation to final confirmation, risks are almost omnipresent:
- Before Interaction: You might have entered a phishing page disguised as an official website, or used a DApp frontend with a malicious backdoor;
- During Interaction: You might be interacting with a token contract containing "backdoor logic," or the counterparty itself is a flagged phishing address;
- During Authorization: Hackers often trick users into signing seemingly harmless signatures that actually grant them "unlimited withdrawal permissions";
- After Submission: Even if every step was correct, at the final step of submitting the transaction, MEV searchers (bots) may still be lurking in the mempool, waiting to sandwich your transaction and skim your potential profits.
And it's not limited to Swaps. Extending further to all interaction types including transfers, staking, minting, etc., in this chain-like process of transaction creation, validation, broadcasting, on-chain inclusion, and final confirmation, risks are everywhere. A problem at any point could cause a secure on-chain interaction to fail at the last hurdle.
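To make the lifecycle above concrete, here is a minimal sketch of what per-stage pre-flight checks could look like. Every rule name, address, and threshold below is illustrative, invented for the example, and not any real product's API:

```python
# Hypothetical sketch: pre-flight checks mapped to the transaction lifecycle
# stages described above. All rules, names, and thresholds are illustrative.

PHISHING_DOMAINS = {"uniswap-claim.xyz"}   # "before interaction": fake frontends
FLAGGED_ADDRESSES = {"0xBadC0de"}          # "during interaction": known phishing
UNLIMITED = 2**256 - 1                     # the classic "unlimited approval"

def check_transaction(tx: dict) -> list[str]:
    """Return human-readable warnings for a draft transaction."""
    warnings = []
    # Before interaction: is the frontend domain a known phishing site?
    if tx.get("frontend_domain") in PHISHING_DOMAINS:
        warnings.append("frontend domain is a known phishing site")
    # During interaction: is the counterparty a flagged address?
    if tx.get("to") in FLAGGED_ADDRESSES:
        warnings.append("counterparty address is flagged")
    # During authorization: does the signature grant an unlimited allowance?
    if tx.get("method") == "approve" and tx.get("amount") == UNLIMITED:
        warnings.append("signature grants an unlimited withdrawal allowance")
    # After submission: is the slippage tolerance wide enough for a sandwich?
    if tx.get("slippage_bps", 0) > 100:   # >1% leaves room for MEV sandwiching
        warnings.append("slippage tolerance invites a sandwich attack")
    return warnings

tx = {"frontend_domain": "uniswap-claim.xyz", "to": "0xSomeRouter",
      "method": "approve", "amount": UNLIMITED, "slippage_bps": 300}
print(check_transaction(tx))
```

The point of the sketch is the shape, not the rules: each lifecycle stage gets its own check, and a failure at any one of them is enough to stop the flow.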
It can be said that, under the current account model, even the most secure private-key protection cannot withstand a single mistaken click by a user; the most rigorous protocol design can be bypassed by one authorization signature; and the most decentralized system is most easily breached through "human vulnerabilities." A fundamental problem follows: if attacks have entered an automated, intelligent stage while defense remains stuck at "manual judgment," security itself becomes the bottleneck (Extended reading: "The $3.35 Billion 'Account Tax': When EOA Becomes a Systemic Cost, What Can AA Bring to Web3?").
Ultimately, ordinary users still lack a one-stop solution that can provide security protection for the entire transaction lifecycle. AI, however, holds the promise of helping us build a security solution for end-users (C-side) that covers the entire transaction lifecycle, offering a 7×24-hour defense line to protect user assets.
2. What Can AI × Web3 Do?
So let's consider, in this technologically asymmetric game, where the combination of AI × Web3 could rebuild the paradigm for on-chain security.
First, for ordinary users, the most immediate threat is often not protocol vulnerabilities, but social engineering attacks and malicious authorizations. At this level, AI plays the role of a 7×24-hour, tireless security assistant.
For example, AI can use Natural Language Processing (NLP) technology to identify highly suspicious communication tactics in social media or private chat channels:
Take receiving a "free airdrop" link. An AI security assistant would not only check the URL against blacklists but also analyze the project's social media buzz, domain registration age, and the fund flow of its smart contract. If the link leads to a newly created, fake contract with no funds, the AI would display a huge red cross on your screen.
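As a toy illustration of how those signals might combine, here is a hedged sketch of a risk score. The weights, thresholds, and the 0-100 scale are all invented for the example; a real assistant would learn such weights rather than hard-code them:

```python
# Illustrative only: a toy risk score for an "airdrop" link, combining the
# signals mentioned above. Weights and thresholds are made up for the sketch.

def airdrop_link_risk(domain_age_days: int, on_blacklist: bool,
                      contract_balance_eth: float, social_mentions: int) -> int:
    """Return a 0-100 risk score; higher means more likely a scam."""
    score = 0
    if on_blacklist:
        score += 60          # a known-bad URL dominates everything else
    if domain_age_days < 30:
        score += 20          # freshly registered domains are suspect
    if contract_balance_eth == 0:
        score += 15          # an "airdrop" contract holding no funds at all
    if social_mentions < 10:
        score += 5           # no organic buzz around the project
    return min(score, 100)

# A brand-new, unfunded, unknown contract behind a blacklisted URL:
print(airdrop_link_risk(3, True, 0.0, 2))   # → 100
```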
"Malicious authorization" is currently the leading cause of asset theft. Hackers often trick users into signing seemingly harmless signatures that grant "unlimited withdrawal permissions":
When you click to sign, the AI would first run a transaction simulation in the background. It would plainly tell you: "If this operation is executed, all ETH in your account will be transferred to address A." This ability to translate obscure code into intuitive consequences is the strongest barrier against malicious authorizations.
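The "translate code into consequences" idea can be sketched concretely: decode the calldata of an ERC-20 approve() and surface the result in plain language. The selector 0x095ea7b3 really is the ERC-20 approve(address,uint256) selector; the warning text and the wrapper function are illustrative:

```python
# Minimal sketch: decode an ERC-20 approve() call and warn on an unlimited
# allowance. 0x095ea7b3 is the real approve(address,uint256) selector; the
# rest of the structure is illustrative.

APPROVE_SELECTOR = "095ea7b3"
MAX_UINT256 = 2**256 - 1

def explain_calldata(data_hex: str) -> str:
    data = data_hex.removeprefix("0x")
    if not data.startswith(APPROVE_SELECTOR):
        return "not an approve() call"
    spender = "0x" + data[8:72][-40:]    # last 20 bytes of the first word
    amount = int(data[72:136], 16)       # second 32-byte word
    if amount == MAX_UINT256:
        return f"WARNING: grants {spender} an UNLIMITED allowance"
    return f"grants {spender} an allowance of {amount}"

calldata = ("0x095ea7b3"
            + "000000000000000000000000" + "ab" * 20   # spender address
            + "f" * 64)                                # amount = max uint256
print(explain_calldata(calldata))
```

A real wallet assistant would go further and simulate the full transaction against current chain state, but even this static decoding turns an opaque hex blob into a sentence a user can act on.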
Secondly, on the protocol and product side, it enables a shift from static auditing to real-time defense. In the past, Web3 security relied heavily on periodic manual audits, which were often static and lagging.
Now, AI is being embedded into real-time security chains. Like the now-familiar automated auditing: compared to traditional audits requiring human experts to spend weeks reviewing code, AI-driven automated audit tools (such as smart contract scanners combined with deep learning) can complete logical modeling of tens of thousands of lines of code in seconds.
Based on this logic, current AI can simulate thousands of extreme transaction scenarios, identifying subtle "logic traps" or "reentrancy vulnerabilities" before code deployment. This means that even if developers accidentally leave a backdoor, the AI auditor can issue a warning before assets are attacked.
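As a deliberately naive illustration of pattern-level scanning (real AI-driven auditors model control and data flow, not raw strings), a toy check might flag an external call that precedes a balance update, the classic reentrancy shape:

```python
# A toy illustration of static pattern detection: flag Solidity functions
# where an external call (.call{value:...}) appears before a write to a
# balances mapping. This string-level heuristic is easy to fool and only
# illustrates the idea; it is not how production auditors work.
import re

def naive_reentrancy_check(source: str) -> bool:
    """Return True if an external call precedes the balance update."""
    call_pos = source.find(".call{value:")
    state_write = re.search(r"balances\[[^\]]+\]\s*[-+]?=", source)
    return (call_pos != -1 and state_write is not None
            and call_pos < state_write.start())

vulnerable = """
function withdraw(uint amount) external {
    (bool ok, ) = msg.sender.call{value: amount}("");   // external call first
    balances[msg.sender] -= amount;                     // state updated after
}
"""
print(naive_reentrancy_check(vulnerable))   # → True
```

Swapping the two lines (state update first, external call second, i.e. checks-effects-interactions) makes the check pass, which is exactly the fix an auditor would suggest.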
Furthermore, security tools like GoPlus can intercept transactions before hackers strike. Services like GoPlus SecNet, which allow users to configure on-chain firewalls and provide RPC network services for real-time transaction security checks, can proactively block risky transactions to prevent asset loss. This includes transfer protection, authorization protection, anti-honeypot token purchase blocking, MEV protection, etc. These can check whether transaction addresses and assets involved in transfers, trades, and other interactions pose risks before the operation. If risks exist, the transaction is proactively blocked.
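The "firewall" pattern itself is simple to sketch: route transaction submission through a risk check and refuse to broadcast on a hit. Nothing below is the real GoPlus SecNet API; the rule names and addresses merely mirror the protections listed above:

```python
# Hypothetical sketch of the on-chain firewall pattern: block risky
# transactions before broadcast. Rule names and addresses are illustrative,
# not the real GoPlus SecNet interface.

HONEYPOT_TOKENS = {"0xHoneyPot"}    # tokens that can be bought but not sold
FLAGGED_SPENDERS = {"0xDrainer"}    # spenders flagged by authorization checks

class BlockedTransaction(Exception):
    """Raised instead of broadcasting when a risk rule fires."""

def firewall_send(tx: dict) -> str:
    if tx.get("token") in HONEYPOT_TOKENS:
        raise BlockedTransaction("purchase blocked: honeypot token")
    if tx.get("spender") in FLAGGED_SPENDERS:
        raise BlockedTransaction("approval blocked: flagged spender")
    return "broadcast"              # stand-in for the actual RPC submission

print(firewall_send({"token": "0xSafeToken"}))   # → broadcast
try:
    firewall_send({"token": "0xHoneyPot"})
except BlockedTransaction as e:
    print(e)
```

The difference from a mere warning is that the risky transaction never reaches the mempool at all, which is what distinguishes interception from advice.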
I would even go further and advocate GPT-style AI services: a 7×24-hour on-chain security assistant for novice users that guides them through the Web3 security issues they encounter and quickly offers remedies when a security incident strikes.
The core value of such systems naturally lies not in being "100% correct," but in shifting the risk discovery time from "after the fact" to "during the event" or even "beforehand."
3. Where Are the Boundaries of AI × Web3?
Of course, the usual cautious optimism applies. When discussing the new potential AI × Web3 can bring to areas like security, restraint is necessary.
Because ultimately, AI is just a tool. It should not replace user sovereignty, cannot hold assets for users, and certainly cannot automatically "intercept all attacks." Its reasonable positioning leans more towards reducing the cost of human judgment errors as much as possible without altering decentralization.
This means that while AI is powerful, it is not omnipotent. A truly effective security system must be the result of the combined action of AI's technical advantages, users' vigilant security awareness, and collaborative design between tools, rather than betting security entirely on a single model or system.
Just like the decentralized values Ethereum has always upheld, AI should exist as an auxiliary tool. Its goal is not to make decisions for people, but to help people make fewer mistakes.
Looking back at the evolution of Web3 security, a clear trend emerges. Early security was simply "keep your seed phrase safe." The mid-stage was "don't click unfamiliar links, cancel invalid authorizations promptly." Today, security is becoming a continuous, dynamic, and intelligent process.
In this process, the introduction of AI does not weaken the significance of decentralization; instead, it makes decentralized systems more suitable for long-term use by ordinary users. It hides complex risk analysis in the background, presenting key judgments as intuitive prompts to users, transforming security from an additional burden into a "default capability."
This also echoes a judgment I have repeatedly mentioned before: AI and Web3/Crypto are essentially a mirrored comparison of "productive forces" and "production relations" in the new era (Extended reading: "When Web3 Collides with d/acc: What Can Crypto Do in the Age of Technological Acceleration?"):
If we view AI as an evolving "spear"—it greatly enhances efficiency but can also be used for large-scale malicious acts—then the decentralized system built by Crypto is precisely a "shield" that must evolve synchronously. From the perspective of d/acc, the goal of this shield is not to create absolute security, but to ensure the system remains trustworthy even in the worst-case scenario, giving users the space to exit and self-rescue.
In Conclusion
The ultimate goal of Web3 has never been to make users understand more technology, but to have technology protect users without them noticing.
Therefore, when attackers have already started using AI, a defense system that refuses to become intelligent is itself a risk. Precisely because of this, protecting asset security is an infinite game. In this era, users who know how to use AI to arm themselves will become the hardest fortress to breach in that game.
The significance of AI × Web3 perhaps lies right here—not in creating absolute security, but in making security a capability that can be replicated at scale.


