
The attack in question was a Python script that bypassed 2FA in a "popular open-source, web-based system administration tool." According to the GTIG, the exploit code bore all the hallmarks of AI usage and abused a logic flaw. GTIG notes that even the latest LLMs "struggle to navigate complex enterprise […] logic" in authorization flows, but they excel at contextual reasoning: they can read source code, compare the developer's intent against what is actually implemented, and quickly find unconsidered corner cases.
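To make the intent-versus-implementation gap concrete, here is a minimal, entirely hypothetical sketch (not from the report, and not the actual exploit) of the kind of authorization logic flaw described: the developer intends 2FA to be mandatory, but a "trusted device" shortcut is checked first, so anyone who can set that flag skips the second factor.

```python
# Hypothetical login flow illustrating a 2FA logic flaw.
# Intended rule: every login must pass the TOTP check.
# Actual rule: a trusted-device branch returns early, bypassing it.

def login(username: str, password_ok: bool, totp_ok: bool,
          device_trusted: bool) -> bool:
    if not password_ok:
        return False
    # Unconsidered corner case: this early return skips 2FA entirely
    # for any request that can claim device_trusted (e.g. a forgeable
    # cookie), contradicting the developer's intent.
    if device_trusted:
        return True
    return totp_ok

# The gap between intent and implementation:
assert login("alice", password_ok=True, totp_ok=False, device_trusted=True)
assert not login("alice", password_ok=True, totp_ok=False, device_trusted=False)
```

A model reading this code alongside the surrounding documentation can notice that the trusted-device branch contradicts the stated "2FA always required" policy, which is exactly the contextual-reasoning strength GTIG describes.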
That's only one small slice of the report, though: GTIG found pervasive use of AI across a range of cybersecurity operations. Malicious hackers have always had their own software suites for creating and distributing exploits, but they can now rely on bots to significantly augment those capabilities. Agents can alter their source code in real time or tweak their attack as they go along in an effort to evade detection.
The bots are also used to deepen obfuscation at several layers, whether by adding filler code to the attack logic or by inserting multiple layers of indirection so the code hides its true intention. Needless to say, all of this makes such malware much harder for security software to detect or contain; examples include CANFAIL and LONGSTREAM.
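A toy illustration (again, not taken from the report) of those two obfuscation layers, applied to a deliberately harmless function: junk "filler" statements whose results are discarded, plus a dispatch table that adds a layer of indirection hiding which function actually runs.

```python
# Harmless demonstration of filler code plus indirection.

def _real_work(x):
    return x * 2

def _decoy(x):
    return x + 999  # decoy branch: present in the binary, never reached

# Indirection layer: the call target is looked up at runtime,
# so static inspection does not see a direct call to _real_work.
_dispatch = {"k1": _decoy, "k2": _real_work}

def run(x):
    _ = [i ^ 0x5A for i in range(8)]  # filler computation, result discarded
    key = "k" + str(1 + 1)            # obscures the literal key "k2"
    return _dispatch[key](x)

assert run(21) == 42
```

Each extra layer like this forces a scanner or analyst to reason through one more hop before the code's true behavior becomes visible.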
Software like the PROMPTSPY Android backdoor leverages Google Gemini (the cloud service, not the on-device variant) to manipulate the victim's phone. Its tricks include taking screenshots and working out the UI elements presented to the user in order to simulate interactions on their behalf, down to capturing PIN/pattern authentication or intercepting taps on the Uninstall button.
Additionally, the GTIG found instances of malware that can modify its own source code, create exploit payloads dynamically, and generate decoy code.
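The self-modification idea can be sketched in a few benign lines, assuming a simple scheme where the program keeps its payload as source text, mutates a parameter, and re-compiles it at runtime, so each run produces a slightly different variant (real malware families do this far more aggressively, but the mechanism is the same in spirit):

```python
# Minimal, benign sketch of runtime self-modification:
# the payload exists only as a source-text template that is
# rewritten and re-compiled on every call.

import random

PAYLOAD_TEMPLATE = "def payload(x):\n    return x + {k}\n"

def mutate_and_run(x):
    # Materialize a differently-parameterized variant each time,
    # so no two generations share identical code.
    k = random.randint(1, 9)
    namespace = {}
    exec(PAYLOAD_TEMPLATE.format(k=k), namespace)
    return namespace["payload"](x), k

result, k = mutate_and_run(10)
assert result == 10 + k
```

Because the concrete code only exists after the template is filled in, signature-based detection that matches a fixed byte pattern has nothing stable to latch onto.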
All of those real-time morphing abilities extend to phishing and network attacks. For example, attackers ask bots to reconstruct a company's organizational chart and then generate custom phishing emails laden with real information collected from news coverage, LinkedIn pages, or press releases.