AI-Powered Attacks

by Jon Lober | NOC Technology

Why Your Old Security Playbook Is Obsolete

It's 6 AM on a Monday. Your CFO gets an email from what looks like your bank, referencing a wire transfer you actually discussed last week. The grammar is perfect. The tone matches previous messages. The logo is spot-on. She clicks the link and enters her credentials.


You just got hit by an AI-powered phishing attack, and your email filter never saw it coming.


This isn't hypothetical. Google's Threat Intelligence Group (GTIG) confirmed in February 2026 what security professionals feared: nation-state hackers and cybercriminals are no longer experimenting with AI. They're deploying it in active operations against businesses like yours.


The implications are stark. Traditional security tools (signature-based antivirus, static firewalls, manual monitoring) were designed for a different era. They cannot keep pace with attacks that adapt in real-time, generate thousands of unique phishing messages, or evolve faster than your security team can respond.


For small and medium-sized businesses in St. Louis, this isn't an abstract threat. It's a fundamental shift that demands a new approach.


AI Is Now an Active Weapon

According to GTIG's quarterly reports from late 2025 and early 2026, government-backed hackers from Iran, China, North Korea, and Russia are using large language models like Gemini to accelerate every phase of cyberattacks.


Iranian threat group APT42 uses AI to craft "hyper-personalized, culturally nuanced" phishing lures. Russian group APT28 deployed AI-integrated malware called PROMPTSTEAL against Ukrainian targets (the first confirmed use of malware querying an AI model in live military operations).


But it doesn't stop there. Underground forums now sell AI attack tools to anyone with a credit card. Tools like FraudGPT, WormGPT, and the newer Xanthorox offer "uncensored" AI capable of developing malware, crafting phishing campaigns, and automating attacks. KELA researchers documented a 200% increase in mentions of malicious AI tools across cybercrime forums in 2024, with the trend accelerating into 2025 and 2026.


The barrier to launching sophisticated attacks has collapsed. Your St. Louis accounting firm or manufacturing company is now a target for anyone willing to pay $50 for an AI attack subscription.


5 Reasons Traditional Security Tools Can't Stop AI-Powered Attacks


1. AI Attackers Move Faster Than Signature-Based Detection

Traditional antivirus relies on "signatures" (known patterns of malicious code). When new malware appears, researchers analyze it, create a signature, and push updates. This worked when new variants appeared slowly.
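The weakness is easy to see in miniature. Below is a toy sketch (not any real antivirus engine) of hash-based signature matching: the sample payloads and signature database are hypothetical, but the logic mirrors the real limitation. A single changed byte produces a completely different hash, so every AI-regenerated variant starts with a clean slate.

```python
import hashlib

# Hypothetical "signature database": hashes of previously analyzed malware.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_flagged(sample: bytes) -> bool:
    """Flag a sample only if its hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_SIGNATURES

# The original, already-analyzed sample is caught...
print(is_flagged(b"malicious_payload_v1"))  # True

# ...but a rewritten variant with identical behavior hashes differently
# and sails through. AI-driven rewriting produces these variants at scale.
print(is_flagged(b"malicious_payload_v2"))  # False
```

This is simplified (real products use heuristics on top of hashes), but the core asymmetry holds: defenders must have seen a variant before; attackers only need to generate one they haven't.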

AI has shattered that model.


Google identified a malware family called PROMPTFLUX that uses AI to rewrite its own code hourly. The malware prompts an AI model to "act as an expert VBScript obfuscator" and regenerate itself in a new form while preserving all malicious functionality. Each iteration looks different to signature-based detection. By the time a signature is created for one version, thousands of unique variants already exist.


What this means: If your endpoint protection relies primarily on signature matching, you're vulnerable to malware specifically designed to evade it. You could be hit by malware your antivirus has literally never seen before.


2. Phishing Has Become Indistinguishable from Legitimate Email

For years, security training taught employees to spot phishing by looking for poor grammar, awkward phrasing, or suspicious addresses. These "tells" made sense when attackers manually crafted messages (often in languages they didn't speak fluently).


AI eliminated these tells entirely.


Iranian threat actor APT42 uses Gemini to create messages that mirror the professional tone of target organizations. The AI helps attackers research specific individuals, generate native-sounding text, maintain believable multi-turn conversations, and tailor content to local culture with fluent English, Farsi, or Hebrew.


North Korean attackers use AI to generate cover letters and job applications good enough to land interviews at Western companies (part of a scheme to place clandestine workers who send earnings back to the regime).


What this means: Your bookkeeper might receive an email that appears to come from a long-time client, references a real project, uses correct industry terminology, and sounds exactly like previous legitimate messages. The only difference? It contains a credential-harvesting link. Traditional email filters won't catch it.


3. Reconnaissance Happens at Machine Speed

Before attacking a target, hackers need information: organizational structure, key employees, technology systems, security weaknesses. Traditional reconnaissance required manual research (browsing LinkedIn, reading websites, searching public records). This took time and limited targeting scope.


AI made reconnaissance nearly instantaneous.


Google observed threat actors using Gemini to compile detailed information on specific individuals, map organizational hierarchies, research companies across 11 sectors and 13 countries, and synthesize open-source intelligence to profile high-value targets.


One Chinese threat actor (UNC6418) used Gemini to gather sensitive account credentials and email addresses, then launched a phishing campaign against those exact accounts shortly afterward. The speed from research to attack was dramatically compressed.


What this means: An attacker targeting your construction company could use AI to analyze your LinkedIn profiles, website content, job postings, vendor relationships, and public records in minutes. They'd know your key personnel, technology stack, partners, and likely vulnerabilities before sending a single packet to your network.


4. Malware Now Adapts in Real-Time

Traditional malware was static. Once deployed, it did what it was programmed to do. Security teams could analyze samples, understand behavior, and develop countermeasures.


AI-integrated malware is fundamentally different.

Google's 2025-2026 research identified multiple malware families using AI during active operations:

  • PROMPTFLUX: Uses Gemini's API to regenerate its own code hourly, achieving "just-in-time" self-modification
  • PROMPTSTEAL: Queries an AI model to generate commands on the fly (rather than hard-coding commands that could be flagged)
  • PROMPTLOCK: Ransomware that uses AI to dynamically generate and execute malicious scripts at runtime
  • HONESTCUE: A downloader that calls the Gemini API to generate code enabling download and execution of second-stage malware


Russian threat actor APT28 deployed PROMPTSTEAL against Ukraine, marking the first observation of malware querying an AI model in live operations.


What this means: Imagine ransomware that, when it detects your endpoint protection software, asks an AI: "How should I encrypt files to avoid detection by [specific security product]?" Then it implements those specific evasion techniques. This isn't science fiction. It's the direction threat actors are actively exploring.


5. The Underground Market Has Lowered the Barrier to Entry

Sophisticated cyberattacks once required sophisticated attackers. Building malware, crafting phishing campaigns, and researching vulnerabilities demanded technical skills that limited who could execute effective attacks.


AI tools have commoditized these capabilities.


The underground marketplace for AI-enabled attack tools has matured significantly. Services like FraudGPT and WormGPT advertise on Telegram and underground forums as "uncensored" AI for malware development. Xanthorox bills itself as "the killer of WormGPT" and offers custom AI for offensive operations (though investigation revealed it runs on jailbroken commercial AI products). COINBAIT is a phishing kit built using an AI-powered development platform to create sophisticated credential-harvesting pages mimicking major cryptocurrency exchanges.


These tools offer subscription pricing, free tiers with ads, and customer support. Exactly like legitimate software.


What this means: Your St. Louis business isn't just a target for sophisticated nation-state actors. You're also a target for low-skill criminals who bought an AI attack toolkit for $50 and are launching automated attacks against thousands of businesses simultaneously.


What Businesses Need Instead

A Layered Security Approach

If traditional tools can't stop AI-powered attacks alone, what works? The answer is a multilayered cybersecurity strategy that combines technology, human expertise, and proactive monitoring.


Layer 1: AI-Powered Defense Tools

If attackers are using AI, defenders need AI too. Modern security solutions use machine learning to:

  • Detect anomalous behavior rather than relying solely on signatures
  • Analyze email content for sophisticated phishing attempts
  • Identify unusual patterns in network traffic
  • Spot insider threats or compromised accounts based on behavioral changes


These tools don't replace traditional security. They augment it with capabilities that can match the speed and adaptability of AI-powered attacks.
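The core idea behind behavioral detection can be sketched in a few lines. This is a deliberately simplified illustration (real products use far richer models), with a hypothetical baseline: instead of asking "have I seen this exact threat before?", it asks "does this activity deviate sharply from this account's normal behavior?"

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float,
                 threshold: float = 3.0) -> bool:
    """Flag new_value if it sits more than `threshold` standard
    deviations from the historical mean -- a toy stand-in for the
    behavioral baselining modern security tools perform."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hypothetical baseline: one account's daily outbound email volume.
baseline = [12, 15, 9, 14, 11, 13, 10, 12, 16, 11, 14, 13, 12, 10]

print(is_anomalous(baseline, 14))   # a normal day: not flagged
print(is_anomalous(baseline, 400))  # a compromised account blasting phishing: flagged
```

Because the check is anchored to behavior rather than to a known malicious artifact, it still fires even when the attack itself has never been seen before.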


Layer 2: Human Expertise + Technology

AI alone isn't enough on either side. The most effective security combines automated tools with experienced professionals who can:

  • Interpret alerts and distinguish real threats from false positives
  • Conduct threat hunting to find attackers who evade automated detection
  • Respond to incidents with judgment that AI lacks
  • Stay current on evolving threat actor tactics


For most St. Louis businesses, maintaining this expertise in-house isn't practical. A managed IT services partner with cybersecurity expertise brings these capabilities without the overhead of a full internal security team.


Layer 3: Proactive Monitoring and Response

Waiting for attacks to trigger alerts isn't enough. Proactive security includes:

  • 24/7 monitoring of endpoints, networks, and cloud systems
  • Threat intelligence tracking emerging attack techniques before they hit your industry
  • Vulnerability management to patch weaknesses before attackers exploit them
  • Security awareness training reflecting current attack methods (not outdated examples)


Layer 4: Robust Backup and Recovery

Even the best defenses sometimes fail. When they do, your ability to recover determines whether an attack is a disruption or a disaster.

Disaster recovery planning for the AI era means:

  • Air-gapped backups that ransomware can't reach
  • Regular testing to verify backups actually work
  • Documented recovery procedures for quick restoration
  • Business continuity plans accounting for various attack scenarios
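"Regular testing" can start very simply. The sketch below (hypothetical file names, and a stand-in for a full restore drill) shows the minimum bar: a backup copy must verifiably match its source, checked by checksum rather than taken on faith.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large backups fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> bool:
    """A restore test in miniature: the backup must hash-match the source.
    A silent mismatch means the backup would fail you on recovery day."""
    return checksum(source) == checksum(backup)
```

A full recovery test goes further (restore to clean hardware, boot the systems, confirm applications run), but automated integrity checks like this catch silent corruption long before an emergency does.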


Layer 5: Strategic Security Planning

Cybersecurity isn't a product. It's an ongoing process that needs to evolve with your business and the threat landscape. A virtual CIO (vCIO) can help you:

  • Align security investments with actual business risks
  • Develop long-term security roadmaps
  • Make informed decisions about security technology
  • Ensure your security posture keeps pace with AI-driven threats


The Bottom Line

Google's research makes clear that AI-powered attacks are no longer theoretical. They're happening now. Nation-state actors use AI to accelerate reconnaissance, craft flawless phishing campaigns, and develop malware that evolves in real-time. Cybercriminals sell AI attack tools to anyone willing to pay.


Traditional security tools (signature-based antivirus, static firewalls, perimeter-focused defenses) were designed for a different era. They remain necessary but are no longer sufficient.


The businesses that will navigate this landscape successfully are those adopting a layered security approach: AI-enhanced detection, human expertise, proactive monitoring, robust backup systems, and strategic planning.

The question isn't whether your business will face AI-powered attacks. The question is whether you'll be prepared when they arrive.


Ready to Assess Your Security Posture?

If you're not sure how your current defenses stack up against AI-enabled threats, we can help you find out. Our security assessments identify gaps, prioritize risks, and recommend specific improvements for St. Louis businesses facing modern attack vectors.



Talk to a Security Expert - No pressure, just answers.


Frequently Asked Questions

Are small businesses really targets for AI-powered attacks?
Yes, and increasingly so. AI tools have lowered the barrier to entry for cyberattacks, meaning even unsophisticated criminals can launch effective campaigns against thousands of businesses simultaneously. Your business doesn't need to be specifically targeted. Automated AI attacks hit everyone they can reach, and SMBs often make more attractive targets because successful attacks are less likely to trigger law enforcement response.
How quickly are AI attack methods evolving?
Extremely quickly. Google's Threat Intelligence Group releases quarterly updates, and each report documents significant new developments. In just the past year, we've seen malware that uses AI for self-modification (PROMPTFLUX), the first confirmed AI-integrated malware in live military operations (PROMPTSTEAL against Ukraine), and maturing underground marketplaces selling AI attack tools. This pace shows no signs of slowing.
Can AI-powered security tools actually stop AI-powered attacks?
Yes, but they're part of a solution, not a silver bullet. AI-powered security tools excel at behavioral analysis and detecting anomalies that signature-based tools miss. However, they work best when combined with human expertise for interpretation and response, traditional security controls for defense in depth, and proactive measures like patching, training, and backup.
What's the first step to address AI-enabled threats?
Start with an honest assessment of your current security posture. Many businesses don't know what vulnerabilities they have or whether their existing tools would catch modern attack techniques. A security assessment identifies gaps, prioritizes risks, and recommends improvements. From there, focus on fundamentals: robust backup, modern endpoint protection, multi-factor authentication, and ongoing security awareness training.
How do we stay current when threats change so rapidly?
This is exactly why many businesses partner with managed security providers rather than handling security entirely in-house. Keeping up with evolving threats is a full-time job. A security-focused managed IT provider stays current so you don't have to, translating threat intelligence into practical protections for your specific environment and adjusting your security posture as new threats emerge.
What makes AI-generated phishing different from traditional phishing?
Traditional phishing often had telltale signs: grammatical errors, awkward phrasing, generic greetings. AI eliminates these red flags. AI-generated phishing features perfect grammar, contextually appropriate content, personalized details about the target, and tone matching previous legitimate communications. Standard email filters and traditional employee training may not catch these sophisticated attacks.
Is this threat specific to certain industries?
No. Google's research shows threat actors targeting companies across 11 sectors and 13 countries. While certain industries (healthcare, legal, financial services) face additional regulatory pressure, every business with valuable data, customer information, or money to steal is a potential target. AI-powered attacks are opportunistic and automated, casting wide nets rather than focusing on single industries.

Sources

  1. Google Threat Intelligence Group, "GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use," February 2026. https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use
  2. Google Threat Intelligence Group, "GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools," November 2025. https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools
  3. KELA Research, "AI Tools Promoted by Threat Actors in Underground Forums," November 2025. https://cybersecuritynews.com/ai-tools-promoted-by-threat-actors
  4. Trend Micro, "The State of Criminal AI: Crime as a Service, AI as the Multiplier," 2025-2026. https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/the-state-of-criminal-ai