ASD Warns of AI-Powered Phishing: What SMBs Should Know


The Australian Signals Directorate recently issued an advisory about AI-enhanced phishing attacks targeting Australian organisations. It’s worth paying attention to.

Here’s what’s actually changing and what you should do about it.

What ASD Is Warning About

The advisory highlights several concerning trends:

More convincing text: Attackers are using large language models (think ChatGPT and similar) to generate phishing emails with perfect grammar and natural language. The days of spotting phishing by bad spelling are over.

Personalisation at scale: AI enables attackers to customise messages for individual targets using publicly available information. Your LinkedIn profile, company website, and social media provide the raw material.

Voice cloning: AI-generated voice calls that sound like executives or trusted contacts. “Hi, it’s [CEO name], I need you to process an urgent payment.”

Deepfake video: While still relatively rare, video calls with AI-generated participants are becoming possible. Someone could impersonate a supplier or executive on a video call.

Why This Matters for SMBs

Small businesses are often seen as easier targets:

  • Less security training
  • Fewer technical controls
  • More trusting relationships
  • Less formal processes for verification

AI-powered phishing increases the threat:

  • Attacks look more legitimate
  • Personalisation makes messages more convincing
  • Scale means more attempts against more targets

The barriers to sophisticated attacks are dropping. What required significant effort before now requires modest investment in AI tools.

What’s Actually Different

Let me be honest: AI-enhanced phishing is evolutionary, not revolutionary. The fundamentals haven’t changed.

Same goals: Attackers still want credentials, money, or access. The objectives are unchanged.

Same delivery: Email remains the primary vector. Text messages and calls are secondary.

Same weaknesses: They’re still exploiting human psychology: urgency, authority, fear of missing out.

What’s different: The quality is higher. The personalisation is better. The scale is larger.

This means the bar for employee awareness is higher. “Look for spelling mistakes” isn’t sufficient anymore.

Updated Security Awareness

Training needs to evolve:

Old advice (still valid but insufficient):

  • Check the sender address
  • Look for spelling and grammar errors
  • Hover over links before clicking
  • Be suspicious of urgent requests

Additional guidance needed:

Verify through separate channels: If an email asks you to do something unusual, verify through a different communication method. Call the person (using a number you have on file, not from the email). Walk to their desk. Use internal chat.

Question the unusual: “This seems like something [person] would ask for” isn’t enough when AI can mimic tone perfectly. “Does this request follow our normal process?” is more reliable.

Establish verification procedures: For high-risk actions (payments, data transfers, access changes), require out-of-band verification regardless of how legitimate the request seems.

Be sceptical of voice and video: AI voice cloning is good enough to fool casual listeners. If someone calls with an unusual request, verify. “I’ll call you back at your regular number.”

Technical Controls That Help

Email authentication: DMARC, SPF, and DKIM help prevent email spoofing. They don’t stop all phishing but reduce one category.
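The records above are plain DNS TXT entries. As a rough illustration, here is what SPF and DMARC records can look like and a trivial sanity check on their version tags; the domain and the Microsoft 365 include are placeholders, and DKIM (which is selector-based and enabled in your mail provider's admin console) is omitted for brevity:

```python
# Illustrative email-authentication DNS records. "example.com.au" and the
# Microsoft 365 SPF include are placeholders -- substitute your own domain
# and provider values.
EXAMPLE_RECORDS = {
    # SPF: which servers may send mail for the domain
    "example.com.au": "v=spf1 include:spf.protection.outlook.com -all",
    # DMARC: what receivers should do when SPF/DKIM checks fail,
    # and where to send aggregate reports
    "_dmarc.example.com.au": (
        "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com.au"
    ),
}

def looks_valid(host: str, value: str) -> bool:
    """Very rough sanity check: the record carries the expected version tag."""
    if host.startswith("_dmarc."):
        return value.startswith("v=DMARC1")
    return value.startswith("v=spf1")

for host, value in EXAMPLE_RECORDS.items():
    print(host, "ok" if looks_valid(host, value) else "CHECK")
```

A real deployment would start DMARC at `p=none` to monitor reports before tightening to `quarantine` or `reject`.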

Advanced email filtering: Microsoft Defender for Office 365, Google’s advanced protections, or third-party solutions like Proofpoint can detect AI-generated phishing using their own AI.

Link protection: URL rewriting and click-time analysis catch malicious links even when they look legitimate.
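To make the mechanism concrete, here is a minimal sketch of how URL rewriting works: links in an email body are wrapped in a redirect through a scanning service, so the destination can be re-checked at click time. The `safelinks.example.net` redirector is invented for illustration, not a real endpoint:

```python
from urllib.parse import quote, urlsplit

# Hypothetical click-time scanning redirector (not a real service).
REDIRECTOR = "https://safelinks.example.net/check?url="

def rewrite_link(url: str) -> str:
    """Wrap an outbound http(s) link in the scanning redirector."""
    if urlsplit(url).scheme not in ("http", "https"):
        return url  # leave mailto:, tel:, etc. untouched
    return REDIRECTOR + quote(url, safe="")

print(rewrite_link("https://invoice-portal.example/pay"))
```

Commercial products do this transparently at delivery time; the point is that the link's safety is evaluated when clicked, not only when the email arrived.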

Impersonation protection: Configure protection for executives and high-risk users. Microsoft 365 can warn about emails impersonating specific people.
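The core idea behind impersonation protection can be sketched in a few lines: flag inbound mail whose display name closely matches a protected user but whose address is not the one on file. The name, address, and threshold below are invented for the example:

```python
from difflib import SequenceMatcher

# Hypothetical protected-user list: display name -> legitimate address.
PROTECTED = {"Jane Citizen": "jane.citizen@example.com.au"}

def impersonation_risk(display_name: str, from_address: str,
                       threshold: float = 0.85) -> bool:
    """True if the sender's display name looks like a protected user
    but the sending address does not match the one on file."""
    for name, address in PROTECTED.items():
        similar = SequenceMatcher(None, display_name.lower(),
                                  name.lower()).ratio() >= threshold
        if similar and from_address.lower() != address:
            return True
    return False

# A lookalike: right name, wrong (external) address.
print(impersonation_risk("Jane Citizen", "jane.c1tizen@gmail.com"))
```

Real filtering products combine this with sending-history and domain-age signals, but the display-name mismatch is the pattern worth training staff to notice too.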

MFA everywhere: If credentials are phished, MFA provides a backstop. Phishing-resistant MFA (passkeys, FIDO2 keys) is even better.

Process Changes

Technical controls and training aren’t enough. Processes need updating:

Payment verification: Any payment instruction received electronically should be verified through a known phone number before execution. No exceptions for urgency.

Supplier changes: Changes to supplier bank details should require multi-person approval and direct verification with the supplier.

Access requests: Unusual access requests should be verified with the requester and their manager through established channels.

Data transfers: Requests to send sensitive data externally should follow documented approval processes.

These process controls work regardless of how sophisticated the phishing is. They rely on verification, not detection.
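The payment-verification rule above can be expressed as a simple gate: the payment cannot be released until out-of-band verification is recorded, and urgency never bypasses it. This is an illustrative sketch with invented field names, not a real payments API:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    callback_verified: bool = False  # set only after phoning a known number
    marked_urgent: bool = False      # urgency never bypasses verification

def release_payment(req: PaymentRequest) -> str:
    """Refuse to release any payment that lacks out-of-band verification."""
    if not req.callback_verified:
        return "BLOCKED: verify via a known phone number first"
    return f"RELEASED: {req.amount:.2f} to {req.payee}"

# Even an "urgent" request is blocked until the callback is done.
print(release_payment(PaymentRequest("Acme Supplies", 12500.00, marked_urgent=True)))
```

Encoding the rule in a workflow (or a finance checklist) is what makes it hold when the phishing is convincing.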

Voice and Video Risks

This is newer territory for most businesses.

Voice clone risks:

  • Calls impersonating executives
  • Voicemail messages with urgent requests
  • Call-back numbers that reach attackers

Mitigation:

  • Establish that sensitive requests will never come by phone alone
  • Create code words or verification questions for high-risk calls
  • Always call back using numbers from your contact records

Video risks (emerging):

  • Video calls with AI-generated participants
  • Deepfake video messages

Mitigation:

  • Be sceptical of unusual video requests from people you know well
  • Verify through separate channels if something seems off
  • Remember that live video can be deepfaked

What ASD Recommends

The advisory includes specific recommendations:

For organisations:

  • Implement email authentication (SPF, DKIM, DMARC)
  • Deploy advanced email filtering
  • Enable MFA on all accounts
  • Conduct updated security awareness training
  • Establish verification procedures for sensitive actions
  • Report incidents to ReportCyber

For individuals:

  • Verify unusual requests through separate channels
  • Be sceptical of urgency
  • Don’t trust caller ID or display names
  • Report suspicious communications

The Insurance Angle

Cyber insurers are paying attention to AI-enhanced phishing.

Expect:

  • Questions about email security controls
  • Questions about payment verification processes
  • Requirements for multi-factor authentication
  • Interest in security awareness training practices

Documentation matters: Keep records of training completed, processes in place, and controls implemented. This supports claims if an incident occurs.

Getting Help

For businesses that want to improve their phishing defences, specialist AI and security consultants in Sydney and similar firms can help:


  • Assess current email security configuration
  • Implement advanced filtering and protection
  • Design verification processes for high-risk actions
  • Conduct targeted awareness training

The combination of technical controls and process changes is more effective than either alone.

Simulated Phishing

One practical approach: run simulated phishing campaigns.

Benefits:

  • Identify employees who need additional training
  • Test whether awareness training is working
  • Create teachable moments (being caught is memorable)
  • Measure improvement over time

Considerations:

  • Make it educational, not punitive
  • Follow up clicks with immediate training
  • Track metrics but don’t create fear

Several platforms offer this: KnowBe4, Proofpoint, Cofense, and others. Some IT providers include this in their services.
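The "measure improvement over time" point comes down to tracking two numbers per campaign: how many recipients clicked, and how many reported. A minimal sketch, with made-up figures (real platforms such as KnowBe4 or Proofpoint export similar data):

```python
# Invented campaign results for illustration only.
campaigns = [
    {"month": "Jan", "sent": 200, "clicked": 38, "reported": 12},
    {"month": "Apr", "sent": 200, "clicked": 21, "reported": 41},
    {"month": "Jul", "sent": 200, "clicked": 11, "reported": 73},
]

def click_rate(c):
    return c["clicked"] / c["sent"]

def report_rate(c):
    return c["reported"] / c["sent"]

for c in campaigns:
    print(f'{c["month"]}: click {click_rate(c):.0%}, report {report_rate(c):.0%}')
```

A falling click rate alongside a rising report rate is the trend you want; the report rate is arguably the more useful of the two, because reporting is what triggers incident response.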

The AI Arms Race

Here’s the honest picture: attackers are using AI, and defenders are using AI. It’s an arms race.

Attacker AI:

  • Better phishing text
  • Personalisation at scale
  • Voice and video synthesis
  • Automated reconnaissance

Defender AI:

  • Better phishing detection
  • Anomaly detection for unusual requests
  • Natural language analysis of email content
  • Behaviour analysis for compromised accounts

The tools are getting better on both sides. Neither has a permanent advantage.

What Actually Works

Let me summarise what actually reduces AI-phishing risk:

  1. Email security configured properly (you probably already have it - verify it’s enabled)

  2. MFA everywhere (catches credential theft regardless of how convincing the phishing was)

  3. Verification processes (call back on known numbers before acting on sensitive requests)

  4. Updated training (employees who question unusual requests, even convincing ones)

  5. Incident reporting culture (people who report suspicious messages without embarrassment)

None of this is revolutionary. It’s the same fundamentals, applied more rigorously because the threat is more sophisticated.

Practical Steps This Week

Day 1:

  • Verify email security is properly configured (DMARC, SPF, DKIM)
  • Check that advanced phishing protection is enabled

Day 2:

  • Review payment processes - are verification steps documented and followed?
  • Ensure supplier bank detail changes require verification

Day 3:

  • Send a reminder to staff about verifying unusual requests
  • Share the ASD advisory with management

Day 4:

  • Review MFA coverage - are all accounts protected?
  • Consider phishing-resistant MFA for high-risk users

Day 5:

  • Plan updated security awareness training
  • Consider simulated phishing if you’re not already doing it

Reporting

If you receive suspected AI-enhanced phishing:

  • Report to ACSC via ReportCyber (cyber.gov.au/report)
  • Report to your IT team or provider
  • Report through your email platform’s reporting mechanism

Reporting helps build intelligence that protects other Australian businesses.

Final Thought

AI is making phishing more convincing. But it’s not making fundamental defences obsolete.

Strong email security. Universal MFA. Verification processes. Security-aware employees.

Working with specialist AI and security consultants in Melbourne or elsewhere can help implement these defences effectively. But the core message is simple: the basics still work, you just need to apply them more rigorously.

The attackers have better tools. Make sure your defences are keeping pace.