New ACSC Guidelines on AI Security for Business


The ACSC has published new guidelines on AI security, recognising that businesses of all sizes are now adopting AI tools.

This isn’t just about enterprise machine learning systems. It covers the ChatGPT integrations, AI assistants, and automation tools that SMBs are actually using.

Here’s what matters.

What the Guidelines Cover

The ACSC guidance addresses several key areas:

AI system selection: Choosing AI tools and vendors with appropriate security considerations.

Data protection: Protecting sensitive data when using AI systems, including data sent to external AI services.

Prompt injection and manipulation: Understanding how AI systems can be manipulated and protecting against it.

Output verification: Not blindly trusting AI-generated content, especially for security-critical decisions.

Access control: Managing who can use AI systems and what they can do with them.

Monitoring and logging: Keeping records of AI system usage for security and compliance purposes.

Why This Matters for SMBs

AI adoption has exploded. Most businesses now use at least some AI tools:

  • ChatGPT or similar for content and analysis
  • Microsoft Copilot in Office applications
  • AI features in existing business software
  • AI-powered security tools

This creates new security considerations:

  • Sensitive data might be shared with AI services
  • AI outputs might be used without adequate verification
  • New attack vectors emerge (prompt injection)
  • Compliance obligations may be affected

The ACSC guidance helps businesses think through these issues.

Data Protection Considerations

The core concern: When you use external AI services, you’re potentially sharing data with third parties.

What the guidelines recommend:

Classify your data: Know what’s sensitive. Don’t send confidential business data, customer information, or proprietary content to AI services without understanding the implications.

Review terms of service: Understand how AI providers use your data. Some use inputs for training. Some don’t. Know the difference.

Consider Australian requirements: Privacy Act obligations apply regardless of whether you’re using AI. If you’re processing personal information through AI services, understand the implications.

Practical steps:

  1. Establish clear policies on what data can be used with AI
  2. Review AI service terms before adoption
  3. Consider enterprise AI options that offer better data protection
  4. Train staff on appropriate AI data handling

Prompt Injection Risks

This is a newer attack category specific to AI systems.

What it is: Prompt injection involves crafting inputs that manipulate AI systems to behave unexpectedly. An attacker might embed hidden instructions in documents or websites that AI systems then process.

Example scenario: An employee uses an AI assistant to summarise emails. A malicious email contains hidden instructions that cause the AI to reveal confidential information or take unauthorised actions.

The guidelines recommend:

  • Treat AI outputs with appropriate scepticism
  • Implement access controls on AI capabilities
  • Monitor for unusual AI behaviour
  • Use AI in human-in-the-loop configurations for sensitive operations

For most SMBs: The immediate risk is limited, but awareness matters. Don’t give AI systems unconstrained access to sensitive data or automated decision-making authority.

Output Verification

The concern: AI systems can be wrong. They hallucinate. They make mistakes. They can be manipulated.

The guidelines recommend:

  • Don’t use AI outputs for security-critical decisions without human verification
  • Fact-check AI-generated content before publishing or acting on it
  • Maintain audit trails of AI-assisted decisions
  • Have processes to identify and correct AI errors

Practical application: If you’re using AI to help with security monitoring, compliance, or customer-facing content, verify important outputs. AI is an assistant, not an authority.

Access Control for AI Tools

Questions to consider:

  • Who in your organisation can use AI tools?
  • What data can they input?
  • What actions can AI take on their behalf?

The guidelines recommend:

  • Inventory AI tools in use
  • Apply role-based access controls
  • Integrate AI access with identity management
  • Monitor usage patterns

For SMBs: Even if you don’t have sophisticated access control systems, establish policies. Know who’s using what AI tools. Set expectations for appropriate use.

AI in Security Tools

Many security tools now use AI. The guidelines address both using AI for security and securing AI systems.

AI security tools:

  • EDR with AI-powered detection
  • AI-enhanced email filtering
  • Behavioural analytics
  • Automated threat response

Considerations:

  • AI security tools improve detection but aren’t infallible
  • Over-reliance on AI can create blind spots
  • AI decisions should be reviewable and reversible
  • Logging of AI security decisions supports incident response

For SMBs using AI security tools: Understand what your tools do with AI. Maintain human oversight. Don’t assume AI catches everything.

Vendor Assessment

When adopting AI tools, the guidelines recommend assessing vendors:

Key questions:

  • How is your data used and protected?
  • Where is processing performed (data residency)?
  • What security certifications do they hold?
  • How are AI models secured against manipulation?
  • What logging and monitoring is available?
  • What’s the incident response process?

For enterprise AI tools: Microsoft Copilot, Google Duet AI, and similar enterprise options typically offer better data protection than consumer services. Understand the differences.

Building an AI Policy

The ACSC recommends organisations have clear AI usage policies.

Policy elements:

  • Approved AI tools and services
  • Data classification and handling for AI
  • Prohibited uses (sensitive data, automated decisions without oversight)
  • Access control and authentication requirements
  • Monitoring and logging expectations
  • Incident reporting for AI-related issues

Start simple: Even a one-page policy establishing basic expectations is better than no policy.

Training Requirements

Staff need to understand AI security:

Key topics:

  • What data shouldn’t go into AI systems
  • How to verify AI outputs
  • Recognising when AI might be manipulated
  • Appropriate vs inappropriate AI use cases
  • Reporting concerns about AI use

This can be part of broader security awareness training.

The Insurance Angle

Cyber insurers are beginning to consider AI:

Potential questions:

  • What AI tools does your organisation use?
  • What data is processed by AI systems?
  • Do you have AI usage policies?
  • How are AI outputs verified?

Documentation to maintain:

  • AI tool inventory
  • Usage policies
  • Data handling procedures
  • Training records

Being able to demonstrate responsible AI governance supports insurance discussions.

Working with AI Specialists

For businesses wanting to adopt AI securely, working with firms like AI consultants Melbourne can help.

They can:

  • Assess AI usage risks and opportunities
  • Recommend secure AI tools and configurations
  • Develop AI governance policies
  • Implement security controls for AI adoption
  • Provide training on secure AI use

The combination of AI expertise and security expertise is valuable as businesses navigate this space.

Practical Steps for SMBs

This month:

  1. Inventory AI tools currently in use
  2. Review terms of service for primary AI tools
  3. Establish basic data handling expectations

Next quarter: 4. Develop formal AI usage policy 5. Include AI topics in security training 6. Review vendor security for key AI services

Ongoing: 7. Monitor AI usage patterns 8. Update policies as AI landscape evolves 9. Stay current with ACSC guidance updates

The Bigger Picture

AI is becoming embedded in how businesses operate. This is generally positive - AI improves productivity and capabilities.

But it creates new security considerations:

  • Data shared with AI services
  • Decisions influenced by AI outputs
  • New attack vectors targeting AI systems

The ACSC guidance helps businesses think through these issues systematically.

Don’t Overcomplicate

For most SMBs, AI security isn’t an emergency requiring major investment.

The practical approach:

  • Know what AI tools you’re using
  • Be thoughtful about what data goes into them
  • Verify important outputs
  • Have basic policies in place

This is manageable. It’s not fundamentally different from other security considerations - it’s about understanding risks and implementing appropriate controls.

What’s Next

AI security guidance will evolve. The ACSC is likely to update recommendations as the technology and threat landscape change.

Stay informed:

  • Subscribe to ACSC alerts
  • Follow updates from major AI vendors
  • Review your practices periodically

The businesses that approach AI thoughtfully - embracing the benefits while managing the risks - will be best positioned as AI becomes more central to operations.

Getting Started

If you’re not sure where to begin, Team400 and similar AI consultants Brisbane can help assess your current AI usage and develop appropriate governance.

But the basics are accessible to any business:

  1. Know what AI you’re using
  2. Establish data handling expectations
  3. Verify important outputs
  4. Train your people

Start there. Build from that foundation.