
Is Your Client Data Safe with AI? A Guide for Australian Professional Services

83% of professional services firms cite data security as their top AI concern. This guide breaks down exactly which AI tools are safe for client data, the three rules every Australian firm should follow, and how to build a one-page AI policy for your practice.

13Labs Team · 5 April 2026 · 12 min read
data security, professional services, AI policy, compliance, privacy, accounting, legal, financial planning


The Real Concern Holding Firms Back

Data security is the single biggest barrier to AI adoption in Australian professional services. According to Workday's 2026 AI Impact Report, 83% of professional services firms cite data security as their top concern when considering AI tools. That number is not surprising. Accountants, lawyers, and financial planners handle some of the most sensitive information in the country.

The good news: the answer to whether AI is safe for client data is straightforward. Yes, it is safe. But only if you use the right tools with the right configuration. Free-tier AI products are not safe for client data. Paid business and enterprise tiers from major providers are.

This guide gives you the specific details. Which tools are safe, which are not, and exactly how to set your firm up so your team can use AI confidently without putting client information at risk.

> "The businesses I work with are not worried about whether AI is useful. They are worried about whether it is safe for their client data. Once you show them the answer is yes, with the right setup, adoption follows immediately." - Callum Holt, Founder, 13Labs

What Actually Happens to Your Data in AI Tools

Every AI tool has a data policy that governs whether your inputs are used to train future models. This is the critical distinction. If your inputs train the model, other users could theoretically extract fragments of your data. Here is the breakdown for major providers as of early 2026.

**ChatGPT (OpenAI).** The free and Plus tiers train on your inputs by default. This means anything you paste into ChatGPT free could be used to improve future models. Not safe for client data. ChatGPT Team ($30/user/month) and Enterprise tiers explicitly exclude your data from training. Both are SOC 2 compliant. Safe for client data.

**Claude (Anthropic).** The free tier on claude.ai may use your conversations for training. Claude Pro is clearer about data handling, but the API is where Anthropic draws the firmest line: API inputs are not used for training (a minimal API sketch appears at the end of this section). Claude Team and Enterprise explicitly guarantee no training on your data. SOC 2 Type II certified. Safe for client data.

**Microsoft Copilot for M365.** If your firm already uses Microsoft 365, Copilot keeps data within your existing tenant. Your prompts and outputs stay inside the same security boundary as your email and SharePoint files. Microsoft does not use your data for model training. Safe if you are already on M365.

**Google Gemini for Workspace.** The enterprise Workspace tier is safe. Google does not train on enterprise Workspace data. The free consumer tier of Gemini does use your data for training. Not safe for client data.

The pattern is clear. Free tiers are almost never safe. Paid business tiers from major providers are.
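
For firms with a technical staffer, the API route can also be scripted directly. The snippet below is a minimal sketch, assuming Anthropic's official `anthropic` Python package and an `ANTHROPIC_API_KEY` environment variable; the model ID is illustrative, so check Anthropic's current documentation before relying on it.

```python
# Minimal sketch: calling Claude via the API, the tier where Anthropic
# states inputs are not used for training. Assumes `pip install anthropic`
# and an ANTHROPIC_API_KEY environment variable are in place.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID; check current docs
    max_tokens=500,
    messages=[
        {"role": "user", "content": "Summarise this anonymised engagement letter: ..."}
    ],
)
print(message.content[0].text)
```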

The Three Rules for AI and Client Data

You do not need a 50-page policy to keep client data safe with AI. Three rules cover the vast majority of situations.

**Rule 1: Never put client data into free-tier AI tools.** This is the one non-negotiable. Free versions of ChatGPT, Gemini, and Claude may train on your inputs. Tell your team plainly: if you are not paying for it, do not paste client information into it. A single team member pasting a tax return into free ChatGPT creates a compliance risk for the entire firm.

**Rule 2: Paid business and enterprise tiers from major providers are safe.** ChatGPT Team, Claude Team, Microsoft Copilot for M365, and Google Workspace Enterprise all contractually guarantee your data is not used for training. They carry SOC 2 compliance. These are the same providers trusted by banks, hospitals, and government agencies. Your firm's data gets the same protections.

**Rule 3: For the truly sensitive, run models locally.** Some matters are so sensitive that even a paid cloud tool feels uncomfortable. Merger negotiations. High-profile litigation. In these cases, you can run AI models entirely on your own hardware using tools like Ollama with Meta's Llama models (see the sketch after these rules). Your data never leaves your machine. The trade-off is that local models are less capable than cloud options, but for summarising documents or drafting correspondence, they work well.

These three rules, printed on a card and pinned to every desk, would solve the data security problem for most professional services firms overnight. According to Thomson Reuters' 2025 Legal Technology Survey, 53% of law firms have no AI policy at all. That is the real risk.
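
As a sketch of what Rule 3 looks like in practice, here is a minimal local-model call using Ollama's official Python client. It assumes the Ollama app is running, `pip install ollama` has been done, and a Llama model has been pulled (for example with `ollama pull llama3`); nothing in this snippet leaves the machine.

```python
# Minimal sketch of Rule 3: a fully local model call via Ollama.
# Assumes the Ollama app is running locally and `ollama pull llama3`
# has already downloaded the model. No data leaves your machine.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[
        {
            "role": "user",
            "content": "Summarise the key obligations in this clause: ...",
        }
    ],
)
# Depending on client version, attribute access (response.message.content)
# also works.
print(response["message"]["content"])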

Setting Up Your Firm for Safe AI Use

Getting your firm onto a safe AI platform takes less than an hour. Here is the practical setup.

**Step 1: Choose your platform.** If your firm is already on Microsoft 365, start with Copilot. It integrates with your existing tools and your IT governance stays the same. If not, Claude Team and ChatGPT Team both cost $30 per user per month and are ready to use immediately. Both offer SOC 2 compliance and explicit no-training guarantees.

**Step 2: Set up the team workspace.** Create your firm's workspace on the chosen platform. Add team members. This gives you a central admin panel where you can manage users, review usage, and enforce policies. It takes about 15 minutes.

**Step 3: Create a one-page AI policy.** We cover this in detail in the policy section below. The key point: keep it to one page. A 30-page document nobody reads is worse than a one-page document everyone follows.

**Step 4: Anonymise by default.** Train your team never to copy-paste client names, ABNs, or identifying details into prompts. Use placeholders instead: "[Client A]" instead of the actual name, "[Company B]" instead of the real entity. The AI does not need real names to draft a letter, summarise a contract, or analyse a tax position. A minimal anonymisation sketch follows these steps.

Only 13% of workers have received any AI training, despite 55% wanting it (Workday 2026). That gap helps explain why 92% of users abandon AI tools within 90 days, according to BridgeView IT's 2026 Workplace AI Adoption Report. The fix is not better tools. It is better onboarding. Spend one hour training your team on these four steps and your adoption rate will be dramatically higher.
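
To make Step 4 concrete, here is a minimal anonymisation sketch in Python. The names, the placeholder map, and the ABN pattern are illustrative assumptions; a real firm would maintain a per-matter mapping and review the output before any prompt is sent.

```python
import re

# Hypothetical per-matter mapping of real identifiers to placeholders.
REPLACEMENTS = {
    "Jane Citizen": "[Client A]",
    "Acme Holdings Pty Ltd": "[Company B]",
}

# An ABN is 11 digits, usually written in 2-3-3-3 groups.
ABN_PATTERN = re.compile(r"\b\d{2}\s?\d{3}\s?\d{3}\s?\d{3}\b")

def anonymise(text: str) -> str:
    """Swap known names for placeholders and mask anything ABN-shaped."""
    for real_name, placeholder in REPLACEMENTS.items():
        text = text.replace(real_name, placeholder)
    return ABN_PATTERN.sub("[ABN REDACTED]", text)

print(anonymise(
    "Jane Citizen of Acme Holdings Pty Ltd (ABN 12 345 678 901) asks..."
))
# -> "[Client A] of [Company B] (ABN [ABN REDACTED]) asks..."
```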

Industry-Specific Considerations for Australian Firms

Different professions carry different obligations. Here is what matters for each.

**Accounting Firms.** The Australian Privacy Act 1988 governs how you handle client financial data. Tax agents have additional obligations under the Tax Agent Services Act 2009. The Tax Practitioners Board expects you to maintain confidentiality of client information regardless of the tools you use. Using a paid AI tier with a no-training guarantee satisfies these requirements in the same way that using cloud accounting software like Xero does. The key is ensuring you can demonstrate your data handling practices if audited. Keep a record of which AI platform you use and its data policy.

**Legal Practices.** The Law Institute of Victoria published its Ethical and Responsible Use of AI Guidance Note in August 2025. It does not ban AI use. Instead, it requires lawyers to understand how their AI tools handle data and to maintain client legal privilege. Pasting privileged communications into a free AI tool that trains on inputs would arguably breach privilege. A paid enterprise tier with contractual data protections maintains the same confidentiality boundary as your existing cloud email provider.

**Financial Planning.** ASIC's regulatory framework requires financial planners to maintain detailed records for at least seven years. If you use AI to help draft Statements of Advice or analyse client portfolios, you need an audit trail. Most paid AI platforms provide conversation history and export capabilities. The critical requirement: every AI-assisted output must have a human review before it reaches the client. ASIC expects a human-in-the-loop for all client-facing advice.

The OAIC (Office of the Australian Information Commissioner) has published guidance emphasising that organisations remain responsible for personal information regardless of whether AI tools are involved in processing it. The tool does not change your obligation. You are still accountable.

Building Your Firm's One-Page AI Policy

A good AI policy is short enough to read in two minutes and clear enough to follow without interpretation. Here is a framework you can adapt.

**Section 1: Approved Tools.** List the specific AI tools your firm has approved. For example: "Staff may use Claude Team and Microsoft Copilot for M365. No other AI tools are approved for work purposes." Be specific. Naming the tools removes ambiguity.

**Section 2: Data Rules.** Three lines are enough:

- Do not enter client names, ABNs, or identifying details into AI tools.
- Use placeholders such as [Client A] or [Company B].
- Do not upload original client documents. Summarise or anonymise first.

**Section 3: Review Requirements.** All AI-generated outputs must be reviewed by a qualified professional before use. AI drafts are starting points, not finished products. Staff must verify all factual claims, legal references, and calculations. Workday's 2026 research found that 37% of time saved by AI is lost to rework when outputs are not reviewed properly.

**Section 4: What AI Is Good For.** List approved use cases: drafting correspondence, summarising lengthy documents, research assistance, brainstorming. This gives your team permission to use AI productively rather than guessing what is allowed.

**Section 5: Reporting.** If anyone accidentally enters sensitive data into an unapproved tool, they should report it to [named person] immediately. No blame. Fast response matters more than punishment.

Print this policy. Pin it next to every screen. Review it quarterly. A living one-page policy beats a forgotten 30-page manual every time.

Frequently Asked Questions

**Can I use ChatGPT for client work if I have the paid version?**
Yes, but only ChatGPT Team or Enterprise. The individual Plus plan ($20/month) still allows OpenAI to train on your inputs by default. ChatGPT Team ($30/user/month) and Enterprise explicitly exclude your data from training and are SOC 2 compliant. Check your subscription tier before using it for client work.

**Does the Australian Privacy Act specifically mention AI tools?**
The Privacy Act does not name specific technologies. It requires organisations to protect personal information regardless of the tools used to process it. The OAIC has published supplementary guidance confirming that AI tools do not change your privacy obligations. You remain responsible for any personal information your firm handles, whether processed by a human or an AI.

**What if a team member accidentally pastes client data into a free AI tool?**
Act quickly. Document the incident, including what data was shared, which tool was used, and when it occurred. Most AI providers allow you to delete conversation history. Contact the provider's support team to request data deletion. Assess whether the breach is notifiable under the Privacy Act; the threshold is whether serious harm is likely. Report internally and use it as a training opportunity.

**Are locally-run AI models like Ollama actually practical for daily use?**
For specific tasks, yes. Local models running on a modern laptop can summarise documents, draft emails, and answer research questions competently. They are slower than cloud models and less capable on complex reasoning tasks. For most day-to-day professional services work, local models through Ollama with Llama 3 are a viable option when cloud tools feel too exposed.

**Do I need to tell clients I am using AI?**
No Australian law currently requires blanket disclosure of AI use to clients. However, professional conduct rules in law and financial planning emphasise transparency. Best practice is to include a line in your engagement letter stating that the firm may use AI tools with appropriate data protections. This builds trust and avoids surprises. The Law Institute of Victoria's 2025 guidance recommends informing clients when AI has materially contributed to their matter.

Get a Data Security Briefing for Your Firm

Our AI workshops for professional services firms include a hands-on data security briefing. Your team will leave knowing exactly which tools are safe and how to anonymise prompts, with a one-page AI policy tailored to your practice.

View Professional Services Workshop