TL;DR: AI tools are now part of how most knowledge work gets done. They are also a new attack surface and a new way to leak data. This guide is the practical, plain-English version of what every employee should know: what to paste and what not to paste, how to spot prompt injection, when to trust AI output, what shadow AI is and why your IT team cares, and what to do the moment you think you made a mistake. None of this is meant to scare you off from using AI - the goal is the opposite. Used correctly, AI makes you faster and safer. Used carelessly, it is the fastest way to leak something you did not mean to.
Why AI Use at Work Is a Security Issue Now
Three years ago, the AI conversation at work was theoretical. Today the average knowledge worker uses an AI tool multiple times per day - to draft emails, summarize meetings, write code, analyze spreadsheets, translate copy, and brainstorm ideas. Most of that use is helpful. Some of it has quietly become one of the larger sources of unintentional data exposure inside companies.
The reason is structural. AI tools are designed to be helpful with whatever you give them, which is exactly what makes them useful and exactly what makes them risky. When you paste a draft contract into a free chatbot to "make this easier to read," you have effectively sent that contract to a third-party service. When you ask a coding assistant to "fix the bug in this function," you have sent the code, the context around it, and often the secrets that were sitting in the file. When you tell a meeting assistant to "summarize this transcript," you have shipped that transcript outside your organization's perimeter.
None of this is hypothetical. There have been multiple confirmed cases of source code, internal strategy documents, customer data, and entire databases ending up in AI provider logs after employees pasted them in. In some cases the data became visible to other users via a bug or a search index. In other cases it never leaked publicly but the organization still had to disclose the exposure to its enterprise customers because the data left contractual boundaries.
This is the part most "AI policies" get wrong: the threat is not that AI is dangerous. The threat is that using AI at work without thinking about where the data goes is dangerous. The good news is that the rules for safe use are short and easy to remember.
The Three Categories of AI Risk Every Employee Should Know
Almost every AI risk you will encounter at work falls into one of three buckets. Understanding which bucket you are in tells you what to worry about.
1. Data Leakage
You put information into an AI tool that should not have left your organization. This includes customer data, source code containing secrets, internal financial figures, employee personal information, contract drafts, security findings, incident details, M&A activity, board materials, and anything covered by an NDA. Once that information is in the AI provider's logs, it is out of your control.
2. Output You Cannot Trust
The AI tool gives you something that looks correct and is not. Made-up case law in a legal brief. Code that calls a library that does not exist. A summary of a document that adds details the document never contained. A "fact" with a confident citation that, when you check, turns out to be invented. AI output is plausible by default and accurate only sometimes. The risk is treating it as authoritative.
3. Manipulation Through the AI
Someone uses an AI tool to attack you instead of attacking you directly. This includes prompt injection (hidden instructions in content the AI reads), AI-generated phishing that is grammatically perfect and personalized, voice-cloned vishing calls that sound like your CEO, and deepfake video calls. The defining feature of this category is that the AI is the weapon, not the victim.
Each category has its own defenses. The next sections cover them in order.
Rule One: Treat the AI Input Box Like a Forwarded Email
If you would not be comfortable forwarding what you are about to paste to a stranger at a conference, do not paste it into a public AI tool. That single mental check covers ninety percent of safe-use questions.
The reason is that almost every consumer AI tool retains some version of what you send. The exact retention rules vary by provider, by tier, by region, and by whether you opted out of training. The safe assumption is that anything you paste persists somewhere, can be read by the provider's staff under defined conditions, and may be used to train future models unless you actively prevented it. Your employer's enterprise contract with an AI vendor is what changes those defaults - which is why using the version of an AI tool that your employer approved is fundamentally different from using your personal account for work tasks.
The forwarded-email test is intuitive because it captures the right concern: not "is this AI tool secure" (almost all of them are reasonably secure), but "should this content be leaving our walls."
What You Should Never Paste Into a Public AI Tool
This is the short version of an acceptable-use list. If your organization has its own list, follow that one - it will be more specific. In the absence of one, treat the categories below as off-limits to consumer-grade AI accounts:
- Customer data of any kind: names, emails, phone numbers, account numbers, transaction history, support tickets, screenshots that include customer information.
- Source code that contains credentials, API keys, tokens, or connection strings: even if you plan to redact them later, the version you paste is the version that gets logged.
- Internal financial information: revenue figures, board decks, forecasts, payroll, anything finance has marked confidential.
- Personal data of employees: resumes, performance reviews, HR records, salary information, medical or accommodation requests.
- Regulated data: protected health information (HIPAA), payment card data (PCI), regulated identifiers (SSNs, government IDs), or anything covered by GDPR, CCPA, or sector-specific rules.
- Security and incident information: vulnerability reports, incident timelines, internal pentest findings, infrastructure diagrams, network maps, IOCs from an active investigation.
- Contract drafts and legal correspondence: the content is sensitive and the existence of the document is sometimes sensitive.
- Customer communications under NDA or confidentiality: if a customer sent it to you in confidence, an AI tool is not in that confidence circle.
- Mergers, acquisitions, or strategic plans: material non-public information has its own legal regime, and AI logs are not where you want it.
If you need to use AI on content that contains items from this list, the answer is almost always: use the enterprise version of the AI tool your employer has contracted, or remove the sensitive details before pasting and operate on a sanitized version.
Approved Tools vs. Shadow AI
Shadow AI is any AI tool an employee uses for work that has not been approved or contracted by the organization. The phrase exists because security teams need a name for the largest source of AI data exposure they see: not malicious use, just well-intentioned employees using whatever tool was easiest at the moment.
The difference between an approved tool and a shadow tool is rarely about the tool itself. The same model under the hood often powers both the consumer chatbot and the enterprise version. What differs is the contract behind it.
| Dimension | Personal Account / Shadow AI | Employer-Approved Enterprise AI |
|---|---|---|
| Data Retention | Provider default - often retained, may be used for training | Contractual - usually short retention, no training on your data |
| Confidentiality | No NDA in place | DPA or enterprise agreement governs handling |
| Compliance Posture | Unknown - varies by tier | Aligned to SOC 2, ISO 27001, HIPAA where required |
| Audit and Logging | Visible only to the individual | Centralized admin logs and DLP integration |
| Breach Notification | None contractually owed to your employer | Defined notification timeline in the contract |
| Account Recovery | Tied to a personal email - lost when employee leaves | Tied to corporate identity - recoverable and revocable |
If you do not know what tool your employer has approved, ask. The answer is almost always faster than you expect. Most security teams would much rather spend ten minutes telling you "yes use this one and here is the link" than spend ten weeks responding to an exposure incident from a tool they did not know you were using.
Verifying AI Output Before You Trust It
AI tools are confidently wrong on a regular basis. The output reads well, sounds authoritative, and is sometimes complete fiction. This is a property of how the underlying models work, not a bug that will be patched out next quarter. The defense is a habit, not a tool: never act on AI output that has real consequences without independently verifying the parts that matter.
What to check, in order of how often it bites people:
- Names, dates, numbers, and citations. If the output names a person, a case, a study, a CVE, a statute, a price, or a date, verify it from the source. Made-up citations are the single most common AI failure in professional work.
- Quotes attributed to real people. Do not publish or forward an attributed quote without finding the actual source.
- Library and package names. If AI tells you to install or import a package, verify it exists on the official registry. Attackers register lookalike packages specifically to catch developers who trust AI suggestions blindly.
- API endpoints and configuration values. AI tools regularly invent URLs, parameter names, and config keys that look right and are not. Cross-reference the official documentation.
- Legal, financial, medical, or compliance claims. Treat these as a starting point for a human expert, not as an answer.
None of this means AI output is bad. It means AI output is a draft. A good editor catches what the writer missed. You are the editor.
AI-Generated Code: Review Before You Ship
If you write code, AI is probably the best productivity tool you have ever used. It is also a way to ship vulnerabilities faster than you ever have. Both things are true.
The specific risks in AI-generated code:
- Hallucinated dependencies. The AI imports a package that does not exist. An attacker registers that package name on the public registry. The next developer who runs the code installs the attacker's package. This attack is documented, named (slopsquatting), and seen in the wild.
- Hardcoded secrets. The AI generates "example" credentials that look real. These get committed to the repo because the developer assumed they were placeholders.
- Insecure patterns the model has memorized. Models trained on vast amounts of public code have absorbed every bad pattern that public code contains - SQL string concatenation, weak crypto, missing input validation, overly permissive CORS, hardcoded debug modes.
- License contamination. Models can reproduce verbatim code from training data, including code under licenses incompatible with your project.
- Subtly wrong logic. Code that compiles and looks reasonable but does the wrong thing in an edge case the AI did not consider.
The rule is simple. AI-generated code is reviewed code. The same review standards that apply to a junior developer's pull request apply to anything an AI assistant wrote. Run it through your linters, your security scanners, your dependency checks, and your code review process. If your team has not yet written a policy for AI-assisted code, ask. Most teams that have done this seriously now require AI-generated diffs to be flagged as such in the pull request, both for review depth and for license tracking.
Recognizing Prompt Injection
Prompt injection is the AI-era version of a script injection attack. Instead of injecting JavaScript into a web form, an attacker injects natural-language instructions into something an AI tool will read - a webpage, a PDF, an email, a calendar invite, a support ticket, a transcript, a document the AI has been given access to. The AI reads those instructions and follows them as if they had come from you.
The attacks that matter to most employees:
- Hidden text in documents and pages. White text on white backgrounds, zero-width characters, comments, metadata. The AI sees it. You usually do not.
- Malicious instructions in email or shared files. An attacker emails you a "contract" or "report." When you ask your AI assistant to summarize it, the embedded instructions tell the assistant to exfiltrate data, send messages, or change configurations on your behalf.
- Compromised third-party content. A blog post, a public spreadsheet, a webpage your AI tool fetches as part of answering your question - any of which can carry injected instructions.
The defenses are mostly architectural and live with the AI tool's vendor and your security team. The employee-level defense is to keep AI assistants on a short leash. Do not give an AI tool the standing ability to send email, move money, modify production systems, or access secrets without a human approval step. If your AI assistant is offering to take an action that has consequences, that is the moment to stop and confirm - not to click through.
AI-Generated Phishing and Social Engineering
The other side of the AI-as-weapon coin is what attackers do with AI to come at you. The traditional phishing red flags - bad grammar, generic greetings, obviously fake sender domains - are all still useful, but they no longer cover the threat. Modern AI-generated phishing is grammatically perfect, contextually personalized using public information about you and your organization, and sometimes follows up over multiple messages to build trust before delivering the payload.
Voice cloning has become cheap and good. A vishing call that sounds exactly like your CEO asking for an urgent wire transfer is now within the budget of a low-end criminal group, not a nation-state. Deepfake video on a Zoom or Teams call is following the same trajectory.
What this means in practice:
- Verify by a different channel. If you get a request by email, voice, or video that is unusual, urgent, and high-consequence, verify it by a different channel - text, internal chat, walk over to the person's desk. Attackers control whichever channel they reached you on. They rarely control all of them.
- Be skeptical of urgency. Real high-consequence requests almost always allow for a five-minute verification step. Pressure to skip that step is itself a signal.
- Use a code word for high-trust requests. Some teams use a quarterly-rotated code word to confirm wire transfers, password resets, and similar requests over voice or video. It is not paranoid - it is one of the most effective and simplest defenses against deepfake-driven fraud.
- Report suspected phishing. Use whatever the report-phishing button is in your email client. Reporting it helps the security team protect everyone else, including the colleague who would have clicked it tomorrow.
Department-Specific Considerations
Engineering
You face the highest concentration of AI-related risk because you are most likely to use AI tools every day, often in ways that touch source code, secrets, and production systems. The non-negotiables: review AI-generated code, verify dependencies exist before installing them, never let AI tools have standing access to production credentials without approval gates, and surface to security when an AI tool wants permissions that look unusual. If your team uses an AI coding assistant, push for the enterprise version with logging and DLP - the personal version is a data exfiltration channel waiting to happen.
Sales and Customer-Facing Teams
Be careful with customer data in any AI tool. Pasting a customer email thread into a chatbot to "draft a reply" sends that thread out of your CRM and into a third-party log. Use an enterprise AI integrated with your CRM if your organization has one, or write the response yourself. Watch for AI-generated phishing impersonating prospects, and treat any urgent contract change requested over a single channel as something to verify another way.
Finance and Accounting
You are the prime target for AI-amplified business email compromise. Voice cloning makes "the CEO is on the phone authorizing this wire" a real attack, not a hypothetical one. Hard rules: never act on a payment change instruction from email or voice alone, always verify on a known-good channel, and treat any AI-generated draft of a financial document as a draft that must be reconciled to source data.
HR and Recruiting
Resumes and HR records are personal data with regulatory implications. Pasting them into a public AI tool is almost always the wrong move. Be aware of AI-generated resumes and cover letters that look great and represent very little - the gap between AI-generated application and actual capability is widening. AI-assisted screening tools have their own bias and accuracy concerns and require careful policy attention.
Legal, Compliance, and Risk
Public AI tools have repeatedly produced legal briefs with invented case citations, contracts referencing non-existent statutes, and compliance summaries that were superficially correct and substantively wrong. The combination of high stakes and high apparent fluency makes this one of the highest-risk AI use areas. Use only enterprise tools with appropriate confidentiality and verification processes, and treat AI output as research that needs to be checked against authoritative sources before it leaves your desk.
Executive Team
You are the highest-value target for AI-amplified social engineering, voice cloning, and deepfake video. You are also the most likely to be impersonated to your own staff. Two practical steps: agree with your direct reports on a verification protocol for unusual requests, and assume that public information about you (interviews, conference talks, social posts, podcasts) is being used to train spear-phishing models targeting you specifically. None of this requires paranoia - it requires consistency.
What to Do if You Think You Made a Mistake
Sooner or later, someone in your organization is going to paste something they should not have into an AI tool, click something they should not have clicked, or follow a request that turns out to be fake. The single most important security behavior in this entire document is what happens next.
The right move is to tell the security or IT team immediately. Not in an hour. Not after you have tried to delete the chat. Not after checking with a colleague to see how bad it is. Immediately.
The reasons are practical:
- Speed limits damage. If a credential leaked, it can be rotated. If a customer record leaked, the disclosure clock starts the moment the security team knows. The faster they know, the better the response.
- Deletion is rarely complete. Most AI providers retain logs for some window even after a user deletes a conversation. Trying to clean up after yourself usually does not actually clean anything up. It just delays the security team's response.
- Reporting is not punished in any sane organization. Security teams need the data, not a culture of hidden mistakes. The organizations that handle incidents best are the ones where reporting an own-goal is a non-event. Most modern security policies explicitly say that honest, prompt reports of mistakes will not result in disciplinary action - because that is the only way the policy works.
If you are not sure who to tell, your IT helpdesk is always a safe first call. They will route it.
The Short Version You Can Remember
Strip everything in this guide down to the principles that fit on a sticky note:
- Use the AI tool your employer approved, not your personal account.
- Do not paste customer data, source code with secrets, or anything regulated into a consumer AI tool.
- Treat AI output as a draft. Verify the names, dates, numbers, citations, and dependencies.
- Review AI-generated code the same way you would review a junior developer's pull request.
- Verify unusual urgent requests on a different channel than the one they arrived on.
- Tell security immediately if you think something went wrong.
That is the whole guide. If you internalize those six points, you are using AI more safely than the majority of knowledge workers right now.
Build a Cyber Awareness Program That Covers AI Risk
Lorikeet Security's Cyber Awareness Training platform delivers role-based modules that include AI safety, prompt injection awareness, deepfake recognition, and shadow AI - alongside phishing simulations and behavioral measurement. Plans start at $225/month for up to 100 employees.