
An Employee's Guide to Using AI Safely and Securely

Lorikeet Security Team · April 28, 2026 · 13 min read

TL;DR: AI tools are now part of how most knowledge work gets done. They are also a new attack surface and a new way to leak data. This guide is the practical, plain-English version of what every employee should know: what to paste and what not to paste, how to spot prompt injection, when to trust AI output, what shadow AI is and why your IT team cares, and what to do the moment you think you made a mistake. None of this is meant to scare you off from using AI - the goal is the opposite. Used correctly, AI makes you faster and safer. Used carelessly, it is the fastest way to leak something you did not mean to.

Why AI Use at Work Is a Security Issue Now

Three years ago, the AI conversation at work was theoretical. Today the average knowledge worker uses an AI tool multiple times per day - to draft emails, summarize meetings, write code, analyze spreadsheets, translate copy, and brainstorm ideas. Most of that use is helpful. Some of it has quietly become one of the larger sources of unintentional data exposure inside companies.

The reason is structural. AI tools are designed to be helpful with whatever you give them, which is exactly what makes them useful and exactly what makes them risky. When you paste a draft contract into a free chatbot to "make this easier to read," you have effectively sent that contract to a third-party service. When you ask a coding assistant to "fix the bug in this function," you have sent the code, the context around it, and often the secrets that were sitting in the file. When you tell a meeting assistant to "summarize this transcript," you have shipped that transcript outside your organization's perimeter.

None of this is hypothetical. There have been multiple confirmed cases of source code, internal strategy documents, customer data, and entire databases ending up in AI provider logs after employees pasted them in. In some cases the data became visible to other users via a bug or a search index. In other cases it never leaked publicly, but the organization still had to disclose the exposure to its enterprise customers because the data left contractual boundaries.

This is the part most "AI policies" get wrong: the threat is not that AI is dangerous. The threat is that using AI at work without thinking about where the data goes is dangerous. The good news is that the rules for safe use are short and easy to remember.


The Three Categories of AI Risk Every Employee Should Know

Almost every AI risk you will encounter at work falls into one of three buckets. Understanding which bucket you are in tells you what to worry about.

1. Data Leakage

You put information into an AI tool that should not have left your organization. This includes customer data, source code containing secrets, internal financial figures, employee personal information, contract drafts, security findings, incident details, M&A activity, board materials, and anything covered by an NDA. Once that information is in the AI provider's logs, it is out of your control.

2. Output You Cannot Trust

The AI tool gives you something that looks correct and is not. Made-up case law in a legal brief. Code that calls a library that does not exist. A summary of a document that adds details the document never contained. A "fact" with a confident citation that, when you check, turns out to be invented. AI output is plausible by default and accurate only sometimes. The risk is treating it as authoritative.

3. Manipulation Through the AI

Someone uses an AI tool to attack you instead of attacking you directly. This includes prompt injection (hidden instructions in content the AI reads), AI-generated phishing that is grammatically perfect and personalized, voice-cloned vishing calls that sound like your CEO, and deepfake video calls. The defining feature of this category is that the AI is the weapon, not the victim.

Each category has its own defenses. The next sections cover them in order.


Rule One: Treat the AI Input Box Like a Forwarded Email

If you would not be comfortable forwarding what you are about to paste to a stranger at a conference, do not paste it into a public AI tool. That single mental check covers ninety percent of safe-use questions.

The reason is that almost every consumer AI tool retains some version of what you send. The exact retention rules vary by provider, by tier, by region, and by whether you opted out of training. The safe assumption is that anything you paste persists somewhere, can be read by the provider's staff under defined conditions, and may be used to train future models unless you actively prevented it. Your employer's enterprise contract with an AI vendor is what changes those defaults - which is why using the version of an AI tool that your employer approved is fundamentally different from using your personal account for work tasks.

The forwarded-email test is intuitive because it captures the right concern: not "is this AI tool secure?" (almost all of them are reasonably secure), but "should this content be leaving our walls?"


What You Should Never Paste Into a Public AI Tool

This is the short version of an acceptable-use list. If your organization has its own list, follow that one - it will be more specific. In the absence of one, treat the categories below as off-limits to consumer-grade AI accounts:

- Customer data, including names, emails, account details, and support threads
- Source code, especially anything containing credentials, API keys, or other secrets
- Internal financial figures, forecasts, and board materials
- Employee personal information and HR records
- Contract drafts and anything covered by an NDA
- Security findings, vulnerability reports, and incident details
- M&A activity and any other material non-public information

If you need to use AI on content that contains items from this list, the answer is almost always: use the enterprise version of the AI tool your employer has contracted, or remove the sensitive details before pasting and operate on a sanitized version.
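
If your organization allows the sanitized-paste route, a lightweight scrubbing pass can catch the most common leaks before they happen. Below is a minimal sketch in Python; the patterns and the `redact` function are illustrative assumptions, not a complete DLP tool - real solutions cover far more cases.

```python
import re

# Illustrative patterns only - a real DLP tool covers far more formats.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk|ghp|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

if __name__ == "__main__":
    draft = "Contact jane.doe@example.com, key sk_live_abc123def456ghi789."
    print(redact(draft))
```

A scrubber like this is a seatbelt, not a substitute for judgment: it catches the obvious identifiers, but it cannot recognize that a paragraph describes your M&A pipeline.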


Approved Tools vs. Shadow AI

Shadow AI is any AI tool an employee uses for work that has not been approved or contracted by the organization. The phrase exists because security teams need a name for the largest source of AI data exposure they see: not malicious use, just well-intentioned employees using whatever tool was easiest at the moment.

The difference between an approved tool and a shadow tool is rarely about the tool itself. The same model under the hood often powers both the consumer chatbot and the enterprise version. What differs is the contract behind it.

| Dimension | Personal Account / Shadow AI | Employer-Approved Enterprise AI |
|---|---|---|
| Data Retention | Provider default - often retained, may be used for training | Contractual - usually short retention, no training on your data |
| Confidentiality | No NDA in place | DPA or enterprise agreement governs handling |
| Compliance Posture | Unknown - varies by tier | Aligned to SOC 2, ISO 27001, HIPAA where required |
| Audit and Logging | Visible only to the individual | Centralized admin logs and DLP integration |
| Breach Notification | None contractually owed to your employer | Defined notification timeline in the contract |
| Account Recovery | Tied to a personal email - lost when the employee leaves | Tied to corporate identity - recoverable and revocable |

If you do not know what tool your employer has approved, ask. The answer is almost always faster than you expect. Most security teams would much rather spend ten minutes telling you "yes use this one and here is the link" than spend ten weeks responding to an exposure incident from a tool they did not know you were using.


Verifying AI Output Before You Trust It

AI tools are confidently wrong on a regular basis. The output reads well, sounds authoritative, and is sometimes complete fiction. This is a property of how the underlying models work, not a bug that will be patched out next quarter. The defense is a habit, not a tool: never act on AI output that has real consequences without independently verifying the parts that matter.

What to check, in order of how often it bites people:

- Citations and references. Invented case law, papers, and URLs are the classic failure. Look up every citation before it goes anywhere that matters.
- Library, package, and API names. Models routinely reference code that does not exist. Confirm it exists before you install or call it.
- Numbers, dates, and quotes. If you are going to repeat a figure, verify it against the source.
- Summaries. Check that the summary did not add details the original document never contained.

None of this means AI output is bad. It means AI output is a draft. A good editor catches what the writer missed. You are the editor.
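
For the first item on that checklist, even a crude automated pass helps. Here is a minimal sketch in Python that checks whether cited URLs resolve at all; the script and its names are illustrative, and a link that resolves still has to be read - resolution proves existence, not accuracy.

```python
import urllib.error
import urllib.request

def citation_resolves(url: str, timeout: int = 10) -> bool:
    """Return True if the cited URL responds at all.

    Resolution proves the page exists, not that it supports the claim -
    a link that checks out still has to be read.
    """
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

for url in ["https://example.com/", "https://example.com/invented-2024-study"]:
    print(url, "->", "resolves" if citation_resolves(url) else "flag for manual check")
```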


AI-Generated Code: Review Before You Ship

If you write code, AI is probably the best productivity tool you have ever used. It is also a way to ship vulnerabilities faster than you ever have. Both things are true.

The specific risks in AI-generated code:

- Hallucinated dependencies. Generated code imports packages that do not exist - and attackers register those names hoping someone installs them without checking. Verify every new dependency before installing it (a sketch follows below).
- Familiar vulnerabilities. Models reproduce common code, and common code includes common flaws: string-built SQL queries, missing input validation, outdated crypto patterns.
- License contamination. Generated code can closely track licensed training material, which is one reason teams track AI-generated diffs.
- Leaked context. The prompt usually carries more than the broken function - config, secrets, and proprietary logic travel with it.

The rule is simple. AI-generated code is reviewed code. The same review standards that apply to a junior developer's pull request apply to anything an AI assistant wrote. Run it through your linters, your security scanners, your dependency checks, and your code review process. If your team has not yet written a policy for AI-assisted code, ask. Most teams that have done this seriously now require AI-generated diffs to be flagged as such in the pull request, both for review depth and for license tracking.
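
The dependency check is the easiest of these to automate. A minimal sketch, using PyPI's public JSON endpoint; the second package name is a made-up example of a hallucinated dependency. Note that existence is not trust - a squatted malicious package also "exists," so check a project's age, maintainers, and downloads before installing.

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str, timeout: int = 10) -> bool:
    """Check whether a package name resolves on PyPI.

    Existence is not trust: a squatted malicious package also 'exists'.
    Review the project's age, maintainers, and downloads before installing.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

for pkg in ["requests", "requests-ai-helper-utils"]:
    verdict = "exists" if exists_on_pypi(pkg) else "NOT FOUND - possibly hallucinated"
    print(f"{pkg}: {verdict}")
```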


Recognizing Prompt Injection

Prompt injection is the AI-era version of a script injection attack. Instead of injecting JavaScript into a web form, an attacker injects natural-language instructions into something an AI tool will read - a webpage, a PDF, an email, a calendar invite, a support ticket, a transcript, a document the AI has been given access to. The AI reads those instructions and follows them as if they had come from you.

The attacks that matter to most employees:

- Poisoned content. A webpage, PDF, or document you ask an AI to summarize contains hidden instructions - white text, comments, metadata - telling the assistant to misreport the content, recommend a malicious link, or leak the conversation.
- Hijacked assistants. An email assistant, meeting bot, or support integration reads a message whose body is aimed at the AI rather than the human recipient.
- Escalation through tools. If the AI has standing access to email, files, or systems, injected instructions can turn a harmless summarization request into an action with real consequences.

The defenses are mostly architectural and live with the AI tool's vendor and your security team. The employee-level defense is to keep AI assistants on a short leash. Do not give an AI tool the standing ability to send email, move money, modify production systems, or access secrets without a human approval step. If your AI assistant is offering to take an action that has consequences, that is the moment to stop and confirm - not to click through.
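
What "short leash" looks like in code is simply a human approval step between the model's proposed action and its execution. A minimal sketch, where the action names and the `dispatch` helper are hypothetical stand-ins, not any vendor's API:

```python
# Hypothetical action names - substitute your own integrations.
CONSEQUENTIAL_ACTIONS = {"send_email", "transfer_funds", "modify_prod", "read_secret"}

def dispatch(action: str, args: dict) -> str:
    """Placeholder dispatcher - route to the real integration here."""
    return f"executed {action} with {args}"

def execute_tool_call(action: str, args: dict) -> str:
    """Run an AI-proposed action only after a human confirms it.

    The approval prompt is the defense: injected instructions can make
    the model propose anything, but they cannot press 'y' for you.
    """
    if action in CONSEQUENTIAL_ACTIONS:
        print(f"AI assistant wants to run: {action}({args})")
        if input("Approve? [y/N] ").strip().lower() != "y":
            return "denied by human reviewer"
    return dispatch(action, args)

if __name__ == "__main__":
    print(execute_tool_call("send_email", {"to": "cfo@example.com", "body": "..."}))
```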


AI-Generated Phishing and Social Engineering

The other side of the AI-as-weapon coin is what attackers do with AI to come at you. The traditional phishing red flags - bad grammar, generic greetings, obviously fake sender domains - are all still useful, but they no longer cover the threat. Modern AI-generated phishing is grammatically perfect, contextually personalized using public information about you and your organization, and sometimes follows up over multiple messages to build trust before delivering the payload.

Voice cloning has become cheap and good. A vishing call that sounds exactly like your CEO asking for an urgent wire transfer is now within the budget of a low-end criminal group, not a nation-state. Deepfake video on a Zoom or Teams call is following the same trajectory.

What this means in practice:

- Verify unusual requests on a second, known-good channel. If "the CEO" calls or messages asking for a wire transfer, credentials, or a data pull, hang up and call back on a number you already have.
- Treat urgency as a red flag in itself. Manufactured time pressure is the one part of the attack AI has not changed.
- Do not accept voice or video alone as proof of identity for anything consequential. The voice can be cloned; the face can be generated.


Department-Specific Considerations

Engineering

You face the highest concentration of AI-related risk because you are most likely to use AI tools every day, often in ways that touch source code, secrets, and production systems. The non-negotiables: review AI-generated code, verify dependencies exist before installing them, never let AI tools have standing access to production credentials without approval gates, and surface to security when an AI tool wants permissions that look unusual. If your team uses an AI coding assistant, push for the enterprise version with logging and DLP - the personal version is a data exfiltration channel waiting to happen.

Sales and Customer-Facing Teams

Be careful with customer data in any AI tool. Pasting a customer email thread into a chatbot to "draft a reply" sends that thread out of your CRM and into a third-party log. Use an enterprise AI integrated with your CRM if your organization has one, or write the response yourself. Watch for AI-generated phishing impersonating prospects, and treat any urgent contract change requested over a single channel as something to verify another way.

Finance and Accounting

You are the prime target for AI-amplified business email compromise. Voice cloning makes "the CEO is on the phone authorizing this wire" a real attack, not a hypothetical one. Hard rules: never act on a payment change instruction from email or voice alone, always verify on a known-good channel, and treat any AI-generated draft of a financial document as a draft that must be reconciled to source data.

HR and Recruiting

Resumes and HR records are personal data with regulatory implications. Pasting them into a public AI tool is almost always the wrong move. Be aware of AI-generated resumes and cover letters that look great and represent very little - the gap between AI-generated application and actual capability is widening. AI-assisted screening tools have their own bias and accuracy concerns and require careful policy attention.

Legal, Compliance, and Risk

Public AI tools have repeatedly produced legal briefs with invented case citations, contracts referencing non-existent statutes, and compliance summaries that were superficially correct and substantively wrong. The combination of high stakes and high apparent fluency makes this one of the highest-risk AI use areas. Use only enterprise tools with appropriate confidentiality and verification processes, and treat AI output as research that needs to be checked against authoritative sources before it leaves your desk.

Executive Team

You are the highest-value target for AI-amplified social engineering, voice cloning, and deepfake video. You are also the most likely to be impersonated to your own staff. Two practical steps: agree with your direct reports on a verification protocol for unusual requests, and assume that public information about you (interviews, conference talks, social posts, podcasts) is being used to train spear-phishing models targeting you specifically. None of this requires paranoia - it requires consistency.


What to Do if You Think You Made a Mistake

Sooner or later, someone in your organization is going to paste something they should not have into an AI tool, click something they should not have clicked, or follow a request that turns out to be fake. The single most important security behavior in this entire document is what happens next.

The right move is to tell the security or IT team immediately. Not in an hour. Not after you have tried to delete the chat. Not after checking with a colleague to see how bad it is. Immediately.

The reasons are practical:

- Speed determines options. In the first minutes, the team can rotate exposed credentials, request deletion from the AI provider, and contain the damage. Hours later, most of those options are gone.
- Contracts and regulations run on clocks. If customer data left your perimeter, breach-notification timelines may already be running.
- Reporting is the behavior security teams want. They would far rather spend ten minutes on a false alarm than ten weeks on an incident they learned about late. The career-damaging move is hiding a mistake, not making one.

If you are not sure who to tell, your IT helpdesk is always a safe first call. They will route it.


The Short Version You Can Remember

Strip everything in this guide down to the principles that fit on a sticky note:

1. The forwarded-email test: if you would not forward it to a stranger, do not paste it into a public AI tool.
2. Use the tools your employer approved. If you do not know which those are, ask.
3. AI output is a draft. Verify citations, numbers, and package names before acting on them.
4. AI-generated code gets the same review as a junior developer's pull request.
5. Human approval before any consequential AI action, and out-of-band verification for unusual requests.
6. If you think you made a mistake, tell security immediately.

That is the whole guide. If you internalize those six points, you are using AI more safely than the majority of knowledge workers right now.

Build a Cyber Awareness Program That Covers AI Risk

Lorikeet Security's Cyber Awareness Training platform delivers role-based modules that include AI safety, prompt injection awareness, deepfake recognition, and shadow AI - alongside phishing simulations and behavioral measurement. Plans start at $225/month for up to 100 employees.
