Thursday, February 19, 2026

Microsoft Admits Copilot Bug Exposed Confidential Emails


  • Microsoft confirmed a bug let Copilot summarise emails marked confidential.
  • The Sent Items and Drafts folders were affected, but the Inbox was not.
  • The issue bypassed data loss prevention and sensitivity labels.
  • A fix is rolling out, and Microsoft is monitoring affected users.

Microsoft has acknowledged a flaw in Microsoft 365 Copilot Chat that allowed the AI assistant to access and summarise emails marked as confidential. The issue, which bypassed existing data loss prevention safeguards, meant that certain protected messages could be processed despite restrictions designed to block AI access.

The company says a fix is in place, but the rollout is still ongoing. In the meantime, it is monitoring the situation and contacting affected customers.

How the Copilot Email Bug Happened

The vulnerability, tracked internally as CW1226324, was identified on January 21, 2026. According to Microsoft’s advisory, a coding error caused Copilot Chat to incorrectly process emails stored in the Sent Items and Drafts folders, even when those emails had sensitivity labels applied.

The Inbox appears to have remained protected. However, access to the Sent and Drafts folders is still significant: replies and drafts typically quote the original incoming messages, meaning Copilot could effectively summarise conversations that contained external correspondence as well.

Microsoft confirmed that confidential labels, which are meant to prevent unauthorized AI processing, were not respected due to the code issue. As a result, Copilot Chat could generate summaries of content that should have been off-limits.
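Microsoft has not published the faulty code, but the failure mode it describes, a sensitivity-label check that holds for the Inbox yet is skipped for Sent Items and Drafts, can be sketched in a few lines. The Python below is purely illustrative: the Message type, folder names, and check functions are assumptions made for the sketch, not Microsoft’s internal APIs.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical label value; real tenants define their own label taxonomy.
SENSITIVITY_CONFIDENTIAL = "Confidential"

@dataclass
class Message:
    folder: str                  # e.g. "Inbox", "SentItems", "Drafts"
    sensitivity: Optional[str]   # applied sensitivity label, if any
    body: str

def is_ai_accessible(msg: Message) -> bool:
    """Intended behaviour: the label check applies to every folder."""
    return msg.sensitivity != SENSITIVITY_CONFIDENTIAL

def is_ai_accessible_buggy(msg: Message) -> bool:
    """The reported failure mode: the label check only runs for the
    Inbox, so labelled mail in Sent Items and Drafts slips through."""
    if msg.folder == "Inbox":
        return msg.sensitivity != SENSITIVITY_CONFIDENTIAL
    return True  # Sent/Drafts bypass the check entirely

draft = Message(folder="Drafts", sensitivity=SENSITIVITY_CONFIDENTIAL, body="...")
assert not is_ai_accessible(draft)    # correctly blocked
assert is_ai_accessible_buggy(draft)  # the bug: label ignored
```

The point of the sketch is how little it takes: one missed branch in a policy gate is enough to expose labelled content to downstream AI processing, which is why the article stresses enforcement rather than external attack.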

For organizations relying on Microsoft 365’s built-in compliance tools, this undermines confidence in how sensitivity labels are enforced across AI-powered services.


Why This Matters for Enterprise Security

For business users, sensitivity labels and data loss prevention policies are not optional extras. They are critical compliance tools used to protect financial data, legal communications, internal strategy discussions, and personal information.

When those controls fail, even temporarily, it raises serious concerns about governance and risk exposure.

Copilot is deeply integrated across Microsoft 365 apps, including Outlook, Word, Teams, and SharePoint. Its ability to scan, summarise, and generate content based on user data is part of its core appeal. But that capability also increases the importance of airtight boundaries.

If AI systems can access data that has been explicitly flagged as confidential, trust becomes fragile. Even though Microsoft states the issue was limited to specific folders and has now been addressed, enterprises will likely reassess how AI assistants interact with sensitive corporate data.

The timing is particularly awkward. The European Parliament recently moved to ban AI tools on worker devices over concerns about data being transmitted to cloud-based systems. While Microsoft maintains that Copilot operates within enterprise compliance frameworks, this incident may reinforce fears that AI integrations introduce new vulnerabilities.

Microsoft’s Response and Ongoing Monitoring

Microsoft began deploying a fix in early February. The company says it is continuing to monitor the situation while verifying that the patch resolves the issue fully.


Importantly, this is not being described as a breach involving external attackers. There is no indication that data was exfiltrated outside Microsoft’s systems. Instead, the issue centers on internal policy enforcement, where Copilot processed content it should have ignored.

Still, the distinction may offer little comfort to organizations that depend on strict data segmentation. Even internal AI processing can create compliance headaches, especially in regulated industries such as healthcare, finance, and government.

Microsoft is reportedly reaching out to impacted users as the fix continues to roll out. Transparency will be key. Enterprise customers expect clear communication when security controls do not perform as intended.

This episode highlights a broader challenge facing AI vendors. As generative tools become more embedded in productivity software, the line between helpful automation and overreach grows thinner. Each integration must respect existing security frameworks without exception.

For now, Microsoft insists the issue is contained and nearing full resolution. But the incident serves as a reminder that even mature platforms can stumble when layering AI into complex enterprise ecosystems.

Follow TechBSB For More Updates

Rohit Belakud
Rohit Belakud is an experienced tech professional, boasting 7 years of experience in the field of computer science, web design, content creation, and affiliate marketing. His proficiency extends to PPC, Google Adsense and SEO, ensuring his clients achieve maximum visibility and profitability online. Renowned as a trusted and highly rated expert, Rohit's reputation precedes him as a reliable professional delivering top-notch results. Beyond his professional pursuits, Rohit channels his creativity as an author, showcasing his passion for storytelling and engaging content creation. With a blend of skill, dedication, and a flair for innovation, Rohit Belakud stands as a beacon of excellence in the digital landscape.
