- A Copilot Chat bug allowed confidential emails to be summarized despite DLP policies.
- The issue affected Sent Items and Drafts folders starting January 21.
- Microsoft says no unauthorized access occurred, but behavior defied intended safeguards.
- A code fix is rolling out globally, with most environments already patched.
Microsoft has confirmed that a software bug in Microsoft 365 Copilot allowed its AI assistant to summarize emails that were supposed to remain confidential, bypassing established data loss prevention policies.
The issue, first detected on January 21 and tracked internally as CW1226324, affected the Copilot Chat work tab. According to a service alert obtained by BleepingComputer, the tool was incorrectly processing emails stored in users’ Sent Items and Drafts folders.
These messages included content protected by sensitivity labels that are specifically designed to prevent automated systems from accessing restricted information.
Copilot Chat, which began rolling out to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025, is marketed as a content-aware AI assistant that respects existing security controls. In this case, however, the guardrails did not work as intended.
Microsoft acknowledged that emails marked with confidential labels were being summarized despite active DLP policies. Organizations rely heavily on these controls to prevent sensitive material such as financial records, intellectual property, or internal strategy documents from being exposed, whether intentionally or by mistake.
What went wrong inside the work tab
At the heart of the problem was a code issue that allowed Copilot to pick up items in the Sent Items and Drafts folders, even when confidentiality labels were in place. In simple terms, Copilot's work tab did not properly honor the restrictions applied to certain emails.
Importantly, Microsoft stressed that this did not grant unauthorized users access to new information. The AI only surfaced content that the individual user was already permitted to view. Still, the behavior contradicted the intended Copilot experience, which is meant to exclude protected content entirely from AI processing.
The bug primarily affected emails authored by users and stored locally in desktop versions of Microsoft Outlook. Because these emails were already within the user’s account, access controls technically remained intact. However, the fact that AI summaries could be generated from labeled confidential material raised concerns about compliance and internal governance.
For enterprises operating in regulated industries, even limited exposure of protected information can trigger audits or policy reviews. Sensitivity labels and DLP frameworks are implemented precisely to avoid such gray areas.
Fix rollout and current status
Microsoft began deploying a fix in early February, describing it as a targeted code correction. By mid-February, the company said the update had reached the majority of affected environments, while a smaller number of more complex service environments were still receiving the patch as deployment continued.
In an updated service alert issued on February 20, Microsoft stated that the root cause had been addressed for most customers and that no new email messages would be affected once the fix was applied. The company characterized the incident as an advisory, a classification typically used for service issues with limited scope or impact.
Notably, Microsoft has not disclosed how many users or organizations were affected. It also has not provided a firm timeline for complete remediation across all environments. Instead, it said it continues to monitor the rollout and contact a subset of affected users to confirm the solution is working as expected.
A reminder of AI’s compliance challenges
The episode underscores the delicate balance between AI productivity tools and enterprise security controls. As companies integrate generative AI into daily workflows, trust hinges on the technology’s ability to respect existing governance frameworks.
Copilot's promise lies in its seamless integration with workplace data. But that same integration means even small configuration errors can ripple across compliance boundaries. Sensitivity labels and DLP rules are not optional add-ons in enterprise IT. They are foundational safeguards.
Microsoft’s statement emphasizes that no external data exposure occurred and that access permissions were not bypassed. Even so, the incident highlights how AI systems must be rigorously tested against edge cases, especially when handling protected communications.
For IT administrators, this serves as a reminder to monitor AI tool behavior closely and review audit logs where possible. For Microsoft, it is another example of the scrutiny facing AI-driven workplace software as adoption accelerates.
