Microsoft Fixes Copilot Bug That Surfaced Confidential Emails in Enterprise Accounts
Configuration Error Allowed AI Assistant to Summarize Draft and Sent Emails Marked as Confidential

What Happened
Microsoft has confirmed that a configuration error in its enterprise AI assistant, Microsoft 365 Copilot Chat, caused the system to unintentionally access and summarize certain confidential emails.
The issue affected enterprise users of Microsoft 365 Copilot Chat — a generative AI assistant integrated into workplace tools such as Outlook and Teams. According to Microsoft, the bug allowed Copilot Chat to process and summarize email content stored in users' Drafts and Sent folders, including messages labeled as confidential.
The company stated that while access controls and data protection policies remained intact — meaning users were not shown information they were not already authorized to view — the behavior did not align with Microsoft's intended design. Copilot is supposed to exclude protected content from AI processing, particularly when sensitivity labels or data loss prevention (DLP) policies are applied.
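To see what that intended behavior implies in practice, the sketch below shows one way a pre-processing filter could keep labeled messages away from an assistant. It is a minimal, hypothetical illustration in Python: the EmailItem class, the label names, and the ai_eligible function are assumptions for the sketch, not Microsoft's actual implementation.

```python
# Hypothetical sketch, not Microsoft's implementation: a pre-retrieval filter
# that drops labeled messages before an AI assistant ever sees them.
from dataclasses import dataclass
from typing import Optional

# Labels that should block AI processing. In a real tenant this set would be
# driven by sensitivity-label and DLP policy configuration, not hard-coded.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class EmailItem:
    folder: str                               # e.g. "Drafts" or "Sent Items"
    subject: str
    body: str
    sensitivity_label: Optional[str] = None   # label applied by the author or by policy

def ai_eligible(items: list[EmailItem]) -> list[EmailItem]:
    """Return only the messages an assistant is allowed to summarize."""
    return [m for m in items if m.sensitivity_label not in BLOCKED_LABELS]

# Example: the labeled draft is filtered out before summarization.
mailbox = [
    EmailItem("Drafts", "Q3 forecast", "...", sensitivity_label="Confidential"),
    EmailItem("Sent Items", "Lunch plans", "..."),
]
print([m.subject for m in ai_eligible(mailbox)])  # ['Lunch plans']
```

The configuration error described in the advisory amounts to this kind of check not being applied to Drafts and Sent content, even though downstream permission checks still held.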
The issue was first reported by tech outlet Bleeping Computer, which cited a Microsoft service alert stating that emails with confidential labels were “being incorrectly processed” by Copilot Chat.
Microsoft said it became aware of the problem in January and has since deployed a configuration update worldwide for enterprise customers.
The incident also appeared in a support dashboard for NHS workers in England, where the root cause was attributed to a code issue. NHS officials indicated that although the system processed draft or sent emails, patient data was not exposed beyond authorized users.
Microsoft emphasized that Copilot did not grant access to any information outside of existing user permissions.
Why It Matters
The Copilot incident highlights growing tensions between rapid AI deployment and enterprise data governance.
Microsoft has heavily promoted Copilot as a secure, enterprise-grade AI assistant designed to operate within corporate compliance frameworks. Tools like Microsoft 365 Copilot Chat are marketed as productivity enhancers capable of summarizing emails, generating documents, and answering internal knowledge queries — all while respecting strict data protection controls.
However, this bug illustrates a critical challenge: AI systems integrated into workplace software must interpret and respect complex metadata such as sensitivity labels and DLP policies. Even small configuration errors can cause protected content to be processed in ways it should not be, even when access stays within authorized boundaries.
While no external breach occurred, the distinction between “authorized access” and “appropriate AI processing” is significant.
In traditional IT systems, a confidential label typically prevents redistribution or visibility to unauthorized parties. When AI tools summarize or surface that content, even for the original author, questions arise about how compliance rules should be interpreted and how internal security protocols should apply.
Experts argue that such incidents are likely to increase as AI capabilities expand.
Nader Henein, a data protection analyst at Gartner, described these types of errors as “unavoidable” given the rapid rollout of new AI features. Enterprise environments often lack mature governance frameworks to evaluate each update before deployment.
Professor Alan Woodward of the University of Surrey emphasized the importance of default privacy safeguards and opt-in controls, noting that AI tools evolving at high speed inevitably introduce bugs — and that unintentional data leakage is a foreseeable risk.
The broader issue is structural.
AI assistants operate across multiple data sources simultaneously — emails, chat logs, documents, and collaborative platforms. As they synthesize information, they blur traditional boundaries between discrete files and contextual access. Ensuring compliance with privacy policies becomes more complex when AI models interpret rather than merely store data.
For regulated industries such as healthcare, finance, and government, even internal data mishandling can raise audit and compliance concerns.
The Bigger Picture
This incident does not indicate malicious access or an external hack. Instead, it underscores the fragility of AI governance frameworks during a period of aggressive innovation.
Tech companies face competitive pressure to embed generative AI into core productivity tools. At the same time, enterprise customers expect airtight security and regulatory compliance.
These dual demands can conflict.
As organizations adopt AI assistants more widely, they may need to reassess:
- Whether AI features are enabled by default
- How sensitivity labels are interpreted by AI systems
- Whether summaries should be restricted for protected content
- What monitoring mechanisms detect unintended processing (a sketch of one such check follows below)
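As an illustration of that last point, the snippet below scans audit records and flags entries where an AI actor processed labeled content. It is a minimal sketch: the record fields, actor values, and flag_unintended_processing function are assumptions, not a real Microsoft 365 audit-log schema.

```python
# Illustrative only: a compliance check over hypothetical audit records that
# flags AI interactions touching labeled content. Field names and actor
# values are assumptions for this sketch, not a real audit-log schema.
audit_records = [
    {"actor": "copilot-chat", "item_label": "Confidential", "action": "summarize"},
    {"actor": "copilot-chat", "item_label": None, "action": "summarize"},
    {"actor": "user", "item_label": "Confidential", "action": "read"},
]

def flag_unintended_processing(records):
    """Return audit entries where an AI actor processed labeled content."""
    return [
        r for r in records
        if r["actor"].startswith("copilot") and r["item_label"] is not None
    ]

for entry in flag_unintended_processing(audit_records):
    print("Review:", entry)
```

Checks of this kind are how an organization would notice, after the fact, that an assistant had been summarizing content it was meant to skip.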
The Copilot error may serve as a cautionary example for other enterprise AI deployments.
The central challenge is not whether AI tools can increase productivity — but whether they can do so while maintaining trust in digital confidentiality frameworks.
As generative AI becomes embedded in workplace infrastructure, governance may need to evolve as quickly as the technology itself.


