Microsoft discloses Office defect that let Copilot access private emails
What happened — A bug in Office caused Microsoft 365 Copilot to process messages that had been marked with confidentiality labels, ingesting draft and sent items despite label-based protections. The behavior effectively bypassed data loss prevention expectations and persisted from January into February, when Microsoft began a phased fix. Administrators can monitor remediation progress via the message center entry CW1226324.
Scope and mechanics — The vector was rooted in how the in-app Copilot integration handled protected messages, not in users intentionally sharing data with external models. Because the assistant accessed both drafts and sent mail items, organizations that rely on label enforcement for compliance faced the risk that sensitive content was available to the assistant during the exposure window. Microsoft has not published a tenant-level impact count.
Consequences and reactions — The incident prompted immediate operational work for security and compliance teams: reconciling logs, determining whether protected content was processed, and preparing notification or remediation workflows where required. At least one major body, the European Parliament, responded by disabling built-in AI features on managed devices, illustrating how institutions may favor conservative controls while vendors remediate.
Broader context — The Copilot label-bypass disclosure arrived amid a busy patch period for Office: Microsoft also disclosed active exploit activity tied to vulnerabilities such as CVE-2026-21509, and U.S. authorities have added some Office bugs to the Known Exploited Vulnerabilities list, which imposes remediation deadlines on government and critical infrastructure operators. Together these events increased operational pressure on defenders to validate updates, apply layered mitigations where patches could not be deployed immediately, and review telemetry for signs of misuse.
Operational takeaways — Beyond patching, defenders should inventory which endpoints and accounts have AI assistants enabled, rotate credentials exposed to auxiliary services, and search logs for assistant interactions with labeled content. Organizations that cannot apply the fix quickly should consider restricting Copilot access, enforcing stronger endpoint controls, or disabling built-in AI features on sensitive machines until label-enforcement guarantees can be revalidated.
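The log-search step above can be sketched as a small filter over exported audit records. This is a minimal illustration only: the field names (`Workload`, `SensitivityLabelId`, `UserId`) and the `"Copilot"` workload value are assumptions about what an exported Microsoft 365 unified audit log might contain, not documented constants — verify them against your tenant's actual export schema before relying on the results.

```python
def find_labeled_copilot_events(records):
    """Return audit records that look like Copilot interactions with
    sensitivity-labeled items. Schema is hypothetical, for illustration."""
    hits = []
    for rec in records:
        # "Copilot" as a Workload value is an assumption, not a documented constant.
        if "Copilot" not in rec.get("Workload", ""):
            continue
        # A non-empty label field suggests the item carried a sensitivity label.
        if rec.get("SensitivityLabelId"):
            hits.append(rec)
    return hits

# Made-up sample records to show the shape of the filter:
sample = [
    {"Workload": "Copilot", "Operation": "CopilotInteraction",
     "SensitivityLabelId": "label-123", "UserId": "a@contoso.com"},
    {"Workload": "Exchange", "Operation": "Send",
     "SensitivityLabelId": "label-123", "UserId": "b@contoso.com"},
    {"Workload": "Copilot", "Operation": "CopilotInteraction",
     "SensitivityLabelId": None, "UserId": "c@contoso.com"},
]
flagged = find_labeled_copilot_events(sample)
print(len(flagged))  # → 1
```

Any records this kind of filter surfaces would then feed the reconciliation and notification workflows described above, scoped to the January–February exposure window.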
Strategic ripple effects — The episode underscores a broader pattern seen with agentic and AI systems: misconfigurations or unexpected processing pathways can make sensitive data accessible to models or services. Procurement, legal and security teams will push for clearer, auditable assurances about how vendors handle labeled or otherwise protected data — including options for on-device inference or explicitly segregated pipelines — and for contractual remedies that address AI ingestion risks.
Recommended for you

Microsoft pushes urgent Office patch for a newly exploited zero-day used in targeted intrusions
Microsoft released fixes for CVE-2026-21509 after detecting active exploitation that undermines Office protections; mitigations and patches cover major supported Office builds and CISA has flagged the flaw for immediate remediation. The vulnerability appears to be leveraged in focused operations requiring user interaction and complex exploit chains, elevating the priority for high-value targets to deploy updates quickly.

Microsoft Copilot rollout sparks customer backlash and FTC scrutiny
Microsoft’s push to embed Copilot across Windows 11 and Microsoft 365 has generated customer frustration and triggered stepped-up FTC information requests to competing vendors; a separate Copilot-related Office bug that processed labeled content has amplified security and procurement concerns, prompting some institutions to disable built-in AI features.
Security flaws in popular open-source AI assistant expose credentials and private chats
Researchers discovered that internet-accessible instances of the open-source assistant Clawdbot can leak sensitive credentials and conversation histories when misconfigured. The exposure enables attackers to harvest API keys, impersonate users, and in one test led to extracting a private cryptographic key within minutes.

FTC intensifies investigation into Microsoft’s cloud and AI commercial practices
The U.S. Federal Trade Commission has moved into an active evidence-collection phase, issuing formal information requests to multiple Microsoft rivals about cloud, enterprise software and AI licensing. The inquiries broaden the agency’s reach beyond consumers and could presage enforcement actions that reshape how enterprise AI and cloud products are licensed and bundled.
Security Flaw Left AI Toy Conversations of Children Widely Accessible
A misconfigured web portal allowed unauthenticated users to view tens of thousands of chat transcripts from an AI-enabled children's toy. Researchers alerted the company, which disabled the console quickly and applied fixes, but the incident underscores systemic privacy and engineering risks in voice- and chat-enabled toys for minors.

Sears Home Services Left Millions of Voice and Chat Records Public
Security researcher Jeremiah Fowler found publicly accessible databases holding millions of Sears Home Services chatbot chats and audio files, including multi-hour ambient recordings that exposed personal details. The exposure fits a broader pattern—other consumer-facing conversational systems (including connected toys) have leaked transcripts due to weak defaults—though remediation speed and external validation have varied across incidents, affecting regulatory and reputational fallout.

Microsoft Seeks Court Stay to Halt Pentagon Ban on Anthropic
Microsoft filed for a temporary judicial stay to pause a Defense Department supply‑chain designation that bars Anthropic from certain DoD uses, arguing the order would cause immediate operational disruption and threaten existing contracts. The move arrives amid a broader White House‑backed designation with an informal six‑month exit window for classified deployments (often referenced as "Claude Gov") and crystallizes a procurement fight over telemetry, provenance and hosting requirements.

Amazon Reported More Than One Million AI-Related CSAM Alerts to NCMEC but Refuses to Disclose Sources
Amazon told U.S. authorities it flagged over one million instances of AI-linked child sexual abuse material in 2025, driven largely by content it says was found in external data sets used for model training. The company says it removed the material before training and intentionally over-reported to avoid missing cases, but offered no specifics on where the material originated, leaving many reports unusable for law enforcement.