Microsoft is finally addressing the backlash over the growing number of AI features in Windows 11. The company has confirmed that a future update will add a safeguard: Windows will prompt the user for consent before any AI-powered agent or tool can access personal files or other sensitive data on the PC.
The move comes amid concerns about how much of Windows 11 is being handed over to AI, with agentic features now able to work with local files such as documents, photos, and emails. Under the new safeguard, a prompt will appear every time an AI agent tries to access local files or sensitive data. The prompt will clearly explain what data the agent needs and why, so the user can approve or deny each request on a case-by-case basis.
The feature aligns with Microsoft’s broader “Responsible AI” principles, which emphasise safety, user consent, and accountability. The company noted that the change will apply not just to Microsoft’s own tools, such as Windows Copilot, but also to third-party AI apps built on the Windows platform. Developers will need to follow new API policies that enforce the same privacy rules, ensuring a consistent and transparent experience across the system.
This update is expected to roll out as part of a major Windows 11 feature release in 2026, though early testing is likely to begin through the Windows Insider Program. The change also highlights Microsoft’s continued effort to balance innovation with user trust, especially as AI becomes more embedded in everyday PC tasks.
Simply put, the next time an AI on your PC wants to peek into your folders or files, Windows will make sure you’re the one calling the shots. And in a world where AI is doing more behind the scenes than ever, that’s a reassuring step in the right direction.