Thursday, November 20, 2025


There have inevitably been meetings at Microsoft (and other companies) where someone pointed out that their spicy chatbot rollout had numerous catastrophic privacy and security issues, making it completely unsuitable and dangerous, only to be overruled by someone else saying, "Yeah, well, you know, that's just, like, your opinion, man."
Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”