As a rush of cybercriminals, state-backed hackers, and scammers continues to flood the zone with digital attacks and aggressive campaigns worldwide, it’s no surprise that the maker of the ubiquitous Windows operating system is focused on defense. Microsoft’s Patch Tuesday update releases frequently contain fixes for critical vulnerabilities, including those that are actively being exploited by attackers in the wild.
The company already has the requisite groups to hunt for weaknesses in its code (the “red team”) and develop mitigations (the “blue team”). But recently, that format evolved again to promote more collaboration and interdisciplinary work in the hopes of catching even more mistakes and flaws before things start to spiral. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, the blue team, and the so-called green team, which focuses on finding flaws itself and on fixing the weaknesses the red team uncovers more systematically, through changes to how things are done within an organization.
“People are convinced that you cannot move forward without investing in security,” says David Weston, Microsoft’s vice president of enterprise and operating system security, who’s been at the company for 10 years. “I’ve been in security for a very long time. For most of my career, we were thought of as annoying. Now, if anything, leaders are coming to me and saying, ‘Dave, am I OK? Have we done everything we can?’ That’s been a significant change.”
Morse has been working to promote safe coding practices across Microsoft so fewer bugs end up in the company’s software in the first place. OneFuzz, an open source Azure testing framework, allows Microsoft developers to constantly and automatically pelt their code with all sorts of unusual use cases, ferreting out flaws that wouldn’t be noticeable if the software were only being used exactly as intended.
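The core idea behind this kind of fuzz testing can be sketched in a few lines. The example below is purely illustrative, not Microsoft’s code or OneFuzz itself: a toy parser with a planted bug (it trusts a length byte without bounds-checking) is hammered with pseudorandom inputs, and every input that makes it panic is counted as a crash worth investigating.

```rust
use std::panic;

// Toy parser standing in for code under test (hypothetical example,
// not Microsoft's code). It trusts a length byte without bounds-checking.
fn parse_header(input: &[u8]) -> Option<usize> {
    if input.len() >= 2 && input[0] == 0xFF {
        let len = input[1] as usize;
        // Bug: panics whenever 2 + len exceeds the input's length.
        return Some(input[2..2 + len].len());
    }
    None
}

// Minimal fuzz loop: pelt the parser with pseudorandom inputs and
// count how many of them make it panic.
fn fuzz(iterations: u64) -> u64 {
    panic::set_hook(Box::new(|_| {})); // silence per-panic output
    let mut seed: u64 = 0x2545_F491_4F6C_DD1D;
    let mut crashes = 0;
    for _ in 0..iterations {
        // xorshift64 PRNG, so the sketch needs no external crates.
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        let bytes = seed.to_le_bytes();
        if panic::catch_unwind(|| parse_header(&bytes)).is_err() {
            crashes += 1;
        }
    }
    crashes
}

fn main() {
    println!("crashing inputs found: {}", fuzz(100_000));
}
```

A production fuzzer like OneFuzz adds coverage feedback, input mutation, and crash triage on top of this basic loop, but the principle is the same: random inputs surface the code paths no human tester would think to try.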
The combined team has also been at the forefront of promoting safer programming languages (like Rust) across the company. And they’ve advocated embedding security analysis tools directly into the compiler used in the company’s production workflow. That change has been impactful, Weston says, because it means developers aren’t running analysis in a simulated environment, a step removed from real production, where some bugs can be overlooked.
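The appeal of a language like Rust is that whole classes of memory-safety bugs are rejected at compile time rather than discovered in production. The sketch below is an illustrative example (not Microsoft code): a reference borrowed from a string stays valid because the compiler refuses to let the string be destroyed while the borrow is alive, a mistake that compiles cleanly in C or C++.

```rust
// Returns a reference into the caller's string; the borrow checker
// ties the lifetime of the result to the lifetime of `s`.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let message = String::from("patch tuesday");
    let word = first_word(&message); // `word` borrows from `message`
    // drop(message); // rejected at compile time: `message` is still borrowed
    println!("first word: {}", word);
}
```

Uncommenting the `drop` line turns a potential use-after-free into a compile error, which is exactly the shift from runtime bug-hunting to build-time prevention that Morse has been pushing for.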
The Morse team says the shift toward proactive security has led to real progress. In a recent example, Morse members were vetting legacy software—an important part of the group’s job, since so much of the Windows codebase was developed before these expanded security reviews. While examining how Microsoft had implemented Transport Layer Security 1.3, the foundational cryptographic protocol used across networks like the internet for secure communication, Morse discovered a remotely exploitable bug that could have allowed attackers to access targets’ devices.
As Mitch Adair, Microsoft’s principal security lead for Cloud Security, put it: “It would have been as bad as it gets. TLS is used to secure basically every single service product that Microsoft uses.”