by Lieuwe Jan Koning, CTO ON2IT

With over 300 managed customers who still have a substantial footprint of on-premises Microsoft Exchange servers, we don’t have to explain to you that our mSOC™ and CIRT teams have been busy since March 2, 2021 (some of our SOC engineers might call that the understatement of the year).

On March 2 our colleagues from Volexity reported the in-the-wild exploitation of four Microsoft Exchange Server vulnerabilities: CVE-2021-26855, CVE-2021-26857, CVE-2021-26858 and CVE-2021-27065. Through these vulnerabilities, adversaries can access exposed Microsoft Exchange servers and install additional tools to facilitate long-term access into victims’ environments.

As you know, the same day, Microsoft released an emergency out-of-band security update to patch these vulnerabilities. They strongly advised immediately updating all Microsoft Exchange servers to the latest available patched versions.

So from March 2 onward the only safe assumption is that an unpatched Exchange server is a breached server

From that moment on, the clock started ticking for the sysadmins of the hundreds of thousands of Exchange servers around the world (and for their risk officers and CISOs). Exploits had been seen as early as January, so from March 2 onward the only safe assumption is that an unpatched Exchange server is a breached server.

Even more seriously, an unpatched server is a server on which all the mailboxes may have been exfiltrated, and we must assume that the passwords of users who logged on to their webmail are compromised. Even now, the global impact of this breach is still sinking in.

You can’t say: I did not see that coming

But one thing is certain: you can’t say that you did not see it coming. The global SolarWinds attacks back in December should have been a wake-up call. Although those attacks, in retrospect, seem less impactful than the current Exchange turmoil, they exposed the very same weaknesses and patterns we are now observing in the turmoil surrounding the Microsoft Exchange CVEs.

The publication of extremely high-risk CVEs sets off frantic activity, both in the SOC and across customer IT departments

We all know the drill. The publication of extremely high-risk CVEs sets off frantic activity, both in the SOC and across customer IT departments. What is the impact? Where are the servers? How effective is the remediation? Are my security instruments and playbooks set up to detect and remediate exploits? Have I already been compromised? How do I hunt for indicators of malicious actors?

Some of our customers’ IT departments acted with great speed and effectiveness in the wake of Microsoft’s first public announcements and our subsequent alert messages. Other customers needed more active support from our mSOC™ teams. In some cases, for example for customers who had explicitly not included their Exchange servers in their managed service contracts with us, we deployed server agents and extended SOC onboarding in extremely tight timeframes to remediate the immediate threats and prevent further exploitation.

Important lessons

A few important lessons from the SolarWinds (and Citrix, and Zscaler, and earlier) attacks become even more visible in the light of the Exchange vulnerabilities:

Doing the right thing, right, the first (and every) time

There is no excuse for sloppy execution of patch and update management. Never.

You can’t fight what you don’t see

Lack of visibility into your infrastructure makes it extremely difficult to detect vulnerabilities being exploited. Without it, you cannot detect adversaries who access Microsoft Exchange servers and install additional tools to facilitate long-term access into your environment.

When you don’t have Cortex XDR agents (or equivalent agents) deployed on vulnerable servers and don’t use SSL Inbound Inspection for all SSL/TLS traffic destined for Exchange servers, it becomes much more difficult to detect and prevent threats. Especially in this Hafnium case: it is relatively easy for an attacker to cover their tracks after exploitation, and exploit tools are freely available on GitHub today.

So even if the server itself is now patched and there are no signs of a previous hack, the attacker may still have had access and installed other malicious remote access tools. But which ones? We need to find them all.
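As one hedged illustration of that hunt: attacker web shells often show up as freshly modified .aspx files in web directories that normally never change. The sketch below is a minimal example of that single hunting step; the cutoff date and the Exchange paths in the comment are assumptions for illustration, and Microsoft’s published detection scripts (such as Test-ProxyLogon) are far more thorough.

```python
import os
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical hunting window: exploitation was seen as early as January 2021.
CUTOFF = datetime(2021, 1, 1, tzinfo=timezone.utc)

def find_recent_aspx(root: str, cutoff: datetime = CUTOFF) -> list:
    """Return .aspx files under `root` modified after `cutoff`.

    A dropped web shell typically appears as an .aspx file with a recent
    modification time in a directory that rarely changes legitimately.
    """
    hits = []
    for path in Path(root).rglob("*.aspx"):
        mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if mtime > cutoff:
            hits.append(str(path))
    return sorted(hits)

# Example Exchange web roots to sweep (illustrative paths; adjust per install):
# r"C:\inetpub\wwwroot\aspnet_client"
# r"C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy"
```

A hit is not proof of compromise, but every flagged file deserves a manual look.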

Don’t assume there is a trust relationship between servers in your infrastructure

One of the basic tenets of our Zero Trust strategy is to establish appropriate access policies and required security controls for all logical segments in your infrastructure and then, not only enforce these policies, but also inspect them continuously.

You might not be able to stop a zero-day attack, but you can lower the impact substantially by blocking traffic from exploits that have established a foothold in one segment of your network to other network segments.
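To make that concrete, here is a toy model of the principle, assuming invented segment and service names and a default-deny policy table; this is an illustration of segmentation logic, not a real firewall configuration.

```python
# Toy model of segment-to-segment access policy with default deny.
# Segment and service names are illustrative assumptions, not a real rule set.
ALLOWED_FLOWS = {
    ("internet", "dmz-exchange", "https"),    # inbound webmail
    ("dmz-exchange", "internal-ad", "ldap"),  # directory lookups only
}

def is_allowed(src: str, dst: str, service: str) -> bool:
    """Default deny: traffic passes only if an explicit policy allows it."""
    return (src, dst, service) in ALLOWED_FLOWS
```

Under such a policy, an exploit with a foothold on the Exchange segment cannot open, say, SMB sessions toward a file-server segment, because no rule permits that flow.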

Policies are effective only when they are effectively implemented in security instruments such as firewalls

Security policies deteriorate with age. You need automated and rigorous tools and procedures to ensure that firewall rule sets, cloud configurations, user management and remote access capabilities are operationalized according to your risk policies and best practices. This obviously includes the correct usage of measures such as threat prevention, DNS monitoring, URL filtering, intrusion prevention and SSL decryption.
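As a hedged sketch of what such an automated check might look like: assume deployed rules can be exported as (source, destination, service, action) tuples (the baseline entries and rule format below are invented for illustration), and diff them against an approved baseline.

```python
# Sketch of an automated rule-set compliance check.
# Rule format and baseline entries are illustrative assumptions.
APPROVED_BASELINE = {
    ("any", "dmz-exchange", "https", "allow"),
    ("dmz-exchange", "internal-ad", "ldap", "allow"),
}

def audit_rules(deployed: set) -> dict:
    """Diff deployed firewall rules against the approved baseline."""
    return {
        # Drift: rules in production that nobody signed off on.
        "unapproved": sorted(deployed - APPROVED_BASELINE),
        # Gaps: approved rules that are not actually enforced.
        "missing": sorted(APPROVED_BASELINE - deployed),
    }
```

Run on a schedule, a diff like this surfaces configuration drift long before an attacker finds it.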

Customers of ON2IT’s Managed Prevention and Compliance (MPC) service can be sure that continuous inspection of their operational measures against their policies and industry best practices is something we take very seriously.

Most of these lessons are not new

We believe and invest in new ways to beat unknown attacks and attackers, such as attack surface visibility from the outside, AI-based behavioral analytics and the automation of repetitive workflows. But these tools are only effective if we learn the lessons of today.

Lieuwe Jan Koning
