
Postmortem: how a Word document became Domain Admin

by Datatrek NightWatch · 4-minute read

Disclaimer: identifying details of the affected organisation have been altered. The technical chain is real and was reconstructed from agent telemetry, network logs, and the post-event interview with the IT lead. Published with the customer's consent.

Summary

  • T+0: Finance user opens a Greek-language invoice email with macro-enabled .docm.
  • T+0:01: Macro spawns regsvr32.exe with a remote scriptlet — first-stage download (the pattern is sketched just after this timeline).
  • T+0:04: Scheduled task created for persistence, named to mimic an Office update.
  • T+0:12: Credential dump via a legitimate-looking signed binary (LOLBin variant).
  • T+0:31: Lateral movement to file server using cached domain admin token.
  • T+0:47: Domain admin obtained from DC via remote DCSync.
  • T+0:53: Datatrek SOC alerts on anomalous DCSync from a workstation; XEDR isolates initial host.
  • T+1:08: Hunt complete. All persistence removed. No exfiltration.
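
That regsvr32 step is a well-documented LOLBin pattern: regsvr32 fetching a URL-hosted scriptlet and executing it via scrobj.dll. Here is a minimal hunting sketch for it, not the production rule that ran in this incident; it assumes process-creation telemetry exported as JSON lines with Sysmon-style "Image" and "CommandLine" fields (the field names and file path are assumptions, adjust to your own SIEM export):

    # Flag process-creation events matching the regsvr32 remote-scriptlet
    # pattern: a URL passed via /i: or an explicit scrobj.dll reference.
    import json
    import re

    SUSPICIOUS = re.compile(r"/i:\s*https?://|scrobj\.dll", re.IGNORECASE)

    def hunt(path):
        with open(path) as f:
            for line in f:
                ev = json.loads(line)
                image = ev.get("Image", "").lower()
                cmd = ev.get("CommandLine", "")
                if image.endswith("regsvr32.exe") and SUSPICIOUS.search(cmd):
                    print(f"[!] {ev.get('UtcTime', '?')} {ev.get('Computer', '?')}: {cmd}")

    hunt("process_creation.jsonl")  # hypothetical telemetry export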

What worked

The XNDR catch. The customer had Datatrek XEDR on the DCs but only a single EDR on workstations. That lone EDR did not catch the credential dump; the binary looked legitimate. What did catch the attack was XNDR's network-side detection of the anomalous DCSync traffic. That's the thesis of defense-in-depth: when one layer misses, another doesn't.
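
For readers who want the shape of that network-side logic: DCSync is simply directory replication requested by something that is not a DC. A sketch under assumptions, here against Zeek's dce_rpc log in JSON form (the "endpoint" and "operation" fields are standard Zeek; the DC allowlist and file path are placeholders):

    # Any non-DC host issuing a DRSGetNCChanges replication request is,
    # by definition, attempting a DCSync.
    import json

    KNOWN_DCS = {"10.0.0.10", "10.0.0.11"}  # placeholder DC addresses

    with open("dce_rpc.json") as f:  # hypothetical Zeek log export
        for line in f:
            rec = json.loads(line)
            if rec.get("endpoint") == "drsuapi" and rec.get("operation") == "DRSGetNCChanges":
                src = rec.get("id.orig_h")
                if src not in KNOWN_DCS:
                    print(f"[!] DCSync-like replication from non-DC {src} -> {rec.get('id.resp_h')}")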

The 5-month SIEM retention. Once we knew what to look for, we walked the SIEM backwards. The same C2 domain had been contacted twice in the previous 11 weeks, by the same workstation, each contact brief enough that no detection fired. With shorter retention we'd have lost that evidence and would not have known whether this was a single event or a sustained operation.
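
The walk-back itself was a simple query. A sketch of the idea, assuming DNS query logs exported from the SIEM as JSON lines with "ts", "host" and "query" fields (field names, file path, and domain are all placeholders; the real C2 domain stays private):

    # Given one IOC domain, answer: which hosts touched it, and when,
    # across the full retention window.
    import json
    from collections import defaultdict

    IOC_DOMAIN = "c2.example.invalid"  # placeholder for the real domain

    hits = defaultdict(list)
    with open("dns_queries.jsonl") as f:  # hypothetical SIEM export
        for line in f:
            rec = json.loads(line)
            q = rec.get("query", "")
            if q == IOC_DOMAIN or q.endswith("." + IOC_DOMAIN):
                hits[rec.get("host", "?")].append(rec.get("ts"))

    for host, times in sorted(hits.items()):
        print(f"{host}: {len(times)} lookups, first {min(times)}, last {max(times)}")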

The dual-EDR on the DC. The attacker had domain admin for 6 minutes before our DC's second EDR alerted on the DCSync attempt. The first EDR saw the same activity, but its rule for that pattern was permissive (signed binary, Kerberos-issued ticket, "looked normal"). The second EDR's behavioural rule fired immediately. Two independent vendors, two different decisions; we only need one to be right.
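
The host-side counterpart of that behavioural rule is also worth showing. Windows event 4662 records which control-access right was exercised; the two GUIDs below are the well-known directory-replication rights that DCSync needs. A sketch assuming events exported as JSON lines (the export format and account allowlist are assumptions; the GUIDs are not):

    import json

    REPL_GUIDS = {
        "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
        "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
    }
    DC_ACCOUNTS = {"DC01$", "DC02$"}  # placeholder DC machine accounts

    with open("security_events.jsonl") as f:  # hypothetical event export
        for line in f:
            ev = json.loads(line)
            if ev.get("EventID") != 4662:
                continue
            props = str(ev.get("Properties", "")).lower()
            who = ev.get("SubjectUserName", "?")
            if who not in DC_ACCOUNTS and any(g in props for g in REPL_GUIDS):
                print(f"[!] replication rights exercised by non-DC account {who}")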

What didn't work

The customer's email gateway. The malicious doc sailed straight through their commercial email gateway: the macro logic was novel enough that signature scanning missed it. The doc was technically blockable by a macro-disable policy, but the customer had a legitimate need for macros in Excel from one specific supplier and had loosened the policy domain-wide as a result. We fixed this by scoping the exception to that supplier only, via document signature verification.
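
What the tightened policy looks like in practice: Office's standard VBAWarnings policy value 3 means "disable all macros except digitally signed", and scoping to one supplier then comes down to trusting only that supplier's signing certificate in the Trusted Publishers store. A sketch assuming Office 2016 or later on a Windows endpoint; in production this would be a domain GPO, not a script:

    # Set "signed macros only" for Word and Excel via the per-user
    # Office policy registry keys (VBAWarnings: 1=enable all,
    # 2=disable with notification, 3=signed only, 4=disable all).
    import winreg

    OFFICE = "16.0"  # assumption: Office 2016 / Microsoft 365
    for app in ("Word", "Excel"):
        key = rf"Software\Policies\Microsoft\Office\{OFFICE}\{app}\Security"
        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, key) as k:
            winreg.SetValueEx(k, "VBAWarnings", 0, winreg.REG_DWORD, 3)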

Their existing backup strategy. Pre-Datatrek, this customer ran nightly snapshots to a NAS, which domain admin credentials would have allowed the attacker to delete. Had we not caught the breach in time, the attacker's next move would almost certainly have been backup deletion before encryption. Fortunately, they'd onboarded Datatrek S3 Backup with Object Lock three weeks earlier, and the lock would have held even with valid credentials.
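
Why the lock would have held: S3 Object Lock in compliance mode attaches a retention date that nobody, valid credentials or not, can shorten or remove. A sketch of the mechanism using the standard AWS API via boto3 (bucket, key, and retention period are placeholders; the target bucket must have Object Lock enabled at creation):

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")
    with open("backup.tar.zst", "rb") as body:
        s3.put_object(
            Bucket="nightly-backups",       # placeholder bucket
            Key="nightly/full.tar.zst",     # placeholder key
            Body=body,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
        )
    # Deleting this object version now fails until the retention date
    # passes, regardless of the caller's IAM permissions.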

Tier-1 outsourced helpdesk. The customer's tier-1 IT helpdesk was outsourced overseas. The affected user actually called them at T+0:14 because Word was "slow." Tier-1 logged a ticket and went home. The first contact with Datatrek came at T+0:53, via our own SOC alert, not via the customer.

What we recommended

  1. Macro policy by signing certificate, not domain-wide allow.
  2. Move workstations to dual-EDR — at least the executive and finance subset who handle invoices.
  3. Block DCSync from anything-not-DC at the firewall, not just at the EDR layer.
  4. Drop the outsourced tier-1. A single 47-minute window of unmonitored compromise wiped out 80% of the cost saving.
  5. Add a NightWatch direct-line. When something feels weird, calling us first is faster than logging a ticket.

What we learned

The single biggest predictor of a contained vs catastrophic incident, in our experience, is detection breadth, not depth. One excellent EDR is worse than two pretty-good EDRs from different vendors. One brilliant analyst is worse than three engineers on shift. NIS2 talks about this in abstract terms — "appropriate technical measures" — but operationally what it means is: don't bet your company on a single product or a single human being noticing.

That's why every Datatrek service is engineered as a layer in a posture, not a standalone product. If you take XEDR without XNDR or without SIEM, you've still made a meaningful improvement — but you've also kept some of the single-point-of-failure risk that this incident exploited.

The customer is fine. The DCs were rebuilt over the following weekend out of pure paranoia (we had no evidence of DC-level persistence, but DC trust is hard to re-earn after a domain admin event). Backups were intact. No exfil. No ransom paid.

This one didn't make the news. Most don't. We publish anonymized postmortems because the lessons should be public, even when the names can't be.

Talk to the SOC