Threat Modeling Guide: Know Your Adversary
Build a personal threat model in four steps — asset, adversary, capability, consequence. The OPSEC framework that underpins every guide on this site.
A threat model is the answer to four questions: what do you need to protect, who wants it, what can they actually do, and what happens if you get it wrong? Everything else in OPSEC — the tools, the workflows, the paranoia — flows from those four answers. Skip the threat model and you're guessing.
This guide walks through threat modeling from first principles. By the end, you'll have a framework you can apply to any situation — whether you're a journalist protecting a source, a researcher studying extremist networks, or an activist operating in a hostile jurisdiction.
What a Threat Model Is
Threat modeling is a structured way to reason about risk before you're under pressure. The term comes from software security — Microsoft formalized it with STRIDE in the late 1990s — but it applies equally to personal operational security.
The EFF's Surveillance Self-Defense defines it as "thinking about who might want to compromise your digital security and what they want from you." That's a good start, but it undersells the specificity required. "The government might be watching me" is not a threat model. "The tax authority in jurisdiction X has subpoena power over my domestic bank and can compel my ISP to produce 90 days of connection logs" — that's a threat model.
The goal is not to become paranoid. It's to spend security effort where it actually changes outcomes.
The Four Questions You Must Answer
Every threat model, personal or professional, answers these four in order (a short code sketch after the list shows one way to record the answers):
1. What assets are you protecting? Be concrete. "My privacy" is not an asset. "The identity of a source who gave me documents about prison conditions" is an asset. "My physical location during protests in [city]" is an asset. List them. Rank them.
2. Who is your adversary? Again, concrete. "Hackers" is not an adversary. "A subcontractor with access to my employer's HR system" is an adversary. "A state-level signals intelligence agency with upstream ISP access" is an adversary. Different adversaries require different countermeasures.
3. What can your adversary actually do? Capability determines the realistic attack surface. A stalker ex-partner probably can't get a court order for your metadata — but they can access any shared accounts or monitor physical locations. A federal agency can't read end-to-end encrypted messages in transit, but it can subpoena the metadata, compel device access at the border, or target the endpoints themselves. Know the difference.
4. What are the consequences if you fail? Lost sleep? Lost job? Arrest? Physical harm? The consequence tier determines how aggressively you need to protect each asset. Over-engineering security for low-consequence assets burns time and creates friction that causes people to give up entirely.
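Writing the four answers down as structured data keeps you honest, because every field has to hold something concrete. Here's a minimal sketch in Python; the ThreatModel class and its field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """One row of a personal threat model: the four answers for a single asset."""
    asset: str             # what you are protecting, stated concretely
    adversary: str         # a specific actor, not "hackers"
    capabilities: list[str] = field(default_factory=list)  # what they can actually do
    consequence: str = ""  # what happens if you fail

# The journalist scenario worked through later in this guide:
source_identity = ThreatModel(
    asset="Identity of a source who leaked prison-conditions documents",
    adversary="Federal law enforcement acting for the corrections department",
    capabilities=["metadata subpoenas", "device seizure and forensics"],
    consequence="Source imprisonment, harm to journalist",
)
```

One instance per asset. The full list of instances, ranked, is your model.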
Asset → Adversary → Capability → Consequence: A Worked Structure
Work through these four layers as a table, then decide on countermeasures.
| Asset | Adversary | Capability | Consequence |
|---|---|---|---|
| Source identity | Federal law enforcement | Subpoena, device seizure, metadata requests | Source imprisonment, harm to journalist |
| Protest location | Corporate surveillance / data brokers | Mobile ad ID tracking, license plate readers | Doxxing, termination |
| Medical history | Insurance adjuster / employer | Data broker records, breach exposure | Discrimination, denial of coverage |
| Financial transactions | Organized fraud ring | Phishing, SIM swap, credential stuffing | Financial loss |
| Online pseudonym | Determined individual / investigative reporter | OSINT, social graph analysis, writing style analysis | Exposure, reputational harm |
Fill in your own rows. You'll quickly see which assets need hardening and which are low-priority.
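One way to make that ranking explicit is a rough scoring pass. Here's a sketch assuming simple 1-to-3 tiers for capability and consequence; the tiers and the multiplication are illustrative assumptions, not a standard methodology:

```python
# Rough prioritization over threat-model rows: the worst combinations
# of adversary capability and consequence float to the top.
rows = [
    # (asset, adversary capability tier, consequence tier)
    ("Source identity",        3, 3),  # legal process, forensics / imprisonment
    ("Protest location",       2, 2),  # ad-ID tracking / doxxing, termination
    ("Financial transactions", 2, 1),  # phishing, SIM swap / financial loss
    ("Online pseudonym",       1, 2),  # OSINT / exposure, reputational harm
]

for asset, capability, consequence in sorted(rows, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{asset:24} priority {capability * consequence}")
```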
Common Threat-Model Archetypes
Not everyone has the same threat model. Here are five archetypes we encounter among readers, with the primary adversary, highest-risk asset, and recommended posture for each.
| Persona | Primary adversary | Highest-risk asset | Recommended posture |
|---|---|---|---|
| Investigative journalist | State agencies, corporations being investigated | Source identity, unpublished drafts | Tails + Signal + air-gapped device for source comms |
| Political activist (authoritarian state) | State intelligence, informants | Physical location, network identity, associates | Tor + Tails, strict compartmentalization |
| Security researcher | Targeted threat actors, employer liability | Research data, malware samples, lab identity | Qubes OS + isolated analysis qubes |
| Infosec student | Minimal adversarial threat; building habits | Credential hygiene, browser fingerprint | Password manager, Firefox containers, PGP basics |
| Privacy-aware civilian | Data brokers, ad tech, low-level fraud | Financial data, physical address, browsing history | PGP encryption, DNS-over-HTTPS, Tor for sensitive searches |
The activist operating under an authoritarian government has almost nothing in common with the infosec student. Tools that are appropriate for one can be dangerously insufficient for the other.
Walking Through a Worked Example
Consider a journalist contacting a source who leaked internal documents about prison conditions.
Assets: the source's identity, the documents themselves, the journalist's device and accounts.
Adversary: the corrections department being reported on, plus any federal or state law enforcement that department can lean on. Assume they can subpoena metadata from US-based service providers and issue a warrant for physical devices.
Capability: legal process (subpoenas, warrants), device forensics on seized hardware, metadata analysis of calls and emails, potentially a follow-on investigation into the source's workplace access logs.
Countermeasures by layer:
- Communications channel: Signal with disappearing messages (not SMS, not email). Signal encrypts content end-to-end and retains minimal server-side metadata; its published responses to legal process show little more than account-creation and last-connection dates. The phone number tied to the account is itself identifying, though, and anything still on a seized device is fair game. For initial contact, SecureDrop via Tor Browser.
- Device: Tails OS for any session that touches the source or the documents. Tails runs from RAM and is designed to leave no forensic trace on the host machine's storage.
- Documents: Never open on a networked device. Open on an air-gapped machine or inside an isolated Qubes disposable VM.
- Source identity: Never written down, never discussed electronically — not even encrypted. The strongest protection for source identity is that you genuinely can't be compelled to reveal what you don't have stored anywhere.
This is what a real threat model looks like in practice. Not "use Signal and a VPN," but a layered decision tree that addresses each adversary capability.
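That decision tree can be written down literally as a mapping from each adversary capability to the layer that answers it. A sketch restating the example above; the dict structure is just one way to hold it, and the content is not new advice:

```python
# Capability -> countermeasure pairs from the journalist example above.
countermeasures = {
    "metadata subpoenas to service providers":
        "Signal with disappearing messages; SecureDrop over Tor for first contact",
    "forensics on seized hardware":
        "Tails for any session that touches the source or the documents",
    "opening documents on a networked device":
        "air-gapped machine or a disposable Qubes VM only",
    "compelled disclosure of the source's identity":
        "never store the identity anywhere, in any form",
}

for capability, answer in countermeasures.items():
    print(f"{capability}\n  -> {answer}")
```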
When to Revise Your Threat Model
Your threat model is not static. Revise it when:
- Your jurisdiction changes. Crossing a border changes what legal authorities can compel and who has physical access to your devices. Border agents in many countries can demand device unlock without a warrant.
- Your beat or work changes. A reporter who starts covering corruption in a new country faces a different adversary than they did covering local city hall.
- A source attribution attempt happens. If you learn that a source has come under scrutiny, your exposure changes immediately, and compartments that felt sufficient may no longer be.
- A new capability appears. Law enforcement and intelligence services regularly acquire new tools. The Pegasus spyware revelations in 2021 changed threat models for journalists globally overnight.
- You make a significant mistake. If an identity leak, accidental login, or metadata exposure occurs, treat it as a trigger for a full model review.
Threat modeling isn't a one-time exercise. High-risk users should review their model at least quarterly.
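The triggers above are easy to encode as a standing check. A sketch assuming the quarterly cadence recommended here; the trigger labels and the annual fallback for lower-risk users are illustrative assumptions:

```python
from datetime import date, timedelta
from enum import Enum, auto

class Trigger(Enum):
    """The revision triggers listed above, as labels."""
    JURISDICTION_CHANGE = auto()
    WORK_CHANGE = auto()
    SOURCE_SCRUTINY = auto()
    NEW_ADVERSARY_CAPABILITY = auto()
    OPSEC_MISTAKE = auto()

def review_due(last_review: date, triggers: set, high_risk: bool = True) -> bool:
    """Any trigger forces an immediate review; otherwise the calendar decides."""
    if triggers:
        return True
    cadence = timedelta(days=90) if high_risk else timedelta(days=365)
    return date.today() - last_review > cadence

# A border crossing forces a review regardless of the calendar:
print(review_due(date(2025, 1, 10), {Trigger.JURISDICTION_CHANGE}))  # True
```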
Tools That Help
Three frameworks are worth knowing, not as rigid checklists but as thinking aids:
EFF Surveillance Self-Defense (ssd.eff.org) — The most accessible starting point for individual threat modeling. It walks through common scenarios with concrete tool recommendations tied to specific adversaries. Free and regularly updated.
NIST SP 800-154 — Data-centric threat modeling from the National Institute of Standards and Technology. Originally aimed at organizations, but the data-flow analysis methodology adapts well to personal use. Useful if you handle sensitive files or work with organizational data.
STRIDE — Microsoft's threat-modeling framework, originally for software. STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege. For personal OPSEC, the most applicable components are Spoofing (identity impersonation), Information Disclosure (data leaks), and Repudiation (establishing or denying action). It's more engineering-flavored than the EFF guide but worth reading if you think in systems terms.
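If it helps to keep the vocabulary at hand, here is STRIDE as a literal enumeration, with the subset the paragraph above flags for personal OPSEC:

```python
from enum import Enum

class Stride(Enum):
    SPOOFING = "identity impersonation"
    TAMPERING = "unauthorized modification"
    REPUDIATION = "establishing or denying action"
    INFORMATION_DISCLOSURE = "data leaks"
    DENIAL_OF_SERVICE = "blocking access to a resource"
    ELEVATION_OF_PRIVILEGE = "gaining rights you should not have"

# The three categories this guide flags as most applicable to personal OPSEC:
PERSONAL_OPSEC_FOCUS = {Stride.SPOOFING, Stride.INFORMATION_DISCLOSURE, Stride.REPUDIATION}
```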
No tool replaces the judgment you develop by actually writing out your own threat model and interrogating the assumptions.
What Threat Modeling Does NOT Do
Threat modeling tells you where to direct effort. It does not:
- Guarantee safety. A well-constructed threat model reduces risk; it doesn't eliminate it. Assume you'll get something wrong. Plan for it.
- Replace operational discipline. You can have the best tools in the world and still compromise yourself by logging into a personal account from a session that should stay anonymous. Discipline is the implementation layer; threat modeling is the design layer.
- Age well without maintenance. A 2019 threat model is not a 2026 threat model. Capabilities, jurisdictions, and adversary tactics evolve.
- Work if you share it carelessly. Your threat model itself is sensitive. Telling the wrong person exactly what you're protecting and from whom is its own operational failure.
- Cover physical security. Digital OPSEC is one dimension. Physical security — who has line-of-sight to your screen, who knows your schedule, who can access your hardware — is a separate threat model that deserves its own assessment.
The most dangerous failure mode in personal security is not technical. It's believing that having a threat model means you've handled it. The map is not the territory.
Frequently Asked Questions
What's the difference between a threat model and a risk assessment?
A risk assessment typically quantifies likelihood and impact across a broad set of scenarios — it's often a compliance artifact. A threat model is more specific: you're identifying concrete adversaries, their concrete capabilities, and the specific assets they'd target. Threat modeling is the input to a risk assessment, not the same thing.
Do I need a threat model if I'm just a private person?
Yes, but it'll be simpler. Most private individuals face data brokers, low-sophistication fraud, and targeted phishing as their primary adversaries — not state agencies. That still warrants a model. It just means your countermeasures are a good password manager, two-factor authentication, and careful browser hygiene rather than Tails OS and air-gapped machines.
How does threat modeling connect to OPSEC?
OPSEC (operational security) is the practice; threat modeling is the planning layer that makes OPSEC decisions defensible rather than arbitrary. You can't know which information to protect, or how aggressively, without first identifying who wants it and what they can do.
Can my threat model be wrong?
Absolutely. You'll misjudge adversary capability, miss an exposure vector, or hold an assumption that stops being true. That's not a reason to skip the exercise. A wrong threat model that you're actively maintaining and updating is far better than no threat model at all.
Is STRIDE useful for individuals, not just software engineers?
STRIDE is more useful as a vocabulary than as a rigid process for individuals. The concepts — especially around information disclosure and spoofing — help you ask the right questions about a situation. You don't need to fill out formal data-flow diagrams; you need the underlying questions the framework forces.
Related guides
- What is OPSEC? The Five Steps Explained
- Compartmentalization OPSEC: One Identity per Purpose
- Tails OS Guide: Installation and Use
- Whonix vs Tails: Choosing the Right Anonymous OS
- Qubes OS Explained: Security by Isolation
- How to Access the Dark Web Safely
- Tor Browser Setup Guide
- PGP Encryption Guide
- How We Vet Markets