Most people who experience identity theft know they’re a victim before they can prove it. The fraudulent accounts appear. The credit report shows applications they didn’t make. The bank calls about transactions in a city they’ve never visited. The damage is real and immediate.
But insurance recovery is a different problem. To collect under a cyber/identity theft insurance policy, the claimant typically needs to demonstrate when and how the compromise occurred — not just that it happened. That requires a forensic reconstruction of the compromise timeline, working backward from evidence the victim may not even know exists.
This composite scenario illustrates patterns common to identity theft forensic engagements. It represents how these investigations typically unfold, not a specific client matter.
The Insurance Problem
Identity theft insurance policies vary widely, but most require the policyholder to demonstrate that unauthorized access to their accounts and personal information occurred within the policy period. They typically also require documentation of specific losses and, increasingly, a forensic investigation report as a condition of significant claims.
The engagement begins when an attorney representing the policyholder contacts a forensic examiner. The client has experienced identity theft — multiple fraudulent accounts, unauthorized credit applications, funds transferred from financial accounts — and needs to establish a documented record of how and approximately when the compromise occurred.
This is investigative forensics working from the victim’s side. The goal isn’t to identify the perpetrator (that’s law enforcement’s work). The goal is to document the mechanism and timeline of the compromise from available digital evidence.
What Evidence Exists
Identity theft investigations work from a different artifact set than device-based investigations. The victim’s devices matter, but so do external records from service providers, financial institutions, and identity monitoring services.
Phishing and email compromise. The most common vector for identity theft is credential phishing — a convincing fake login page captures the victim’s username and password. The forensic evidence: the phishing email itself (with headers showing origin and routing), the victim’s email login records (IP addresses, device fingerprints, session durations), and in some cases, email forwarding rules added by the attacker to silently copy incoming communications.
Data breach correlation. Many identity theft events trace back to credentials compromised in a third-party data breach years earlier. The victim used the same password on a breached service as on their financial accounts. HIBP (Have I Been Pwned) and commercial breach intelligence feeds can document which breaches contain the victim’s email addresses.
Account access logs. Financial institutions, email providers, and major platforms maintain access logs showing login times, IP addresses, device identifiers, and geography. Obtaining these records via formal request — some platforms provide them to account holders, others require legal process — can document unauthorized access events with precise timestamps.
Device examination. The victim’s primary computer, phone, and any other devices they used for financial accounts. Looking for: credential-stealing malware, browser history showing visits to phishing sites, saved password stores that may have been exfiltrated, and any evidence of unauthorized remote access.
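The data breach correlation described above lends itself to scripting. A minimal sketch, assuming a valid HIBP API key (the key, user-agent string, and address below are placeholders):

```python
# Query the HIBP (Have I Been Pwned) v3 API for breaches containing an address.
# Assumptions: a paid HIBP API key; placeholder email and user-agent values.
import requests

HIBP_API_KEY = "your-hibp-api-key"          # placeholder
VICTIM_EMAIL = "victim@example.com"         # placeholder

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{VICTIM_EMAIL}",
    headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "idtheft-forensics-sketch"},
    params={"truncateResponse": "false"},   # include breach details, not just names
    timeout=30,
)

if resp.status_code == 404:
    print("Address not found in any indexed breach.")
else:
    resp.raise_for_status()
    for breach in resp.json():
        # Breach name, date, and exposed data classes feed the compromise timeline
        print(breach["Name"], breach["BreachDate"], breach["DataClasses"])
```

A 404 means the address doesn't appear in HIBP's indexed breaches; any successful response lists the breach names, dates, and exposed data classes that can be cited in the timeline.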
The Email Header Analysis
In the majority of identity theft cases we examine, the entry point is the victim’s email account. Compromised email is the master key — it enables password resets on every other account, intercepts two-factor authentication codes, and gives the attacker persistent visibility into the victim’s communications.
The first step: obtain the email account’s access logs. Google’s Gmail provides “Last account activity” details showing recent login IPs and device types, but more complete logs require a full Google account data export or legal process to Google. Microsoft 365 and Outlook.com have similar logs available through the account security settings.
In this scenario, the client’s Gmail access log (obtained via the client’s own Google account export) showed login events from two unfamiliar IP addresses on consecutive days. The IP addresses resolved to a VPN/proxy service commonly used to mask geographic location.
The login times were notable: 2:47 a.m. and 3:22 a.m. local time. The client was not a night owl and confirmed they were asleep during those hours.
The access logs also showed the creation of a forwarding rule within 10 minutes of the first unauthorized login. Every email sent to the client’s address was being silently forwarded to a disposable address at a free email provider. This rule had been active for 31 days before the client noticed and removed it — 31 days during which the attacker received copies of every incoming email, including bank statements, credit card notifications, and account confirmation emails.
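When access logs arrive as raw exports, a short script helps surface the anomalous sessions. A minimal sketch, assuming the provider’s log has already been normalized into a CSV with hypothetical `timestamp`, `ip`, and `device` columns, and that the client has supplied their usual IPs and waking hours:

```python
# Flag login events that fall outside the client's normal hours or come from
# IP addresses the client doesn't recognize. File name and columns are
# assumptions about a normalized export, not a provider's native format.
import csv
from datetime import datetime

KNOWN_IPS = {"203.0.113.10", "203.0.113.11"}   # client's home/work IPs (placeholders)
AWAKE_HOURS = range(7, 23)                      # client's stated waking hours (7 a.m.-11 p.m.)

with open("gmail_access_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])   # local-time ISO 8601 assumed
        off_hours = ts.hour not in AWAKE_HOURS
        unknown_ip = row["ip"] not in KNOWN_IPS
        if off_hours or unknown_ip:
            print(f"FLAG {ts}  ip={row['ip']}  device={row['device']}  "
                  f"off_hours={off_hours}  unknown_ip={unknown_ip}")
```

The flagged rows (off-hours logins from unrecognized IPs, like the 2:47 a.m. and 3:22 a.m. events here) are the ones to carry into the report with their full source records attached.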
Phishing Email Recovery and Analysis
Working backward from the account compromise, we needed to identify how the attacker obtained the client’s Gmail credentials.
The client recalled receiving what they thought was a Google security alert approximately three weeks before the unauthorized access events. They had clicked a link and entered their credentials on what turned out to be a phishing page.
The phishing email itself was no longer in the inbox — the client had deleted it after recognizing it was fraudulent. But two recovery paths existed.
Gmail Trash. Deleted Gmail messages remain in Trash for 30 days before permanent deletion. The email was still in Trash. We extracted it via the Google Takeout export, which preserves all messages in MBOX format including messages in Trash.
Email header analysis. The phishing email’s headers showed the routing path from origin to delivery. Key findings:
- The message claimed to come from `security-noreply@google.com` — but that `From:` address was spoofed; the SMTP envelope sender was an unrelated domain
- DMARC result: `fail` — the message failed DMARC evaluation against google.com’s published policy, meaning it didn’t actually come from Google’s authorized infrastructure
- The originating IP (the bottom-most `Received:` header) resolved to a cloud hosting provider in Eastern Europe
- The `Return-Path:` header pointed to a domain registered just days before the email was sent — a fresh registration, common in phishing infrastructure
This header analysis documented the attack vector: a phishing email that spoofed Google’s sending address, originated from Eastern European infrastructure, and used a freshly registered domain set up specifically for this campaign.
See the [email header analysis guide](/email-header-analysis-authentication/) for a detailed breakdown of how SPF, DKIM, and DMARC authentication results identify spoofed emails.
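For reference, here is a minimal sketch of pulling those fields from a Takeout MBOX export with Python’s standard `mailbox` module; the file name reflects a typical Takeout export and the subject filter is a placeholder:

```python
# Extract key headers from a phishing message recovered via Google Takeout (MBOX).
# Assumptions: typical Takeout file name; placeholder subject filter.
import mailbox

mbox = mailbox.mbox("All mail Including Spam and Trash.mbox")

for msg in mbox:
    if "security alert" not in (msg["Subject"] or "").lower():
        continue
    print("From:          ", msg["From"])
    print("Return-Path:   ", msg["Return-Path"])
    print("Auth-Results:  ", msg["Authentication-Results"])  # SPF / DKIM / DMARC verdicts
    received = msg.get_all("Received") or []
    # The bottom-most Received: header is the earliest hop, i.e. the origin
    print("Origin hop:    ", received[-1] if received else "n/a")
```

The `Authentication-Results:` header records the receiving server’s verdicts (Gmail’s, in this scenario), which is exactly where the DMARC `fail` finding comes from.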
Device Examination: Malware Assessment
With the phishing timeline established, we examined the client’s primary computer — a Windows laptop — for evidence of credential-stealing malware.
Phishing attacks that capture credentials via a fake login page typically don’t install malware on the victim’s device. The credential is captured at the web server level when the victim submits the form. But some phishing campaigns deliver a credential-stealing payload alongside the fake login (or separately, via a malicious attachment). We needed to determine whether the client’s device had been compromised beyond the credential theft.
Startup programs and scheduled tasks. We examined startup registry keys, scheduled tasks, and service entries for unfamiliar executables. None were found.
Browser extension audit. Malicious browser extensions can steal credentials, cookies, and session tokens. We audited installed extensions in Chrome and Edge against the client’s recollection of what they’d installed and against known-good extension databases. One extension — installed approximately one month before the incident — had no corresponding entry in the Chrome Web Store and used a permissions set (access to all websites, read browsing history) inconsistent with its claimed function. We flagged this for removal and documented it as a finding of concern, without asserting definitively that it was malicious.
Network adapter history. We checked for evidence of VPN software or remote access tools that the client didn’t install. None found.
Prefetch analysis. Windows prefetch showed no execution of known credential-stealing tools (Mimikatz, LaZagne, etc.) during the relevant period.
Assessment: no evidence of malware beyond the suspicious browser extension. The primary vector was the phishing credential capture, with the extension warranting further investigation.
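The extension audit in particular is easy to repeat from a forensic copy of the browser profile. A minimal sketch, assuming the profile has been copied out of the evidence image to the placeholder path below (Edge uses an analogous `User Data\Default\Extensions` layout):

```python
# Enumerate installed Chrome extensions and their requested permissions from a
# copied user profile. The profile path is a placeholder for mounted evidence.
import json
from pathlib import Path

profile = Path(r"E:\evidence\Users\client\AppData\Local\Google\Chrome\User Data\Default")

for manifest in (profile / "Extensions").glob("*/*/manifest.json"):
    data = json.loads(manifest.read_text(encoding="utf-8-sig"))
    ext_id = manifest.parts[-3]                 # the extension ID directory
    name = data.get("name", "?")                # may be a __MSG_*__ locale key
    perms = data.get("permissions", []) + data.get("host_permissions", [])
    print(f"{ext_id}  {name}  permissions={perms}")
```

Each ID can then be checked against the Chrome Web Store, and installation timing can be approximated from profile metadata such as the extension folder’s creation time.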
Access Log Correlation and Timeline Construction
The full timeline constructed from all evidence sources:
Day -37: Breach intelligence records show the client’s email address and an associated password hash in data from a previously disclosed breach of an e-commerce site. The client had reused this password on their Gmail account.
Day -22: The client’s email address is observed in a phishing campaign targeting Gmail users.
Day -19: The phishing domain used in the campaign is registered (domain registration records, public WHOIS).
Day -14: Client receives phishing email (email header timestamp, corroborated by Trash folder metadata). Client clicks link and enters credentials on phishing page (client’s own account of events).
Day -13: Suspicious browser extension installed on client’s computer (Chrome extension installation timestamp from browser profile data).
Day 0: First unauthorized Gmail login at 2:47 a.m. from VPN IP. Forwarding rule created 10 minutes later. Second login at 3:22 a.m.
Days 1-31: All incoming email silently forwarded to attacker-controlled address. Client’s email used for account takeover of financial accounts — unauthorized password reset requests visible in Gmail’s sent mail (forwarding rule didn’t affect outbound mail).
Day 31: Client discovers forwarding rule, removes it, resets Gmail password. Contacts attorney.
Day 43: Forensic examination begins.
This timeline documented the full compromise sequence — from credential breach correlation to phishing to account takeover to financial fraud — with specific dates and supporting artifact sources for each event.
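Mechanically, building the timeline is a matter of normalizing each dated artifact into a common record and sorting. A minimal sketch, with placeholder dates standing in for the real ones:

```python
# Merge dated events from different artifact sources into one sorted timeline CSV.
# Dates below are illustrative placeholders, not the actual case dates.
import csv
from datetime import datetime

events = [
    # (timestamp, source artifact, event description)
    (datetime(2024, 3, 1, 4, 12),  "Public WHOIS record",    "Phishing domain registered"),
    (datetime(2024, 3, 6, 15, 30), "Gmail Trash / headers",  "Phishing email delivered"),
    (datetime(2024, 3, 20, 2, 47), "Gmail access log",       "Unauthorized login from VPN IP"),
    (datetime(2024, 3, 20, 2, 57), "Gmail settings/filters", "Forwarding rule created"),
]

with open("timeline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "source", "event"])
    for ts, source, desc in sorted(events):
        writer.writerow([ts.isoformat(), source, desc])
```

Keeping the source artifact alongside every event is what lets the report cite support for each date.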
The Insurance Documentation Package
The insurance claim required specific documentation. We organized our report to address each element the policy required:
Date of compromise. The unauthorized Gmail login on Day 0, supported by access log records obtained from Google account export. Documented precisely: 2:47 a.m. local time on a specific date.
Method of compromise. Phishing email targeting Gmail credentials, with email header analysis supporting the mechanism.
Scope of unauthorized access. Gmail account access (confirmed by access logs), financial account password resets (documented from Gmail sent mail records), and the browser extension finding as a secondary potential vector.
Duration of access. The 31-day period during which the email forwarding rule was active and the attacker received copies of all incoming email communications.
Evidence of identity theft. The forwarding rule creation immediately following unauthorized access, the password reset emails for financial accounts found in Gmail’s sent mail, and the timeline correlation between the access events and the fraudulent account activity.
The report did not speculate about who conducted the attack or attempt to identify the perpetrator. It documented what occurred and when, from the available evidence.
Frequently Asked Questions
What records can a victim obtain from their own accounts without legal process?
Google (Gmail, Google Account), Microsoft (Outlook, Microsoft Account), Apple (iCloud), and most financial institutions provide account holders with access to their own security and login records. Google’s account activity page shows recent logins with IP and device information. Google Takeout allows full export of Gmail, including Trash and forwarding rule history. Financial institutions typically provide transaction logs and authentication records upon request, often faster than subpoena. Social platforms (Facebook, Instagram) provide security log exports through their “Download Your Information” features. Collect all of these at the start of any identity theft investigation — they’re available immediately, free of charge, and often contain the most important evidence.
Can IP addresses alone establish the location of an attacker?
No, and overstating geographic conclusions from IP addresses is a common error. An IP address identifies the registered owner of that address block — often an ISP, cloud hosting provider, or VPN service. It can suggest geography at the city or regional level when the connection is not masked by a VPN or proxy. When the access occurs through a VPN or proxy (common in sophisticated identity theft), the IP resolves to the VPN provider’s infrastructure, not the attacker’s actual location. Document what the IP resolves to and what it doesn’t prove. “Access originated from an IP registered to VPN service X” is accurate. “The attacker was in [country]” is not supported by the IP alone.
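A short sketch of documenting IP ownership without overreaching, assuming the third-party `ipwhois` package (`pip install ipwhois`); the address is a documentation placeholder:

```python
# Look up the registered owner / ASN of an IP address via RDAP.
# Assumption: the third-party ipwhois package; placeholder IP address.
from ipwhois import IPWhois

ip = "198.51.100.23"
result = IPWhois(ip).lookup_rdap()

print("ASN:         ", result.get("asn"))
print("ASN owner:   ", result.get("asn_description"))        # often a VPN or hosting provider
print("Network name:", result.get("network", {}).get("name"))
# Report this as "registered to <provider>", not as the attacker's location.
```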
How important is the data breach correlation to an insurance claim?
It depends on the policy. Some identity theft policies cover all unauthorized access, regardless of how credentials were obtained. Others specifically exclude losses resulting from credential reuse on breached third-party services. Review the policy language carefully before relying on breach correlation as a key element of the claim. Even where the breach correlation isn’t required for coverage, it contributes to the timeline — establishing when and how the attacker obtained credentials places the compromise in a broader context that strengthens the overall narrative.
What’s the most common mistake victims make that complicates the forensic investigation?
Changing passwords and security settings immediately after discovery, without documenting the state of the account first. Changing a password removes the compromised credential from the account but doesn’t create a record of what the account state was — forwarding rules, authorized applications, trusted devices — at the time the compromise was discovered. Before changing anything, take screenshots or export the account settings. In Gmail: screenshot the forwarding and filters page, connected apps page, and security events page before resetting. These screenshots, timestamped by the device, document the attacker’s persistence mechanisms before removal and are often key evidence in the insurance claim.
How long do email providers retain access logs?
This varies significantly by provider and plan. Gmail retains “last account activity” for approximately 28 days in the standard account view, but Google’s own internal logs are retained longer and are accessible via legal process. For personal Gmail accounts, the Google Takeout export is your best tool for victim-accessible records. Enterprise Google Workspace accounts have longer admin-accessible audit logs. Microsoft 365 business plans retain login and audit logs for 90 days as standard, with extended retention available on higher-tier plans. Consumer Outlook.com accounts have more limited log access. For any investigation requiring records beyond the account holder’s immediate access, preserve what’s available now and pursue legal process for extended records promptly — providers’ log retention schedules mean older data may not be available if you wait.