A divorce attorney hands you a ZIP file. Inside: a ChatGPT export her client downloaded from their spouse’s shared laptop. The conversations allegedly show the spouse planning asset concealment. Before anyone takes that to a judge, someone needs to answer a hard question — is this actually what it appears to be?
AI conversation exports are showing up in civil litigation, criminal investigations, and workplace misconduct cases at a pace that’s outrunning the legal standards for authenticating them. Most examiners haven’t dealt with one before. Most attorneys don’t know what questions to ask.
This piece covers what’s actually inside these export files, what metadata exists (and what doesn’t), how tampering is detected — or not — and how to build a defensible chain of custody for AI conversation records.
What This Article Covers
- How to export conversation history from ChatGPT and Claude
- What metadata is included in each export format
- Authentication challenges specific to AI-generated records
- Tampering detection methods and their limitations
- Chain of custody best practices for AI conversation evidence
Exporting ChatGPT Conversations
OpenAI gives users the ability to download their entire conversation history through the account settings panel. The export arrives as a ZIP file containing several files, but the one that matters most forensically is `conversations.json`.
What’s in the JSON File
Each conversation object contains:
- A conversation ID (UUID format)
- A title (auto-generated by the model from the first few exchanges)
- A `create_time` timestamp (Unix epoch, UTC)
- An `update_time` timestamp
- A mapping of message nodes, each containing:
  - Message ID
  - Author role (`user`, `assistant`, or `system`)
  - Content text
  - Individual `create_time` per message
  - Model used (e.g., `gpt-4o`, `gpt-4-turbo`)
  - Status flags
The `chat.html` file included in the export renders a human-readable version of the same data. There’s also a `user.json` file with basic account metadata and a `message_feedback.json` file if the user rated any responses.
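A short sketch of how an examiner might walk this structure programmatically. The field names (`create_time`, `mapping`, `author.role`) follow the layout described above, but export schemas have changed over time, so treat this as a starting point to verify against the file actually in evidence:

```python
import json
from datetime import datetime, timezone

def summarize_conversations(json_bytes):
    """Summarize each thread in a ChatGPT-style conversations.json.

    Field names follow the structure described in the text; exports
    from other dates may differ and should be checked field by field."""
    conversations = json.loads(json_bytes)
    summary = []
    for conv in conversations:
        # create_time is a Unix epoch value; render it as UTC
        created = datetime.fromtimestamp(conv["create_time"], tz=timezone.utc)
        messages = []
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:  # root/placeholder nodes carry no message payload
                continue
            messages.append({
                "role": msg["author"]["role"],
                "create_time": msg.get("create_time"),
            })
        summary.append({
            "id": conv.get("conversation_id") or conv.get("id"),
            "title": conv.get("title"),
            "created_utc": created.isoformat(),
            "message_count": len(messages),
        })
    return summary
```

A per-conversation summary like this belongs in the examination report: it documents what was present at the time of hashing without editorializing about content.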
What’s Missing
Here’s the gap that matters in court: the export contains no cryptographic signature. OpenAI does not sign these exports. There’s no hash value embedded, no timestamp from a trusted third party, and no mechanism inside the file itself to prove it hasn’t been edited since download.
The timestamps are server-assigned values — they reflect when the conversation happened according to OpenAI’s systems — but once exported, that data lives only inside the JSON you’re holding. If someone opens that file in a text editor and changes a date or a message, nothing inside the file breaks.
Exporting Claude Conversations
Anthropic’s export process works differently. As of early 2026, Claude.ai allows users to request a data export through account settings. The export arrives via email as a downloadable archive and typically contains:
- A `conversations.json` file structured similarly to ChatGPT’s format
- Individual conversation files in some export versions
- Basic account information
The metadata fields in Claude exports include conversation IDs, timestamps, and model version identifiers. The format has evolved — examiners should document the exact structure they receive rather than assuming consistency with prior exports.
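Because the format has shifted between export versions, one practical habit is to inventory exactly which fields appear in the file examined, rather than describing the schema from memory. A minimal sketch:

```python
import json
from collections import Counter

def field_inventory(json_bytes):
    """Count which top-level fields appear across conversation objects,
    so the report can state exactly which schema version was examined."""
    conversations = json.loads(json_bytes)
    counts = Counter()
    for conv in conversations:
        counts.update(conv.keys())
    return dict(counts)
```

If a field appears in some conversation objects but not others, that itself is worth a line in the notes.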
One practical difference: Claude is trained with Anthropic’s Constitutional AI approach, and the model sometimes includes caveats or self-corrections within the conversation thread. These can appear as separate message nodes authored by the `assistant` role and are part of the authentic record.
Authentication Challenges
This is where forensic examiners earn their fees.
The Core Problem
Both ChatGPT and Claude conversation exports are user-initiated downloads. The user requests the export, receives a file, and that file sits on their device. Unlike a subpoena to a platform that returns records directly, these exports pass through the subject’s hands before reaching the examiner.
That chain is broken by design.
What Corroborates the Record
Since the export itself can’t self-authenticate, examiners need to build authentication from surrounding evidence:
Device artifacts. The download event should appear in browser history, download logs, and potentially in OS-level file system metadata. On Windows, the `$MFT` entry for the ZIP file will show a creation timestamp. On macOS, extended attributes can show when a file arrived from the web: the `com.apple.quarantine` xattr records the download date and the downloading application, and the `com.apple.metadata:kMDItemWhereFroms` attribute records the originating URL.
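The raw quarantine value (retrievable on macOS with `xattr -p com.apple.quarantine <file>`) is a semicolon-delimited string. The sketch below parses it under the commonly documented layout — flags, download time as a hex Unix timestamp, downloading application, event UUID — which should be verified against the system actually under examination:

```python
from datetime import datetime, timezone

def parse_quarantine(value):
    """Parse a com.apple.quarantine xattr value into its fields.

    Assumed layout: flags;hex-unix-timestamp;agent;event-UUID.
    Older macOS versions may emit fewer fields."""
    parts = value.split(";")
    flags, hex_ts = parts[0], parts[1]
    # The second field is the download time as a hexadecimal Unix epoch
    downloaded = datetime.fromtimestamp(int(hex_ts, 16), tz=timezone.utc)
    return {
        "flags": flags,
        "downloaded_utc": downloaded.isoformat(),
        "agent": parts[2] if len(parts) > 2 else None,
        "uuid": parts[3] if len(parts) > 3 else None,
    }
```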
Account access logs. OpenAI and Anthropic both maintain server-side logs of account access and data export requests. These can be obtained via legal process — a subpoena or preservation letter to the platform. The platform’s own record of when an export was generated is far more reliable than the file itself.
Browser forensics. Chrome, Firefox, Edge, and Safari all maintain history databases. The SQLite databases (`History` in Chrome, `places.sqlite` in Firefox) will show visits to the ChatGPT or Claude export request pages, and the download database will show the file receipt.
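A minimal sketch of querying a working copy of Chrome's `History` database for download records. The column names (`target_path`, `tab_url`, `start_time`) match current Chrome schemas but should be confirmed against the schema of the database in evidence, and Chrome stores timestamps in the WebKit epoch (microseconds since 1601-01-01 UTC), not Unix time:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

WEBKIT_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def webkit_to_utc(webkit_us):
    """Convert a Chrome/WebKit timestamp (microseconds since 1601) to UTC."""
    return WEBKIT_EPOCH + timedelta(microseconds=webkit_us)

def chrome_downloads(history_path):
    """List downloads from a COPY of Chrome's History database.

    Never open the live file; Chrome holds locks on it and touching
    it alters last-accessed metadata."""
    conn = sqlite3.connect(history_path)
    rows = conn.execute(
        "SELECT target_path, tab_url, start_time FROM downloads"
    ).fetchall()
    conn.close()
    return [
        {"path": p, "url": u, "started_utc": webkit_to_utc(t).isoformat()}
        for p, u, t in rows
    ]
```

A download row whose URL points at the platform's export endpoint, with a start time consistent with the account-log export timestamp, is exactly the kind of corroboration the section above describes.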
Email headers. For Claude exports delivered by email, the original email headers establish when Anthropic’s servers sent the archive. Headers are harder to fabricate convincingly than file metadata.
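Given the delivery email saved as a raw RFC 5322 message (an `.eml` file), the standard library can extract the sender's `Date` header and the timestamps in the `Received` chain, which together bracket when the archive actually moved through the mail infrastructure. A sketch:

```python
from email import policy
from email.parser import BytesParser
from email.utils import parsedate_to_datetime

def delivery_timeline(raw_email):
    """Extract the Date header and Received-chain timestamps from a raw
    RFC 5322 message, e.g. a Claude export delivery email saved as .eml."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_email)
    sent = parsedate_to_datetime(msg["Date"]) if msg["Date"] else None
    hops = []
    for received in msg.get_all("Received") or []:
        # The timestamp follows the final semicolon in each Received header
        _, _, stamp = str(received).rpartition(";")
        hops.append(parsedate_to_datetime(stamp.strip()))
    return sent, hops
```

Inconsistencies between these server-stamped times and the file system timestamps on the extracted archive are the kind of divergence worth documenting.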
Timestamp consistency. Cross-reference conversation timestamps against other known device activity. If a conversation timestamp shows 2:00 AM on a Tuesday but device logs show no network activity that night, that’s a flag worth examining.
The Tampering Problem
Because these files are plain JSON with no embedded integrity checks, tampering is trivially easy and forensically difficult to prove definitively.
What tampering detection can catch:
- File system metadata inconsistencies. If the `conversations.json` file has a `date modified` timestamp that post-dates the export email or the ZIP file’s own creation date, that’s meaningful.
- Internal timestamp logic breaks. Each message has its own `create_time`. If message 5 has a timestamp earlier than message 4 in the same conversation thread, the file has been edited.
- Character encoding artifacts. Editing a JSON file in Word or a basic text editor can introduce non-UTF-8 characters, BOM markers, or line ending changes (CRLF vs. LF) inconsistent with the original export format.
- JSON structure anomalies. Legitimate exports from each platform have consistent field patterns. Missing fields, extra fields, or unexpected nesting can indicate manual editing — though platform updates can also explain some variation.
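The internal checks above can be sketched as a small screening pass. These are flags, not proof: a hit warrants closer examination, and a clean result does not establish authenticity. The timestamp check here uses the conversation-level `create_time`/`update_time` window rather than message ordering, since the `mapping` structure is a tree rather than a flat sequence:

```python
import json

def tamper_flags(raw_bytes):
    """Run basic consistency checks on a conversations.json working copy.

    Assumes the ChatGPT-style schema described earlier; adjust field
    names to match the export version actually in evidence."""
    flags = []
    if raw_bytes.startswith(b"\xef\xbb\xbf"):
        flags.append("UTF-8 BOM present (atypical for platform exports)")
    if b"\r\n" in raw_bytes:
        # Platform exports are typically LF-only or single-line JSON
        flags.append("CRLF line endings (possible editor rewrite)")
    conversations = json.loads(raw_bytes.decode("utf-8-sig"))
    for conv in conversations:
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            ts = msg and msg.get("create_time")
            if ts and not (conv["create_time"] <= ts <= conv["update_time"]):
                flags.append(
                    f"message timestamp outside conversation window in {conv.get('id')}"
                )
    return flags
```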
What tampering detection cannot reliably catch: a technically sophisticated actor who edits the JSON correctly, preserves formatting, maintains timestamp logic, and then rebuilds the ZIP file with matching metadata. That level of fabrication would require deliberate effort, but it’s not impossible.
This is why platform-side records — obtained directly from OpenAI or Anthropic via legal process — are far more probative than user-provided exports.
Chain of Custody for AI Conversation Records
Chain of custody for these files follows the same principles as any digital evidence, but a few specifics matter here.
Acquisition
When you receive an AI conversation export as evidence:
- Hash the original ZIP file immediately on receipt using SHA-256. Document the hash value, the tool used, the date, and who performed the procedure.
- Do not extract the ZIP before hashing. The ZIP is the original evidence container.
- Extract to a working copy and hash the extracted files individually. Compare `conversations.json` hash values in your report.
- Store the original ZIP in write-protected storage. All analysis work happens on verified copies.
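The acquisition steps above can be sketched as follows — hash the original container first, then each member, without ever extracting over the original. The tool, operator, and date still need to be recorded alongside the output:

```python
import hashlib
import zipfile

def sha256_file(path, chunk=1 << 20):
    """Stream a file through SHA-256; suitable for large ZIP originals."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def hash_export(zip_path):
    """Hash the original ZIP first, then each member read in memory,
    leaving the evidence container untouched on disk."""
    record = {"zip_sha256": sha256_file(zip_path), "members": {}}
    with zipfile.ZipFile(zip_path) as z:
        for name in z.namelist():
            record["members"][name] = hashlib.sha256(z.read(name)).hexdigest()
    return record
```

Running the same routine against the working copy later, and comparing the `conversations.json` digests, is the comparison the report should show.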
Documentation
Your examination notes should capture:
- Who provided the file, when, and through what mechanism (email, USB, cloud share)
- The complete file path and name as received
- File system metadata (created, modified, accessed timestamps) before any examination
- The hash values and the tool used to generate them
- Any observable anomalies noted on first inspection
Preservation Requests
If you’re working with an attorney, push early for a litigation hold letter to OpenAI or Anthropic. Both companies have legal response teams that handle preservation requests for account data. Getting a certified copy of the conversation records directly from the platform eliminates the authentication problems that plague user-provided exports.
OpenAI’s legal process page and Anthropic’s legal contact should be part of every attorney’s playbook when AI conversations are at issue. The platform-side records include server logs, access timestamps, and model version data that the user export does not.
Practical Notes for Examiners
A few things learned from hands-on work with these files:
The ChatGPT JSON format has changed at least three times in the past two years as OpenAI has updated their export system. Don’t assume a 2024 export looks the same as a 2026 export. Document the specific fields present in the file you examined.
Claude exports are less common in litigation today but will likely increase. The Constitutional AI self-correction behavior means some messages may appear that look like they’re “from the AI” challenging the user — that’s normal behavior, not evidence of tampering.
Shared accounts complicate attribution significantly. If two people used the same OpenAI account, the conversation history belongs to both of them. The export cannot tell you which physical person typed which message without corroborating device evidence.
For [more on authentication challenges in emerging evidence types](/digital-evidence-authentication/), the core principles haven’t changed — it’s the application that’s new.
FAQ
Can I subpoena OpenAI or Anthropic directly for conversation records?
Yes. Both companies have established legal process response procedures. OpenAI publishes a transparency report and has a law enforcement guidelines page. Anthropic handles legal requests through their legal team. A properly served subpoena or court order can compel production of account records, access logs, and conversation data. This is almost always preferable to relying on a user-provided export.
Are AI conversation exports admissible as evidence?
Admissibility depends on the jurisdiction and how the evidence is authenticated. Federal courts apply FRE 901, which requires the proponent to produce evidence sufficient to support a finding that the item is what the proponent claims. A user-provided export without corroboration is a harder authentication argument than platform-certified records. State rules vary. Work with counsel on the specific authentication strategy before the exhibit list is due.
What if the subject deleted their conversations before the export?
If conversations were deleted before the export was generated, they typically won’t appear in the export file. OpenAI and Anthropic both allow users to delete conversations, and deleted conversations are not included in data exports. Server-side legal process may or may not recover deleted records depending on the platform’s retention policies at the time of deletion.
How do I handle AI conversation exports in a criminal case?
The same chain of custody principles apply, but the stakes for procedural compliance are higher. Hash on receipt, maintain write-protected originals, and document every step. If you’re the defense examiner, consider independently requesting platform records through the discovery process. If the prosecution’s export and the platform’s records diverge, that’s a significant finding.
Can metadata from the export prove who actually typed the messages?
The export metadata proves that messages were sent from an account — it does not prove who physically typed them. Attribution to a specific person requires corroborating evidence: device forensics showing the account was accessed from a particular device, geolocation data, login records showing only one person had account credentials, or other contextual evidence. AI conversation exports are account records, not biometric records.
Sarah Chen is a digital forensics examiner with experience in civil litigation support and electronic discovery. She holds the CCE and CCPA certifications.