ChatGPT and Claude are the two dominant AI assistants, but they have distinct writing styles. We tested both against major AI detectors to see which one is harder to detect—and the results might surprise you.
The Experiment
We generated 50 text samples from each AI model across five categories:
- Academic essays
- Blog posts
- Business emails
- Creative writing
- Technical documentation
Each sample was then run through five major AI detectors: GPTZero, Turnitin, Originality.ai, Copyleaks, and Content at Scale.
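If you want to reproduce a scaled-down version of this setup, the batching loop is straightforward. Below is a minimal sketch; the endpoint URL, auth header, and `ai_probability` response field are placeholders, since each detector (GPTZero, Originality.ai, Copyleaks, and so on) exposes its own API with its own schema.

```python
import requests

# Hypothetical detector endpoint and response schema. Each real
# detector (GPTZero, Originality.ai, Copyleaks, ...) has its own
# API; check its docs for the actual URL, auth, and field names.
DETECTOR_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "your-api-key"

def detection_score(text: str) -> float:
    """Send one sample to the detector and return its AI-probability score."""
    resp = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # field name is an assumption

def detection_rate(samples: list[str], threshold: float = 0.5) -> float:
    """Fraction of samples the detector flags as AI-written."""
    flagged = sum(detection_score(s) >= threshold for s in samples)
    return flagged / len(samples)
```

With one such helper per detector, the table below is just `detection_rate` over each model's 50 samples, averaged across the five services.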
The Results
Overall Detection Rates
| AI Model | Average Detection Rate (across all 5 detectors) |
|---|---|
| ChatGPT-4 | 94% |
| ChatGPT-4 Turbo | 91% |
| Claude 3.5 Sonnet | 87% |
| Claude 3 Opus | 83% |
Key Finding: Claude's outputs were detected at a slightly lower rate than ChatGPT's, with Claude 3 Opus being the hardest to detect at 83%.
Why Claude is Slightly Harder to Detect
1. More Variable Sentence Structure
Claude tends to vary its sentence length more naturally. While ChatGPT often produces consistently medium-length sentences, Claude mixes short, punchy statements with longer, more complex ones, a pattern more typical of human writing (the sketch after these four points shows one way to measure it).
2. Less Formulaic Transitions
ChatGPT loves transitions like "Furthermore," "Moreover," and "Additionally." Claude uses these less frequently and opts for more varied connective phrases, making its text less predictable.
3. Occasional Informality
Claude sometimes drops in conversational phrases or mild opinions that break up the "perfect" academic tone that AI detectors look for. ChatGPT tends to maintain a more consistent, formal voice.
4. Different Training Data
Since most AI detectors were initially trained primarily on ChatGPT outputs, they're naturally better at detecting ChatGPT's specific patterns. Claude's different training approach creates slightly different linguistic fingerprints.
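That sentence-length variability from point 1 is measurable; detectors often call it burstiness, and human writing tends to show a higher standard deviation of sentence length than AI output. Here is a minimal sketch for checking it yourself (the regex sentence splitter is a rough simplification; production tools use real tokenizers):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Higher values mean more swing between short and long sentences,
    which reads as more human to many detectors. The regex splitter
    is a rough stand-in for a proper sentence tokenizer.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The storm had been building all afternoon, piling cloud on cloud. Then it broke."
print(burstiness(uniform))  # 0.0: every sentence is exactly 4 words
print(burstiness(varied))   # ~5.3: sentences of 1, 11, and 3 words
```

Run the same function over ChatGPT and Claude outputs from identical prompts and you can see the difference described above for yourself.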
Detection by Content Type
Academic Essays
Both models were flagged at high rates (95%+) for academic content. Detectors are specifically tuned for academic settings, and both AIs produce similarly formal academic prose.
Blog Posts
Claude performed notably better here (78% vs 89% for ChatGPT). Blog posts allow for more casual writing, where Claude's natural variability shines.
Business Emails
Similar detection rates (around 85%) for both. Business writing is formulaic by nature, making both AIs equally detectable.
Creative Writing
The biggest gap: Claude at 75% vs ChatGPT at 92%. Claude's creative writing has more stylistic variation and unexpected word choices.
Technical Documentation
Both models were detected at high rates (90%+). Technical writing follows strict patterns that both AIs handle similarly.
Does It Really Matter?
Here's the honest truth: both models are highly detectable. The difference between 83% and 94% detection doesn't matter much when you need your content to pass.
Whether you use ChatGPT or Claude, you'll likely need to humanize the output if you want it to bypass AI detectors reliably.
Tips for Less Detectable AI Content
Regardless of which AI you use:
- Give specific prompts: Vague prompts produce generic, detectable content
- Ask for a specific tone: "Write casually" or "be conversational" helps
- Request varied sentence structure: Explicitly ask for a mix of short and long sentences (see the prompt sketch after this list)
- Add personal touches manually: Include anecdotes, opinions, or unique phrases
- Use an AI humanizer: Tools like AI Undetectable can transform either model's output to pass detectors
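To make those tips concrete, here is a minimal prompt sketch using the OpenAI Python SDK; the model name and prompt wording are only examples, and the same system prompt works just as well with Claude through Anthropic's SDK.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt bakes in the tips above: a specific tone,
# varied sentence structure, and room for a mild opinion.
SYSTEM_PROMPT = (
    "Write in a casual, conversational tone. Mix short, punchy "
    "sentences with longer ones. Include a mild personal opinion "
    "and avoid stock transitions like 'Furthermore' or 'Moreover'."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # example model name; swap in whichever you use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write a 200-word blog intro about remote work."},
    ],
)
print(resp.choices[0].message.content)
```

Prompting this way lowers detection scores but doesn't eliminate them; the manual touches and humanizer pass above are still what close the gap.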
Our Recommendation
Use whichever AI you prefer for its other strengths: Claude for nuance and instruction-following, ChatGPT for speed and plugins. Then run the output through a quality humanizer to ensure it passes detection.
The marginal detectability difference between models is far less important than what you do with the output afterward.
Make Any AI Content Undetectable
Whether you use ChatGPT, Claude, or any other AI, our humanizer transforms the output to bypass all major detectors.
Try Free →