How to Fix Invalid JSON from ChatGPT (Parse Errors Explained)
If you're building an AI workflow, you've likely encountered this frustrating error in your logs: `SyntaxError: Unexpected token in JSON at position...`
Even with strict prompting and "JSON mode", Large Language Models like ChatGPT, Claude, and Gemini will occasionally output invalid JSON. Here is why it happens and how to fix it.
Why AI Generates Invalid JSON
- Markdown Artifacts: The LLM wraps the JSON in ```json fences, causing `JSON.parse()` to fail.
- Truncation: Due to token limits, the response gets cut off midway, leaving unclosed brackets like `{ "data": [ {"id": 1}`.
- Unescaped Characters: The model includes raw quotes or newlines inside string values (e.g., `"message": "He said "Hello""` instead of `"He said \"Hello\""`).
- Trailing Commas: LLMs often format JSON like human-written code, mistakenly leaving a comma after the final array item.
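The most common of these failures, markdown fences, can be handled before parsing. Here is a minimal sketch (the function name `parseLLMJson` and the fence regexes are illustrative, not from any particular library):

```javascript
// Strip leading/trailing markdown code fences (``` or ```json)
// from an LLM response, then parse the remainder.
function parseLLMJson(raw) {
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, "") // opening fence, optional "json" tag
    .replace(/\s*```\s*$/, "")           // closing fence
    .trim();
  return JSON.parse(cleaned); // still throws on genuinely invalid JSON
}

console.log(parseLLMJson('```json\n{"id": 1}\n```')); // { id: 1 }
```

This only addresses the fence artifact; truncation and unescaped characters still need repair or a retry, as discussed below.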
How to Fix It Automatically
When you're dealing with a one-off error, the fastest way to solve it is using an intelligent JSON repair tool.
Our JSON Indenter and Formatter includes built-in repair capabilities. When you paste LLM-generated JSON, it automatically detects and fixes:
- Missing quotes around keys.
- Trailing commas.
- Single quotes (converting them to double quotes).
- Basic truncation issues (by attempting to close open objects/arrays).
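To illustrate, two of these repairs can be sketched in a few lines. This is a simplified illustration (a hypothetical `repairJson` helper), not how the tool itself works; production repair logic uses a tolerant parser rather than regexes, since the naive bracket counting below ignores brackets inside string values:

```javascript
// Naive sketch of two repairs: trailing commas and truncation.
function repairJson(raw) {
  let s = raw.trim();
  // 1. Remove trailing commas before a closing brace/bracket.
  s = s.replace(/,\s*([}\]])/g, "$1");
  // 2. Close any brackets left open by truncation
  //    (naive: does not skip brackets inside string values).
  const stack = [];
  for (const ch of s) {
    if (ch === "{" || ch === "[") stack.push(ch);
    else if (ch === "}" || ch === "]") stack.pop();
  }
  while (stack.length) s += stack.pop() === "{" ? "}" : "]";
  return s;
}

console.log(repairJson('{"data": [ {"id": 1}')); // '{"data": [ {"id": 1}]}'
```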
Best Practices for Prompting JSON
To prevent these errors from happening in the first place, use the following prompt instructions:
Output ONLY raw, valid JSON.
Do not include markdown formatting like ```json.
Do not include any conversational text before or after the JSON.
Ensure all strings are properly escaped.

And consider optimizing your context window by converting bulky prompts into TOON format before sending them to the LLM!
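Even with these instructions, occasional failures are unavoidable, so production code usually pairs strict prompting with a parse-and-retry loop. A sketch follows; `generate` stands in for whatever function calls your LLM (hypothetical, swap in your SDK's call):

```javascript
// Parse the model's output; on failure, feed the parse error back
// and retry up to `maxAttempts` times.
async function generateJson(generate, prompt, maxAttempts = 3) {
  let lastError;
  for (let i = 0; i < maxAttempts; i++) {
    const raw = await generate(prompt);
    try {
      return JSON.parse(raw);
    } catch (err) {
      lastError = err;
      prompt = `${prompt}\n\nYour previous output was not valid JSON ` +
               `(${err.message}). Output ONLY corrected, raw JSON.`;
    }
  }
  throw lastError; // all attempts produced invalid JSON
}
```

The error message in the retry prompt gives the model concrete feedback, which in practice fixes most one-off formatting slips.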