Verify whether claims are actually supported by evidence.
CiteGuardian checks cited sources, extracts the real claims, and shows you what holds up, what falls apart, and what was never supported in the first place.
Works on AI answers, articles, reports, and any highlighted text in your browser.
5 free verifications — no credit card required
Built for people who can't afford fake confidence
Check AI-generated answers, articles, reports, research summaries, or any claim-heavy text.
CiteGuardian breaks the text into individual claims, reviews the cited sources, and checks whether the evidence really supports what's being said.
See which claims are supported, which are unsupported, which are contradicted by the evidence, and which citations are just there for show.
Five independent layers cross-check every claim — no single model making a guess
Breaks text into individual factual claims so each one gets checked on its own terms.
Finds the most relevant passages in your sources using semantic matching, not just keywords.
Evaluates whether the evidence actually supports, contradicts, or fails to back each claim.
Deterministic rules catch precision errors, misattributions, and contradictions the AI might miss.
Removes the citation and re-runs the check. If the verdict doesn't change, the citation is decorative.
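The citation-ablation idea above can be sketched in a few lines. Everything here is illustrative, not CiteGuardian's actual internals: `verify` stands in for any checker that returns a verdict for a claim given a set of sources.

```python
def is_decorative(claim: str, sources: list[str], cited: str, verify) -> bool:
    """Return True if the cited source is doing no evidential work.

    `verify` is any callable (claim, sources) -> verdict string,
    e.g. "supported" / "unsupported" / "contradicted".
    All names here are illustrative sketches, not a public API.
    """
    with_citation = verify(claim, sources)
    without_citation = verify(claim, [s for s in sources if s != cited])
    # If removing the citation doesn't change the verdict,
    # the citation was decorative: authority, not proof.
    return with_citation == without_citation
```

A verdict that survives the removal of its own citation was never resting on that citation in the first place.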
Built for the three ways evidence goes wrong
Sources that look credible but do not actually support the claim. The citation adds authority, not proof.
When the cited evidence says the opposite of what the text claims it shows.
Statements that sound factual and well-sourced but are not backed by the cited evidence.
Real examples of what CiteGuardian catches
"NHS guidance states tea and coffee do not count toward daily fluid intake."
Evidence says: NHS guidance explicitly states tea and coffee do count toward fluid intake.
"A WHO review in 2024 found hybrid work reduces burnout by 37%."
Evidence says: No direct evidence found for the specific percentage or attribution to WHO.
That's a real problem in AI-generated content, but it doesn't stop there. Analysts, researchers, writers, and teams relying on internal reports all face the same risk: claims that look sourced but don't stand up.
CiteGuardian helps you catch weak evidence before it becomes a published mistake.
Not just for AI — anywhere claims meet evidence
Add verification to generated answers before they reach users.
Check whether reports, summaries, and cited statements are actually supported.
Spot overclaims, weak sourcing, and evidence mismatches before they create risk.
Review AI-assisted or human-written content before publishing.
Paste text and sources, get a full report with per-claim verdicts, evidence excerpts, confidence scores, and alignment breakdowns.
Best for: one-off checks where you want the full picture.
Highlight any text on a web page, right-click, and verify. The page itself becomes the source — no copy-paste needed.
Best for: spot-checking articles, AI answers, and reports as you read.
Add a verification step to your RAG pipeline, content workflow, or review process. Includes a pass/fail gate for blocking hallucinated responses.
Best for: programmatic verification in production systems.
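A pipeline integration might look roughly like this. The endpoint URL, payload shape, and response fields are assumptions for illustration, not the documented API; check the real API reference before wiring this up.

```python
import json
from urllib import request

# Hypothetical endpoint, for illustration only.
API_URL = "https://api.citeguardian.example/v1/verify"

def verify_answer(answer: str, sources: list[str], api_key: str) -> dict:
    """POST an answer and its sources for verification.

    Payload and response shapes here are assumed, not documented.
    """
    payload = json.dumps({"text": answer, "sources": sources}).encode()
    req = request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def passes_gate(report: dict, max_unsupported: int = 0) -> bool:
    """Pass/fail gate: block responses with too many weak claims.

    Counts claims whose verdict is "unsupported" or "contradicted"
    (assumed verdict labels) and compares against a threshold.
    """
    unsupported = sum(
        1 for claim in report.get("claims", [])
        if claim.get("verdict") in ("unsupported", "contradicted")
    )
    return unsupported <= max_unsupported
```

The gate is deliberately a separate pure function: your pipeline can tune `max_unsupported` per use case without touching the network code.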
Upload a JSON file of answers and sources, or use the batch API endpoint. Results delivered via polling or webhook callback.
Best for: auditing document sets, model outputs, or content backlogs.
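A batch file might look something like this. The field names and webhook shape are illustrative assumptions; consult the actual API reference for the required schema.

```json
{
  "items": [
    {
      "id": "answer-001",
      "text": "Hybrid work reduces burnout by 37%.",
      "sources": ["https://example.com/source-article"]
    }
  ],
  "callback_url": "https://example.com/webhooks/citeguardian"
}
```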
Monthly subscriptions with included credits. Start free, upgrade when you need more.
5 credits on signup
75 credits/month
250 credits/month
Verifications cost 1–4 credits based on claim count. See full pricing details
Verify the evidence before you trust the claim.