Ethics and use

HallucinX is an audit aid, not a substitute for attorney review. Here's how we think about that responsibility.

Your duty, not ours.

Every bar association that has addressed AI use in legal practice has reached the same conclusion: attorneys may use AI tools, but must independently verify the output before filing. This is a professional responsibility, not a suggestion.

  • California's Practical Guidance for Generative AI requires lawyers to review AI-generated “citations to authority for accuracy before submission to the court.”
  • Florida Ethics Opinion 24-1 requires attorneys to verify the accuracy and sufficiency of all AI-generated research.
  • Texas Opinion 705 — issued in direct response to Mata v. Avianca — states that attorneys must independently verify any AI output and cannot rely on it blindly.

HallucinX is designed to support that verification step. It does not replace it.

What we do.

  • Extract citations from your brief.
  • Verify each citation against CourtListener's public database of over ten million judicial opinions (the extract-and-verify loop is sketched after this list).
  • Apply fabrication heuristics: for example, flagging volumes or pages outside a reporter's published range, or opinions that claim to appear in a reporter series with complete digital coverage but cannot be located (an example heuristic is sketched below).
  • Classify each citation against what CourtListener's database returns, with a specific reason for every flag the tool raises (the classification step is sketched below as well). Methodology defines the full classification scheme.
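
As a rough illustration of the first two steps, here is a minimal sketch of an extract-and-verify loop, assuming Free Law Project's open-source eyecite library for extraction. The KNOWN_CITATIONS set and lookup_in_courtlistener helper are hypothetical stand-ins for the real CourtListener query; HallucinX's actual pipeline may differ.

```python
# Sketch only. eyecite is Free Law Project's open-source citation extractor;
# KNOWN_CITATIONS and lookup_in_courtlistener are hypothetical stand-ins for
# the real CourtListener database query.
from eyecite import get_citations

KNOWN_CITATIONS = {"576 U.S. 644"}  # toy index; the real database holds 10M+ opinions

def lookup_in_courtlistener(citation_text: str) -> bool:
    """Hypothetical helper: True if the citation resolves to a known opinion."""
    return citation_text in KNOWN_CITATIONS

def audit_brief(brief_text: str) -> list[dict]:
    """Extract every citation in the brief and look each one up."""
    results = []
    for cite in get_citations(brief_text):
        text = cite.matched_text()  # the citation string as it appeared in the brief
        results.append({"citation": text, "found": lookup_in_courtlistener(text)})
    return results

# e.g. audit_brief("Obergefell v. Hodges, 576 U.S. 644 (2015); Roe v. Wade, 410 U.S. 113 (1973).")
# would report the first citation as found and the second as not found in the toy index.
```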
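
The volume-range heuristic can be sketched the same way. The table below is hand-written for illustration, not HallucinX's real reporter data, though the F.3d entry reflects a real fact: the Federal Reporter, Third Series, closed at volume 999 before F.4th began.

```python
# Illustrative heuristic: flag a citation whose volume number exceeds the
# highest volume actually published for that reporter series. The table is
# a hand-written example, not HallucinX's real reporter data.
MAX_PUBLISHED_VOLUME = {
    "F.3d": 999,   # the Federal Reporter, Third Series, ended at volume 999
    "U.S.": 600,   # illustrative cap for United States Reports
}

def volume_out_of_range(reporter: str, volume: int) -> bool:
    known_max = MAX_PUBLISHED_VOLUME.get(reporter)
    return known_max is not None and volume > known_max

assert volume_out_of_range("F.3d", 1050)      # flagged: no such volume exists
assert not volume_out_of_range("F.3d", 415)   # within the published range
```

Note that unknown reporters return False here rather than True, mirroring the point made under Coverage and limits: a citation the tool cannot check is not thereby a fabrication.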
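
Finally, a sketch of the classification step. The labels follow the ones used in this section (Verified is implied; Unverified corresponds to the “Check manually” outcome; Alert is a heuristic flag), and the precedence shown is a guess; Methodology defines the authoritative scheme.

```python
# Sketch of the classification step. Labels follow this section's usage;
# the authoritative scheme is defined in Methodology.
from enum import Enum

class CitationStatus(Enum):
    VERIFIED = "Verified"      # matched an opinion in CourtListener
    UNVERIFIED = "Unverified"  # no match found: check manually
    ALERT = "Alert"            # a fabrication heuristic fired

def classify(found: bool, heuristic_flag: bool) -> CitationStatus:
    # Guess at precedence: a fired heuristic outranks a database match.
    if heuristic_flag:
        return CitationStatus.ALERT
    return CitationStatus.VERIFIED if found else CitationStatus.UNVERIFIED
```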

What we don't do.

  • We don't rewrite your brief.
  • We don't suggest alternative citations.
  • We don't train AI models on your documents.
  • We don't store your documents. They never reach our servers in the first place. (See our privacy policy.)
  • We don't retain logs of which citations you looked up.

Coverage and limits.

CourtListener's database is extensive but not complete. State court coverage varies by jurisdiction. Very recent opinions may not yet be indexed. A citation HallucinX cannot verify is not necessarily fabricated — it may simply be outside our source's coverage. We mark these citations “Check manually” rather than flagging them as fabrications. Attorney review remains required.

The tool can also miss citations entirely. Extraction failures from typos, OCR errors, or unusual formatting can prevent a citation from being checked at all. An attorney should compare the citation count HallucinX reports against their own count of citations in the brief and investigate any mismatch. Methodology covers these cases in detail.

And the tool can flag citations that are real. In benchmark testing, 10.85% of legitimate citations were classified as Unverified or Alert and required attorney follow-up. A flag is a prompt for review, not a finding of fabrication.

The cost of being deterministic.

HallucinX is more rigid than it could be. Using a language model to evaluate citations would let the tool reason about edge cases: citation forms it has never seen, jurisdictions outside its heuristic set, fabrications that don't match a known pattern. But it would also reproduce the failure mode the tool exists to catch. We chose rigidity over reach. Every flag has a concrete reason an attorney can show a judge; no flag rests on model probability.

Bar association guidance.

The legal profession has addressed AI use directly. We encourage attorneys to read the primary sources: