AI Legal Research
Understanding the capabilities, limitations, and ethical obligations when using AI-assisted legal research tools.
Critical Warning: AI Tools Can Hallucinate
AI legal research tools can generate completely fabricated case citations, holdings, and legal analysis. Studies show hallucination rates between 17% and 33% for legal AI tools. Attorneys have been sanctioned for submitting AI-generated briefs containing fake cases. Every citation and legal proposition from any AI tool must be independently verified using traditional research methods before relying on it in any professional context.
Learning Objectives
After completing this section, you will be able to:
- Evaluate AI legal research tools and their current capabilities and limitations
- Identify hallucination risks and apply verification protocols to AI-generated content
- Apply ethical rules governing the use of AI in legal practice
Before You Read
You should understand traditional research methods and citator verification before using AI tools, because AI output must always be verified through those methods.
AI in Legal Research Today
The legal research landscape has been transformed by the integration of artificial intelligence, particularly large language models (LLMs), into mainstream legal research platforms. As of 2025-2026, all three major legal research providers—Westlaw, LexisNexis, and Bloomberg Law—have deployed AI-powered features designed to help attorneys research more efficiently.
These tools promise to revolutionize how lawyers find and analyze legal information through:
- Natural language queries — Asking questions in plain English rather than constructing Boolean searches
- Document summarization — Quickly distilling lengthy cases, contracts, or briefs
- Research memos — Generating draft analysis with supporting citations
- Document review — Analyzing contracts and identifying key provisions
- Drafting assistance — Helping create initial drafts of legal documents
The technology is evolving rapidly, and capabilities that seem cutting-edge today may be standard features tomorrow. However, the fundamental limitations of these tools—particularly their tendency to generate plausible-sounding but incorrect information—remain a serious concern that attorneys must understand and manage.
The Rapid Pace of Change
AI capabilities in legal research are advancing quickly. The specific features, interfaces, and even product names discussed here may change by the time you read this. Always consult the product documentation provided by each vendor for current capabilities and limitations. The fundamental principles—the need for verification, the ethical obligations, and the importance of building foundational skills—will remain constant regardless of how the technology evolves.
Understanding the AI Tool Landscape
AI tools for legal work fall into two fundamentally different categories, and understanding this distinction is critical for responsible use.
Two Categories of AI Tools
General-Purpose AI Is Not Legal Research
General-purpose AI tools (ChatGPT, Claude, Gemini, Copilot, etc.) are not legal research tools. They are not connected to legal databases, cannot verify citations, and have demonstrated hallucination rates of 69-88% on legal research queries. Several attorneys have been sanctioned for relying on these tools for case research. Never use general-purpose AI for locating legal authorities.
Category 1: General-Purpose AI (Not for Legal Research)
Tools like ChatGPT, Claude, Gemini, and Copilot are general-purpose language models designed to assist with a wide range of tasks. While they may be useful for general writing assistance, brainstorming, or explaining concepts in plain language, they:
- Have no connection to legal databases like Westlaw or Lexis
- Cannot verify that cases, statutes, or citations actually exist
- Frequently fabricate plausible-sounding but entirely fictitious legal authorities
- Cannot check whether authorities are still good law
- Have no access to recent legal developments
Category 2: Legal-Specific AI Tools (Use with Verification)
Major legal research platforms have developed AI tools that are integrated with their authoritative legal databases. These tools—while still requiring verification—draw from actual legal content and can link to source documents:
- Westlaw: CoCounsel, Deep Research AI, AI-Assisted Research
- Lexis: Lexis+ AI, Protégé
- Bloomberg Law: Bloomberg Law AI features
Even these legal-specific tools have hallucination rates of 17-33% and require verification of every citation and proposition.
| | General-Purpose AI | Legal-Specific AI |
|---|---|---|
| Use for legal research? | Not for legal research | Use with verification |
| Examples | ChatGPT, Claude, Gemini, Copilot, Perplexity | Deep Research AI, CoCounsel, Lexis+ AI, Protégé |
| Connection to legal databases | None | Integrated with legal databases |
| Citation verification | Cannot verify citations exist | Links to source documents |
| Hallucination rate on legal queries | 69-88% | 17-33% (still significant) |
| Good-law checking | Cannot check if authorities are good law | Shows KeyCite/Shepard's flags |
| Bottom line | Never use for locating legal authorities; attorneys have been sanctioned | Every citation still requires independent verification |
Westlaw AI Tools
Thomson Reuters offers several AI capabilities through Westlaw Advantage (launched August 2025) and CoCounsel:
Deep Research AI
Deep Research is Westlaw's agentic AI research capability—the system generates multi-step research plans, executes them iteratively, and produces comprehensive reports with citations to Westlaw sources.
- Creates research plans that users can review before execution
- Generates reports with arguments for and against legal positions
- Integrates with Westlaw's Key Numbers, KeyCite, and statutory annotations
- All citations link directly to Westlaw sources with KeyCite flags
- Comprehensive analyses take approximately 10 minutes to complete
CoCounsel
CoCounsel provides guided workflows and task-specific AI features:
- Litigation Document Analyzer — Analyzes briefs and motions, identifies arguments, suggests counterarguments with supporting authority
- Contract analysis — Reviews contracts and identifies key provisions and risks
- Timeline generation — Extracts and organizes key events from documents
- Deposition preparation — Generates potential deposition questions
- Research queries — Natural language questions with cited sources
AI-Assisted Research
- Natural language search that interprets legal questions
- AI-generated summaries of search results
- Integration with traditional Westlaw search and KeyCite
Lexis AI Tools
LexisNexis has deployed AI capabilities across its platform through Lexis+ AI:
Lexis+ AI
- AI Legal Search — Interprets natural language queries and locates relevant cases
- Case summarization — Extracts key facts, holdings, and reasoning
- Document drafting — Generates motions, complaints, and correspondence tailored to jurisdiction
- Contract analysis — Identifies missing clauses, inconsistencies, and risks
- Integration with Shepard's for citation verification
Protégé
LexisNexis offers two versions of its AI assistant:
- Protégé Legal AI — Optimized for legal research, drafting, and analysis with citations verified through Shepard's
- Protégé General AI — Allows access to general-purpose AI models (GPT-5, Claude) within the secure Lexis environment, with a "Best Fit" mode that automatically selects the optimal model for each task
Note: Protégé for law students is designed for learning, not professional research.
Bloomberg Law AI
Bloomberg Law has integrated AI features focused on practical law applications:
- AI-powered search — Natural language queries across Bloomberg Law content
- Document analysis — Contract review and key term extraction
- Brief analysis — Reviewing briefs for potential issues
- Integration with Bloomberg Intelligence — Combining legal AI with business and market analysis
Other Legal AI Tools
Beyond the major platforms, other legal-specific AI tools include:
- vLex Vincent AI — AI-powered research across vLex's international legal database, particularly strong for comparative and international law
- Harvey — Enterprise AI platform for large law firms, custom-trained on firm-specific documents (primarily available through firm-wide subscriptions)
Even Legal-Specific AI Requires Verification
The distinction between general-purpose and legal-specific AI matters, but it does not eliminate the need for verification. Legal-specific tools integrated with Westlaw, Lexis, or Bloomberg still hallucinate in 17-33% of queries. Every citation, holding, and legal proposition must be independently verified—see the Verification Requirements section below.
What AI Can and Cannot Do
Understanding the appropriate use cases for AI legal research tools is essential for using them effectively while avoiding serious pitfalls.
Good Use Cases for AI Legal Research
AI tools can be genuinely helpful for certain tasks when used appropriately:
Initial Orientation and Overviews
- Getting a quick overview of an unfamiliar area of law
- Understanding the general framework before diving into detailed research
- Identifying key concepts and terminology to use in traditional searches
- Note: Always verify the overview against authoritative secondary sources
Summarization
- Summarizing lengthy cases to quickly assess relevance
- Distilling key points from contracts or transactional documents
- Creating initial summaries of deposition testimony
- Note: Always read the full document for anything you will rely on
Drafting Assistance
- Generating initial drafts of routine documents
- Creating outlines for memoranda or briefs
- Suggesting language for standard provisions
- Note: All AI-generated drafts require substantial human review and editing
Issue Spotting
- Identifying potential issues you may have overlooked
- Suggesting additional search terms or concepts
- Flagging areas that warrant deeper research
Document Comparison and Review
- Comparing contract versions and identifying changes
- Extracting key terms and provisions from documents
- Initial categorization in document review projects
Poor Use Cases for AI Legal Research
AI tools should not be used—or used only with extreme caution—for these tasks:
Final Authority on the Law
- AI-generated statements about what the law "is" must always be verified
- AI tools cannot serve as your final research step
- Never cite an AI-generated research memo without independent verification of every citation
Complex Legal Analysis
- Nuanced interpretation of ambiguous statutes
- Analysis requiring deep understanding of procedural context
- Strategic decisions about which arguments to make
- Predicting how courts will rule on novel issues
Jurisdiction-Specific Research
- AI tools may not reliably distinguish binding from persuasive authority
- May miss important jurisdictional variations
- May not accurately assess whether cases are still good law
Finding Controlling Authority
- AI may miss the most relevant cases for your specific facts
- May not understand procedural posture nuances
- Cannot replace systematic digest or citator research
The Starting Point Rule
Think of AI tools as a starting point, never an endpoint. AI can help you begin your research more efficiently, but traditional research methods—secondary sources, digest searches, citator checks—must confirm and complete your analysis. The research is not done until you have verified everything through authoritative sources.
The Hallucination Problem
The most serious risk with AI legal research tools is "hallucination"—the generation of plausible-sounding but completely fabricated information. This is not a bug that will be fixed; it is a fundamental characteristic of how large language models work.
What the Research Shows
Landmark Stanford studies published in 2023 and 2024 tested legal AI tools and found alarming hallucination rates:
- General-purpose AI tools (like ChatGPT) hallucinated in approximately 69-88% of legal research queries
- Legal-specific AI tools (integrated with legal databases) performed better but still hallucinated in approximately 17-33% of queries
- Hallucinations included completely fabricated case names, citations, holdings, and even entire judicial opinions
- The AI tools often expressed high confidence in fabricated information
These are not edge cases or unusual scenarios. In routine legal research queries, AI tools generate false information at rates that would be unacceptable in any professional context.
The Statistics Are Stark
Even the best-performing legal AI tools hallucinate in roughly one out of every six queries, and some legal-specific tools in as many as one out of three. Would you use a research tool that gave you false information 17-33% of the time? That is the reality of AI legal research tools today. Every single output must be verified.
Real Cases of Sanctions
The consequences of relying on unverified AI output are not hypothetical. Multiple attorneys have faced sanctions for submitting court filings containing AI-generated fabrications:
Mata v. Avianca (S.D.N.Y. 2023)
Two attorneys were sanctioned $5,000 for submitting a brief containing six completely fabricated case citations generated by ChatGPT. The non-existent cases included made-up party names, citations, and holdings. The attorneys claimed they did not know ChatGPT could generate false information—an excuse the court did not accept.
Park v. Kim (E.D.N.Y. 2024)
An attorney was sanctioned for citing a non-existent case that was generated by AI. The court noted that the attorney had a duty to verify the existence and accuracy of cited authorities.
Multiple State Court Cases
Similar incidents have occurred in state courts across the country, with attorneys facing sanctions, referrals to disciplinary authorities, and significant reputational damage.
Why Hallucination Happens
Understanding why AI tools hallucinate helps explain why this problem cannot be easily fixed:
- Probability-based generation — LLMs generate text by predicting the most likely next word based on patterns in training data. They do not "know" facts; they produce statistically likely sequences of text.
- No true understanding — AI tools do not understand legal concepts, precedent, or the difference between real and fabricated cases. They produce text that looks like legal analysis without comprehending what it means.
- Confidence without knowledge — AI tools cannot assess their own accuracy. They produce fabricated information with the same confident tone as accurate information.
- Training data limitations — AI tools may have incomplete or outdated training data, leading to gaps filled with plausible-sounding fabrications.
Types of Hallucinations in Legal AI
AI hallucinations in legal research take several forms:
- Fabricated citations — Completely made-up case names, reporters, and page numbers
- Wrong holdings — Real cases cited for propositions they do not support
- Misattributed quotes — Quotations attributed to cases that do not contain them
- Incorrect procedural history — Wrong information about appeals, reversals, or current status
- Fabricated statutory text — Made-up provisions or incorrect citations to statutes
- Mixed information — Combining real information from different sources in misleading ways
Verification Requirements
Given the hallucination problem, rigorous verification is not optional—it is a professional and ethical requirement. Every piece of information from an AI tool must be independently confirmed.
Every Citation Must Be Verified
For any citation provided by an AI tool, you must:
- Confirm the case exists — Search for the exact citation in Westlaw, Lexis, or another authoritative database
- Read the actual case — Do not rely on AI summaries; read the court's actual words
- Verify the holding — Confirm the case actually stands for the proposition cited
- Check the citation format — Ensure the reporter, volume, and page numbers are correct
- Run citator analysis — Use KeyCite or Shepard's to confirm the case is still good law
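When checking citation format, it helps to know what each component of a reporter citation represents. Using a hypothetical citation as an illustration:

```
456 F.3d 789 (9th Cir. 2024)
 │    │   │      │       └── year the case was decided
 │    │   │      └── deciding court (U.S. Court of Appeals, Ninth Circuit)
 │    │   └── first page of the opinion in that volume
 │    └── reporter abbreviation (Federal Reporter, Third Series)
 └── volume number of the reporter
```

Keep in mind that an AI-fabricated citation often looks structurally perfect. A plausible volume, reporter, and page number prove nothing until you retrieve the actual case from an authoritative database.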
No Shortcuts
You cannot verify AI output by asking the AI tool to verify itself. You cannot verify by asking a different AI tool. You cannot verify by checking if the citation "looks right." The only acceptable verification is retrieving the actual source from an authoritative legal database and reading it yourself.
How to Verify AI Output
Search for Each Citation
Enter the exact citation into Westlaw, Lexis, or Bloomberg Law. If the case does not appear, it may be fabricated. Be aware that AI may get citation details slightly wrong even for real cases.
Read the Relevant Portions
Do not just confirm the case exists—read the sections relevant to your research question. Verify that the case actually discusses the legal issue AI claims it addresses.
Verify the Holding
Confirm that the case's holding matches what the AI represented. AI frequently cites real cases for propositions they do not support or overstates what a case actually decided.
Check Current Status
Run the case through KeyCite or Shepard's. A case that was good law when the AI was trained may have been overruled, distinguished, or superseded by statute.
Verify Any Quotations
If the AI provides quotations, search for the exact language in the case. AI frequently fabricates or alters quotations, sometimes subtly changing meaning.
Use Traditional Research to Confirm
Beyond citation verification, use traditional research methods to confirm AI-generated analysis:
- Secondary sources — Check treatises and encyclopedias to confirm the AI's description of the law
- Digest research — Use Key Numbers to find additional cases and confirm you have not missed important authorities
- Citator expansion — Use citing references to find related cases the AI may have missed
- Statutory research — Verify any statutory claims against the actual code
Documenting Provenance: The "How Would You Have Found It?" Test
Beyond verifying that AI-generated authorities are accurate, effective legal research requires understanding how you could have found each authority through conventional methods. This "provenance" requirement serves both pedagogical and practical purposes.
Why Provenance Matters
For any authority you first locate through an AI tool, you should be able to answer: "How could I have found this case, statute, or secondary source without AI?"
This requirement serves several purposes:
- Builds foundational skills — Understanding how authorities connect through digests, citators, secondary sources, and statutory annotations develops research judgment that AI cannot replace
- Reveals additional authorities — When you trace the conventional path to an AI-found case, you often discover related authorities the AI missed
- Ensures comprehensiveness — If you cannot identify a conventional research path to an authority, that may indicate gaps in your research strategy
- Demonstrates competence — Supervisors and courts expect lawyers to understand the structure of legal authority, not just to retrieve AI-generated lists
Documenting How You Found Each Authority
For each authority in your research, record its provenance—how you located it. This documentation should include:
If found through traditional methods:
- The secondary source, digest topic, citator, or search that led you to the authority
- Example: "Found in § 24.5 of Prosser on Torts" or "KeyCite citing references for Smith v. Jones, filtered to Ninth Circuit"
If found through AI:
- The AI tool used and the prompt or query you entered
- A description of how you could have found this authority using non-AI methods
Provenance Documentation Example
Authority: Martinez v. ABC Corp., 456 F.3d 789 (9th Cir. 2024)
How found: Westlaw Deep Research AI—prompt was: "What cases address employer liability for AI-generated hiring discrimination in the Ninth Circuit?"
How I could have found it without AI:
- Terms & Connectors search in Ninth Circuit cases: employ! /p liab! /p "artificial intelligence" /p discrim! /p hir!
- KeyCite citing references for Ricci v. DeStefano, filtered by "discrimination" headnotes and Ninth Circuit
- ALR annotation on "Employer Liability for Algorithmic Discrimination" (if one exists)
The Conventional Path Requirement
If AI leads you to a useful authority, go back and identify the conventional research path. This typically involves one or more of the following:
- Secondary sources — Which treatise section, ALR annotation, or practice guide discusses this issue and cites this authority?
- Digest/Key Numbers — Under which West Key Number(s) is this case classified? (Westlaw only)
- Citators — Which well-known case cites this authority? Could you have found it through KeyCite or Shepard's citing references?
- Statutory annotations — If the case interprets a statute, does it appear in the Notes of Decisions or Case Notes?
- Terms and Connectors search — What Boolean search in the relevant jurisdiction's case law would have retrieved this case?
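For orientation, the core Terms & Connectors operators used in searches like the example above work as follows. This is a simplified sketch with hypothetical search terms; consult your platform's search documentation for the full syntax:

```
negligen! /p hir!
    ! is a root expander (matches negligent, negligence, negligently)
    /p requires terms in the same paragraph

"scope of employment" /s liab!
    quotation marks search an exact phrase
    /s requires terms in the same sentence

damag! % punitive
    % (BUT NOT) excludes documents containing the term that follows
```

Constructing a Boolean search that retrieves an AI-found case is one of the simplest ways to satisfy the conventional path requirement, and it typically surfaces related authorities in the same result set.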
"Search by Case Name" Does Not Count
If AI gives you a case name or citation, searching for that exact citation in Westlaw or Lexis does not satisfy the conventional path requirement. You already know the case exists—the question is how you would have discovered it without AI. The goal is to demonstrate that you could navigate the legal research ecosystem, not just that you can type a citation into a search box.
Benefits of Tracing the Conventional Path
When you identify how you could have found an AI-suggested authority through conventional methods, you often discover:
- More relevant authorities — The secondary source section or digest topic that contains your AI-found case likely contains other relevant cases
- Better understanding — Seeing where an authority fits in the research structure deepens your understanding of the legal framework
- Research gaps — If you cannot identify a conventional path, you may have missed an important research avenue
- Verification confidence — Finding the same authority through multiple methods increases confidence in its relevance
Ethical Considerations
Using AI in legal research raises significant ethical obligations that attorneys must understand and address.
ABA Formal Opinion 512 (July 2024)
The American Bar Association issued Formal Opinion 512 providing guidance on lawyers' ethical obligations when using generative AI. Key points include:
- Competence (Rule 1.1) — Lawyers must understand the capabilities and limitations of AI tools they use, including the risk of hallucinations and errors
- Diligence (Rule 1.3) — Using AI does not excuse lawyers from diligent review; output must be carefully verified
- Confidentiality (Rule 1.6) — Lawyers must ensure confidential client information is not improperly disclosed to AI tools or third parties through AI use
- Supervision (Rules 5.1 and 5.3) — Lawyers must supervise others who use AI and ensure proper verification occurs
- Communication (Rule 1.4) — In some circumstances, lawyers may need to communicate with clients about AI use
- Fees (Rule 1.5) — Fees must remain reasonable; attorneys cannot bill clients for time "saved" by AI as if the work had been done manually
The Competence Obligation
The duty of competence now includes technological competence—understanding both the benefits and risks of tools you use. "I didn't know AI could hallucinate" is not an acceptable excuse. If you use AI tools, you must understand their limitations.
State Bar Guidance
Many state bars have issued their own guidance on AI use, sometimes with additional or more specific requirements. Check your jurisdiction for applicable rules. Common themes include:
- Requirements to verify AI-generated work product
- Obligations to maintain client confidentiality when using AI
- Guidance on disclosure of AI use to clients or courts
- Billing considerations for AI-assisted work
Confidentiality Concerns
Using AI tools raises significant confidentiality issues:
What You Input Matters
- Information entered into AI tools may be stored, logged, or used for training
- Third-party AI providers may have access to your inputs
- Even "private" or "enterprise" AI deployments may have data retention policies
Best Practices for Confidentiality
- Review the privacy policies and terms of service for any AI tool you use
- Use firm-approved AI tools that have appropriate data protection agreements
- Avoid inputting sensitive client information when possible
- Consider anonymizing or generalizing queries when researching sensitive matters
- When in doubt, do not input confidential information
Disclosure Requirements
Some courts now require disclosure of AI use in filed documents:
- Several federal district courts have issued standing orders requiring disclosure
- Some courts require certification that AI-generated content has been verified by a human
- Requirements vary by jurisdiction and are evolving rapidly
- Always check local rules and standing orders before filing
Check Local Rules
Before using AI to assist with any court filing, check whether the court has disclosure requirements or standing orders regarding AI use. These requirements are proliferating rapidly, and failure to comply may result in sanctions or striking of filings.
Best Practices for AI Legal Research
Following these practices will help you use AI tools effectively while avoiding the serious pitfalls.
1. Use AI as a Starting Point, Not an Endpoint
AI can accelerate the early stages of research but cannot replace thorough analysis:
- Use AI to get oriented in an unfamiliar area
- Use AI to identify potential search terms and concepts
- Use AI to generate initial drafts for further refinement
- Never consider research complete until you have verified through traditional methods
2. Always Verify Everything
No exceptions. Every citation, every holding, every legal proposition must be confirmed:
- Check that cited cases exist and say what AI claims
- Verify all quotations against original sources
- Confirm statutory text against the actual code
- Run citator checks on all authorities
3. Document Your AI Use
Maintain records of how you used AI in your research:
- Keep records of AI queries and outputs for your work files
- Document your verification steps
- Note which portions of work product were AI-assisted
- This protects you if questions arise later about your research process
4. Do Not Input Confidential Information
Protect client confidentiality when using AI:
- Avoid entering client names, case details, or sensitive facts into AI tools
- Use generalized or hypothetical queries when possible
- Review the data practices of any AI tool before using it
- Use only firm-approved tools with appropriate data protection
5. Maintain Skepticism
Approach AI output with appropriate skepticism:
- Do not assume AI output is correct because it sounds authoritative
- Be especially skeptical of citations you cannot easily locate
- Question analysis that seems too convenient or perfectly on point
- Remember that AI tools are designed to sound confident even when wrong
6. Build Traditional Skills First
Before relying on AI tools, develop strong foundational research skills:
- Learn to use digests, citators, and secondary sources effectively
- Understand legal research methodology
- Develop the judgment to evaluate whether AI output makes sense
- AI is most useful to those who already know how to research without it
For 1L Students: Special Considerations
First-year law students face unique considerations when it comes to AI legal research tools.
When AI May Be Prohibited
Your law school likely has policies regarding AI use that you must follow:
- Legal writing assignments — Many professors prohibit AI use entirely on research and writing assignments
- Exams — AI use on exams is virtually always prohibited
- Honor code implications — Using AI when prohibited may constitute an honor code violation with serious consequences
- Specific course policies — Always check the syllabus and ask your professor if unclear
Academic Integrity Is Non-Negotiable
If your professor prohibits AI use, using it anyway is academic dishonesty regardless of how you "verify" the output. The consequences—including potential dismissal from law school—far outweigh any perceived benefit. When in doubt, ask your professor before using any AI tool.
Building Foundational Skills First
Even when AI use is permitted, there are strong reasons to develop traditional research skills first:
Understanding the Law
- Working through research manually builds understanding of legal structure and hierarchy
- Reading cases develops legal analysis skills that AI cannot provide
- Understanding where law comes from helps you evaluate AI output
Developing Judgment
- You cannot verify AI output if you do not know what correct research looks like
- Supervisors and judges expect you to know how to research without AI
- AI tools may not always be available or appropriate
Career Preparation
- Job interviews often test traditional research skills
- Many employers expect demonstrated competence with fundamental methods
- The ability to research without AI is a competitive advantage
Appropriate Student Uses of AI
When permitted by your professor, AI may be appropriate for:
- Getting initial orientation in an unfamiliar area (with verification)
- Understanding concepts explained in class or readings
- Generating practice questions or hypotheticals for study
- Explaining the structure of complex documents
AI should not be used for:
- Completing research assignments when prohibited
- Generating text to submit as your own work
- Avoiding engagement with primary sources
- Replacing the learning process that assignments are designed to provide
The Long View
The attorneys who will be most effective with AI are those who deeply understand traditional legal research. AI is a tool that augments human judgment—it does not replace it. The time you invest now in learning fundamental research skills will pay dividends throughout your career, regardless of how AI technology evolves.
Summary: Key Takeaways
AI legal research tools offer genuine benefits but come with serious risks and ethical obligations:
- AI tools hallucinate frequently — Studies show 17-33% error rates even for legal-specific tools. Every output requires verification.
- Verification is not optional — Confirm every citation, holding, and legal proposition through authoritative sources.
- Ethical obligations apply — Competence, diligence, and confidentiality rules govern AI use. Know your obligations.
- AI is a starting point, not an endpoint — Use AI to accelerate early research, not to replace thorough analysis.
- Traditional skills remain essential — You cannot effectively use or verify AI output without strong foundational research skills.
- When in doubt, verify — The cost of verification is always less than the cost of relying on fabricated information.
Next Steps
To use AI tools responsibly, you must first master traditional research methods:
- Research Process — Learn the systematic approach to legal research
- Secondary Sources — Understand how to find and use authoritative secondary sources
- Citators — Master KeyCite and Shepard's for verification
- Platform Comparison — Understand traditional research capabilities across platforms
Check Your Understanding
- A partner asks you to use an AI tool to draft a research memo. What steps must you take before submitting any AI-generated analysis?
- What are the primary ethical obligations regarding AI use in legal practice under current ABA guidance?
- Why is it dangerous to rely on an AI tool's citation without independent verification, even when the tool links to a real database?