
AI & Hallucinations ....+ Great Opps, PCDN AI for Impact Newsletter, Feb. 3, 2025

In partnership with

The #1 AI Meeting Assistant

Typing manual meeting notes drains your energy. Let AI handle the tedious work so you can focus on the important stuff.

Fellow is the AI meeting assistant that:

✔️ Auto-joins your Zoom, Google Meet, and Teams calls to take notes for you.
✔️ Tracks action items and decisions so nothing falls through the cracks.
✔️ Answers questions about meetings and searches through your transcripts, like ChatGPT.

Try Fellow today and get unlimited AI meeting notes for 30 days.

AI Hallucinations: What You Need to Know

Did you hear about the New York City law firm that got fined $5,000 last year? One of their lawyers used ChatGPT to draft a brief, and it was filled with inaccuracies—even citing completely fabricated cases!

This isn't an isolated incident. Researchers from Stanford and Yale have found that many AI-generated legal outputs are plagued with similar errors. Why? It’s all due to a phenomenon called hallucination, where AI generates responses that don’t match reality.

I've seen this myself many times when asking AI tools (some are better than others) to find data or a source on a particular issue. For example: help me find data showing how much was spent on ethical AI worldwide in 2024.

Even though I'm skilled at prompting and give specific instructions to rely only on verified sources and citations, I often get interesting suggestions. On closer examination, though, roughly 85% of them turn out to be made up.

One way I've found around this is to build my own database using tools that let me upload only trusted sources and then run queries solely against that data set. There are a number of research tools built for this (affor.ai is my favorite).
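To make the idea concrete, here is a minimal sketch of what "query only your own trusted sources" looks like under the hood: a simple TF-IDF ranker over documents you supply yourself, so every answer traces back to a source you vetted. This is purely illustrative — the document names and texts are placeholders, and real tools like affor.ai are far more sophisticated (embeddings, chunking, citation extraction).

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def rank(query, docs):
    """Score each trusted document against the query with a basic
    TF-IDF weighting; return (score, source) pairs, best match first.
    `docs` is a list of (source_label, text) tuples you curated."""
    n = len(docs)
    df = Counter()          # document frequency per term
    tokenized = []
    for source, text in docs:
        toks = tokenize(text)
        tokenized.append((source, Counter(toks)))
        df.update(set(toks))
    q_terms = tokenize(query)
    results = []
    for source, tf in tokenized:
        score = sum(
            tf[t] * math.log((n + 1) / (df[t] + 1))
            for t in q_terms if t in tf
        )
        results.append((score, source))
    return sorted(results, reverse=True)

# Hypothetical trusted corpus — replace with your own vetted documents.
corpus = [
    ("report-a.pdf", "global spending on ethical AI initiatives in 2024"),
    ("blog-post.html", "recipe for banana bread"),
]
best = rank("ethical AI spending 2024", corpus)[0]
print(best[1])  # the top-ranked trusted source
```

Because the ranker can only return documents from `corpus`, it cannot invent a source that doesn't exist — which is exactly the property that makes this approach resistant to hallucinated citations.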

Here’s the kicker: experts argue that these hallucinations are not fixable. They're a fundamental part of how generative AI works. These models are designed to respond to prompts, even if their information is incomplete or incorrect.

So what’s the solution?

Well, pairing AI with robust fact-checking systems is one promising way. Companies like Vectara are already working on hallucination detectors, which could significantly reduce misinformation in AI outputs.

This recalibrated approach could transform how we use AI in fields like healthcare, education, and even journalism by creating more specialized tools tailored for specific contexts rather than relying on general-purpose models.

For those interested in the nitty-gritty details and future implications of these advancements, check out this insightful article from Scientific American.

What are your thoughts on AI for Good? Hit reply to share your thoughts or favorite resources, or fill out the super quick survey below.

Got tips, news, or feedback? Drop us a line at [email protected] or simply respond to this message or take 15 seconds to fill out the survey below. Your input makes this newsletter better every week.

Share your feedback on the AI for Impact Newsletter


News & Resources

😄 Joke of the Day
Why did the AI go to art school? Because it wanted to learn how to draw better conclusions!

🌍 News

  • 🇧🇷 Brazil's AI Regulation Challenges Big Tech
    Brazil is crafting AI regulations aimed at balancing innovation with ethical oversight, potentially influencing global policies. The country seeks to hold major technology companies accountable while fostering responsible AI development. Read more

  • 💬 Understanding ChatGPT: OpenAI's Groundbreaking Chatbot
    ChatGPT, developed by OpenAI, represents a significant advancement in AI-driven conversational agents. This article explores its development, capabilities, and the implications for human-computer interactions. Read more

  • 🤔 A Fork in the Road: Navigating AI’s Future
    This thought-provoking essay delves into the critical decisions facing AI development today, emphasizing the importance of ethical considerations and the societal impacts of AI technologies. Read more

  • 📜 AI-Generated Content and Copyright Protection
    The U.S. Copyright Office has ruled that AI-generated text based on prompts isn't eligible for copyright protection. This decision raises important questions about authorship and intellectual property in the age of AI. Read more

Tool Spotlight
🌱 Climate TRACE – This AI-powered platform tracks greenhouse gas emissions worldwide using satellite data and machine learning, helping policymakers fight climate change. Explore more

💼 Organization to Watch
🌍 World Economic Forum (WEF) – A key player in AI policy and global governance, WEF is shaping the future of responsible AI. They’re hiring across AI, policy, and sustainability roles. Browse jobs

🔗 LinkedIn Connection
👩‍💻 Rebecca Finlay – CEO of Partnership on AI, driving ethical AI adoption and collaboration between tech, academia, and policymakers. Connect here

Superhuman AI – Keep up with the latest AI news, trends, and tools in just 3 minutes a day. Join 1,000,000+ professionals.

AI for Impact Opportunities

PCDN Weekly Impact Newsletter – The world's best human-curated social impact job, funding, upskilling, and socent opportunities, plus news on the future of work, sustainability, and AI for Impact.
AI for Work – Work smarter, faster, and perform better with AI.