• AI, Ignorance, and Overconfidence: The Dangerous Mix of AI and the Dunning-Kruger Effect
    Feb 5 2025
    Have you ever met someone who talks like they’ve got a PhD in everything, but when you dig a little deeper, you realize they barely scratched the surface? That’s the Dunning-Kruger Effect in action—the classic case of people who don’t know what they don’t know. It’s like reading a social media thread on quantum mechanics from someone who, upon further review, has zero scientific background but confidently explains black holes as if they just wrapped up a dissertation on the subject.

    It’s that dangerous mix of ignorance and overconfidence. The less people understand a topic, the more convinced they are that they’ve mastered it. Meanwhile, the actual experts—the ones who’ve spent years in the trenches—tend to be the most cautious. They’ve seen the complexities, the unknowns, and the things they still don’t fully grasp.

    Now, here’s the kicker: I believe AI is making this problem a whole lot worse.

    AI: The Perfect Fuel for Overconfidence

    Artificial Intelligence, in all its glory, has given us instant knowledge—or at least, the illusion of it. Type in a question, and boom, you’ve got an answer. But here’s the problem: a half-baked answer delivered with confidence is worse than no answer at all. I shared a post earlier this week on this very topic, “The AI Advice Trap: Why Context Matters.”

    AI-generated content, no matter how advanced, often lacks context, nuance, and real-world experience. It pieces together patterns from existing data, but it doesn’t think, doesn’t understand, and definitely doesn’t care whether you make a terrible decision based on its response. Yet, because AI sounds authoritative, people believe it. They take half-truths and incomplete data, slap a coat of confidence on it, and suddenly they’re self-proclaimed experts. See where this is going?

    The Recipe for Disaster: AI + Dunning-Kruger

    Let’s break this down:

    AI gives quick, surface-level answers – People read them and assume they now “get it.”

    They skip the deep research – After all, why question something that sounds so certain? Hey, don’t roll your eyes. This happens all the time. I’m guilty of this myself.

    People make decisions based on incomplete knowledge – Sometimes small ones (bad takes on X), sometimes massive ones (misguided business strategies, health choices, or legal advice).

    They spread misinformation – And because confidence sells, others start believing them, too.

    This is how we end up with people confidently debating complex fields—economics, medicine, law, technology—after skimming an AI-generated summary. It’s intellectual fast food: easy to consume, temporarily satisfying, but ultimately lacking the nutrients that real expertise provides.

    But AI Is So Smart… Isn’t It?

    It depends on what you mean by “smart.” AI can analyze vast amounts of data in seconds, generate well-structured content, and even mimic the tone of a seasoned professional. But intelligence? That’s something else entirely.

    Think about it like this: a calculator is great at math, but it doesn’t understand numbers. It just follows rules. AI does the same—it predicts patterns and assembles information in ways that look intelligent, but it doesn’t have insight, judgment, or common sense. It doesn’t know when it’s wrong, and worse, it doesn’t care when it’s misleading you.

    And here’s the real danger: people assume AI is always right. They trust it blindly, not realizing that it can be confidently wrong—which, ironically, is exactly what the Dunning-Kruger Effect describes in humans.
    Real-World Consequences: When AI-Backed Overconfidence Goes Wrong

    This isn’t just an abstract problem. We’re already seeing the fallout of AI-fueled overconfidence in the real world:

    Misinformation on steroids – AI-generated content is flooding the internet with convincing but inaccurate takes on politics, science, and finance. People believe and share it without question.

    DIY medical and legal advice – People are using AI to diagnose themselves or craft legal arguments, often with disastrous consequences.

    Businesses making high-stakes decisions based on AI shortcuts – AI tools can be useful, but when leaders make major strategic moves based on AI’s “best guess” rather than expert analysis, things spiral fast.

    AI isn’t the problem. The problem is people treating AI-generated content as gospel while skipping the necessary critical thinking.

    So, What’s the Fix?

    We can’t put the AI genie back in the bottle, but we can change how we interact with it. Here’s how:

    Stay skeptical. AI is a tool, not an oracle. Treat it like an assistant, not an expert.

    Do the work. If a topic matters, dig deeper. Read books, talk to real professionals, challenge your assumptions.

    Embrace uncertainty. The smartest people admit what they don’t know. It’s a sign of wisdom, not weakness.

    Fact-check everything. AI can be confidently wrong—don’t let...
    10 mins
  • What You Need to Know Before Using DeepSeek AI
    Jan 30 2025

    (The lawyer in me couldn’t just skim the TOS—I had to roll up my sleeves, dig in, and uncover what’s really lurking behind the legalese.)

    Most people don’t read the Terms of Service. But in this case, you probably should.

    Because hidden in the fine print are details that could impact your privacy, security, and even your business strategy.

    Here’s what stood out:

    Data Retention: Deleting your account doesn’t mean your data is erased—DeepSeek keeps it.

    Surveillance: The app has the right to monitor, process, and collect user inputs and outputs, including sensitive information.

    Legal Exposure: DeepSeek is governed by Chinese law, meaning state authorities can demand access to your data.

    Unilateral Changes: They can update the terms at any time—without your consent.

    Translation? You’re not just using the AI. The AI is using you.

    So before you hit “accept,” ask yourself: Are you comfortable with these trade-offs?

    Mitch Jackson | links https://linktr.ee/mitchjackson

    _____

    Past episodes https://mitch-jackson.com/podcast

    6 mins
  • Why Mitch Jackson Deleted His Accounts on X, Facebook, Instagram, Threads and TikTok
    Jan 28 2025

    I wanted to share why I decided to delete my accounts on X, Facebook, Instagram, Threads, and TikTok.

    These platforms, under their current leadership, have become engines of misinformation, conspiracy theories, and real societal harm. The companies behind them amplify falsehoods and division on a massive scale, and their open alignment this past week with the Trump administration makes it clear to me where their priorities lie: profit over truth, and power over accountability.

    Now, I’m not thrilled that companies like Google, Microsoft and Amazon have also donated funds to the new administration, but here’s the key difference—they don’t own social media platforms that deliberately spread this toxicity. While I respect their right to make donations (subject to the major concerns I have with Citizens United v. FEC, linked below), I can’t say the same for the platforms and companies that take things to another level and actively create environments that erode democracy and harm society in the process. For me, what I observed last week was a line I could no longer ignore.

    For those who would like to stay connected, I’m doubling down on creating content and engaging with others on LinkedIn and Bluesky. Links to these platforms, along with my other spaces and projects, can be found below.

    Thank you for understanding.

    Mitch Jackson, Esq.

    LinkedIn: https://linkedin.com/in/mitchjackson

    Bluesky: https://bsky.app/profile/mitch.social

    __________

    Learn more about Citizens United v. FEC here https://en.wikipedia.org/wiki/Citizens_United_v._FEC

    3 mins
  • Most Lawyers Are Looking at AI All Wrong
    Jan 15 2025

    This episode dives into a critical question: how can lawyers use AI to not just work faster, but to elevate their client experience? Join us as we shift the focus from speed and accuracy to what really matters—creating trust, delivering clarity, and anticipating client needs.

    From clear communication and personalized service to proactive problem-solving, this podcast explores the five things clients expect most in 2025. Drawing on 30+ years of legal expertise and real-world insights, Mitch wrote this week's LinkedIn newsletter issue addressing the topics discussed in this episode. In less than 5 minutes you'll see how to blend cutting-edge technology with the human touch that sets great lawyers apart.

    Whether you’re an attorney or just curious about how AI is reshaping client expectations, this is the conversation you don’t want to miss.

    The related written article is here https://www.linkedin.com/pulse/most-lawyers-looking-ai-all-wrong-mitch-jackson-xgp9c/?trackingId=sT%2BIXBu1SeOER9PvHOeBMw%3D%3D

    Past podcast episodes are here https://mitch-jackson.com/podcast

    7 mins
  • AI and the Art of Connection: Revolutionizing Communication and Engagement
    Jan 8 2025

    This episode advocates for using AI tools to enhance communication and audience engagement. It highlights AI-powered applications like Perplexity for interactive podcasts and NotebookLM for transforming content into engaging audio dialogues.

    The author emphasizes that the communicator's role is shifting from simply delivering information to strategically shaping intent. AI tools like Google AI Studio are presented as valuable resources for analyzing and improving communication effectiveness across various platforms, ultimately helping communicators refine their message and resonate more deeply with their audience. The integration of AI and traditional storytelling is proposed as a key to impactful communication.

    Book link on Amazon https://a.co/d/1YsFrMm

    Connect with Mitch https://linktr.ee/mitchjackson

    Past episodes https://mitch-jackson.com/podcast

    6 mins
  • DAO Spending and Investment Agreements: Securing Accountability and Preventing Misuse
    Jan 8 2025

    This episode discusses the importance of legally sound agreements for Decentralized Autonomous Organizations (DAOs). Jackson advocates for a comprehensive agreement, which he calls a "DAO Fund Distribution Agreement," to ensure accountability and prevent misuse of funds when DAOs distribute money to third parties.

    The proposed agreement includes clauses addressing purpose, monthly accounting, audit rights, reporting standards, milestones, transparency, dispute resolution, termination, governing law, and indemnification. Jackson emphasizes the critical role of written agreements in protecting all parties involved in DAO transactions and promoting project sustainability.

    Read the original article this conversation was based upon here https://www.linkedin.com/posts/mitchjackson_dao-spending-and-investment-agreements-securing-activity-7282419031444205569-tpG9?utm_source=share&utm_medium=member_desktop

    Listen to past episodes of this podcast here https://mitch-jackson.com/podcast

    15 mins
  • A conversation about Mitch's first children's book, Little Heroes
    Jan 7 2025

    Enjoy this overview of my first children's book, "Little Heroes - Big tips for bright futures."

    It's FREE and packed with grown-up success tips for 6-10 year-olds.

    It offers bite-sized ideas, simple enough to spark curiosity, with audio chapters that work perfectly for car rides or bedtime (even if mom or dad falls asleep first).

    Why is this episode shared here in the AI in Law Podcast?

    The answer is easy. Mitch used AI to help pull content from his popular new book, Power Moves, and then asked AI to help him rewrite these favorite lessons for kids. The artwork in each chapter was created using Midjourney (AI), and the podcast episode for each chapter was produced using Eleven Labs (AI).

    We think the end result is great, and it was fun leveraging AI for the book project. Please keep in mind that we'll be adding 5-10 new written and audio chapters in the near future.

    The written book and audio podcast versions are here https://mitch-jackson.com/little-heroes/

    Power Moves and other books written by Mitch are here https://mitchjackson.xyz

    Past podcast episodes are here https://mitch-jackson.com/podcast

    18 mins
  • Virtual Reality and False Memories Discussed
    Jan 6 2025

    Virtual reality (VR) is revolutionizing how we remember and experience information, impacting business and law significantly. VR's ability to create realistic memories raises ethical concerns, as it can be used to manipulate perceptions in marketing or influence legal proceedings.

    This episode explores the potential for false memories induced by VR and the consequent legal and ethical implications, particularly in the courtroom where the reliability of VR-generated memories is questioned.

    Businesses must consider liability related to using VR for marketing or training, and the legal system faces challenges in distinguishing genuine from artificially-created memories. Ultimately, the episode emphasizes the need for responsible VR implementation, guided by ethical frameworks and regulations.

    Past episodes here https://mitch-jackson.com/podcast

    Read the newsletter issue (with related video links) focusing on this topic: https://www.linkedin.com/pulse/vrs-impact-memory-ethics-mitch-jackson-4fewc/?trackingId=wWnqlx0mQ3SEFruyRn5RVA%3D%3D

    10 mins