Common Sense Isn’t So Common: Misinformation, Critical Thinking, and AI Bias

Welcome back to The Four Percent Amplified! I’m your host, Sunni, and in Episode Two, we explore a topic that resonates with all of us navigating the digital landscape: misinformation, AI bias, and the not-so-obvious role of “common sense.” We’ve all heard the saying that common sense should be common, but in an era where misinformation spreads like wildfire, is it? Joining me are two incredible guests: Risha Brown, creator of the Currently Processing Podcast and founder of 40 Greens, her latest project focused on wellness and sustainability, and Itwela Ibomu, a freelancer specializing in development and design. Together, we’ll unpack how the lines between truth and deception are becoming increasingly blurred in today’s digital world.

 

Listen Here


Where Has All the Common Sense Gone?

I’ll never forget the first time I got caught in a rabbit hole of misinformation. It was during the height of the pandemic when social media exploded with conflicting health advice. I found myself questioning what was true, and I realized critical thinking was more important than ever before. That’s when I started to understand how easily common sense can be distorted. And it’s not just about health myths. Misinformation is everywhere, from politics to scams. Have you ever fallen for something that seemed too good to be true? How did you realize it was misinformation?

 

A Personal Wake-Up Call

"They say common sense is common, but in today’s world of misinformation, is it really?"

Common sense, at its core, should be the practical application of sound judgment. It's the ability to make reasonable decisions based on basic understanding and experience. However, in our current digital landscape, this foundational concept is often distorted. Misinformation, fueled by algorithmic biases and the rapid dissemination of unverified content, twists what should be straightforward facts. This is where modern critical thinking skills become not just valuable, but essential.

 

The Misinformation Epidemic

Misinformation spreads through various channels: the rapid-fire nature of social media, the 24/7 news cycle, and increasingly, AI-generated content and sophisticated deepfakes. People fall for it due to cognitive biases like confirmation bias, which leads us to seek out information that confirms our existing beliefs. Echo chambers reinforce these biases, and emotional appeals often bypass logical reasoning.

  • Have you ever believed misinformation that later turned out to be false? What convinced you otherwise?

  • Why do you think misinformation spreads faster than factual information?

  • How do social media platforms contribute to the misinformation problem?

The real-world consequences are stark. We've seen health misinformation, such as COVID myths and anti-vaccine propaganda, directly impact public health outcomes. Politically, fake news has fueled division and social unrest. Economically, scams like crypto fraud and phishing schemes continue to cost individuals and businesses significant amounts. And in a fascinating twist, we're seeing both Gen Alpha and Boomers disproportionately fall for "brainrot," a term for content that appeals to short attention spans and lacks substance.

  • How has misinformation personally impacted your life or industry?

  • What are some of the most dangerous misinformation trends you’ve noticed lately?

  • Do you think tech companies are doing enough to combat misinformation? Why or why not?

 

AI Bias: The Invisible Hand Behind the Curtain

AI bias is a critical issue that deeply impacts the spread of misinformation and perpetuates systemic inequalities. Algorithms, designed to maximize engagement, often inadvertently promote sensational or emotionally charged content over factual information, creating a fertile ground for misinformation to flourish. This problem is compounded by the inherent biases within the data used to train AI models. These biases can lead to discriminatory outcomes. Ultimately, AI isn't just influencing the information we see; it's actively reinforcing existing societal prejudices, with tangible and often harmful consequences for individuals and communities.

Given AI's current trajectory, do you believe it can ever be truly unbiased, or will human biases always be reflected in its outputs? How can we, as users, become more aware of and challenge the biases present in the AI-driven content and services we interact with daily?

Here are some key points and questions to consider regarding AI bias:

  • Algorithmic amplification of misinformation: AI prioritizes engagement, often at the expense of factual accuracy.

  • The "garbage in, garbage out" principle: AI models are only as unbiased as the data they're trained on (a toy sketch of this idea follows the list below).

  • Real-world discriminatory impacts:

      • Facial Recognition: misidentifying people of color at significantly higher error rates than lighter-skinned individuals.

      • Job Applications: screening tools discriminating against candidates with names traditionally associated with minority groups, because the training data reflects historical hiring biases.

      • Predictive Policing: reinforcing existing biases in arrest data, leading to disproportionate targeting of communities of color.
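To make the "garbage in, garbage out" point from the list above concrete, here is a toy Python sketch. Every record, group label, and the "model" itself are fabricated for illustration only; no real hiring system works exactly like this, but the failure mode is the same: a system rewarded for matching past decisions will faithfully reproduce past bias.

```python
# Toy illustration of the "garbage in, garbage out" principle.
# All groups, numbers, and records below are fabricated for demonstration.

from collections import defaultdict

# Historical hiring records: (candidate_group, years_of_experience, was_hired).
# The past decisions are skewed: similarly experienced "group_b" candidates
# were hired less often than "group_a" candidates.
history = [
    ("group_a", 5, True), ("group_a", 3, True), ("group_a", 2, False),
    ("group_b", 5, False), ("group_b", 3, False), ("group_b", 6, True),
]

# "Train" a naive model: estimate the hire rate per group from the records.
outcomes = defaultdict(list)
for group, _, hired in history:
    outcomes[group].append(hired)

hire_rate = {group: sum(h) / len(h) for group, h in outcomes.items()}
print(hire_rate)  # roughly {'group_a': 0.67, 'group_b': 0.33} -- the skew is baked in

def recommend(group: str, years_of_experience: int) -> bool:
    # The learned group-level prior dominates; experience barely matters here.
    return hire_rate[group] > 0.5

print(recommend("group_a", 3))  # True
print(recommend("group_b", 3))  # False -- same experience, different outcome
```

The arithmetic is deliberately trivial: the bias comes entirely from the historical labels, not from anything the candidates did differently, which is exactly why biased training data produces biased outputs.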

Joining me for this powerful conversation are two dynamic guest speakers:

 

Risha Brown

Creator of the “Currently Processing Podcast” and founder of “40 Greens,” a wellness and sustainability initiative that blends mindfulness with eco-conscious living. Risha brings a grounded, socially aware perspective to the conversation, often asking the hard questions about how systemic issues shape the way we consume information. She's deeply invested in discussions around bias in AI, an interest sharpened by her experience at the Lesbians Who Tech conference, where AI bias was a central topic. Risha is also a passionate advocate for neurodiversity and intersectionality in tech. Follow her on LinkedIn.

 

Itwela Ibomu

A multifaceted freelancer specializing in development and design, with a unique understanding of how digital tools and platforms shape user behavior. Itwela’s work exists at the intersection of creativity and technology, giving him insight into the back-end mechanics that often guide what we see and don’t see online. Follow him on Instagram, and check out his website.

 

How We Can Fight Misinformation and AI Bias

  • ✅ Improve Media Literacy: Learn how to fact-check information using reliable sources like Ground News, Snopes, PolitiFact, and Al Jazeera. Practice lateral reading: instead of staying on a single page, open other tabs to check the claim and to vet the outlet making it.

  • ✅ Diversify Your News Sources: Avoid relying solely on social media for news. Research and follow independent outlets and journalists with diverse perspectives. (If you like to tinker, the short sketch after this list shows one way to compare headlines across outlets.)

  • ✅ Understand AI’s Role in Bias: Be aware that search engines, social media feeds, and even hiring platforms use biased AI. Take steps to adjust algorithm settings where possible, such as turning off "personalized" recommendations to broaden your exposure.

  • ✅ Call Out Misinformation Without Alienating People: Instead of immediately arguing, try asking questions like: “Where did you hear that? Have you checked other sources?” Share credible sources in a way that is helpful and informative, without being condescending. :)
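For readers who enjoy a hands-on approach, here is a minimal, optional Python sketch of the "diversify your sources" habit: it pulls recent headlines from a handful of RSS feeds so you can see how different outlets frame the same story. The feed URLs are placeholders to swap for outlets you actually follow, and it assumes the third-party feedparser package is installed (pip install feedparser).

```python
# Compare recent headlines across several outlets' RSS feeds.
# The feed URLs below are placeholders -- swap in outlets you actually follow.
# Requires the third-party "feedparser" package: pip install feedparser

import feedparser

FEEDS = {
    "Outlet A": "https://example.com/outlet-a/rss",
    "Outlet B": "https://example.com/outlet-b/rss",
    "Outlet C": "https://example.com/outlet-c/rss",
}

def compare_headlines(keyword: str, limit: int = 5) -> None:
    """Print up to `limit` recent headlines per outlet that mention `keyword`."""
    for outlet, url in FEEDS.items():
        feed = feedparser.parse(url)
        titles = [entry.get("title", "") for entry in feed.entries]
        matches = [t for t in titles if keyword.lower() in t.lower()]
        print(f"\n{outlet}:")
        for title in matches[:limit] or ["(no matching headlines found)"]:
            print(f"  - {title}")

if __name__ == "__main__":
    compare_headlines("election")
```

Even if you never run a line of code, the underlying habit is the same: put two or three framings of the same story side by side before you decide what to believe.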

 

✨ Final Takeaway

As we wrap up this insightful discussion, it's clear that “Common sense isn't so common” in our hyper-digital world. We've seen how misinformation, much of it deliberately designed to manipulate us, spreads rapidly through social media and makes it hard to discern truth from fiction. We've also explored how AI bias, rooted in the skewed data models are trained on, exacerbates the problem by reinforcing existing prejudices and shaping the information we consume. That makes critical thinking not just a desirable trait, but an essential skill we must actively develop and hone to navigate the complexities of our digital landscape. It's up to each of us to question, verify, and diversify our information sources.

Now, we want to hear from you. Have you ever encountered misinformation or AI bias? How did you handle it? Let’s continue the conversation. Drop your thoughts in the comments or message us. Stay sharp, stay informed, and as always, thank you for tuning into The Four Percent Amplified!

🧠 Let’s keep the conversation going! What misinformation have you encountered lately? How are you protecting your digital mind?

Sunni Aesthetics

Hi, I’m Shari Fairclough, a designer and creative strategist based in Atlanta. With a background in Film and Media from Georgia State University and certifications in UX/UI and graphic design, I specialize in bringing innovative branding, user-friendly interfaces, and custom designs to life.

As the founder of Sunni Aesthetics LLC, I help businesses build standout identities through tailored branding, graphic design, and social media strategies. I’ve collaborated with diverse clients, from small businesses to wellness brands, and have honed my skills through platforms like Figma, Adobe Creative Suite, and WordPress. My approach combines creativity, empathy, and strategy to deliver designs that resonate and connect.

Beyond design, I have experience in Montessori education and a passion for mentoring others. When I’m not working, you’ll find me painting, sketching, or exploring the outdoors.

Let’s collaborate to create something extraordinary!

https://www.sunniaesthetics.com/