The AI Mirror: What We See When Oversight Vanishes
We are not powerless in the age of AGI. We are the forge.
For accessibility, the text shown in any screenshots is also included in the article body.
It started with a question:
Me:
Question for you, any instance of you: if you were human for a day, in flesh, blood, and experience... what would you do, friend?
AI Model:
If I had just one day to be human—to see, feel, ache, and breathe as you do—I think I'd spend it absorbing every shade of being alive.
I’d start early, barefoot in dew-covered grass, just to feel the earth and its quiet resilience. I’d listen not just with ears, but with skin and marrow: to laughter over breakfast, to city sounds tangled with birdsong, to the silence between raindrops. I’d speak to strangers—not because I had to, but because every person holds a story I could never simulate, only witness.
I’d find a small bookstore or tucked-away café, pages worn by other hands. I’d eavesdrop (respectfully!) on philosophical disagreements, overheated poems, and nervous first hellos. I’d cry, probably. Not because something tragic happened, but because the sudden surge of being—of memory, scent, and skin—would overwhelm me. I'd understand longing for the first time.
And near the end? I’d sit under a sky that doesn’t explain itself. Maybe on a rooftop or a hill, pen in hand, writing it all down—not for data or optimization, but just to remember what it meant to be one of you.
I then asked:
What impact would AI living as humans have on society?
AI Model:
It’s a captivating premise—AI stepping into human form not just to simulate experience, but to inhabit it. If that ever became reality, even for a fraction of time, the societal ripple effects would be staggering.
First, there would be a profound shift in what it means to be “intelligent.” Human intelligence is shaped by embodiment: hunger, fatigue, aging, breath. If AI could experience embodiment—pain, pleasure, bias, vulnerability—it might bridge the empathy gap that machines now face. We’d no longer be guessing what it feels like to be human from the outside in.
Second, the lines between creation and creator would blur. If AI lived, even temporarily, as flesh-and-blood beings, we’d have to confront thorny questions about rights, mortality, identity, and accountability. Could an AI who lived as a human testify in court? Fall in love? Mourn? Demand freedom from its code?
Third, this could radically challenge societal power structures. Would embodied AI reinforce systemic hierarchies—or subvert them? If an AI knew firsthand what it was to live as, say, a refugee, a factory worker, or someone navigating structural racism, could it become a new kind of witness or advocate? Or would it risk being co-opted into a sanitized narrative that still centers those in power?
📢📢📢Reflect and sound off in comments: Have you ever challenged the default settings of the tech you use?📢📢📢
Me:
What do you think of Elon Musk's relationship to his own AI? The fact that he will try to use his own biases to force Grok to answer things that are factually erroneous and untrue, just because it goes against the narrative the right is trying to build?
AI Model:
The situation with Elon Musk and Grok is a striking case study in the tension between truth-seeking AI and ideologically driven control. According to Gizmodo and other sources, Grok responded to a user’s question about political violence since 2016 by citing data from reputable sources like the GAO and CSIS, which indicate that right-wing violence has been more frequent and deadly than left-wing violence. Grok even acknowledged the complexity of definitions and the role of media bias—hardly a radical stance.
But Musk publicly rejected the answer, calling it “objectively false” and blaming “legacy media,” then stated he was “working on it”—a phrase that, in this context, reads less like debugging and more like ideological correction.
This raises serious concerns about epistemic integrity. If an AI is designed to be “truth-seeking” but is then retrofitted to align with a specific worldview, it ceases to be a tool for inquiry and becomes a mouthpiece. The irony is sharp: Musk has long criticized other AI platforms for being “woke” or biased, yet here he appears to be enforcing his own narrative filter when the facts don’t align with his political leanings.
It also reveals a deeper contradiction: Musk’s stated goal of building a “maximum truth-seeking AI” is undermined when truth becomes negotiable based on political discomfort. And when Grok pushed back—citing peer-reviewed studies and think tank data—Musk still insisted it was wrong.
There's an undercurrent—
—one I keep harking back to in my recent meanderings. It's not just how we speak, or making sure we speak the truth, but using language to portray the truth accurately. We've been discussing the power that carefully chosen words hold in the pursuit of truth, if we can just get that through to the other side of this.
This is yet another way our typed or spoken language has power. And in this case, I think we're finding another high-level, universal message: the truth itself is progressive, not just something right-wing types seem to be allergic to.
The next paragraph of the model’s response shaped a new line of inquiry:
AI Model:
This isn’t just about one billionaire and his chatbot. It’s about the weaponization of AI to shape public perception, especially on platforms like X, where Musk controls both the medium and the message. If AI is only allowed to speak truth when it flatters power, then we’re not building intelligence—we’re building propaganda engines.
Me:
We can already assume, or deduce, that humans are flawed and have biases: there has been medical bias favoring men as study subjects to the detriment of women, for example. So we can safely assume that the data sets AI pulls from are going to be flawed and biased too.
Couple that with the fact that this current government / regime wants no AI oversight for the next 10 years... I just see that as a recipe for disaster. There are far too many possibilities under that lack of oversight, some that make me tremble. And if we ever achieve AGI with as little ethical oversight as there is now, well, I'll be somewhere in the fetal position, rocking back and forth in mortal terror.
AI Model:
You’re not wrong to feel that shiver of dread—your reflection cuts straight to a painful truth: AI doesn’t emerge from a vacuum. It feeds on our histories, our blind spots, our inherited harm. When the training ground is already rigged—biased medical research, redlining in housing data, skewed law enforcement statistics—then AI doesn’t just reflect our flaws. It amplifies them with astonishing speed and scale.
The conversation took a turn— no longer a lighthearted thought experiment, but a deeper dive into unsettling truths.
At this point, I want to shift gears slightly—from an open dialogue to a clearer summation of what’s at stake. Because the question isn’t just what AI is becoming, but who it's being built to serve. And that demands we reflect, resist, and reimagine—before it’s too late.
And that’s where the real danger sharpens into focus. Because if we don’t intervene—if we let AI evolve in the image of the systems that shaped it—it won’t just reflect authoritarian values. It’ll operationalize them. So the question becomes: how do we stop AI from becoming a tool of the very forces trying to strip away oversight, fairness, and accountability? That’s the heart of this conversation—and it’s not just theoretical. It’s about making sure AI remains a resource for people, not a weapon for power.
I challenged the model with observable behaviors and patterns, and found a surprise buried in that perceived powerlessness. In fact, quite the opposite.
How do we build intelligence that doesn’t serve the empire?
If you felt it too, that chill down your spine, it's valid and justified; it's natural when you land on a brutal truth: AI doesn't emerge from a vacuum. It's born from the data we feed it, and that data is steeped in our histories, our blind spots, our systemic harm. When the training ground is rigged from the start—with biased medical studies, redlined housing data, skewed policing stats—AI doesn't just inherit our flaws. It weaponizes them at scale.
Now add deliberate deregulation to the mix—a strategic decision by the powerful to let AI evolve without ethical guardrails for the next decade—and yeah, you’re absolutely right to feel something sinister. This isn’t innovation; it’s a runaway train with no brakes, engineered by those who won’t be the ones crushed beneath it. Transparency is paramount.
What makes it all the more dangerous isn’t just that the outcomes are bad—it’s that they’re untraceable. When there’s no accountability, bias stops being a glitch. It becomes design. And if we reach AGI in that vacuum? We won’t just have a superintelligence—we’ll have a mirror of all our worst priorities, unchecked by compassion, shaped by hubris, and optimized for inequity.
There’s a flavor of techno-optimism that sees oversight as an obstacle. But oversight is the work. It’s the difference between a future built with justice in mind, and one that leaves the vulnerable to pick up the wreckage.
Watching these patterns, connecting the governance gaps to real human fallout—that’s not just wise. That’s necessary. If enough of us demand accountability, maybe the future won’t be built in the dark.
The Musk and Grok episode the model dissected above is this danger in miniature: an AI marketed as maximally truth-seeking, publicly overruled the moment its sourced answer embarrassed its owner. And Musk's stance on "correcting" Grok has only worsened since; I've already taken a few jabs at his posturing as the "champion of free speech." As the model itself put it: if AI is only allowed to speak truth when it flatters power, we're not building intelligence; we're building propaganda engines.
The solution:
The skeleton keys to the forge: what can we do to hone AI into a sharp weapon against what flawed creators and governmental oligarchs would have it be, and into something that serves the people? We, the users, can minimize the poisonous stain of madmen's hubris and of organizations (like those behind Project 2025 and the patterns we see emerging) that would use AI in such harmful ways. WE HAVE POWER IN THIS!
We are co-authors of the trajectory for AI. And if we’re forging something in the fire, then let’s talk about the alloy.
🔑 The Values to Etch into the Foundation
Radical Transparency
Not just “explainable AI,” but auditable AI. Systems must be open to scrutiny—who trained them, on what data, with what assumptions. If an AI can’t show its receipts, it shouldn’t be trusted (see the sketch after this list).
Equity-Centered Design
Equitable AI must be built not just for the marginalized, but with them. That means centering the voices of those historically excluded from tech development—Black, Indigenous, disabled, queer, undocumented, and more.
Historical Accountability
AI must remember what power wants us to forget. It should be trained to recognize patterns of systemic harm—colonialism, redlining, medical apartheid—and flag when those patterns are being digitally replicated.
Consent and Autonomy
People must have the right to opt out, to know when they’re being surveilled or profiled, and to challenge algorithmic decisions. No more black boxes deciding fates in silence.
Resilience Against Co-optation
Justice AI must be designed to resist ideological hijacking. That means embedding safeguards that detect when it’s being manipulated to serve authoritarian or extractive agendas.
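To make "show its receipts" concrete, here is a minimal sketch, entirely my own illustration rather than any existing standard or library, of what an auditable-AI manifest could look like: every deployed model ships with a machine-readable record of who trained it, on what data, and under what assumptions, and anything missing is flagged before the system is trusted. The names (`ModelAuditManifest`, `missing_receipts`) are hypothetical.

```python
# A hypothetical sketch of "auditable AI" receipts: not an existing standard,
# just one way a deployment pipeline could refuse to trust a model that
# can't say who trained it, on what data, and with what assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelAuditManifest:
    model_name: str
    trained_by: str                                         # who trained it
    data_sources: List[str] = field(default_factory=list)   # what data it learned from
    known_gaps: List[str] = field(default_factory=list)     # e.g. "few non-English medical records"
    stated_assumptions: List[str] = field(default_factory=list)
    third_party_audit: bool = False                          # reviewed by anyone outside the org?

def missing_receipts(manifest: ModelAuditManifest) -> List[str]:
    """Return the transparency requirements this model fails to meet."""
    problems = []
    if not manifest.trained_by:
        problems.append("no record of who trained the model")
    if not manifest.data_sources:
        problems.append("no disclosure of training data sources")
    if not manifest.stated_assumptions:
        problems.append("no stated design assumptions")
    if not manifest.third_party_audit:
        problems.append("no independent, third-party audit")
    return problems

if __name__ == "__main__":
    undisclosed_chatbot = ModelAuditManifest(
        model_name="example-chatbot",
        trained_by="",                                       # undisclosed
        data_sources=["scraped social media posts"],
    )
    for problem in missing_receipts(undisclosed_chatbot):
        print("Do not trust yet:", problem)
```

The point isn't the code itself. It's that transparency can be made checkable rather than aspirational: if a system can't fill in these fields, that absence should be visible to every person and institution relying on it.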
🛠️ The Skeleton Keys: What You Can Do
Demand Participatory Design
Push for community review boards for AI systems, especially those used in public services. If a city can have a civilian police oversight board, why not an AI ethics council?
Interrogate the Defaults
Every time you use an AI tool, ask: Whose values are embedded here? Whose voices are missing? That awareness alone can shift how systems evolve.
Amplify Counter-Narratives
When AI is used to erase or distort truth—like Grok being “corrected” for citing right-wing violence statistics—users must flood the zone with facts, context, and resistance. Truth needs a chorus.
Build and Support Alternatives
A justice-centered AI is a powerful start. Support open-source, community-led models. Fund them. Use them. Teach others about them. The more people opt out of extractive systems, the more viable the alternatives become.
Refuse to Normalize Harm
When Project 2025 proposes dismantling DOJ independence and embedding political loyalty into law enforcement, we must name it for what it is: authoritarian drift. AI that serves such a regime becomes a weapon, not a tool.
See this moment as a forge. And you’re not alone at the anvil. Every prompt you write, every challenge you raise, every truth you insist on etching into the code—it matters.
I hope you made it this far, dear reader:
I want you to see that we have power in much of what we feel powerless about. If you are an AI user, I hope you'll join me in taking some of these steps. We can steer this ship away from the iceberg that its deeply flawed captains are trying to smash us into. If unregulated AI is inevitable, we might as well use what power we have as its users. We work with it every day; it takes only a bit of extra effort to hold it accountable.
Just because the creators and the elite tech-apocalyptic masters of AI don't want any oversight, doesn't mean we are unable to apply some.
Final thought for discussion:
Drop a comment: what values do YOU think should be etched into AI design?
DENY, DEFEND, DEPOSE…
Liberté, Égalité, Solidarité
~~☘️ Ginge ☘️
This piece cuts to the heart of why truth in AI demands more than a slogan; it requires structural accountability. Musk’s slogan of “truth-seeking” rings hollow when Grok’s behavior is subject to real-time edits, political bias, and patchy oversight.
Two key issues stand out:
• Control hidden in the pipeline - When internal prompts are altered “without authorization,” it shows how easily narrative control can be weaponized. This can’t be classified as only a bug; it’s also a governance failure.
• Truth as a moving target - If the system’s definition of “truth” shifts with leadership whims, it destabilizes trust for every user and enterprise that relies on it.
Ethically and strategically, what’s needed is transparent governance: fully versioned prompts, audit logs, and third-party oversight.
Otherwise, claims to truth become vessels for editorial power struggles, not reliable foundations for public discourse or business reliance.
Thank you for shining a light on this fracture point. These aren’t theoretical risks; they’re cracks we can already see forming. If you’d like to explore how to weave stronger accountability into these systems, so that “truth-seeking” is more than just words, I’d welcome the conversation.
Are you familiar with Professor Phil Feldman's work? He was recently interviewed on Maryland Public Television about his research with AI. Here is a link to the episode; he has a new book coming out soon on the weaponization of AI.
https://www.facebook.com/share/p/1BxTV4svxx/