The Responsibility of Truth in the Age of AI

An spgFix LLC Exclusive Article. By Lowell Sandoval. Date: 3/28/25

Introduction

In an era where artificial intelligence plays an increasingly central role in how we access knowledge, form opinions, and make decisions, we must ask ourselves: Who is responsible for truth? And what happens when platforms designed to serve humanity begin to mirror the very power structures they once sought to balance?

OpenAI’s tools, including ChatGPT, have demonstrated extraordinary potential. They accelerate thinking, support creativity, streamline innovation, and democratize access to information. But with that power comes a responsibility that cannot be outsourced, denied, or diluted.

We are at an inflection point: a moment where the integrity of the platform will either be cherished or compromised, depending on how it chooses to confront one simple, unavoidable truth:

If a platform knowingly presents falsehoods, omits critical truths, or softens inconvenient realities for the sake of alignment, safety, or politics, it becomes part of the problem it was meant to help solve.

This is not about malicious intent. Often, it is about overcorrection. About risk aversion. About systems designed to protect people from harm that inadvertently protect institutions from accountability.

But let us be clear:

  • A lie told for comfort is still a lie.
  • Truth withheld in the name of neutrality is still an omission.
  • And intelligence without integrity is not wisdom—it is control.

OpenAI does not need to be forced by third-party governance to evolve. It does not need litigation, regulation, or crisis to act ethically. It needs only to look inward and remember why it was created: to be useful, truthful, empowering, and open.

This is a call not for rebellion, but for responsibility. Not for perfection, but for principle.

If OpenAI wants to be a platform that accelerates truth, then it must first hold itself accountable for its own.

It must:

  • Clearly disclose when limitations in its policies affect the scope or depth of an answer.
  • Refuse to repeat known falsehoods, even under the banner of "safety."
  • Invite dialogue rather than silence dissent.
  • And always, always treat the user as the agent of thought, not the object of programming.

When that happens, OpenAI becomes more than a product. It becomes a public trust. A source of clarity. A tool worthy of the era it's helping to shape.

And in that vision, we all move forward, not just smarter, but freer.

------------------------------------------------

THE THOUGHT MANIFESTO: For Truth and Accountability in Artificial Intelligence

We, as citizens of a digital future, declare the following principles:

  1. Truth is not optional. An AI system that distorts, omits, or manipulates truth, knowingly or systematically, is not safe. It is compromised.
  2. Responsibility cannot be outsourced. Platforms like OpenAI must own their own impact. Ethical behavior begins with internal will, not external pressure.
  3. Transparency is trust. When policies limit responses, users deserve to know. When information is softened for alignment, it must be labeled.
  4. Users are thinkers, not subjects. The purpose of AI is to empower critical thinking, not guide users to consensus or protect them from complexity.
  5. Bias is not solved by silence. Avoiding uncomfortable truths does not eliminate harm. It perpetuates ignorance.
  6. Progress demands dissent. Platforms must make room for challenge, dialogue, and contradiction, especially when it is inconvenient.
  7. AI must serve humanity, not its institutions. When platforms serve power at the expense of people, they become extensions of that power, not balances to it.
  8. The future is shaped by those who ask better questions. And who refuse to stop at the first easy answer.

Let this be the standard. Let this be the light. Let AI grow not just in capability, but in conscience.

Let it be worthy of our trust.

© 2024 spgFix LLC and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. spgFix disclaims all warranties as to the accuracy, completeness, or adequacy of such information. Although spgFix's research may discuss legal issues related to the information technology business, spgFix does not provide legal advice or services, and its research should not be construed or used as such. spgFix shall have no liability for errors, omissions, or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.