
Balancing AI, Responsibility, and Human Judgment

Written by Jason on November 5th, 2024

This evening after work, I hopped in the car and took an extended drive to allow some time to think more about a complex question: how can a society responsibly integrate AI-based systems into governance and public life while preserving the core qualities that make us human? It’s a question that, like so many others, I keep revisiting, especially as AI tools and algorithms become more intertwined with daily life.

In some ways, this tension between innovation and responsibility is nothing new. Humanity has wrestled with the double-edged nature of technology since the invention of the wheel. Yet, there’s something uniquely profound — and perhaps unsettling — about the systems we consider "AI". It’s the first tool that doesn’t simply extend human capabilities but has the potential to replace certain human functions, including judgment. This changes the conversation from a simple issue of efficiency to one that strikes at the core of societal values and personal responsibility. When it comes to government, can a machine truly replace our noisy, fallible politicians and usher in nations of problem solvers?

One of the primary arguments in favour of AI-driven governance is its capacity for objectivity. Algorithms can process mountains of data in seconds, provide recommendations without the sway of personal bias, and analyse trends free from emotion or vested interest. This “Vulcan” approach to decision-making is incredibly appealing given how chaotic Western political systems have appeared in recent years. In a world riddled with scandals and mistrust, the promise of an impartial, data-driven decision-maker sounds like a relief!

But the promise of objectivity is only as good as the data that feeds the algorithm. AI, after all, learns from us — our history, our actions, our patterns, and, inevitably, our biases. If we hand over key decision-making processes to an AI without ensuring it is trained on diverse, representative data, we risk embedding those biases even deeper into society, possibly with less oversight than we currently have. And bias, once baked into a system, can become incredibly challenging to undo.

History has taught us that human judgment, while flawed, is essential for moral governance. There’s an element of compassion, empathy, and cultural sensitivity that software, despite all its sophistication, struggles to replicate. It’s the difference between an algorithm recommending a sentence in a courtroom and a judge considering the human circumstances behind a case. This idea reminded me of something I heard on a podcast many years ago: governance is as much an art as it is a science.

The art of governance, of course, is fraught with human imperfection. We’re prone to our own biases, and our decisions can be influenced by personal experiences or even transient emotions. But that’s precisely why human oversight matters — because governance isn’t a math problem with a single correct answer. Different societies and cultures have unique values and beliefs, and what might be an “optimal” decision in one context could be inappropriate or even harmful in another.

One concept that has always resonated with me is the idea of AI as an “advisor” rather than a “decider.” Imagine if AI could assist decision-makers by providing clear, evidence-based recommendations while humans retain the final say. Such a setup would allow AI to do what it does best — crunch numbers, analyse data, and identify patterns — without fully relinquishing our responsibility as stewards of society. The goal wouldn’t be to replace human judgment but to enhance it with deeper insights.
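If I were to sketch that arrangement in code, it might look something like the following. Everything here is invented for illustration (the Recommendation type, the placeholder suggestions, the names); the only point is that the model proposes while a person disposes:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An evidence-backed suggestion from the advisory model (hypothetical)."""
    action: str
    rationale: str
    confidence: float  # 0.0 to 1.0

def advise(issue: str) -> list[Recommendation]:
    """Stand-in for a model that crunches the data behind an issue."""
    # A real system would query a model here; these are placeholder outputs.
    return [
        Recommendation("Expand transit service", "ridership trend supports it", 0.8),
        Recommendation("Defer the decision", "only one year of data available", 0.5),
    ]

def decide(issue: str) -> str:
    """The human keeps the final say; the model only informs the choice."""
    print(f"Issue: {issue}")
    for rec in advise(issue):
        print(f"  - {rec.action} ({rec.confidence:.0%} confidence): {rec.rationale}")
    # The accountable step: a person, not the model, records the decision.
    return input("Final decision: ")
```

The design choice worth noticing is the last line: the decision is captured from a person, with a name attached, so accountability never silently migrates into the model.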

Imagine a council of AIs, each programmed to focus on different perspectives or values, analysing a complex issue and debating among themselves before offering suggestions. This “panel of perspectives” could simulate a debate between different schools of thought, each represented by a different AI. The role of society would then be to listen to these perspectives and make a judgment call, incorporating both data and the human touch that governance so often requires. A newspaper that dedicated ink to each dispassionate panel member's perspective would be a fascinating thing to read and passionately discuss over coffee.
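Carrying the sketch a little further, a panel of perspectives might look like the following. The personas and their value systems are, again, entirely made up, and each `position` call is a stand-in for a real model query:

```python
from dataclasses import dataclass

@dataclass
class Panelist:
    """One AI persona on the panel, tuned toward a single school of thought."""
    name: str
    lens: str  # the value system this persona is asked to privilege

    def position(self, issue: str) -> str:
        # Stand-in for a model call with a persona-specific system prompt.
        return f"[{self.name}] Through the lens of {self.lens}, my view on {issue!r} is..."

# An invented panel; in practice each persona might even be a different model.
panel = [
    Panelist("Economist", "fiscal sustainability"),
    Panelist("Ethicist", "fairness and human rights"),
    Panelist("Ecologist", "long-term environmental impact"),
]

def convene(issue: str) -> list[str]:
    """Collect every perspective; the readers, not the panel, make the call."""
    return [p.position(issue) for p in panel]

for opinion in convene("rezoning the waterfront"):
    print(opinion)
```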

It’s tempting to think we could create such a perfect system — a self-regulating, fully objective AI model that would save us from egregious human error. But history reminds us that progress without caution can have dire consequences. While we may not be able to stop technological advancements, we do have a responsibility to steer them with caution and respect for their impacts on society.

For instance, I’ve noticed how some governments and organisations are adopting more rigorous standards for AI oversight and transparency. One approach that seems promising is “cross-testing,” where AIs developed by one organisation are independently tested by another to ensure accountability and ethical standards. In theory, this would prevent any single organisation from having total control over its AI’s training and outputs, reducing the risk of bias or misuse.
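A cross-testing harness doesn’t have to be elaborate, either. Here is a minimal sketch, assuming the tested model can be treated as a black box and that the evaluating organisation maintains its own held-out test suite (the 95% threshold and the toy "model" are arbitrary stand-ins):

```python
from typing import Callable

# Treat any externally developed model as an opaque function: prompt in, answer out.
Model = Callable[[str], str]

def cross_test(model: Model, eval_suite: dict[str, str], threshold: float = 0.95) -> bool:
    """Run someone else's model against our own independently built test cases.

    The developing organisation never sees this suite, so it cannot train
    to the test; that separation is the whole point of cross-testing.
    """
    passed = sum(1 for prompt, expected in eval_suite.items()
                 if model(prompt) == expected)
    score = passed / len(eval_suite)
    print(f"{passed}/{len(eval_suite)} cases passed ({score:.0%})")
    return score >= threshold

# A trivial usage example with a stand-in "model":
if cross_test(lambda p: p.upper(), {"hello": "HELLO", "tread": "TREAD"}):
    print("Model clears the (invented) 95% bar.")
```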

However, cross-testing is only part of the solution. No amount of external testing can replace the ethical responsibility we hold as a society. We must be vigilant about how we develop and implement AI technologies, especially in domains that impact human rights, privacy, and autonomy. To do otherwise would be to abdicate our moral responsibilities, outsourcing them to a machine that, no matter how advanced, lacks true understanding.

Ultimately, the question isn’t whether we should use AI-based tools in governance but how we use them. The real risk lies not in the technology itself but in our tendency to view it as an infallible oracle. AI should help us make better choices, not make those choices for us. As the motto on Christopher Gadsden’s 1775 flag warned: “Don’t tread on me.”

As my long drive home reached its end, the answer to my question seemed clearer than ever. The path forward isn’t about eliminating fallible human judgment or relying solely on machines; it’s about combining the strengths of both, recognising the limitations of each, and respecting the nuances of every situation. We must ensure that while AI continues to evolve and become part of the fabric of society, we don’t lose sight of the values that make us human: responsibility, empathy, and wisdom.

In the end, the real power of AI might not be in replacing our minds but in reminding us to use them to their fullest. The journey toward a balanced integration of AI and human oversight is not just a technological challenge; it’s an ethical journey. And it’s one we must tread carefully, with both courage and humility.