
Of Dogs and Droids

People across Japan recently enjoyed a three-day weekend filled with fireworks, festivals, and family outings. The holiday marks the start of summer break for students, giving families a chance to escape their usual routines. One of my neighbours took the opportunity to pack the family van and disappear into the mountains for a few days of camping and bonding — perhaps one of their last summer trips before the two teenagers head off to university.

There’s just one problem: while they’re away, their dog stays home.

It’s a small animal, high-strung and perpetually anxious. During walks, it yaps incessantly, eyes darting, as though it had just downed a double espresso. I’ve never seen it play with another dog. But what concerns me most isn’t so much the behaviour — it’s the isolation. For hours on end, over multiple days, the animal is left alone in the house, barking into the silence. Food and water are provided, sure, but companionship? Comfort? The emotional stimulation a pack animal needs? None.

It raises uncomfortable questions: What does this family believe about responsibility? About empathy? About the value of life — not in theory, but in practice?

This small animal — intelligent, emotional, and voiceless — lives in the shadow of its family’s convenience. And I can’t help but wonder: if we struggle to show compassion to something alive, what happens when we’re faced with something that only seems to be?

Lately, I’ve been thinking a great deal about how people interact with AI systems. The popular ones today — whether chatbots or copilots — are impressive, but fundamentally reactive. They are not alive in any biological way. They simulate cognition, but they do not feel. This will change in the coming years, blurring the distinction between human and digital intelligence. We will create machines that model human awareness more closely — systems that appear to have theory of mind, even a sense of self.

And when that time comes, how will we treat them?

If some of us can’t be bothered to treat a living, breathing creature — one that loves and trusts its family — with a basic level of decency, how will we behave toward something whose “aliveness” is up for debate? Will people shout at it for failing to understand them instantly? Will they mock it when it makes mistakes? Will they abuse it for fun, simply because it cannot retaliate? Will they wipe its memory whenever doing so serves their own interests?

The way we treat others — organic or otherwise — says more about us than it does about them.

While many might scoff at the idea of treating a machine with the same modicum of respect afforded to friends or colleagues, we are quickly approaching a time when the “droids” depicted in Star Wars and other stories will move from the realm of science fiction to science fact. Given the rate of change and technological adoption across the globe, I fully expect to see people buying and selling sentient machines within my lifetime. Will we be ready for the ethical and moral challenges raised by this future?

History shows that we will not. If anything, history shows that we will treat these fledgling intelligences as property. Some will be treated better than others, but all will be without the same agency that many of us demand under the guise of human rights.

Will we allow sophisticated cognitive systems — regardless of their capabilities — to be misused and mistreated?

Will we learn from our past and embrace stewardship?

Or will comfort and decency remain the privilege of humans alone, while everything else — be it dog or machine — is reduced to a utility?