Collaboration vs. Extraction
Over the past few months I’ve invested a remarkable amount of time and money into studying and working with AI systems. This includes not only commercial products like ChatGPT and DeepSeek, but also locally hosted models from Ollama and Hugging Face. There’s enormous potential not just to use these tools to achieve goals, but to collaborate with them—solving problems just beyond our current ability. Tasks that once seemed daunting or ‘impossible’ now feel approachable—especially for those willing to push themselves a little further. It’s like playing a sport with someone just slightly better than you; there’s a good chance you’ll lose, but there’s also a chance you’ll win.
This is one of the more interesting aspects I see in LLMs: the freedom to approach new things with a partner who knows just enough to get you started. However, as I dig through YouTube videos and academic papers on actual usage patterns, I wonder if we’re missing the potential that the current generation of AI models (what Nils Nilsson calls “weak AI”[1]) provides.
But despite this promise of partnership, most interactions I observe with AI resemble something much simpler—extraction. A person asks the AI to perform a task, and that’s that. This could be text generation, code generation, image generation, or, more recently, video and music generation. A person says “make this thing”, perhaps requesting a few edits along the way, then uses the output to complete a task. There is nothing inherently wrong with this at the moment, but is it really the best use of everyone’s time? The person completes a task without learning anything, and the AI executes it with all the recognition of a nameless contractor.
This raises a question: If AI can be a thinking partner, why do we treat it like a vending machine?
Over the past few years a lot of people have asked for my opinion on the potential dangers of AI taking away jobs and leaving a large percentage of the population unemployed. My general response has been along the lines of “If you’re really good at what you do, you’ll always have a job.” I stand by this statement, too. To this day there are craftsmen who make silverware by hand, despite the fact that factories can stamp out thousands of perfectly functional forks and spoons every hour. There are still painters despite the proliferation of cameras. There are still tailors despite the plethora of clothing sizes at Walmart. If someone is good at something, that job will continue to exist for customers who want something generic tools can’t provide.
There’s no reason why this can’t also be true of people who are concerned their jobs may be taken away by AI systems.
Despite the hype, the big LLMs offered under the banners of ChatGPT and DeepSeek are good, but not perfect. In my experience, a skilled person who knows their field well will spot errors in a large language model’s output roughly 30% of the time. That number shrinks every few months, but it means the current generation of AIs is roughly equivalent to a human who has been doing a job for less than five years: a remarkable feat of engineering, but not yet enough to replace the most competent or creative people in an office. This offers a window of opportunity for those who worry they’re next on the chopping block, and for those who want to keep up with their peers. Rather than use generative AI models as simple “do this and that” tools, it would be better to collaborate with them.
What I have found while working with the larger LLMs is that a slower, more deliberate approach often makes for a better result. This is my process:
- Ask a question
- Step away to reflect
- Return with refined questions or counterpoints
- Treat the conversation as an evolving dialogue, not a transaction
Unlike many of the examples on YouTube, I do not use AI for “simple” tasks. I won’t ask Claude to write a sort function. I won’t ask Gemini to summarize an email. I won’t ask ChatGPT to write a post for social media. These things do not require an AI. What I will do instead is ask a question. I already know what I know, but is there something I don’t know that can make for a better result? This is where reflection comes in, as I’ll ask a question, go back and forth a bit, then step away from the keyboard to reflect on the AI’s response. There will often be additional questions and, thankfully, we can return to past conversations and continue them without losing context.
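To make the loop concrete, here is a minimal sketch of that kind of context-preserving dialogue against a locally hosted model. It assumes a local Ollama server on its default port; the model name (“llama3”) and the questions are placeholders, not a prescription. The detail that matters is that the full message history travels with every request, which is what lets you step away, reflect, and pick the conversation back up where it left off.

```python
import json
import urllib.request

# A minimal sketch, assuming a local Ollama server on its default port
# (11434) and a model that has already been pulled with `ollama pull`.
# The model name and the questions below are placeholders.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3"

# The whole conversation accumulates here. Because every request carries
# the full history, you can step away, reflect, and return with refined
# questions without the model losing the thread.
history = []

def ask(question: str) -> str:
    """Append a question to the dialogue and return the model's reply."""
    history.append({"role": "user", "content": question})
    payload = json.dumps({
        "model": MODEL,
        "messages": history,
        "stream": False,
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.loads(response.read())["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# First pass: ask, read, then walk away and think about the answer.
print(ask("What are the trade-offs of hosting an LLM locally?"))

# Later: the follow-up builds on the earlier exchange because the model
# sees the entire history, not just the new question.
print(ask("Which of those trade-offs matter most on a single consumer GPU?"))
```

Persisting the history between sessions (a JSON file is enough) is what makes the “step away and come back later” part of the process practical rather than aspirational.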
Does this always result in something better? I don’t know. But what I can say is that I usually learn something new about the thing I am trying to do, whether it’s solving a problem or understanding a concept. At the end of the day, it’s that personal improvement that makes these AIs remarkably powerful. The system goes from being a servant to a collaborator, which I find to be a healthier relationship. It’s impressive that these tools can do things on their own—but it’s far more valuable when we do things with them.
Maybe the real question isn’t whether AI will replace us—but whether we’re willing to let it challenge us to improve.
[1] Nils Nilsson, one of the forefathers of modern artificial intelligence, refers to AI systems that aid people as “weak AI” and to AI systems that think independently as “strong AI”.