Last week I was having a rather heated discussion with a pair of software developers who enjoy working on really big projects. Big, in this case, does not necessarily mean that the programs they make are feature-packed management systems that cater to everybody, but instead software that builds on the work of hundreds or thousands of people around the world. This is the antithesis of how I write software, and often a bone of contention. My software leans heavily towards a minimalistic approach — so much so that I will often write close to 95% of the code that makes a tool work. Many developers see this as a glorious waste of time and make extensive use of frameworks and libraries of code that are available online. By plugging these various elements together, they can often have the core of an application built in a weekend or less. The developers I was debating with had built a scheduling application in a week; I had done the same, but needed two weekends. Their web application weighs in at 4,291 kilobytes in total. Mine is 312 kilobytes. Both accomplish the same top-level goal, but only mine works on browsers from 2007 and on smartphones. Which one is "better"? Does it even matter anymore?
A lot of software developers I've spoken to over the last few years tend to lean heavily towards speed of development over speed of execution or efficiency. When their projects become computationally intensive, the most common response is to "throw more power at it" rather than asking themselves how to make the most of the hardware people actually have. Given the incredible surplus of computing infrastructure we have around the world, this attitude certainly follows the same pattern we see with other commodities when presented with a seemingly limitless surplus: use as much as you can get and to heck with the consequences.
We can see time and again how people's perception of resources changes when they are presented with an oversupply. An abundance of electrical power, food, clean water, education, telecommunications networks, fossil fuels, and human labour has made it possible for us to create the world we have around us, and we waste a large percentage of these resources without a second thought. It's really no surprise that people are treating processing power the very same way. My perceptions of how we should use this resource are on the fringe, much like the ideologies of people who do their best to live green or completely off the grid.
The unbridled use of software has pushed hardware to where it is today and some pretty amazing things have become possible as a result. Entire worlds of visual splendor and imagination can be rendered in real time to act as a background in video games and big-budget movies. Complex problems involving weather prediction can run again and again, allowing agencies to notify communities that might be affected by exceptionally strong storms. Our words can be transcribed as text despite heavy accents and other verbal aberrations. Heck, in the next 25 years we're expecting that the confluence of better hardware and software will put untold millions of people out of work as unskilled manual labour jobs are replaced by incredibly dexterous robots and 3D printing. It's been said for decades, but we really are on the cusp of becoming a post-scarcity world, where just about anything we want can be provided relatively cheaply, when and how we want it. More than this, there is already a great deal of work being done to create software that writes itself. In the next few years it may become commonplace for anybody to pick up their cell phone and ask its digital assistant to create a program that will solve a specific need.
Computer, create a program for the robotic mower to cut a fractal pattern into the lawn and send the drone up to take a time lapse of the work. Oh, and post the completed time lapse to YouTube with some catchy music when it's done.
It's just a matter of time before this sort of situation is commonplace and people begin exploring a whole new set of boundaries for software development for the hardware they have, and I believe this will also be the point at which many people stop being willfully ignorant of technology and hop in to shift some paradigms. The code will likely be inefficient as heck and require a lot more processing power or memory than a custom-crafted solution but, at the end of the day, it won't matter anymore.
When the barriers to entry are stripped away, people are capable of some pretty amazing things …
… and I'll be out of a job, if I'm still working as a software developer when we reach this point.
What comes after this is anyone's guess, though I do hope that we, as a species, see little point in continuing the charade of working 40 hours a week to support a family or a desired standard of living. Unemployment rates will undoubtedly grow around the world as more specialised machines begin making products on-demand, reducing a great deal of waste and human error. Warehouses full of finished products will be less necessary, as will shopping malls consisting of big-box retailers.
We'll still need people of various professions, but the days of the unskilled worker are numbered. Part time jobs may pop up every now and again, but they won't be as common as they are today. This is going to very quickly lead to an interesting problem: an abundance of time.
History shows that societies change dramatically with the introduction of abundance, and new forms of art and entertainment become possible as a direct result. What might we do with ourselves if presented with a four-day work week? How about a two-day work week?
This is a topic for a future blog post.