On August 20, 1977, NASA launched the first Voyager probe into space — Voyager 2 — beginning its long journey past the major worlds of our solar system and into the endless expanse of interstellar space. Despite leaving the relative safety of our world more than 41 years ago, the machine continues to operate to a degree and still sends data back for further science. By all accounts, the computers on the Voyager probes are obsolete. Operating with just 69.63 kilobytes of memory and a digital 8-track tape recorder for storage that must be overwritten after its data is transmitted, it's amazing how much the machine has been able to do. The original code for the missions was written in Fortran 5, later updated to Fortran 77, and now has a little modern C thrown in where it makes sense. The CPU won't win any speed competitions, maxing out at about 81,000 instructions per second, but it has clearly been enough to get the job done so far. And all this has me thinking …
At the day job there is a piece of legacy software that has been in use for almost two decades. Originally written in the 90s and looking every bit like an OS/2 Warp application ported to Access 97, the tool has certainly made it possible for schools to organise and deliver tens of millions of lessons around the world in several dozen languages. Despite the company's best attempts to replace the old application, local schools have steadfastly refused to give up the legacy system because "it just works". I'm part of a rather large team tasked with standardising the technology stack used across the organisation, and this software is once again on the chopping block, to be replaced by something that executives are desperately hoping is superior in every conceivable manner. However, as we work towards replacing this homegrown CMS with something "in the cloud", I can't help but think about the sorts of software that are written — and will be written — for computers that will leave our world and likely never come back. The code written by people at NASA, JAXA, the ESA, and other space agencies around the world has to be some of the most reliable software ever created; otherwise, multi-million-dollar investments can be lost¹.
Then I look at the code that is written for corporations and wonder why the bar is allowed to be set so low. Software with glaring memory issues, inefficient processor usage, excessive network usage, or just plain crap database queries … the list is really endless when it comes to "business software". Many organisations try to solve their software problems by throwing more hardware at them, as if money could just make problems go away. But such a lazy "solution" is not possible for systems that are sent into space. The robots on Mars and the satellites throughout the solar system can receive software updates, but they're forever stuck with the hardware they shipped with. Could businesses do something similar?
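To make the "crap database queries" complaint concrete, here's a minimal sketch of one of the most common offenders — the N+1 query pattern — next to its set-based fix. The schema and data are entirely hypothetical, and SQLite is used only so the example is self-contained:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE schools (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE lessons (id INTEGER PRIMARY KEY, school_id INTEGER);
    INSERT INTO schools VALUES (1, 'North'), (2, 'South');
    INSERT INTO lessons (school_id) VALUES (1), (1), (2);
""")

def lesson_counts_naive(conn):
    # The N+1 pattern: one query to list the schools, then one more
    # round trip to the database for every single school.
    counts = {}
    for school_id, name in conn.execute("SELECT id, name FROM schools"):
        (n,) = conn.execute(
            "SELECT COUNT(*) FROM lessons WHERE school_id = ?", (school_id,)
        ).fetchone()
        counts[name] = n
    return counts

def lesson_counts_joined(conn):
    # The set-based fix: a single query that lets the database
    # engine do the aggregation itself.
    rows = conn.execute("""
        SELECT s.name, COUNT(l.id)
        FROM schools s LEFT JOIN lessons l ON l.school_id = s.id
        GROUP BY s.id
    """)
    return dict(rows)

# Both produce identical results; only the number of round trips differs.
assert lesson_counts_naive(conn) == lesson_counts_joined(conn)
```

With two schools the difference is invisible; with ten thousand, the naive version issues ten thousand and one queries while the joined version still issues one — which is exactly the kind of waste that "more hardware" is usually bought to paper over.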
Mind you, I'm not suggesting organisations never upgrade their hardware. Business needs evolve over time, and this generally leads to more complexity in the systems that must support those requirements. But if the software that businesses rely on had to be written with the expectation that there may not be any hardware updates for the next decade or two, would the developers charged with converting the business requirements into lines of code put more emphasis on performance and stability than they do today?
This may just be wishful thinking.
¹ Mind you, mistakes still happen. The ESA has not had a very good track record of getting robots to the surface of Mars intact.