When one works from home, the day of the week is generally less pronounced than if one were to work elsewhere. Sure, Mondays still feel like Mondays, and Fridays are something to look forward to. The feel of the day, however, is different. Maybe it's the lack of a commute on public transit with world-weary passengers. Maybe it's the absence of rushing around in the morning to eat, dress, and leave by a certain time. More than once I've remarked that I'd lose track of the days if it weren't for the regularity of garbage day, which is Tuesday and Friday. Yet despite the reduced emphasis on which specific day of the week the calendar claims we're on, there is still a general feeling based on decades of weekly patterns. As one would expect when enjoying the second day of a holiday, today felt very much like a Sunday.
Over the last couple of months I've watched an issue grow from an intermittent annoyance into a regular problem with the server hosting 10Cv4. The 10C platform tries to be as multi-lingual as possible, which means that just about every standard label, title, and description is stored in a language-specific file to be loaded and written into the HTML templates before being sent to a person visiting a site. This is generally "old hat" for systems, as this sort of mechanism has been in use for well over a decade. What's odd, however, is that the language files are no longer being consistently applied to the HTML templates. This results in people seeing the string labels rather than the proper text. This is not at all acceptable.
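The mechanism described above can be sketched in a few lines. To be clear, the file layout, marker syntax, and function names here are hypothetical illustrations, not 10C's actual code:

```python
import json
import re


def load_strings(lang_code, path="lang"):
    """Load a language-specific label file (e.g. lang/en.json) into a dict."""
    with open(f"{path}/{lang_code}.json", encoding="utf-8") as fh:
        return json.load(fh)


def render(template, strings):
    """Replace {lbl:...} markers in an HTML template with translated strings.

    Any marker with no matching translation is left untouched, which is
    exactly the failure mode visitors see when the language file fails to
    load: the raw label appears in place of the proper text.
    """
    return re.sub(
        r"\{lbl:([a-z_]+)\}",
        lambda match: strings.get(match.group(1), match.group(0)),
        template,
    )
```

With a loaded dictionary, `render("<h1>{lbl:site_title}</h1>", strings)` produces the translated heading; with an empty dictionary, the visitor sees the literal `{lbl:site_title}` marker instead.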
The error logs on the server are empty. The cached elements contain the proper strings. Everything that has worked for several years across multiple versions of the 10C platform should continue to work. Yet it doesn't. The system memory sits at 70% used, and swap usage is at 45MB of 2048MB. The CPU rarely exceeds 10%. The I/O wait, however, shows signs of constraint. This should not prevent files from being read before responses are sent back to browsers, but it does point to something I can investigate further ... which I did.
Long story short, the VM powering 10Cv4 appears to be running on an over-provisioned server. The service averages 38 requests per second while North America is asleep and closer to 50 requests per second the rest of the time. A lot of this is bots scraping sites and probing for WordPress and/or phpMyAdmin exploits. This is not at all a crazy amount of traffic, and the server keeps up with it relatively easily using information that's already stored in its various caches. The problem is that any time the server needs to access the storage medium (apparently an SSD, according to the plan I signed up for), data transfer speeds are limited to just 350KB/sec. That's 877x slower than I would expect from a consumer-grade SSD, which is likely not what is installed in the servers at Sakura, where I've been a customer since the very first version of 10C was released as "Noteworthy" back in 2012.
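The 877x figure checks out against the measured rate, assuming a consumer-grade SSD sustains roughly 300MB/s of sequential reads (my own baseline assumption, not a measured figure for Sakura's hardware):

```python
# Sanity check on the quoted slowdown factor.
observed_kb_per_sec = 350            # measured transfer speed on the VM
consumer_ssd_kb_per_sec = 307_000    # assumed baseline: ~300 MB/s in KB/s

slowdown = consumer_ssd_kb_per_sec / observed_kb_per_sec
print(round(slowdown))  # 877
```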
To say that I am not happy would be an understatement.
The service rep I spoke to at Sakura said that occasionally "neighbourhood instances may interfere with performance" and suggested I upgrade to their dedicated servers for $100 a month¹, but this just isn't in the budget. Instead, I asked if they could move my instance to a less crowded machine and was given a non-committal "we'll see"² as an answer. It's time to move.
Fortunately, I do know of a server where I can host 10C for a decent price and have consistently acceptable storage and network speeds. As an added bonus, the machine is well maintained and easily upgradeable. This computer, as one might expect, is right here in my house.
This is what I plan on doing about the lacklustre speed of 10Cv4 on Sakura:
- get the darned API documentation for v5 written and published
- get the basic themes for v5 ready this week (blogging and admin)
- test-migrate all the content from v4 to v5 no later than next week
- migrate and deploy v5 by March 1
Given the financial realities of the project, this is the best that I can hope to do for now. By hosting everything from my house, it will be possible to guarantee a certain level of speed while also making it possible to adapt more quickly as time goes on. I will not need to think about the architectural limitations of a vendor or worry about what other VMs are doing to impact the performance of my systems (unless they're my VMs, of course). Ultimately, it should be a win-win for everyone concerned³. The only thing that I'll need to worry about when hosting from home is what to do in the event of a prolonged power failure. My web server is good for 3 hours on battery and the home network is good for about the same. Beyond this, websites could temporarily vanish. This hasn't happened in the 9 months that I've lived in this house, but one can never predict bad weather.
Hopefully failure isn't something that I'll need to contend with for a little while.