Thinking About v6

On May 8th, 2019, the foundation of what became the fifth version of Midori Core, the PHP framework that I've built so many projects on top of, was committed to GitHub. Midori Core 5.0 — generally referred to as v5 for some of the public-facing sites — was designed around the idea that the database should contain not only the bulk of the data validation rules for a system, but the queries as well. Prior to this major release, every query sent to the database came from a flat file populated with properly-escaped values. With v5 I wanted to try something that I had only ever done with .Net-based enterprise projects that relied on the might and versatility of SQL Server: put the bulk of all read and write operations into stored procedures.
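To make that idea a bit more concrete, here is a rough sketch of what a v5-style write might look like when the rules live with the data rather than in the application. The table, column, and procedure names are invented for this illustration and are not taken from Midori Core itself; the API layer would simply call the procedure and trust the database to reject anything invalid.

```sql
-- Illustrative sketch only: a hypothetical v5-style write procedure where the
-- validation and the write both live in the database.
DELIMITER //
CREATE PROCEDURE SetPostRecord( IN `in_account_id` int,
                                IN `in_content`    text )
BEGIN
    /* Reject obviously bad data before anything touches the table */
    IF in_content IS NULL OR CHAR_LENGTH(TRIM(in_content)) = 0 THEN
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Post content cannot be empty';
    END IF;

    INSERT INTO `Post` (`account_id`, `content`, `created_at`)
    VALUES (in_account_id, in_content, NOW());

    /* Hand the finished record back so the API layer does not re-query */
    SELECT `id`, `account_id`, `content`, `created_at`
      FROM `Post`
     WHERE `id` = LAST_INSERT_ID();
END //
DELIMITER ;
```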

This worked rather well for a while … and then MySQL, the database engine used by Midori Core for its entire history, developed a critical bug in version 8.0.22¹ that has yet to be fixed. The bug is encountered when a stored procedure writes data to a table that has triggers in place to validate or transform that data before it's written to disk. What this means is that just posting something to a 10C-powered site could crash the database, forcing it to restart, a process that takes about 10 seconds. During the restart, every site and service on 10C that requires reading non-cached data is non-operational. As one would expect, this situation is completely unacceptable. Bugs were filed. Data was shared. Teeth were gnashed. Unfortunately, a fix has not yet been issued and it doesn't look like anyone is going to get to it anytime soon. As a result, every system that uses Midori Core v5.x is stuck on MySQL 8.0.21 or older.
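For anyone who hasn't bumped into it, the troublesome shape is roughly the one sketched below: a table with a BEFORE trigger that massages incoming data, written to from inside a stored procedure. The names carry over from the hypothetical example above; this is an illustration of the pattern being described, not a minimal reproduction from the bug report.

```sql
-- The combination described above, using the same invented names as before:
-- a trigger that adjusts data on its way to disk, fired by a write that
-- originates inside a stored procedure.
DELIMITER //
CREATE TRIGGER `Post_before_insert` BEFORE INSERT ON `Post`
FOR EACH ROW
BEGIN
    /* Normalise the value before it is written to disk */
    SET NEW.content = TRIM(NEW.content);
END //
DELIMITER ;

-- Any stored procedure that writes to `Post` now exercises the trigger,
-- and on MySQL newer than 8.0.21 this is the sort of call that can take
-- the whole server down.
CALL SetPostRecord(42, '  Hello, world  ');
```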

However, v5 does have one thing going for it that none of the other versions of Midori Core could do: it can work with SQL Server and PostgreSQL.

Soon after MySQL started crashing every time a butterfly flapped its wings, I started thinking about making the back-end of the system work with other database engines. Microsoft's SQL Server was the first option, as it's the database I've worked with the longest and it does have a Linux version that can run quite well on any modern version of Ubuntu. PostgreSQL is a system I've avoided for a long time but have recently come to understand better as a result of a large project at work. Both of these options work quite well with stored procedures and triggers, making them viable alternatives to MySQL, and both have a large community of developers and administrators who genuinely seem to enjoy helping people when there are problems to overcome. Which one would be better for something like 10Centuries, though?

As with anything involving technology, the answer is "it depends".

I do have one system built on Midori Core that operates with a SQL Server back end. It's been running for well over a year and hasn't encountered a single database-related problem. The site isn't heavily trafficked, receiving perhaps 50,000 requests per week, and runs on a t3.small EC2 instance on Amazon Web Services. Average response time for most operations is under 90ms, which is acceptable.

Then there's the new system that I've been working on for the last few weeks, which uses the same Midori Core v5 foundation as the other system but with PostgreSQL handling the data. Many of the core tables are designed the same between the two databases. The functions that request data from these core tables are essentially the same, aside from some syntactic differences. Average response time for most operations is under 70ms, which is also acceptable.
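As an illustration of how small those syntactic differences tend to be, here is a hypothetical read routine written for both engines. The table and routine names are invented rather than lifted from either project.

```sql
-- SQL Server (T-SQL): a stored procedure that reads a single record
CREATE PROCEDURE dbo.GetPostRecord
    @post_id int
AS
BEGIN
    SELECT p.id, p.account_id, p.content, p.created_at
      FROM dbo.Post p
     WHERE p.id = @post_id;
END;

-- PostgreSQL: the same read, wrapped in a function. The query itself is
-- unchanged; only the parameter style and the wrapper differ.
CREATE OR REPLACE FUNCTION get_post_record( in_post_id integer )
RETURNS TABLE ( id integer, account_id integer, content text, created_at timestamp )
LANGUAGE sql
AS $$
    SELECT p.id, p.account_id, p.content, p.created_at
      FROM post p
     WHERE p.id = in_post_id;
$$;
```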

Of course, the dozen or so systems that I've built with MySQL as the back end also have relatively quick response times, but none quite as consistent as the projects running SQL Server or PostgreSQL. The key difference between the systems running MySQL and those that are not is the type of queries that are sent to the database. Since the MySQL bug has caused so much trouble with one of the underlying goals of the v5 project — namely, keeping all of the validation and interaction rules in the database itself — I've had to drastically simplify how systems running that database engine work with the data. This means more rules and validation happening in the middle-layer — the API — rather than the database. While this is fine for smaller projects involving a single developer, it can create "gaps" when multiple people work on the same project. Gaps are not good, as they lead to inconsistent results.

So with v5 now being compatible with three different relational databases — without the use of an ORM² — why would I think about the next major release of the core framework?

For something completely different: modularity.

The majority of the projects that I've built on the Midori Core have the same foundational files containing the same foundational functions that I have used for years. The patterns are consistent and the methodology is reliable. That said, when a bug — or security problem — is found in the core files, it's a pain to go through a dozen or more projects, find the offending code, patch it, then deploy releases across a myriad of servers. What I would really like to do is structure the projects in such a way that Midori Core has its own source code repository — which it does — and other projects are forks of the Core — which is not how things are structured right now. Then, when the Core receives updates, those updates can be rolled out to the other projects via pull requests that kick off some unit testing to ensure that there won't be serious problems when deploying the updated code in a number of different projects. By having the Core able to work with multiple database engines, I'm one step closer to having a better update process in place. The projects become more modular.

There are other items that I would want to tackle with a fresh version number, such as better caching mechanisms, a more consistent use of "history tables" in the database, more consistent use of CSS3 across sites to take advantage of visitor colour theming preferences, and more. All of this could be built into the v5 platform, perhaps as part of the upcoming 5.4 release of Midori Core, but I am very tempted to use this recent functional improvement as the impetus for a better-structured system that I can continue to build on for several years before thinking about the next problem to solve.
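For what it's worth, the "history table" idea could look something like the sketch below, written here in MySQL's dialect with invented names: the outgoing version of a record gets copied aside before an update lands. On MySQL, driving the update from a stored procedure would of course bump into the same trigger problem described earlier.

```sql
-- One possible shape for a "history table": keep the previous version of
-- every record in a side table. All names here are invented for the sketch.
CREATE TABLE `Post_History` (
    `post_id`     int       NOT NULL,
    `account_id`  int       NOT NULL,
    `content`     text      NOT NULL,
    `replaced_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY `idx_post_history_post` (`post_id`)
);

DELIMITER //
CREATE TRIGGER `Post_before_update` BEFORE UPDATE ON `Post`
FOR EACH ROW
BEGIN
    /* Preserve the outgoing version of the record before the new one lands */
    INSERT INTO `Post_History` (`post_id`, `account_id`, `content`)
    VALUES (OLD.id, OLD.account_id, OLD.content);
END //
DELIMITER ;
```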

Does this mean that 10C will be upgrading to v6 in the near future? Not likely. Heck, 10C was using the v4 platform for a full 8 months after v5 was in a stable-enough place to handle the work. That said, I would be very curious to know whether a PostgreSQL-based 10Centuries would be noticeably faster than the current MySQL-powered one.


  1. MySQL 8.0.22 was released on October 19, 2020. More than a year later, the bug that prevents me from upgrading the database continues to exist.

  2. An Object-Relational Mapper … basically a piece of software that makes my job 50x harder when something doesn't work as expected. I do not use ORMs in my projects, preferring the "old school" approach of writing the queries myself. This allows for better tweaking and more nuanced queries.