I only have incremental progress to report this weekend.
I rewrote the pianotell-web Docker container.
I ditched the shinsenter/PHP + nginx container and rebased on the official PHP containers + Caddy.
The shinsenter container is great to start with because it already solves a lot of problems you’re likely to encounter when starting from scratch. I discovered that the hard way: I had to read up a bit on PHP production settings and figure out how to properly install PHP extensions in Docker.
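For anyone curious, the official images come with helper scripts for the extension part. A minimal sketch, with extensions that are just examples rather than my actual list:

FROM php:8.3-fpm
# docker-php-ext-install ships with the official PHP images and
# compiles + enables each extension in one step
RUN docker-php-ext-install pdo_mysql opcache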
On the other hand, shinsenter was too complex for my needs, with too many layers of scripts that got in my way. Every time I changed something seemingly straightforward, I had to chase through several layers of scripts to figure out why something else broke.
However, the primary reason I made this change was to adopt Caddy as the web server instead of nginx. I was struggling a bit with nginx, but Caddy is such a breath of fresh air! Not only is it extremely easy to configure, it magically takes care of obtaining and renewing TLS certificates on demand.
So I solved a big problem by adopting Caddy: no need for me to set up Certbot after all to manage the certificate. Certbot would have been yet another thing to monitor, and it’s also hard to test properly locally. Caddy, on the other hand, is easy to test because it even issues self-signed certificates for localhost.
The other nice thing I achieved with the rewrite was to make my container and config definitions work both for my local laptop instance and for the production deployment. On my laptop, the containers assume PianoTell is running on https://localhost/, but now the very same containers can handle https://forum.pianotell.com/ on the server, including taking care of the certificate.
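For anyone wondering how one config can serve both, Caddy supports environment placeholders in the site address, so the host name can come from outside the container. A rough sketch of the idea (SITE_ADDRESS and the php upstream name here are illustrative, not necessarily my exact config):

# Caddy picks the certificate strategy from the host name:
# an internal self-signed cert for localhost, Let's Encrypt for a real domain
{$SITE_ADDRESS}

root * /var/www/html
php_fastcgi php:9000
file_server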
I had assumed I would need to build my Docker images for multiple platforms. For some reason, I thought the images I was using on my laptop were AMD64 (i.e. targeting Intel/AMD CPUs), while my chosen server runs on ARM64 (i.e. the stuff that powers many smartphones). After going down the route of enabling multi-platform builds, I realized that Docker was already using an ARM64 build of Linux locally because my Mac is on Apple silicon. That makes perfect sense, but I was confused because Linux identifies ARM64 as aarch64 internally and I didn’t connect the dots. So this work ended up being unnecessary.
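In case it saves someone else the same head-scratching, here are the two names for the same architecture as reported on an Apple silicon machine:

$ uname -m                                     # what Linux calls it
aarch64
$ docker version --format '{{.Server.Arch}}'   # what Docker calls it
arm64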
Finally, I added a proper healthcheck dependency on mariadb.
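In compose terms, that means a service_healthy condition on the web side plus the healthcheck.sh script that ships in the official MariaDB image. A sketch with illustrative service names and intervals:

services:
  mariadb:
    image: mariadb:11.1.5
    healthcheck:
      # healthcheck.sh comes bundled with the official image
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5
  pianotell-web:
    depends_on:
      mariadb:
        condition: service_healthy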
The backup story is a work in progress.
A long time ago, @rogerch had warned me that managing the SQL server would be the most challenging part of all this, and only now, at the end, do I understand.
There is certainly a lot to read and understand.
To over-simplify a bit, the most resilient setup would be a secondary replica of the primary server that receives a constant stream of updates from the primary. If the primary dies, the secondary can be promoted to primary and a new secondary replica created. The next best thing would be a streaming backup of every update that can be restored manually.
Finally, the simplest option is to take a point-in-time snapshot backup and hope you never have to restore, since a periodic backup means potentially losing whatever changed after the last one. There are also hacky and unreliable approaches, like copying the database files directly on the filesystem.
It’s not an either/or situation, and there are a lot of nuances, but you get the idea.
I plan to take a periodic cloud backup to start — perhaps once every hour.
I looked at a few solutions like this one, but I wasn’t too comfortable with them for various reasons. The one I linked is quite excellent, but its MariaDB client bits are out of date, and MariaDB recommends keeping client and server versions in sync.
So instead, I’m working on rolling my own solution based on the MariaDB official images. That way I’m in full control. I’ve created a backup user on the SQL server and I’ve figured out the mariadb-dump command that I’m going to use.
I prefer mariadb-dump over mariadb-backup because the dump format is human-readable and very portable: it’s just plain SQL, so it’s simple to understand.
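To give a flavor of where I’m headed (the user name, password handling, and exact flags here are illustrative rather than my final setup), the backup user only needs read-oriented privileges:

-- grants are roughly what mariadb-dump needs for a full dump
CREATE USER 'backup'@'%' IDENTIFIED BY 'changeme';
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON *.* TO 'backup'@'%';

And the dump itself can be a one-liner:

# --single-transaction takes a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump
mariadb-dump --host=mariadb --user=backup --password="$BACKUP_PASSWORD" \
  --single-transaction --routines --events --all-databases \
  | gzip > pianotell.sql.gz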
Now I just need to add cron, rclone, and get it all working together.
This is the kind of silly stuff I find myself doing on the weekend now:
FROM mariadb:11.1.5 AS cloudbackup-base
# we only need the MariaDB client components +
# we need cron for the scheduler and rclone for the cloud backup
RUN apt-get -y purge mariadb-server && apt-get -y autoremove && \
    apt-get update && apt-get -y --no-install-recommends install mariadb-client cron rclone && \
    rm -rf /var/cache/apt/archives /var/lib/apt/lists
# logic to set up the scheduler will go here
#COPY ./crontab /etc/crontab
#RUN crontab /etc/crontab
#RUN touch /var/log/cron.log
# a possibly futile attempt to reduce the final image size
FROM ubuntu:jammy
COPY --from=cloudbackup-base / /
CMD ["cron", "-f"]
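And for the scheduler bits that are still commented out, the plan is roughly this (the schedule, paths, remote name, and credential handling are all placeholders). The crontab has no user field because it gets installed with the crontab command:

# hourly backup; output goes to the log file from the Dockerfile
0 * * * * /usr/local/bin/backup.sh >> /var/log/cron.log 2>&1

plus a small script to glue the pieces together:

#!/bin/bash
# dump everything, compress, ship to cloud storage, clean up
set -eo pipefail
STAMP=$(date +%Y%m%d-%H%M)
FILE="/tmp/pianotell-$STAMP.sql.gz"
mariadb-dump --host=mariadb --user=backup --password="$BACKUP_PASSWORD" \
  --single-transaction --all-databases | gzip > "$FILE"
rclone copy "$FILE" remote:pianotell-backups
rm "$FILE"

One gotcha to watch for: cron runs jobs with a minimal environment, so the container’s environment variables won’t reach the script on their own; the password will have to come from a file or be written into the crontab at startup.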
Overall, I feel like I’m a little behind where I wanted to be at this point.
My goals for this weekend were to deploy to the cloud and validate for a few days, but I want to focus on completing the backup part of the story before I start working on that.
Thanks for listening to my gibberish! It helps me document my decisions and where I’m at, and clarify my own understanding of the next steps!