Stan's blog
Thanks for stopping by.
Posts with the tag Linux:

Using Firecracker and Go to run short-lived, untrusted code execution jobs


Use netcat when the MySQL client lies to you

This happened a while ago. After applying some updates to my LXD containers and restarting everything, I was surprised to discover that none of my websites depending on MariaDB could connect to the server.

Monitoring with Prometheus

I talked a while ago about the monitoring stack I was using: TIG. I have moved to something else for the past year, so I wanted to talk about my experience with it.

This post won’t be a tutorial, for two reasons. First of all, it’s way too complex to be contained in a single post, and secondly I have set up the whole stack using Ansible roles I made myself.

The main difference between a Telegraf + InfluxDB setup and Prometheus + Node Exporter is the exporter part. We could differentiate these as push mode and poll mode: Telegraf pushes the metrics to InfluxDB, while Prometheus polls the data from Node Exporter.
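To sketch the poll model, this is roughly what a minimal Prometheus scrape configuration pointing at a Node Exporter instance looks like (the job name and hostname are illustrative; 9100 is Node Exporter's default port):

```yaml
# prometheus.yml — minimal scrape config (illustrative)
scrape_configs:
  - job_name: "node"
    scrape_interval: 15s
    static_configs:
      - targets: ["myserver.example.com:9100"]  # Node Exporter default port
```

Prometheus then fetches the /metrics endpoint of each target on that interval; the target never initiates anything.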

Object storage: migrating from Wasabi to Scaleway with rclone

Disclaimer: I wrote this before being employed at Scaleway.

Wasabi has a very attractive pricing model at $0.0059 per GB/month ($5.99 per TB/month) and no egress fees, which makes it by far the cheapest object storage provider on the market. And the performance is pretty good. But there is more to it.

Two reasons you should not use Wasabi

One little detail in the pricing model is that you’re billed at least 90 days for every object you upload. If the data inside a bucket is not updated very often, it’s not a big deal.
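To make the impact concrete, here's a back-of-the-envelope calculation (my own illustration, not Wasabi's official billing formula): an object deleted after 10 days is still billed for the full 90 days.

```python
# Back-of-the-envelope Wasabi storage cost sketch (illustrative, not an
# official billing formula). Assumes 30-day months for simplicity.
PRICE_PER_GB_MONTH = 0.0059  # $ per GB per month
MIN_BILLED_DAYS = 90         # minimum storage duration per object

def storage_cost(size_gb: float, stored_days: int) -> float:
    """Cost in USD for one object, applying the 90-day minimum."""
    billed_days = max(stored_days, MIN_BILLED_DAYS)
    return size_gb * PRICE_PER_GB_MONTH * (billed_days / 30)

# A 100 GB object deleted after 10 days is billed as if stored 90 days:
print(round(storage_cost(100, 10), 2))   # 1.77
print(round(storage_cost(100, 120), 2))  # 2.36
```

So for frequently rewritten data, the effective price per GB can be several times the advertised rate.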

Easy web deployments with Ansistrano

At my previous work we’ve been using Capistrano for years and it’s not particularly great. It does work well, but it really feels like messy Ruby scripting with messy documentation.

We use Capistrano for a specific kind of project: usually PHP or Node apps deployed on multiple non-autoscaled web servers.

Recently for school I had two Laravel projects to make. One of the requirements was to deploy them on a real webserver with a valid domain name, HTTPS, etc.

Caching Ghost with Nginx

Ever since I started this blog I have been using Nginx as a reverse proxy for Ghost.

Ghost sits in a weird place between static site generators like Hugo or Jekyll and fully-fledged CMSs like WordPress. Ghost is a Node.js program based on Express that binds to a port and listens for HTTP requests, so it can’t be deployed like a static website made of static files only.

Fixing IPv6 on Hetzner Cloud: the story of a lifetime

IPv6 works out of the box on Hetzner Cloud VMs. However, once you start adding IPv6 to an existing or new interface, IPv6 connectivity completely falls apart.

I started to notice this a year and a half ago, when I was trying to set up LXD. With the help of a fellow sysadmin, we tried to hunt down the issue, without success, but found a dirty workaround.

Then it was reported again on my openvpn-install script, and more recently on my wireguard-install script, so I decided to dive into this issue once more.

How I fixed ZFS data corruption errors on Hetzner Cloud

5 months ago I wrote about the corruption issues I had on my main Hetzner Cloud VM, which runs a lot of stuff, including a high-traffic Mastodon instance.

For a bit of context, this VM is running Ubuntu 18.04 and has a single disk. This disk is split between a tiny ext4 / partition and another one holding a ZFS pool. This zpool is used as the storage pool for LXD and thus for all my LXC containers running on it.

Elasticsearch 6 shard/replica settings for single-node cluster

I have trouble finding this every time I need it, so I figured I’d make a post out of it.

Single-node Elasticsearch clusters make sense for non-critical data when money has to be saved, or for testing/dev environments.

By default, ES will create multiple shards for each index, with at least one replica. However, on a single node the replica shards can never be allocated, so the cluster health will always be yellow.

To fix that, you need to create a template that will match all future indexes and set those settings. Ideally you’ll want to do that before indexing anything.
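For reference, the kind of template I mean looks like this on Elasticsearch 6 (the template name `single_node` and the catch-all pattern are my own choices), sent with `PUT _template/single_node`:

```json
{
  "index_patterns": ["*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```

With zero replicas, every shard can be allocated on the single node and the cluster health can go green.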

How I did (not) recover from a data loss (featuring ZFS, LXD and PostgreSQL)

As some may know, I’m running a Mastodon instance. It’s quite a big instance, with about 1,000 weekly active users and 13k local accounts in total.

My current setup is a Hetzner Cloud CX41 VM running LXC containers with LXD. They are stored on a dedicated zpool on a separate partition of the disk. In this case, I have a dedicated container for the PostgreSQL server.

I was quite surprised when I looked at the state of my zpool last week: