Posts with the tag linux:
Note: This post has been discussed on Hacker News.
This semester I have been working on a school project where the main requirement was that we needed to execute user-submitted code in one form or another.
My team’s subject was a code benchmarking platform, where users could create benchmarks (e.g. sort these two arrays as fast as possible) and anyone could then submit solutions to them.
In this post, I want to dive into the code execution part of the project: the approach I took and how I used Firecracker, with concrete code snippets.
This happened a while ago. After applying some updates to my LXD containers and restarting everything, I was surprised to discover that all my websites depending on MariaDB could no longer connect to the server.
I talked a while ago about the monitoring stack I was using: TIG. I have been running something else for the past year, so I wanted to share my experience with it.
This post won’t be a tutorial, for two reasons. First of all, it’s way too complex to fit in a single post, and secondly, I set up the whole stack using Ansible roles I wrote myself.
Disclaimer: I wrote this before being employed at Scaleway.
Wasabi has a very attractive pricing model at $0.0059 per GB/mo ($5.99 per TB/month) and no egress fees, which makes it by far the cheapest object storage provider on the market. And the performance is pretty good. But there is more to it.
Two reasons you should not use Wasabi
One little detail in the pricing model is that you’re billed for at least 90 days for every object you upload.
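To make that concrete, here is a small back-of-the-envelope sketch (not from the original post; it only uses the $5.99/TB/month figure quoted above and assumes a 30-day billing month) showing what the 90-day minimum means if you delete an object early:

```python
# Rough cost sketch for Wasabi's 90-day minimum storage charge.
# Assumptions: $5.99 per TB/month (figure quoted above) and a
# 30-day billing month; this is not an official pricing calculator.
PRICE_PER_TB_MONTH = 5.99
MIN_BILLED_DAYS = 90

def storage_cost_usd(size_tb: float, days_stored: float) -> float:
    """Cost of storing `size_tb` TB for `days_stored` days,
    billed for at least MIN_BILLED_DAYS."""
    billed_days = max(days_stored, MIN_BILLED_DAYS)
    return size_tb * PRICE_PER_TB_MONTH * billed_days / 30

# Upload 1 TB and delete it after a week: still ~3 months of storage.
print(round(storage_cost_usd(1, 7), 2))    # 17.97
print(round(storage_cost_usd(1, 120), 2))  # 23.96
```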
At my previous job we had been using Capistrano for years, and it’s not particularly great. It does work well, but it really feels like messy Ruby scripting with messy documentation.
We used Capistrano for a specific kind of project: usually PHP or Node apps deployed on multiple non-autoscaled web servers.
Recently I had two Laravel projects to build for school. One of the requirements was to deploy them on a real web server with a valid domain name, HTTPS, etc.
Ever since I started this blog I have been using Nginx as a reverse proxy for Ghost.
Ghost is in a kind of weird place between static and headless CMSs like Hugo or Jekyll and fully-fledged CMSs like WordPress. Ghost is a Node.js program based on Express that binds to a port and listens for HTTP requests, so it can’t be deployed like a static website made up of only static files.
IPv6 works out of the box on Hetzner Cloud VMs. However, once you start adding IPv6 to an existing or new interface, IPv6 connectivity completely falls apart.
I started to notice this a year and a half ago, when I was trying to set up LXD. A fellow sysadmin and I tried to hunt down the issue, without success, but we found a dirty workaround.
Then it was reported again on my openvpn-install script, and more recently on my wireguard-install script, so I decided to dive into this issue once more.
5 months ago I wrote about the corruption issues I had on my main Hetzner Cloud VM, which runs a lot of stuff including a high-traffic Mastodon instance.
For a bit of context, this VM runs Ubuntu 18.04 and has a single disk. The disk is split between a tiny ext4 / partition and another one holding a ZFS pool. This zpool is used as the storage pool for LXD, and thus for all my LXC containers.
I have trouble finding this every time I need it, so I figured I’d make a post out of it.
Single-node Elasticsearch clusters make sense for non-critical data when money has to be saved, or for testing/dev environments.
By default, ES will create multiple shards for each index, with at least one replica. However, on a single node the shards can never be replicated, so the cluster health will always stay yellow.
To fix that, you need to create a template that matches all future indices and overrides those settings.
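As a hedged sketch, this is one way to create such a template with Python and requests against a local node, using the legacy `_template` API (the endpoint and body format vary with the Elasticsearch version, and the node URL and template name here are assumptions):

```python
import requests

# Hedged sketch: create a legacy index template that matches every future
# index and disables replicas, so a single-node cluster can go green.
# The node URL and template name are assumptions; on recent Elasticsearch
# versions you would use the composable /_index_template API instead.
template = {
    "index_patterns": ["*"],
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
    },
}

resp = requests.put(
    "http://localhost:9200/_template/single-node-defaults",
    json=template,
)
resp.raise_for_status()
print(resp.json())  # {'acknowledged': True} if the template was created
```

Note that a template only applies to indices created after it exists; any index that is already yellow would still need its number_of_replicas lowered separately.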
As some may know, I’m running a Mastodon instance. It’s quite a big instance, with about 1000 weekly active users and 13k local accounts in total.
My current setup is a Hetzner Cloud CX41 VM running LXC containers with LXD. They are stored on a dedicated zpool on a separate partition of the disk. In this case, I have a dedicated container for the PostgreSQL server.
I was quite surprised when I looked at the state of my zpool last week: