Why? Why would you run Docker on ZFS? I discovered ZFS when I was playing with LXD, as it's the recommended storage driver for it. While ZFS has a lot of great features, the ones I like the most are the RAM cache, compression, and snapshots. After moving from LXD to Docker (and thus leaving ZFS behind), I felt the difference in speed without the cache, and I saw some files, like databases, become three times bigger without compression.
Posts with the tag sysadmin:
ZFS is a great file system that comes with a lot of benefits, and I've come to use it on my servers with LXC or Docker. Even if RAIDZ or self-healing are useless on a VM, we can still benefit from compression, snapshots, the cache, etc. The proper way to create a ZFS pool is to dedicate a device or partition to the zpool. I've been using the new Hetzner cloud offering a lot recently, and that's also where I use ZFS.
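As a rough sketch of the setup the post describes, creating a pool on a dedicated partition could look like this (the pool name `tank` and the device `/dev/sdb1` are placeholders, not from the post):

```shell
# Create a pool named "tank" on a dedicated partition (placeholder device).
zpool create tank /dev/sdb1

# Enable LZ4 compression on the pool; child datasets inherit it.
zfs set compression=lz4 tank

# Check the pool state and the compression property.
zpool status tank
zfs get compression tank
```

Compression is off by default, so it is worth setting it right after pool creation, before any data is written.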
I’ve been using Munin for the past few years as my monitoring tool. It works well, it’s light, and it’s super easy to set up. It’s a bit old and limited though, so it’s time to look at what kind of monitoring software we have in 2018. Instead of having one piece of software that does everything, nowadays we like to separate the roles this way: the collector, which you install on the machines you want to monitor; the database, which stores all the measurements; the visualization system, e.
I’ve been moving my services to Docker lately because it suits my needs and eases my life a lot, but I was kind of stuck when I wanted to move my Diaspora pod into containers. Indeed, the Diaspora project doesn’t have any official Docker image, Dockerfile, docker-compose.yml, or any kind of instructions or guide, because none of the core developers actually use Docker, so they’re still looking for someone to maintain one.
In my first post I said I installed Ghost with ghost-cli, the classic way. I also said that I wanted to run it in Docker, but that I didn’t know Docker well enough to do it. In fact, I tried to set up Ghost in Docker a few times while being bored at school, but I didn’t succeed, so things stayed as they were. For the past week though, I’ve been learning and using Docker a lot, and I finally moved a dozen services into containers.
In my first post, I said that I set up my Ghost blog with a MySQL database. Why is that? Because ghost-cli wants you to use a MySQL database, and since I happened to have a MariaDB server on my VM, I just added another database to it. However, SQLite is a better choice: Ghost supports SQLite as a storage backend. In fact, SQLite can handle more load than this blog will ever see, considering I use the Nginx cache on my reverse proxy.
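For reference, switching Ghost to SQLite is just a matter of the `database` section in its config file; a minimal fragment might look like this (the filename path is a placeholder, adjust it to your install):

```json
{
  "database": {
    "client": "sqlite3",
    "connection": {
      "filename": "/var/www/ghost/content/data/ghost.db"
    }
  }
}
```

With this, Ghost keeps everything in a single file, which also makes backups and snapshots simpler.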
If you’re not using a CMS like WordPress, chances are your CMS or blog engine doesn’t support comments. That makes sense if it’s a static blog built with a tool like Hugo, Jekyll, or Zola, but makes less sense when your blog is powered by a database, like Ghost is. I think comments are part of a blog and it’s important to enable them: readers can thank you for your work, report a mistake, discuss the article, etc.
There are different storage types for LXC containers, from a basic storage directory to LVM volumes and more complex file systems like Ceph, Btrfs, or ZFS. In this post, we’re going to set up a ZFS pool for our LXC containers, via LXD. Why ZFS? ZFS is an awesome file system. It’s a 128-bit file system, meaning that we can store a nearly unlimited amount of data (no one will ever reach its limit).
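A hedged sketch of what such a setup can look like with LXD's storage commands (the pool name `lxd-pool` and the device `/dev/sdb` are placeholders; the post may use `lxd init` instead):

```shell
# Create a ZFS-backed LXD storage pool on a dedicated device (placeholder names).
lxc storage create lxd-pool zfs source=/dev/sdb

# Point the default profile's root disk at the new pool,
# so new containers are created on ZFS.
lxc profile device add default root disk path=/ pool=lxd-pool
```

Alternatively, running `lxd init` walks you through the same choices interactively, including the ZFS backend.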
After having played around a bit with LXC and discovered its main features, you may want to have a proper network setup for your containers. There are multiple network setups possible and multiple ways to implement them. In this post, we are going to set up a bridge using lxc-net. It requires very little configuration and should be enough for a simple LXC architecture. More details about this bridge setup: containers will have an IPv4 address within their own subnet; containers will be able to access each other within this subnet; the host will be able to access the containers through this subnet; and containers will have access to the internet thanks to the bridge interface. Note that I’m using Debian 9 for this tutorial.
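As a sketch of the kind of configuration involved, lxc-net is driven by `/etc/default/lxc-net`; a minimal fragment with the common defaults might look like this (addresses are examples, adjust the subnet to taste):

```shell
# /etc/default/lxc-net — example values for a simple bridge setup.
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```

After editing the file, restarting the lxc-net service brings up the `lxcbr0` bridge with DHCP served by dnsmasq.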
Munin has plugins that allow us to get nice graphs of the DNS queries made to Unbound on the machine. However, they don’t work out of the box! This tutorial assumes Munin and Unbound are already configured and working on your server. I’m using Debian 8 and 9, but it should work on Ubuntu, and certainly on most distributions. Tip: you can use my script if you want to install Unbound as a local DNS resolver on your machine.
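As a hedged sketch of the general approach (the exact plugin path varies by distribution, and the post may differ in details): the `unbound_munin_` wildcard plugin from Unbound's contrib directory needs statistics enabled in Unbound, then one symlink per graph you want.

```shell
# In unbound.conf (server: section), the plugin needs statistics enabled:
#   extended-statistics: yes
#   statistics-cumulative: no

# Link the wildcard plugin under the graph names you want
# (the source path is an example; it depends on where the plugin was installed).
ln -s /usr/share/munin/plugins/unbound_munin_ /etc/munin/plugins/unbound_munin_hits
ln -s /usr/share/munin/plugins/unbound_munin_ /etc/munin/plugins/unbound_munin_by_type

# Restart munin-node so it picks up the new plugins.
systemctl restart munin-node
```

The part of the symlink name after `unbound_munin_` selects which graph the wildcard plugin draws.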