Caddy’s default TLS configuration is very good. It includes a lot of features that older and well-known servers like Nginx or Apache don’t enable by default. From the tls module’s documentation:

Caddy implements these TLS features for you automatically. It is the only server to do so by default:

- Session identifiers
- Session ticket key rotation
- OCSP stapling
- Dynamic record sizing
- Application-layer protocol negotiation
- Forward secrecy
- HTTP/2 (for the HTTP server)
- Certificate management (including auto-renew)
- Man-In-The-Middle detection (for HTTPS sites)

Pretty awesome, isn’t it?
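For context, getting all of the above takes almost no configuration at all; a minimal Caddy 1.x-era Caddyfile sketch (domain and web root are placeholders) is enough to serve a site with automatic HTTPS:

```
example.com {
    root /var/www/example.com
}
```

Certificates are obtained and renewed automatically for `example.com`; everything listed above is on by default.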
Posts with the tag sysadmin:
TLS 1.3 is the new TLS version that will power a faster and more secure web for the next few years. The final release of TLS 1.3 has been out since August 2018, and it is supported by OpenSSL in its 1.1.1 version. LibreSSL does not support TLS 1.3 as of today, since its developers want to do a clean implementation. Nginx has supported TLS 1.3 since version 1.13.0 (released in April 2017), when built against OpenSSL 1.1.1.
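As an illustration (a sketch, not the post’s exact configuration — the domain and certificate paths are placeholders), enabling TLS 1.3 on an Nginx built against OpenSSL 1.1.1 is mostly a matter of adding it to `ssl_protocols`:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;  # placeholder

    # placeholder certificate paths
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # keep TLS 1.2 for clients that can't negotiate 1.3 yet
    ssl_protocols TLSv1.2 TLSv1.3;
}
```

If Nginx was built against an older OpenSSL, the `TLSv1.3` value is simply ignored or rejected, so checking `nginx -V` for the OpenSSL version is the first step.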
When running a GlusterFS cluster, you may want to use the volume(s) on the servers themselves. During the boot process, GlusterFS takes a bit of time to start. systemd-mount, which handles the mount points from /etc/fstab, will run before the glusterfs-server service finishes starting. The mount will fail, so you will end up without your mounted volume after a reboot. After doing some research to fix this issue, I stumbled upon this Ubuntu bug report from 2011 (!
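One common workaround for this ordering problem (a sketch — the volume name `gv0` and mount point are placeholders) is to declare the dependency directly in /etc/fstab using systemd mount options:

```
localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.requires=glusterfs-server.service,x-systemd.automount  0  0
```

`_netdev` marks the mount as network-dependent, `x-systemd.requires=` makes systemd wait for the glusterfs-server unit, and `x-systemd.automount` mounts lazily on first access, which sidesteps the race entirely.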
In a lot of cases, you don’t want CloudFront to overwrite the Cache-Control headers sent by the origin. In my case, my origin is an AWS S3 bucket where each object has its own Cache-Control metadata, which is then translated into a header. By the way, this is the only way to implement these headers on S3, because CloudFront can’t add them if they’re not already sent by the origin. You can only overwrite or forward them.
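For reference, this per-object metadata can be set at upload time; a sketch with the AWS CLI (bucket name, key and max-age are placeholders):

```shell
aws s3 cp index.html s3://my-bucket/index.html --cache-control "public, max-age=3600"
```

S3 then sends `Cache-Control: public, max-age=3600` with that object, and CloudFront can forward it as-is.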
When I began publishing public Docker images, I was using the GitHub integration with the Docker Hub to automatically build and publish my images. However, the Docker Hub is very slow to build images and has very, very limited configuration options. Then I discovered Drone, which allowed me to build images on my own server, tag them, etc. The thing is, I’m limited by the drone-docker plugin and I can’t do everything I want with it.
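For context, a typical drone-docker step looks roughly like this (Drone 0.8-era syntax; the repository and secret names are placeholders):

```yaml
pipeline:
  docker:
    image: plugins/docker
    repo: myuser/myimage   # placeholder Docker Hub repository
    tags: latest
    secrets: [docker_username, docker_password]
```

Everything (build arguments, tagging, push) has to go through the plugin’s options, which is exactly the limitation described above.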
In my last post I presented Drone, an extremely light CI/CD server. One cool and satisfying thing is to be automatically notified of the output of your pipelines. In a company, you would probably use a Slack or HipChat bot. For personal use, I think a Telegram bot is a good idea. Let’s set one up!

Creating a Telegram bot

Setting up a bot is free and actually very easy. You can do everything from a Telegram client.
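Once the bot exists, a notification boils down to a POST to the Bot API’s sendMessage method. A minimal sketch (the token and chat id below are placeholders — you get the real ones from @BotFather and your own client):

```python
def send_message_request(token, chat_id, text):
    """Build the URL and JSON payload for the Telegram Bot API sendMessage method."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload

# Placeholder token and chat id; in practice, send with requests.post(url, json=payload)
url, payload = send_message_request("123456:ABC-DEF", "42", "build #7 succeeded")
```

The same endpoint is what a Drone notification plugin calls under the hood when a pipeline finishes.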
Continuous Integration and Continuous Delivery are very trendy topics in the DevOps world right now. There are quite a lot of services and software to build, test and deploy your code, but only a few are free, open-source and self-hostable. The most well-known software matching these criteria are Jenkins and GitLab CI. However, Jenkins has a huge memory footprint since it runs on Tomcat (Java). As for GitLab CI, it’s very good but requires you to run your own GitLab (which is huge) or to be on gitlab.com.
Warning Edit (2020): I highly discourage using Wasabi. They have a very misleading pricing policy and you will end up with bad surprises on your invoices at the end of the month. For the past few months, I have been using Borg to back up my servers. It was working great and was pretty reliable, but a bit complicated.

My previous setup: SSH + Rsync + Borg

Here’s the setup:
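For context, the Borg side of such a setup is typically just two commands (the repository path and backed-up directories are placeholders):

```shell
# Initialize an encrypted repository, once
borg init --encryption=repokey /backup/repo

# Create a deduplicated, compressed archive named after today's date
borg create --compression lz4 /backup/repo::$(date +%Y-%m-%d) /etc /home
```

Deduplication across archives is what keeps daily backups cheap; the complexity mentioned above comes from wiring this into SSH and rsync around it.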
I’m currently running Ubuntu 18.04 and I noticed that by default I was using systemd-resolved for DNS:

```
stanislas@xps ~> cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
```
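If you want to override the uplink servers rather than use the DHCP-provided ones, systemd-resolved reads /etc/systemd/resolved.conf; a sketch (the DNS servers here are just examples, not a recommendation from the post):

```ini
[Resolve]
DNS=1.1.1.1 1.0.0.1
FallbackDNS=9.9.9.9
```

After editing, apply the change with `systemctl restart systemd-resolved` and verify with `systemd-resolve --status`.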
Warning Edit (2020): I highly discourage using Wasabi. They have a very misleading pricing policy and you will end up with bad surprises on your invoices at the end of the month. As you may know, I have been running a Mastodon instance for over a year now, and in such a long period of time the instance accumulated a lot of data.

The context

The DB contains about 20 million toots for about 20 GB.
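Numbers like these are easy to check from psql; a sketch, assuming the PostgreSQL database is named `mastodon` (Mastodon stores toots in the `statuses` table):

```sql
SELECT pg_size_pretty(pg_database_size('mastodon'));
SELECT count(*) FROM statuses;
```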