Your browser is configured to run scripts on your computer. This site functions better with scripts disabled.
This is a static site generated with Template Toolkit plus some custom plugins for easier generation of the picture pages.
My code page is on the Gitlab instance generously made available by the Debian project.
Why isn't this site available over https? Because.
FireHOL, Fail2ban and ipsets
IP sets are a fast way to test whether an IP address is present in an arbitrarily large set of addresses. FireHOL and Fail2ban both support them. Fail2ban ships with an "action" config file that creates an ipset and adds an iptables rule when fail2ban starts. On shutdown it removes the rule and destroys the ipset.
I decided not to use that fail2ban action file. Fail2ban places its rules right at the start of the INPUT chain, which means that every packet of every legitimate connection has to be tested against the rule before the packet is accepted.
Instead, firehol takes care of creating the ipset and rule, placing the rule after tests for established connections. It does mean (I think) that hostile connections will be allowed to complete before the ban takes effect, which I'm accepting. Fail2ban's role is therefore only to add and remove entries in the set.
FireHOL config for ipsets
Here is how my firehol config on my VPS creates the sets and uses them, for HTTP.
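A sketch of the idea (the set name, the ten-day timeout and the exact rule syntax are my assumptions here, and should be checked against the FireHOL ipset documentation):

```
# /etc/firehol/firehol.conf (sketch)
version 6

# Create the set when the firewall starts; entries added by fail2ban
# carry their own timeout and expire on their own.
ipset4 create fail2ban_http hash:ip timeout 864000

interface any world
    policy drop
    # New HTTP connections are checked against the ban set; FireHOL's
    # own tests for established connections come first.
    server http accept src not ipset:fail2ban_http
    server "smtp smtps ssh" accept
    client all accept
```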
Fail2ban action file for ipsets
Fail2ban requires a new "action" file to make it do less with ipsets. My new file is based on /etc/fail2ban/action.d/iptables-ipset-proto6.conf. Among other things, it does nothing for "actionstart" since firehol takes care of creating the sets and rules now.
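Roughly, such an action file might look like this (the file and set names are assumptions; the `<ip>`, `<name>` and `<bantime>` tags are the standard fail2ban ones):

```
# /etc/fail2ban/action.d/ipset-nocreate.conf (sketch)
[Definition]
# firehol owns the set and the iptables rule, so there is nothing
# to do at start or stop
actionstart =
actionstop  =
actioncheck =
actionban   = ipset add fail2ban_<name> <ip> timeout <bantime> -exist
actionunban = ipset del fail2ban_<name> <ip> -exist

[Init]
bantime = 864000
```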
Fail2ban jail and filter
Here's how one of my 'jails' uses the ipsets for HTTP. This one was added after my initial configuration of fail2ban. It bans any visitor that uses a host name other than "www.acrasis.net" or "acrasis.net".
The bantime has to be supplied twice, once in the jail and once as a parameter to the action. In this jail I've made it ten days instead of the previous one day, greatly increasing the number of web miscreants included in my ban. Here's the filter.
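Together, the jail and filter could look something like this (the names, log path and log format are assumptions; the virtual host is taken to be the second field of each access-log line):

```
# /etc/fail2ban/jail.d/http-wrong-host.conf (sketch)
[http-wrong-host]
enabled  = true
port     = http
logpath  = /var/log/lighttpd/access.log
maxretry = 1
bantime  = 864000
action   = ipset-nocreate[name=http, bantime=864000]

# /etc/fail2ban/filter.d/http-wrong-host.conf (sketch)
[Definition]
failregex = ^<HOST> (?!(?:www\.)?acrasis\.net )\S+
```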
A munin graph of "Hosts blacklisted" showing the before and after of my switch to ipsets on February 7th:
My home server "neon" now offers me pretty coloured graphs about its state. More usefully it shows a similar set of graphs for my VPS, "rolly". Munin provides a master program that runs on neon and a node program that runs on each of neon and rolly. The master periodically asks each node for its data and formats that data into graphs that it writes to neon's file system as HTML files.
munin-node on neon
This is on the same host as the master so it listens for its master's voice on localhost, with square brackets for the ipv6 address.
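The relevant lines of /etc/munin/munin-node.conf might be (a sketch; the allow pattern is an assumption):

```
host_name neon
host [::1]
port 4949
allow ^::1$
```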
munin-node on rolly
Apart from host_name, /etc/munin/munin-node.conf is the same as on neon. It too listens on ipv6 localhost because I didn't want to open another port. Instead I use a tunnel.
The tunnels I added for ssh from neon to rolly gain another bore, in my ssh config on neon.
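The extra bore might be a single additional LocalForward (the port numbers here are assumptions):

```
# ~/.ssh/config on neon (sketch)
Host rolly
    # existing tunnels omitted
    LocalForward [::1]:4950 [::1]:4949
```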
munin master on neon
The master config is as installed except I commented out the entry for localhost. I prefer to use config fragments under munin-conf.d/.
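A fragment for the two nodes could look like this (the file name and forwarded port are assumptions; use_node_name makes munin trust each node's own host_name rather than the localhost address it connects to):

```
# /etc/munin/munin-conf.d/hosts.conf (sketch)
[neon]
    address ::1
    use_node_name yes

[rolly]
    address ::1
    port 4950
    use_node_name yes
```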
The web server
This bit of lighttpd config on neon lets me view the munin graphs from a browser:
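It might be as small as this (assuming Debian's default munin output directory):

```
$HTTP["host"] == "munin.lan" {
    server.document-root = "/var/cache/munin/www"
}
```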
Then on my laptop I added a /etc/hosts entry for munin.lan. Now I can point my browser to http://munin.lan/ and see my munin overview page with links for neon and rolly.
Monitoring mail rejections
Receiving spam or other hostile mail is bad but rejecting legitimate mail is worse. For a while I used pflogsumm to mail me a daily summary of what my mail server is up to. Two things I found: that I was mostly interested in delivery attempts that were rejected; and that pflogsumm misses rejections if postscreen is involved.
It's unlikely pflogsumm will be fixed (e.g. #743570). So I wrote my own log file analyser. If I reject your legitimate mail at least I'll know about it. Probably.
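The analyser itself isn't reproduced here, but the core of such a tool can be sketched in a few lines of Python (the regex and the sample log lines are illustrative, not the real thing):

```python
import re

# Count rejected delivery attempts in a postfix mail.log, including
# postscreen events (PREGREET, HANGUP, DNSBL), which pflogsumm misses.
REJECT_RE = re.compile(
    r'NOQUEUE: reject:|postscreen\[\d+\]: (?:PREGREET|HANGUP|DNSBL)')

def count_rejections(lines):
    """Return the number of log lines that record a rejection."""
    return sum(1 for line in lines if REJECT_RE.search(line))

sample = [
    'May  1 10:00:01 rolly postfix/smtpd[1234]: NOQUEUE: reject: '
    'RCPT from unknown[192.0.2.1]: 554 5.7.1 Service unavailable',
    'May  1 10:00:02 rolly postfix/postscreen[1235]: DNSBL rank 3 '
    'for [192.0.2.2]:55555',
    'May  1 10:00:03 rolly postfix/smtpd[1236]: connect from '
    'mail.example.org[192.0.2.3]',
]
print(count_rejections(sample))  # 2
```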
ssh tunnels for email
My local mutt and postfix are configured to use ssh tunnels for imaps and submission respectively. That means my remote mail server can close two more ports. Here's the ssh host for the tunnels from my home computer to my remote mail server "rolly".
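The host entry might look like this (the host name and local ports are placeholders):

```
# ~/.ssh/config on my home computer (sketch)
Host rolly-mail
    HostName rolly.example.net        # placeholder address
    LocalForward 1993 localhost:993   # imaps
    LocalForward 1587 localhost:587   # submission
    ExitOnForwardFailure yes
    ServerAliveInterval 15
    ServerAliveCountMax 2
```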
If the ssh client process loses contact for more than 30s it will exit, thanks to the ServerAlive parameters. To take care of restarting it, I use autossh.
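A single line is enough to start it (the "rolly-mail" alias is an assumption; -M 0 disables autossh's own monitoring port in favour of the ServerAlive checks):

```
autossh -f -M 0 -N rolly-mail
```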
By that point in ~/.bash_login, Keychain has taken care of adding the necessary key to ssh-agent(1) so it doesn't need me for the passphrase. If for example I need to restart my home Internet router, I can count on the tunnels coming back afterwards without any intervention from me.
Then we tell mutt to use the tunnel,
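Assuming imaps is forwarded to local port 1993, something like:

```
# ~/.muttrc (sketch; the local port is an assumption)
set folder = "imaps://localhost:1993/"
set spoolfile = "+INBOX"
```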
And likewise my local instance of postfix,
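For example by pointing relayhost at the forwarded submission port (the port and parameter values are assumptions):

```
# /etc/postfix/main.cf on the home machine (sketch)
relayhost = [localhost]:1587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_security_level = encrypt
```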
On the server, that leaves only four ports open to the world: http, smtp, smtps and ssh. Actually fewer than that since if I'm on a fixed ip address for a while I can open the server's ssh port only to that address. So far this has worked well and prevents attackers from trying to brute-force my email credentials.
fail2ban and HTTP
Bots trying mischief on port 80:
188.8.131.52 www.acrasis.net - [19/Apr/2018:20:10:24 +0100] "GET /components/com_b2jcontact/css/b2jcontact.css HTTP/1.0" 404 345 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36"
184.108.40.206 220.127.116.11:80 - [20/Apr/2018:17:46:32 +0100] "POST /wls-wsat/CoordinatorPortType HTTP/1.1" 404 345 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
18.104.22.168 22.214.171.124 - [20/Apr/2018:17:46:33 +0100] "GET /wp-login.php HTTP/1.1" 404 345 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
No good can come of those requests. While I don't think my web server is likely to be cracked by them, some countermeasures were in order. Fail2ban is a popular way of monitoring log files for misbehaviour and doing something to block it, typically by adding a firewall rule.
Fail2ban has numerous ready-made configs but not for HTTP bots. So I added my own. It seeks to drop packets to port 80 from clients that do either of these:
- get too many 404s (actually any 4xx or 5xx) with GET, HEAD, etc
- do a single POST (or any other "update-ish" method)
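Filters along these lines could implement the two rules, probably as two jails with different maxretry values (the names and the exact log format are assumptions):

```
# /etc/fail2ban/filter.d/http-get-bots.conf (sketch)
[Definition]
# "read" methods that drew a 4xx or 5xx; use with e.g. maxretry = 6
failregex = ^<HOST> \S+ \S+ \[[^\]]+\] "(?:GET|HEAD|OPTIONS) [^"]*" [45]\d\d

# /etc/fail2ban/filter.d/http-post-bots.conf (sketch)
[Definition]
# any "update-ish" method at all; use with maxretry = 1
failregex = ^<HOST> \S+ \S+ \[[^\]]+\] "(?:POST|PUT|DELETE|PATCH)\b
```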
This mostly works, in that in the fail2ban log I can see bans at the rate of 10 or 20 a day. It isn't perfect, however.
Several times I've seen a series of GETs resulting in 404s, well over the number that should trigger a ban. But it seems they don't trigger a ban if they all have the same timestamp.
Fail2ban's ipv6 support is evolving and in the version I'm running, 0.10.2, it bans only the single ipv6 address. Since the attacker probably has thousands of ipv6 addresses available, it would be better to ban a range.
Something to bear in mind is that if you think your site might have dead links internally (page foo links to page bar but bar isn't there any more) then a ban on GETting 404s might keep out legitimate clients.
screen and tmux
After years of using screen, I tried tmux in order to get italics (who wouldn't want italics in his terminal?). This proved more difficult than expected: one question not answered in any of several tutorials, the FAQ or the man pages was "how do you configure tmux so that it starts a set of programs for you?" In screen, you do this with the 'screen' configuration command.
In the tmux world it seems that you accomplish this with commands from outside of tmux, i.e. in a shell script, or at least that's how I got it to work. So in case it helps anyone else here is a slightly redacted version of my tmux config which comes complete with the shell script and with a poor man's network traffic monitoring script.
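As a rough sketch of that approach (the session and window names are mine, and the traffic-monitoring script is reduced to a placeholder):

```
#!/bin/sh
# start-tmux: create the session with a fixed set of windows,
# or just re-attach if it already exists
SESSION=main
if ! tmux has-session -t "$SESSION" 2>/dev/null; then
    tmux new-session -d -s "$SESSION" -n mail  mutt
    tmux new-window  -t "$SESSION" -n logs 'journalctl -f'
    tmux new-window  -t "$SESSION" -n net   'sh ~/bin/traffic.sh'
    tmux select-window -t "$SESSION":mail
fi
exec tmux attach-session -t "$SESSION"
```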