<?xml version="1.0" encoding="UTF-8"?> 
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <atom:link href="https://dataswamp.org/~solene/rss.xml" rel="self" type="application/rss+xml" />
  <title>Why self hosting is important</title>
<pre># Introduction

Computers are amazing tools and the Internet is an amazing network: we can share anything we want with anyone connected.  For now, most of the Internet is neutral, meaning ISPs have to give their customers access to the whole Internet without discriminating by destination (such as faster access to some websites).

This is important to understand because it means you can have your own website, your own chat server or your own gaming server, hosted at home or on a dedicated server you rent: this is called self hosting.  I suppose putting the label self hosting on a rented dedicated server may not get everyone to agree, and it's true this is a grey area.  The opposite of self hosting is relying on a company to do the job for you, under their conditions, free or not.

# What is self hosting exactly?

Self hosting is about freedom: you can choose which server software you want to run, which version, which features and which configuration.  If you self host at home, you can also pick the hardware to match your needs (more RAM? More disk? RAID?).

Self hosting is not a perfect solution though: you have to buy the hardware, replace faulty components and do the system maintenance to keep the software side alive.

# Why does it matter?

When you rely on a company or a third party offering services, you become tied to their ecosystem and their decisions.  A company can stop what you rely on at any time, and they can suspend your account at any time without explanation.  Companies will try to make their services good and appealing, no doubt about it, and then lock you into their ecosystem.  For example, if you move all your projects to GitHub and start using GitHub services deeply (more than a simple git repository), moving away from GitHub will be complicated because you don't have _reversibility_, which means the right to get out and receive help from your service provider to move away without losing data or information.

Self hosting empowers users instead of making a profit from them.  Self hosting is better when it's done as a community: a common mail server for a group of people and a communication server federated into a bigger network (such as XMPP or Matrix) are a good way to create a resilient Internet while not giving away your rights to capitalist companies.

# Community hosting

Asking everyone to host their own services is not even utopian but rather silly: we don't need everyone to run their own server for their own services.  We should rather build a constellation of communities that connect using federated protocols such as email, XMPP, Matrix or ActivityPub (the protocol used by Mastodon, Pleroma and Peertube).

In France, there is a great initiative named CHATONS (the French word for KITTENS) gathering non-profit hosting providers that meet some prerequisites, like having multiple sysadmins to avoid relying on a single person.

=> https://www.chatons.org/en [English] CHATONS website
=> https://www.chatons.org/ [French] Site internet du collectif CHATONS

In Catalonia, a similar initiative has started:

=> https://mixetess.org/ [Catalan] Mixetess website

# Quality of service

I suppose most of my readers will argue that self hosting is nice but can't compete with "cloud" services, and I admit this is true.  Companies put a lot of money into making great services to attract customers and earn money; if their services were bad, they wouldn't last long.

But not using open source and self hosting won't make the alternatives to your service provider any better: you become part of the problem by feeding the system.  For example, Google's Gmail is now so big that they can decide which domains are allowed to reach them and which aren't.  It is such a problem that most small email servers can't send emails to Gmail without being treated as spam, and we can't do anything about it; the more users they have, the less they care about other providers.

Great achievements are possible with open source federated services like Peertube: you can host videos on a Peertube instance and follow the local rules of that instance, while some big companies would simply disable your video because some automatic detection script found a piece of music or an inappropriate picture.

Giving your data to a company and relying on their services makes you lose your freedom.  If you don't think that's true, it's okay; freedom is a vague concept and it comes in many degrees.

# Tips for self hosting

Here are a few tips if you want to learn more about hosting your own services.

* ask people you trust if they want to participate; it's better to have more than one person managing the servers.
* you don't need to be an IT professional, but you need to understand that you will have to learn.
* backups are not a luxury, they are mandatory.
* asking for money (as a contribution or a requirement) is fine as long as you can justify why (a Peertube server can be very expensive to run, for example).
* people usually throw away old hardware; ask friends or relatives if they have old unused machines.  You can easily repair "that old Windows laptop I replaced because the wifi stopped working" and use it as a server.
* electricity usage must be considered, but on the other hand, buying brand new hardware to save 20W is not necessarily more ecological.
* some services such as email servers can't be hosted on most ISP connections due to specific requirements
* you will certainly need to buy a domain name
* redundancy is overkill most of the time; shit happens, but with redundant servers shit happens twice as often

=> https://indieweb.org/ IndieWeb website: a community proposing alternatives to the "corporate web".

There is a Linux distribution dedicated to self hosting named "Yunohost" (Y U No Host) that makes the task really easy and gives you a beginner friendly interface to manage your own services.

=> https://yunohost.org/#/index_en Yunohost website
=> https://yunohost.org/en/administrate/whatisyunohost Yunohost documentation "What is Yunohost ?"

# Conclusion

I've been self hosting since I first understood, 15 years ago, that running a web server was the only thing I needed to have my own PHP forum.  I mostly keep this blog alive to show and share my experiments, which mostly happen when playing with my self hosting servers.

I have a strong opinion on the subject: hosting your own services is a fantastic way to learn new skills or perfect them, but it also matters for freedom.  In France we even have associative ISPs, and even though they are small, their existence forces the big ISP companies to be transparent about their processes and interoperability.

If you disagree with me, this is fine.
  <pubDate>Fri, 23 Jul 2021 00:00:00 GMT</pubDate>
  <title>Self host your Podcast easily with potcasse</title>
<pre># Introduction

I wrote « potcasse », pronounced "pot kas", a tool to help people publish and self host a podcast easily without using a third party service.  I found it very hard to find information about self hosting a podcast and making it available on podcast players / "apps", so I wrote potcasse.

# Where to get it

Get the code from git and run "make install", or just copy the script "potcasse" somewhere in your $PATH.  Note that rsync is a required dependency.

=> https://tildegit.org/solene/potcasse Gitea access to potcasse
=> git://bitreich.org/potcasse direct git url to the sources
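For example, fetching and installing could look like this; the use of doas for privilege escalation is an assumption (a plain copy into a directory in your $PATH works just as well):

```shell commands
# fetch the sources from the Gitea mirror
git clone https://tildegit.org/solene/potcasse.git
cd potcasse

# install it system-wide, or simply copy the script into your $PATH
doas make install
# alternative: cp potcasse ~/bin/
```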

# What is it doing?

Potcasse gathers your audio files with some metadata (date, title) and some information about your podcast (name, address, language), and creates an output directory ready to be synced to your web server.

Potcasse creates a RSS feed compatible with players but also a simple HTML page with a summary of your episodes, your logo and the podcast title.

# Why potcasse?

I wanted to self host my podcast and I only found Wordpress, Nextcloud or complex PHP programs for the job; I wanted something static, like my static blog, that would work securely on any hosting platform.

# How to use it

The process is simple for initialization:

* init the project directory using "potcasse init"
* edit the metadata.sh file to configure your Podcast

Then, for every new episode:

* import audio files using "potcasse episode" with the required arguments
* generate the html output directory using "potcasse gen"
* use rsync to push the output directory to your web server
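Put together, a publishing session could be sketched like this; the exact arguments of "potcasse episode" and the name of the output directory are documented in the project README, the ones below are only illustrative:

```shell commands
# one-time initialization, then edit the metadata
potcasse init
vi metadata.sh

# for every new episode: import the audio, regenerate, publish
potcasse episode        # plus the required title/file arguments
potcasse gen
rsync -a output/ myserver:/var/www/htdocs/podcast/
```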

There is a README file in the project that explains how to configure it.  Once you deploy, you should have an index.html file with links to your episodes and also a link to the RSS feed that can be used in podcast applications.

# Conclusion

This was a few hours of work to get the job done, and I'm quite proud of the result: I switched my podcast (only 2 episodes at the moment...) to it in a few minutes.  I wrote the command line and its parameters while trying to use the tool as if it were finished; this helped me a lot to choose what is required, what is optional, in which order things happen, and how I would like to manually make changes as an author, etc.

I hope you will enjoy this simple tool as much as I do.
  <pubDate>Wed, 21 Jul 2021 00:00:00 GMT</pubDate>
  <title>Simple scripts I made over time</title>
<pre># Introduction

I wanted to share a few scripts of mine for some time, here they are!

# Scripts

Over time I've written a few scripts to help me with some tasks; they are often bound to a key or at least placed in my ~/bin/ directory, which I add to my $PATH.

## Screenshot of a region and upload

When I want to share something displayed on my screen, I use my simple "screen_up.sh" script (super+r) that will do the following:

* use scrot and let me select an area on the screen
* convert the file to jpg and also compress the png using pngquant, then pick the smallest file
* upload the file to my remote server in a directory where files older than 3 days are cleaned up (using something like "find /path -ctime +3 -type f -delete")
* put the link in the clipboard and show a notification

This simple script has been improved a lot over time, like getting feedback on the result or picking the smallest file from various combinations.

```script shell requiring scrot, pngquant, ImageMagick, xclip and notify-send
#!/bin/sh
test -f /tmp/capture.png && rm /tmp/capture.png
scrot -s /tmp/capture.png
pngquant -f /tmp/capture.png
convert /tmp/capture-fs8.png /tmp/capture.jpg

# keep the smallest of the png/jpg variants
FILE=$(ls -1Sr /tmp/capture* | head -n 1)
EXTENSION=${FILE##*.}

# name the remote file after its content checksum
MD5=$(md5 -b "$FILE" | awk '{ print $4 }' | tr -d '/+=' )

# adjust the URL to match where the directory is served
URL="https://perso.pw/i/${MD5}.${EXTENSION}"

scp "$FILE" perso.pw:/var/www/htdocs/solene/i/${MD5}.${EXTENSION}
echo -n "$URL" | xclip -selection clipboard

notify-send -u low "$URL"
```

## Uploading a file temporarily

My second most used script is a file upload utility.  It renames a file using its content's md5 hash, keeping the extension, and uploads it to a directory on my server where it will be deleted after a few days by a crontab.  Once the transfer is finished, I get a notification and the URL in my clipboard.

```script shell
#!/bin/sh
if [ -z "$1" ]
then
        echo "usage: $0 file"
        exit 1
fi

# rename the file using its content checksum but keep the extension
MD5=$(md5 -b "$1" | awk '{ print $NF }' | tr -d '/+=' )
EXTENSION=${1##*.}
NAME="${MD5}.${EXTENSION}"

# adjust the URL to match where the directory is served
URL="https://perso.pw/f/${NAME}"

scp "$1" perso.pw:/var/www/htdocs/solene/f/${NAME}

echo -n "$URL" | xclip -selection clipboard
notify-send -u low "$URL"
```

## Sharing some text or code snippets

While I can easily transfer files, sometimes I need to share a snippet of code or a whole file, and I want to make the reader's life easier by displaying the content in an HTML page instead of serving a file that would be downloaded.  I don't put those files in a cleaned directory, and I require a name that gives potential readers a clue about the content.  The remote directory contains a highlight.js library used for syntax highlighting, hence I pass the language of the text to enable it.


```script shell
#!/bin/sh
if [ "$#" -eq 0 ]
then
        echo "usage: language [name] [path]"
        exit 1
fi

cat > /tmp/paste_upload <<EOF
<html>
<head>
        <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
        <link rel="stylesheet" href="default.min.css">
        <script src="highlight.min.js"></script>
</head>
<body>
        <pre><code class="$1">
EOF

# ugly but it works: remove newlines so the <pre> content
# starts right after the opening tags
tr -d '\n' < /tmp/paste_upload > /tmp/paste_upload_tmp
mv /tmp/paste_upload_tmp /tmp/paste_upload

# take the file given as third parameter, or the clipboard content
if [ -f "$3" ]
then
    sed 's/</\&lt;/g ; s/>/\&gt;/g' "$3" >> /tmp/paste_upload
else
    xclip -o | sed 's/</\&lt;/g ; s/>/\&gt;/g' >> /tmp/paste_upload
fi

cat >> /tmp/paste_upload <<EOF
</code></pre> </body> </html>
EOF

NAME=${2:-paste}
FILE=$(date +%s)_${1}_${NAME}.html

scp /tmp/paste_upload perso.pw:/var/www/htdocs/solene/prog/${FILE}

echo -n "https://perso.pw/prog/${FILE}" | xclip -selection clipboard
notify-send -u low "https://perso.pw/prog/${FILE}"
```

## Resize a picture

I never remember how to resize a picture, so I made a one line script so I don't have to; I could have used a shell function for this kind of job.

```shell code
#!/bin/sh
if [ -z "$2" ]
then
        echo "usage: $0 file percent"
        exit 1
fi

PERCENT="${2}%"
convert -resize "$PERCENT" "$1" "tn_${1}"
```

# Latency meter using DNS

Because UDP requests are not retransmitted by the network, they make a good choice for testing network reliability and performance.  I used this as part of my stumpwm window manager bar to get a history of my Internet access quality while on a high speed train.

The output uses three characters to tell whether the latency is under a first threshold (it works fine), between the two thresholds (poor quality) or above the second one (high latency), or whether there is a network failure.

The default timeout is 1s.  If the query succeeds, under 60ms you get a "_", between 60ms and 150ms you get a "-", and beyond 150ms you get a "¯"; if the network is failing you see an "N".

For example, if your connection quality degrades until it breaks and then recovers, it may look like this: _-¯¯NNNNN-____-_______.  My LISP code was taking care of accumulating the values and only retaining the n values I wanted as history.

Why would you want to do that?  Because I was bored on a train.  But also, when the network is fine, it's time to sync mails or retry that failed web request to get an important documentation page.

```shell script
#!/bin/sh
# the resolver queried here is only an example,
# use the one you want to test against
dig perso.pw @9.9.9.9 +timeout=1 | tee /tmp/latencecheck

if [ $? -eq 0 ]
then
        time=$(awk '/Query time/{
                if($4 < 60) { print "_";}
                if($4 >= 60 && $4 <= 150) { print "-"; }
                if($4 > 150) { print "¯"; }
        }' /tmp/latencecheck)
        echo $time | tee /tmp/latenceresult
else
        echo "N" | tee /tmp/latenceresult
        exit 1
fi
```

# Conclusion

Those scripts are part of my habits; I'm a bit lost when I don't have them because I always expect them to be at hand.  While they don't bring huge benefits, they are quality of life, and it's fun to hack on small, easy programs that achieve a simple purpose.  I'm glad to share them.
  <pubDate>Mon, 19 Jul 2021 00:00:00 GMT</pubDate>
  <title>The Old Computer Challenge: day 7</title>
<pre>Report of the last day of the old computer challenge.

# A journey

I'm writing this text in the last hours of the challenge.  I may repeat some thoughts and observations already reported in earlier posts, but never mind, this is the end of the journey.

# Technical

Let's talk about tech!  My computer is 16 years old, but I've been able to accomplish most of what I enjoy on a computer: IRC, reading my emails, hacking on code and reading interesting content on the Internet.  So far I've been quite happy with this computer; it worked without any trouble.

On the other hand, there were many tasks that didn't work at all:

* Browsing "modern" websites relying on JavaScript: JavaScript capable browsers don't work on my combination of operating system and CPU architecture.  I'm quite sure the challenge would have been easier with an old amd64 computer, even with low memory.
* Watching videos: for some reason, mplayer in full screen triggered a weird issue where the computer stopped responding; the cursor kept moving but nothing more was possible.  However it worked correctly for most videos.
* Listening to my big FLAC music files: doing so meant I couldn't do anything else because of the CPU usage, and sitting at my desk just to listen to music was not an interesting option.
* Using Go, Rust and Node programs, because there are no implementations of these languages for OpenBSD on 32-bit PowerPC.

On the hardware side, here is what I noticed:

* 512MB of memory is quite enough as long as you stay focused on one task; I rarely needed swap even with multiple programs open.
* I don't miss spinning hard drives; in terms of speed and noise, I'm happy they are gone from my newer computers.
* Using an external pointing device (mouse/trackball) is so much better than the bad touchpad.
* Modern screens are so much better in terms of resolution, colours and contrast!
* The keyboard is pleasant but lacks a "Super" modifier key, which leads to key binding overlaps between the window manager and programs.
* Suspend and resume don't work on OpenBSD, so I had to boot the computer each time; booting takes a few minutes and requires a manual step to unlock /home, which adds delay to the boot sequence.

Despite everything, the computer was solid, but modern hardware is so much more pleasant to use in many ways, not only in terms of raw speed.  When you buy a laptop especially, you should care about the specs beyond the CPU/memory, like the case, the keyboard, the touchpad and the screen; if you use your laptop a lot, they are as important as the CPU itself in my opinion.

Thanks to the programs w3m, catgirl, luakit, links, neomutt, claws-mail, ls, make, sbcl, git, rednotebook, keepassxc, gimp, sxiv, feh, windowmaker, fvwm, ratpoison, ksh, fish, mplayer, openttd, mednafen, rsync, pngquant, ncdu, nethack, goffice, gnumeric, scrot, sct, lxappearence, tootstream, toot, OpenBSD and all the other programs I used for this challenge.

# Human

Because I always felt this challenge was a journey to understand my use of computer, I'm happy of the journey.

To keep things simple, here is a bullet list of what I noticed:

* Going to sleep earlier instead of waiting for something to happen.
* I've spent a lot less time on my computer, but I don't notice much difference in what I've done with it; this means I was more "productive" (writing blog posts, reading content, hacking) and not idling.
* I didn't participate in the web forums of my communities :(
* I cleared things from my todo list on my server (such as replacing Spamassassin with rspamd and writing about it).
* I've read more blogs and interesting texts than usual, and I did it without switching to another task.
* JavaScript is not ecological because it prevents older hardware from being usable.  If I didn't need JavaScript, I guess I could keep using this laptop.
* I got time to discover and practice meditation.
* Less open source contribution, because compiling was too slow.

I'm sad and disappointed to see that I need to work on my self discipline (that's why I started to learn about meditation) to waste less time on my computer.  I will really work on it; I can see I can still do the same tasks while spending less time doing nothing / idling / switching tasks.

I will take care to support old systems with my contributions, like keeping my blog working perfectly fine in console web browsers, but also by trying to educate people about this.

I've met a lot of interesting people on the IRC channel, and for this reason alone I'm happy I took on the challenge.

# Conclusion

Good hardware is nice but not always necessary; it's up to developers to make good use of the hardware.  While some requirements legitimately evolve over time, like cryptography or video codecs, programs shouldn't become more and more resource hungry just because more resources are available.  We have to learn how to do MORE with LESS on computers, and that is something I wanted to highlight with this challenge.
  <pubDate>Fri, 16 Jul 2021 00:00:00 GMT</pubDate>
  <title>The Old Computer Challenge: day 6</title>
<pre># Report

This is the 6th day of the challenge!  Time went quite fast.

# Mood

I got quite bored two days ago because it was very frustrating not to be able to do everything I wanted.  I wanted to contribute to OpenBSD, but the computer is way too slow to do anything useful beyond editing files.

However, it got better yesterday, the 5th day of the challenge, when I decided to move away from claws-mail and switch to neomutt for my emails.  I had updated claws-mail to the freshly released version 4.0.0 and started updating the OpenBSD package, but claws-mail switched to GTK3 and it became too slow for this computer.

I started using a mouse with the laptop and it made some tasks more enjoyable.  I don't need it much because most of my programs run in a console, but every time I need the cursor it's more pleasant to have mouse support with 3 buttons and a wheel.

# Software

The computer is the sum of its software.  Here is a list of the software I'm using right now:

* fvwm2: window manager; it doesn't misbehave with full screen programs, it's light enough and I like it.
* neomutt: mail reader.  I always hated mutt/neomutt because of the complexity of their config files; fortunately I had some memories from when I used it, so I've been able to build a nice simple configuration and took the opportunity to update my Neomutt cheatsheet article.
* w3m: in my opinion the best web browser in a terminal :) The bookmark feature works great and using https://lite.duckduckgo.com/lite for searches works perfectly fine.  I use the flavor with image rendering support, however I have mixed feelings about it: pictures take time to download and render, and they always render at their original size, which is a pain most of the time.
* keepassxc: my usual password manager; it has a command line interface to manage entries from a shell after unlocking the database.
* openttd: a game of legend that is relaxing and also very fun to play; it runs fine after a few tweaks.
* mastodon: tootstream, although it's sometimes quite limited; I also access Mastodon on my phone with Tusky from F-Droid, and they make a great combination.
* rednotebook: I was already using it on this computer back when it was known as the "offline computer".  This program is a diary where I write about my day when I feel bad (angry, depressed, bored); it doesn't have many entries, but it really helps me to write things down.  While the program is very heavy and could be considered bloated for the purpose of writing about your day, I just like it because it works and looks nice.

I'm often asked how I deal with YouTube: I just don't, I don't use YouTube so the problem is solved :-)  I use no streaming services at home.

# Breaking the challenge

I had to use my regular computer to order a pizza because the stupid pizza company doesn't take orders by phone and they are the only pizza shop around... :(  I could have done it on my phone, but I don't really trust my phone's web browser to support all the steps of the process.

I could easily keep using this computer for longer if I didn't have so many requirements on web services, mostly for ordering products I can't find locally (pizza doesn't count here), and I hate using my phone for web access because I hate smartphones most of the time.

If I had used an old i386 / amd64 computer, I would have been able to use a WebKit browser even if it was slow, but on PowerPC the state of JavaScript capable web browsers is complicated, and currently none works for me on OpenBSD.
  <pubDate>Thu, 15 Jul 2021 00:00:00 GMT</pubDate>
  <title>Filtering spam using Rspamd and OpenSMTPD on OpenBSD</title>
<pre># Introduction

I recently used Spamassassin to get rid of the spam I started receiving, but it proved quite useless against some kinds of spam, so I decided to give rspamd a try and write about it.

rspamd can filter spam but also sign outgoing messages with DKIM; I will only cover the anti spam aspect.

=> https://rspamd.com/ rspamd project website

# Setup

The rspamd setup for spam filtering was incredibly easy on OpenBSD (6.9 for me at the time of writing).  We need to install the rspamd service, the connector for OpenSMTPD, and also redis, which is mandatory for rspamd to work.

```shell instructions
pkg_add opensmtpd-filter-rspamd rspamd redis
rcctl enable redis rspamd
rcctl start redis rspamd
```

Modify your /etc/mail/smtpd.conf file to add this new line:

```smtpd.conf file
filter rspamd proc-exec "filter-rspamd"
```

And modify your "listen on ..." lines to append "filter "rspamd"", like in this example:

```smtpd.conf file
listen on em0 pki perso.pw tls auth-optional   filter "rspamd"
listen on em0 pki perso.pw smtps auth-optional filter "rspamd"
```

Restart smtpd with "rcctl restart smtpd" and you should have rspamd working!

# Using rspamd

Rspamd automatically checks multiple criteria to assign a score to each incoming email: above a high threshold the email is rejected, and between a lower threshold and the rejection threshold it is tagged with a header "X-Spam" set to "yes".

If you want to automatically put the tagged emails in your Junk directory, either use a sieve filter on the server side or a local filter in your email client.  The sieve filter would look like this:

```sieve rule
if header :contains "X-Spam" "yes" {
        fileinto "Junk";
}
```

# Feeding rspamd

For better results, the filter needs to learn what is spam and what is not (called ham).  You need to regularly scan new emails to increase the effectiveness of the filter.  In my case I have a single user with a Junk directory and an Archives directory within the maildir storage, and I use crontab to run learning on mails newer than 24h.

```crontab entries
0  1 * * * find /home/solene/maildir/.Archives/cur/ -mtime -1 -type f -exec rspamc learn_ham {} +
10 1 * * * find /home/solene/maildir/.Junk/cur/     -mtime -1 -type f -exec rspamc learn_spam {} +
```

# Getting statistics

rspamd comes with very nice reporting tools: you can get a WebUI on port 11334, which listens on localhost by default, so you would need to tune rspamd to listen on other addresses or use an SSH tunnel.
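For example, an SSH tunnel forwarding the WebUI to your machine could look like this (user@myserver is a placeholder for your own host); you can then browse http://127.0.0.1:11334/ locally:

```shell command
# forward local port 11334 to the WebUI listening on the server's localhost
ssh -N -L 11334:127.0.0.1:11334 user@myserver
```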

You can get the same statistics on the command line using "rspamc stat", which should output something similar to this:

```command line output
Results for command: stat (0.031 seconds)
Messages scanned: 615
Messages with action reject: 15, 2.43%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 9, 1.46%
Messages with action greylist: 6, 0.97%
Messages with action no action: 585, 95.12%
Messages treated as spam: 24, 3.90%
Messages treated as ham: 591, 96.09%
Messages learned: 4167
Connections count: 611
Control connections count: 5190
Pools allocated: 5824
Pools freed: 5801
Bytes allocated: 31.17MiB
Memory chunks allocated: 158
Shared chunks allocated: 16
Chunks freed: 0
Oversized chunks: 575
Fuzzy hashes in storage "rspamd.com": 2936336370
Fuzzy hashes stored: 2936336370
Statfile: BAYES_SPAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 344; users: 1; languages: 0
Statfile: BAYES_HAM type: redis; length: 0; free blocks: 0; total blocks: 0; free: 0.00%; learned: 3822; users: 1; languages: 0
Total learns: 4166
```

# Conclusion

rspamd is a huge improvement for me in terms of efficiency: when I tag an email as spam, the next similar one goes straight to Junk after the learning cron runs; it uses less memory than Spamassassin and reports nice statistics.  My Spamassassin setup was rejecting emails outright, so I didn't have a good view of its effectiveness, but I received too many identical messages over the weeks that were never filtered; so far rspamd has proved better here.

I recommend looking at the configuration files: they are all disabled by default but contain many comments with explanations, which is a nice introduction to rspamd's features.  I preferred to keep the defaults and see how it goes before tweaking further.
  <pubDate>Tue, 13 Jul 2021 00:00:00 GMT</pubDate>
  <title>The Old Computer Challenge: day 3</title>
<pre>Report of the third day of the old computer challenge.

# Community

I got a lot of feedback from the community; the IRC channel #old-computer-challenge is quite active, and it seems a small community may be starting here.  I received help with various questions I had about the programs I'm now using.

# Changes

## Web is a pity

The computer I use has a different processor architecture than what we are used to.  Our computers are now amd64 (even the Intel ones; amd64 is the name of the instruction set of these processors) or arm64 for most tablets/smartphones and small boards like the Raspberry Pi.  My computer is a PowerPC, an architecture that disappeared from the market around 2007.  This matters because most language virtual machines (for interpreted languages) require architecture specific instructions to work, and nobody cares much about PowerPC in JavaScript land (which could be considered a waste of time given the user base), so I'm left without a JS capable web browser because they would instantly crash.  cwen@ of the OpenBSD project is pushing hard to fix many programs on PowerPC and she is doing awesome work; she got JS browsers to work through WebKit, but for some reason they are broken again, so I have to do without them.

w3m works very well; I learned about using bookmarks in it, which makes w3m a lot more usable for daily stuff.  I've been able to log in on most websites, but I faced some buttons not working because they triggered a JavaScript action.  I'm using it with built-in support for images, but it makes loading times longer and images are displayed at their real size, which can mess up the display; I think I'll disable the image support...

## Long live the smolnet

What is the smolnet?  It's a word for what is not on the Web, mostly content from Gopher and Gemini.  I like the word because it represents an alternative I've been contributing to for years, and it carries a lot of meaning.

Gopher and Gemini are way saner to browse: thanks to the standard concept of one item per line and no styling, visiting one page feels like all the others, and I don't have to look for the menu or wait for the page to render.  I've been recommended the AV-98 terminal browser, and it has a lovely feature named "tour": you can accumulate links from pages you visit, add them to the tour, and then visit the accumulated links one by one (like a first in, first out queue).  This avoids cumbersome tabs or adding bookmarks for later viewing and then forgetting about them.

## Working on OpenBSD ports

I'm working on updating the claws-mail package on OpenBSD; a new major release came out on the first day of the challenge, and unfortunately working on it is extremely painful on my old computer.  Compiling was long but only had to be done once; now I need to sort out library includes, and using the built-in check of the ports tree takes around 15 minutes, which is really not fun.

## I hate the old hardware

While I like this old laptop, I'm starting to hate it too.  The touchpad is extremely bad, moving in increments of 5px or so, which is extremely imprecise, especially for copy/pasting text or playing OpenTTD, not to mention again that it only has a left click button. (Update: this has been fixed thanks to anthk_ on IRC, using the command xinput set-prop /dev/wsmouse "Device Accel Constant Deceleration" 1.5)

The screen has very poor contrast; I can deal with a 1024x768 resolution and I love the 4:3 ratio, but the lack of contrast is really painful.

The mechanical hard drive is slow, which I can cope with, but it's also extremely noisy; I had forgotten the crispy noises of old HDDs.  It's so annoying to my ears...  And speaking of noise, I often limit the CPU speed of the computer to prevent the temperature rising too high and triggering the very loud little CPU fan.  It really is super loud and doesn't seem very effective; maybe the thermal paste is old...

A few months ago I wanted to replace the HDD, but I looked up the HDD replacement procedure for this laptop on the iFixit website and there are around 40 steps to follow, plus an Apple specific screwdriver; the procedure basically consists of removing every part of the laptop to access the HDD, which seems to be the most deeply buried component in the case.  This is insane; I'm used to working on Thinkpad laptops where, after removing 4 usual screws, you get access to everything, even my T470 internal battery is removable.

None of these annoyances are related to computing power; modern hardware has simply evolved.  They are quality of life improvements: they don't make the computer more or less capable, just more pleasant.  Silence, good and larger screens, and multi-finger touchpad gestures bring a more comfortable use of the computer.

## Taking my time

Because context switching costs a lot of time, I take my time to read content and appreciate it in one sitting instead of bookmarking it after reading a few lines and never opening the bookmark again.  I was quite happy to see I'm able to focus on something for more than 2 minutes, and I'm a bit relieved in that regard.

## Psychological effect

I'm quite sad to see that an older system forcing restrictions on me can improve my focus; this means I'm lacking self discipline and that I've wasted too much of my life doing useless context/task switching.  I don't want to rely on some sort of limitation to guard my sanity, I have to work on this on my own; maybe meditation could help me get my patience back.

# End of report of day 3

I'm meeting friendly people who share what I like, and I'm realizing my dependency on services and my lack of mental self-discipline.  The challenge is a lot harder than I expected, but if it were too easy it wouldn't be a challenge.  I already know I'll be happy to get back to my regular laptop, but I also think I'll change some habits.
  <pubDate>Mon, 12 Jul 2021 00:00:00 GMT</pubDate>
  <title>The Old Computer Challenge: day 1</title>
<pre>Report of my first day of the old computer challenge

# My setup

I'm using an Apple iBook G4 running the development version of the OpenBSD macppc operating system.  Its specs are: 1 G4 CPU at 1.3GHz, 512 MB of memory and an old 40 GB IDE HDD.  The screen is a 4:3 ratio with a 1024x768 resolution.  The touchpad has only one tap button doing left click, and it doesn't support multi-finger gestures (can't scroll, can't click).  The battery still holds about 1h40 of charge, which is very surprising.

About the software: I was using the ratpoison window manager but I had issues with two GUI applications, so I moved to cwm, but now I have other issues with cwm.  I may switch to Window Maker, or return to ratpoison, which worked very well except for those 2 programs, and switch to cwm when I need them...  I use xterm as my terminal emulator because "it works" and it doesn't use much memory; usually I use Sakura, but at 32 MB of memory per instance versus 4 MB for xterm, it's important to save memory now.  I usually run only one xterm with a tmux inside.

Same for the shell: I've been using fish since the beginning of 2021, but each instance of fish uses 9 MB, which is quite a lot because it means that every time I split my tmux and it spawns a new shell, I have an extra 9 MB used.  ksh uses only 1 MB per instance, which is 9x less than fish; however, for some operations I still switch to fish manually because it's a lot more comfortable thanks to its lovely completion.

# Tasks

The day's tasks and how I completed them.

## Searching on the internet

My favorite browser on such an old system is w3m with image support in the terminal; it's super fast and the rendering is very good.  I use https://html.duckduckgo.com/html/ as my search engine.

The only minor issue with w3m is that the key bindings are absolutely not straightforward, but you only need to know a few of them to use it and they are all listed in the help.

## Using mastodon

I spend a lot of time on Mastodon to communicate with people.  I usually use my web browser to access Mastodon, but I can't here because JavaScript-capable web browsers take all the memory and often crash, so I can only use them as a last resort.  I'm using the terminal user interface tootstream, but it has some limitations and my high traffic account doesn't match well with it.  I'm setting up brutaldon, which is a local program that gives access to Mastodon through an old style website; I already wrote about it on my blog if you want more information.

## Listening to music

Most of my files are FLAC encoded and extremely big; the computer can decode them fine, but this uses most of the CPU.  As OpenBSD doesn't support mounting samba shares and my music is on my NAS (in addition to locally on my usual computer), I have to copy the files locally before playing them.
One solution is to use musikcube on my NAS and my laptop with its server/client setup, which makes my NAS transcode the music on the fly for the laptop.  Unfortunately there is no package for musikcube yet; I started compiling it on my old laptop and I suppose it will take a few hours to complete.

## Reading emails

My favorite email client at the moment is claws-mail, and fortunately it runs perfectly fine on this old computer.  The lack of right click is sometimes a problem, but a clever workaround is to run "xdotool click 3" to tell X to do a right click where the cursor is; it's not ideal but I rarely need it, so it's ok.  The small screen is not ideal for dealing with huge piles of mail, but it works so far.

## IRC

My IRC setup is a tmux with as many catgirl (irc client) instances as networks I'm connected to, running on a remote server, so I just connect there with ssh and attach to the local tmux.  No problem here.

## Writing my blog

The process is exactly the same as usual.  I open a terminal to start my favorite text editor, create the file and write in it, then I run aspell to check for typos, then I run "make" to have my blog generator create the html/gopher/gemini versions and dispatch them to the various servers where they belong.
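As an illustration, a session for this workflow could look like the following (the file names are made up for the example, this isn't my actual setup):

```shell command
$ vi data/2021-07-12-day-3.md            # write the article
$ aspell check data/2021-07-12-day-3.md  # look for typos
$ make                                   # generate html/gopher/gemini and publish
```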

# How I feel

It's not that easy!  My reliance on web services hurts here; at least I found a website providing a weather forecast that works in w3m.

I easily focus on a task because switching to something else is painful (screen redrawing takes some time, the HDD is noisy).  I found a blog from a reader linking to other blogs and I enjoyed reading them all, while I'm pretty sure I would usually just make a bookmark in Firefox and switch to opening 10 tabs to see what's new on some websites.
  <pubDate>Sat, 10 Jul 2021 00:00:00 GMT</pubDate>
  <title>Obsolete in the IT crossfire</title>
<pre># Preamble

This is not an article about some tech but more me sharing feelings about my job, my passion and IT.  I first met a Linux system in the early 2000s and I didn't really understand what it was; I learned it the hard way by wiping Windows from the family computer (which was quite an issue), and since that time I've had a passion for computers.  I made a lot of mistakes that made me progress and learn more, and the more I learned, the more I saw the amount of knowledge I was missing.

Anyway, I finally reached a decent skill level, if I may say so, but I started early and so my skills are tied to all of that early Linux ecosystem.  Tools are evolving, Linux is morphing into something a bit more different every year, practices are evolving with the "Cloud".  I feel lost.

# Within the crossfire

I've met many people along my ride in open source and I think we can distinguish two schools (of course I know it's not that black and white): the people (like me) who enjoy the traditional ecosystem, and the other group, from the Cloud era.  It is quite easy to bash the opposite group and I feel sad when I witness such disputes.

I can't tell which group is right and which is wrong; there is certainly good and bad in both.  While I like to understand and control how my system works, the other group just cares about the produced service and not the underlying layers.  Nowadays, you want your service uptime to have as many nines as you can afford (99.999999%), at the cost of complex setups with services automatically respawning on failure, automatic routing within VMs and things like that.  This is not necessarily something that I enjoy; I think a good service should have a good foundation, and restarting the whole system upon failure seems wrong, although I can't deny it's effective for availability.

I know how a package manager works, but the other group will certainly prefer a tool that hides all of the package manager's complexity to get the job done.  Telling ansible to pop a new virtual machine on Amazon using Terraform with a full nginx-php-mysql stack installed is the new way to manage servers.  It seems a sane option because it gets the job done, but still, I can't find myself in there: where is the fun?  I can't get any fun out of this.  You can install the system and the services without ever seeing the installer of the OS you are deploying, which is amazing and insane at the same time.

I feel lost in this new era.  I used to manage dozens of systems (most bare-metal, without virtualization); I knew each of them, having bought and installed them myself, I knew which processes should be running and their usual CPU/memory usage, I had some acquaintance with all my systems.  I was not only the system administrator, I was the IT gardener.  I was working all the time to get the most out of our servers: optimizing network transfers, memory usage, backup scripts.  Nowadays you just pop a larger VM if you need more resources, and backups are just snapshots of the whole virtual disk; their lives are ephemeral and anonymous.

# To the future

I would like to understand that other group better, get more confident with their tools and logic, but at the same time I feel some aversion toward doing so because I feel I'm renouncing what I like, what I want, what made me who I am now.  I suppose the group I belong to will slowly fade away to give room to the new era; I want to be prepared to join that new era, but at the same time I don't want to abandon the people of my own group by accelerating the process.

I'm a bit lost in this crossfire.  Should a resistance organize against this?  I don't know, I wouldn't see the point.  The way we do computing is very young, we are still looking for our way.  Humanity has been constructing buildings for thousands of years and yet we still improve the way we build houses, bridges and roads; I guess the IT industry is following the same process, but as usual with computers, at an insane rate that humans can barely follow.

# Next

Please share with me by email or mastodon or even IRC if you feel something similar or if you got past that issue, I would be really interested to speak about this topic with other people.

# Readers reactions

=> https://ew.srht.site/en/2021/20210710-re-obsolete.html ew.srht.site reply

# After thoughts (UPDATE post publication)

I got many, many readers giving me their thoughts about this article and I'm really thankful for this.

Now I think it's important to realize that when you want to deploy systems at scale, you need to automate all your infrastructure, and then you lose that feeling with your servers.  However, it's still possible to have fun because we need tooling, proper tooling that works and brings a huge benefit.  We are still very young in regards to automation and a lot of improvements can be done.

We will still need all those gardeners enjoying their small area of computing, because all the cloud services rely on their work to create the duplicated systems, in quantity, that you can rely on.  They are making the first, most important bricks required to build the "Cloud"; without them you wouldn't have a working Alpine/CentOS/FreeBSD/etc... to deploy automatically.

Both can coexist, and both should know each other better, because they will have to live together to continue the fantastic computer journey; however, the first group will certainly be small in number compared to the other.

So, not everything is lost!  The Cloud industry can be avoided by self-hosting at home or in associative datacenters/colocations, and it's still possible to enjoy some parts of the great shift without giving up everything we believe in.  A certain balance can be found, I'm quite sure of it.
  <pubDate>Fri, 09 Jul 2021 00:00:00 GMT</pubDate>
  <title>OpenBSD: pkg_add performance analysis</title>
<pre># Introduction

The OpenBSD package manager pkg_add is known to be quite slow and to use a lot of bandwidth.  I'm trying to figure out easy ways to improve it, and I may have nailed something today by replacing the ftp(1) http client with curl.

# Testing protocol

On an OpenBSD -current amd64, I used the command "pkg_add -u -v | head -n 70", which checks for updates of the first 70 packages and then stops.  The packages tested are always the same, so the test is reproducible.

The traditional "ftp" will be tested, but also "curl" and "curl -N".

The bandwidth usage was accounted for using "pfctl -s labels", with a match rule matching the mirror IP, and the counters were reset after each test.
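For reference, the pf.conf accounting rule could look like this sketch (the mirror host here is a placeholder, not necessarily the one I used):

```shell command
# pf.conf: count bytes exchanged with the mirror under a label
mirror = "ftp.fr.openbsd.org"
match out proto tcp to $mirror label "pkg_mirror"
```

Then "pfctl -s labels" prints the per-label counters and "pfctl -z" clears the statistics between tests.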

# What happens when pkg_add runs

Here is a quick intro to what happens in the code when you run pkg_add -u on http://

* pkg_add downloads the package list from the mirror (which could be considered an index.html file) weighing ~2.5 MB; if you add two packages separately, the index is downloaded twice.
* pkg_add runs /usr/bin/ftp on the first package to upgrade to read its first bytes, pipes this to gunzip (done in perl by pkg_add) and then to signify to check the package signature.  This signature is the list of dependencies and their versions, which pkg_add uses to know if the package requires an update; the signify signature of the whole package is stored in the gzip header and checked if the whole package is downloaded (there are 2 signatures, signify and the package dependencies, don't be misled!).
* if everything is fine, package is downloaded and the old one is replaced.
* if there is no need to update, package is skipped.
* each new package means a new ftp(1) connection and new pipes to set up

Using the FETCH_CMD variable it's possible to tell pkg_add to use another command than /usr/bin/ftp, as long as it understands the "-o -" parameter and also "-S session" for https:// connections.  Because curl doesn't support the "-S session=..." parameter, I used a shell wrapper that discards this parameter.
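The wrapper isn't reproduced in this article; a minimal sketch could look like this (assuming the session option arrives as "-S" or "session=..." style arguments, and that none of the remaining arguments contain spaces):

```shell command
#!/bin/sh
# drop the ftp(1)-only session arguments, forward the rest to curl
args=""
for arg in "$@"; do
    case "$arg" in
        -S|session=*) ;;            # discard the unsupported option
        *) args="$args $arg" ;;     # keep everything else
    esac
done
exec /usr/local/bin/curl $args
```

Saving this as an executable script and pointing FETCH_CMD at it is enough for pkg_add to use curl transparently.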

# Raw results

I measured the whole execution time and the total bytes downloaded for each combination.  I didn't show the whole results, but I did the tests multiple times and the standard deviation is close to 0, meaning each test gave the same result at every run.

operation               time to run (s)  data transferred (MB)
---------               ---------------  ---------------------
ftp http://             39.01            26
curl -N http://         28.74            12
curl http://            31.76            14
ftp https://            76.55            26
curl -N https://        55.62            15
curl https://           54.51            15

=> static/pkg_add_bench.png Charts with results

# Analysis

There are a few surprising facts from the results.

* ftp(1) doesn't take the same time for http and https, while it is supposed to reuse the same TLS session to avoid a handshake for every package.
* ftp(1) bandwidth usage is drastically higher than curl's; the time difference seems proportional to the bandwidth difference.
* curl -N and curl perform exactly the same using https.

# Conclusion

Using http:// is way faster than https://.  The risk is about privacy: in case of a man in the middle, the downloaded packages will be known, but the signify signature will prevent any maliciously modified package from being installed.  Using 'FETCH_CMD="/usr/local/bin/curl -L -s -q -N"' gave the best results.

However I can't explain yet the very different behaviors between ftp and curl or between http and https.

# Extra: set a download speed limit to pkg_add operations

By using curl as FETCH_CMD you can use the "--limit-rate 900k" parameter to limit the transfer speed to the given rate.
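For example, a one-shot invocation could look like this (the 900k value is just an illustration, pick a rate matching your connection):

```shell command
# update packages with downloads capped at 900 kB/s
env FETCH_CMD="/usr/local/bin/curl -L -s -q -N --limit-rate 900k" pkg_add -u
```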
  <pubDate>Thu, 08 Jul 2021 00:00:00 GMT</pubDate>
  <title>The Old Computer Challenge</title>
<pre># Introduction

For some time I've wanted to start a personal challenge; after some thought I want to share it with you and invite you to join me on this journey.

The point of the challenge is to replace your daily computer with a very old computer and share your feelings for the week.

# The challenge

Here are the *rules* of the challenge.  There is no prize to win, but I'm convinced we will have feelings to share along the week and that it will change the way we interact with computers.

* 1 CPU maximum, whatever the model.  This means only 1 CPU/core/thread.  Some BIOSes allow disabling extra cores.
* 512 MB of memory (if you have more it's not a big deal, if you want to reduce your ram create a tmpfs and put a big file in it)
* using USB dongles is allowed (storage, wifi, Bluetooth whatever)
* only for your personal computer, during work time use your usual stuff
* relying on services hosted remotely is allowed (VNC, file sharing, whatever help you)
* using a smartphone to replace your computer may work, please share if you move habits to your smartphone during the challenge
* if you absolutely need your regular computer for something really important please use it.  The goal is to have fun but not make your week a nightmare.
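For the memory rule, the tmpfs trick could be done like this on Linux (sizes are an example, assuming a 2 GB machine; adjust so that about 512 MB remain free):

```shell command
# lock ~1.5 GB of RAM away in a tmpfs filled with zeroes
mkdir -p /mnt/ballast
mount -t tmpfs -o size=1536m tmpfs /mnt/ballast
dd if=/dev/zero of=/mnt/ballast/fill bs=1M count=1536
```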

If you don't have an old computer, don't worry!  You can still use your regular computer and create a virtual machine with low specs; you would still be more comfortable with a good screen, fast disk access and a not too old CPU, but you can participate.

# Date

The challenge will take place from the 10th of July morning until the 17th of July morning.

# Social media

Because I want this event to be a nice moment to share with others, you can contact me so I can add your blog (including gopher/gemini space) to the future list below.

You can also join #old-computer-challenge on libera.chat IRC server.

=> https://freeshell.de/shvehlav/blog/21-07-08/ prahou's blog, running a T42 with OpenBSD 6.9 i386 with hostname brouk
=> https://kernelpanic.life/misc/old-computer-challenge-and-why-i-need-it.html Joe's blog about the challenge and why they need it
=> https://dataswamp.org/~solene/ Solene (this blog) running an iBook G4 with OpenBSD -current macppc with hostname jeefour
=> gopher://box.matto.nl/0/one-week-with-freebsd-13-on-an-acer-aspire-one-zg5-part-one.txt (gopher link) matto's report using FreeBSD 13 on an Acer aspire one
=> https://celehner.com/2021/07/oldcomp/day1.txt cel's blog using Void Linux PPC on an Apple Powerbook G4
=> https://www.k58.uk/old-computer.html Keith Burnett's blog using a T42 with an emphasis on using GUI software to see how it goes
=> https://kuchikuu.xyz/old-computer-challenge.html Kuchikuu's blog using a T60 running Debian (but specs out of the challenge)
=> https://ohio.araw.xyz/old-computer/ Ohio Quilbio Olarte's blog using an MSI Wind netbook with OpenBSD
=> gemini://carcosa.net/journal/20210713-old-computer-challenge.gmi carcosa's blog using an ASUS eeePC netbook with Fedora i386 downgraded with kernel command line
=> http://tekk.in/oldcomputer.html Tekk's website, using a Dell Latitude D400 (2003) running Slackware 14.2

# My setup

I use an old iBook G4 laptop (the one I already use "offline"); it has a single PowerPC G4 1.3 GHz CPU, 512 MB of RAM and a slow 40GB HDD.  The wifi is broken so I would have to use a wifi dongle, but I will certainly rely on ethernet.  The screen has a 1024x768 resolution but the colors are pretty bad.

In regards to software it runs OpenBSD 6.9 with /home/ encrypted which makes performance worse.  I use ratpoison as the window manager because it saves screen space and requires little memory and CPU to run and is entirely keyboard driven, that laptop has only a left click touchpad button :).

I love that laptop, and initially I wanted to see how far I could go using it as my daily driver!

=> static/laptop-challenge.jpg Picture of the laptop
=> static/laptop-challenge-screenshot.png Screenshot of the laptop
  <pubDate>Wed, 07 Jul 2021 00:00:00 GMT</pubDate>
  <title>Track changes in /etc with etckeeper</title>
<pre># Introduction

Today I will introduce you to etckeeper, a simple program that tracks changes in your /etc/ directory in a version control system (git, mercurial, darcs, bazaar...).

=> https://etckeeper.branchable.com/ etckeeper project website

# Installation

Your system almost certainly packages it; you will then have to run "etckeeper init" in /etc/ the first time.  A cron job or systemd timer should be set up by your package manager to automatically run etckeeper every day.

In some cases, etckeeper can integrate with package manager to automatically run after a package installation.

# Benefits

While it can easily be replicated by running "git init" in /etc/ and "git commit" when you make changes, etckeeper does it automatically as a safety net, because it's easy to forget to commit when we make changes.  It also integrates with other system tools and can use hooks, like sending an email when a change is found.
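As a sketch, a first manual session could look like this (assuming git as the backing VCS):

```shell command
# record the current state of /etc
etckeeper init
etckeeper commit "initial import"
# later, inspect pending changes through the underlying VCS
etckeeper vcs status
etckeeper vcs diff
```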

It's really a convenience tool, but given it's very light and can be useful, I think it's a must for most sysadmins.
  <pubDate>Tue, 06 Jul 2021 00:00:00 GMT</pubDate>
  <title>Gentoo cheatsheet</title>
<pre># Introduction

This is a simple cheatsheet for managing my Gentoo systems.  Gentoo is a source-based Linux distribution, meaning everything installed on the computer is compiled locally.

=> https://www.gentoo.org/ Gentoo project website

# Upgrade system

I use the following commands to update my system: they download the latest portage snapshot and then rebuild @world (the whole set of manually installed packages).

```shell command
emerge-webrsync 2>&1 | grep "The current local"
if [ $? -eq 0 ]
then
    emerge -auDv --with-bdeps=y --changed-use --newuse @world
fi
```

# Use ccache

As you may rebuild the same program many times (especially on a new install), I highly recommend using ccache to reuse previously built objects; it will reduce build duration by around 80% when you change a USE flag.

It's quite easy: install the ccache package, add 'FEATURES="ccache"' to your make.conf, run 'install -d -o root -g portage -m 775 /var/cache/ccache' and it should be working (you should see files appearing in the ccache directory).
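Put together, the setup could look like this sketch (package name and cache path as commonly documented; double-check them against the Gentoo wiki for your setup):

```shell command
# install ccache and enable the portage feature
emerge -av dev-util/ccache
echo 'FEATURES="ccache"' >> /etc/portage/make.conf
# create the cache directory, writable by the portage group
install -d -o root -g portage -m 775 /var/cache/ccache
```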

=> https://wiki.gentoo.org/wiki/Ccache Gentoo wiki about ccache

# Use genlop to view / calculate build time from past builds

Genlop can tell you how much time a build will need, or how much time remains, based on information from previous builds.  I find it quite fun to see how long an upgrade will take.

=> https://wiki.gentoo.org/wiki/Genlop Gentoo wiki about Genlop

## View compilation time

From the package genlop

```shell command
# genlop -c

 Currently merging 1 out of 1

 * app-editors/vim-8.2.0814-r100 

       current merge time: 4 seconds.
       ETA: 1 minute and 5 seconds.
```

## Simulate compilation

Add -p to emerge command for "pretend" and pipe it to genlop -p like this

```shell command
# emerge -av -p kakoune | genlop -p
These are the pretended packages: (this may take a while; wait...)

[ebuild   R   ~] app-editors/kakoune-2020.01.16_p20200601::gentoo  0 KiB

Estimated update time: 1 minute.
```

# Using gentoolkit

The gentoolkit package provides a few commands to find information about packages.

=> https://wiki.gentoo.org/wiki/Gentoolkit Gentoo wiki page about Gentoolkit

## Find a package

You can use "equery" from the gentoolkit package like this: "equery l -p '*package name*'".  Globbing with * is mandatory if you are not looking for an exact match.

Example of usage:

```shell command
# equery l -p '*firefox*'
 * Searching for *firefox* ...
[-P-] [  ] www-client/firefox-78.11.0:0/esr78
[-P-] [ ~] www-client/firefox-89.0:0/89
[-P-] [ ~] www-client/firefox-89.0.1:0/89
[-P-] [ ~] www-client/firefox-89.0.2:0/89
[-P-] [  ] www-client/firefox-bin-78.11.0:0/esr78
[-P-] [  ] www-client/firefox-bin-89.0:0/89
[-P-] [  ] www-client/firefox-bin-89.0.1:0/89
[IP-] [  ] www-client/firefox-bin-89.0.2:0/89
```

## Get the package name providing a file

Use "equery b /path/to/file" like this

```shell command
# equery b /usr/bin/2to3
 * Searching for /usr/bin/2to3 ... 
dev-lang/python-exec-2.4.6-r4 (/usr/lib/python-exec/python-exec2)
dev-lang/python-exec-2.4.6-r4 (/usr/bin/2to3 -> ../lib/python-exec/python-exec2)
```
  <pubDate>Mon, 05 Jul 2021 00:00:00 GMT</pubDate>
  <title>Listing every system I used</title>
<pre># Introduction

Nobody asked for it, but I wanted to share the list of the systems I've used in my life (on a computer) and a few words about each.  This is obviously not very accurate, but I'm happy to write it down somewhere.

You may wonder why I made some of these choices in the past: I was young and inexperienced for many of these experiments, and a nice-looking distribution was very appealing to me.

One has to know (or remember) that 10 years ago, Linux distributions were very different from one another, and they have become more and more standardized over time.  To the point that I no longer consider distro hopping (switching regularly from one distribution to another) interesting, because most distributions are derivatives of a main one and most will have systemd and the same defaults.

Disclaimer: my opinions about each system are personal and driven by feelings and memories; they may be totally inaccurate (outdated or damaged memories) or even wrong (misunderstanding, bad luck).  If I had issues with a system, this doesn't mean it is BAD and that you shouldn't use it; I recommend forming your own opinion about them.

# The list (alphabetically)

This includes Linux distributions but also BSD or Solaris derived system.

## Alpine

* Duration: a few hours
* Role: workstation
* Opinion: interesting but lack of documentation
* Date of use: June 2021

I wanted to use it on my workstation, but the documentation for full disk encryption, and the documentation in general, was outdated and inaccurate, so I gave up.

However, the extreme minimalism is interesting, and without full disk encryption it worked fine.  It was surprising to see how packages were split into such small parts; I understand why it's used to build containers.

I really want to like it, maybe in a few years it will be mature enough.

## BackTrack

* Duration: occasionally
* Role: playing with wifi devices
* Opinion: useful
* Date of use: occasionally between 2006 and 2012

Worked well with a wifi dongle supporting monitor mode.

## CentOS

* Duration: not much
* Role: local server
* Opinion: old packages
* Date of use: 2014

Nothing much to say; I had to use it temporarily to try a program we were delivering to a client using Red Hat.

## Crux

* Duration: a few months maybe
* Role: workstation
* Opinion: it was blazing fast to install
* Date of use: around 2009

I don't remember much about it to be honest.

## Debian

* Duration: multiple years
* Role: workstation (at least 1 year accumulated) and servers
* Opinion: I don't like it
* Date of use: from 2006 to now

It's not really possible to do Linux without having to deal with Debian some day.  It works fine once installed, but I always had a painful time with upgrades.  As for using it as a workstation, it was at the time of GNOME 2, and software was often already obsolete, so I was using testing.

## DragonflyBSD

* Duration: months
* Role: server and workstation
* Opinion: interesting
* Date of use: ~2009-2011

The system worked quite well; I had hardware compatibility issues at that time, but it worked well on my laptop.  HAMMER was stable when I used it on my server and I really enjoyed working with this file system; the server was my NAS and Mumble server at that time and it never failed me.  I really think this makes a good alternative to ZFS.

## Edubuntu

* Duration: months
* Role: laptop
* Opinion: shame
* Date of use: 2006

I was trying to be a good student at that time and Edubuntu seemed interesting; I didn't understand that it was just an Ubuntu with a few packages pre-installed.  It was installed on my very first laptop (a very crappy one, but eh, I loved it).

## Elementary

* Duration: months
* Role: laptop
* Opinion: good
* Date of use: 2019-now

I have an old multimedia laptop (the case is falling apart) that runs Elementary OS, mainly for their own desktop environment, Pantheon, which I really like.  The distribution itself is solid and well done; it never failed me even after major upgrades.  I could do everything using the GUI.  I would recommend it to a Linux beginner or someone enjoying GUI tools.

## EndeavourOS

* Duration: months
* Role: testing stuff
* Opinion: good project
* Date of use: 2021

I've never been into Arch, but I had my first contact with it through EndeavourOS, a distribution based on Arch Linux that proposes an installer with many options to install Arch Linux, plus a few helper tools to manage your system.  This is clearly an Arch Linux and they don't hide it; they just facilitate the use and administration of the system.  I'm totally capable of installing Arch, but I have to admit that if I can save a lot of time by installing it with full disk encryption using a GUI, I'm all for it.  As an Arch Linux noob, the little "welcome" GUI provided by EndeavourOS was very useful to learn how to use the package manager and a few other things.  I'd totally recommend it over Arch Linux because it doesn't denature Arch while still providing useful additions.

## Fedora

* Duration: months
* Role: workstation
* Opinion: hazardous
* Date of use: 2006 and around 2014

I started with Fedora Core 6 in 2006.  At that time it was amazing: they had lots of new and up-to-date software, the alternatives being Debian or Mandrake (with Ubuntu not being very popular yet), and I used it for a long time.  I used it again later, but I stumbled on many quality issues and I don't have good memories of it.

## FreeBSD

* Duration: years
* Role: workstation, server
* Opinion: pretty good
* Date of use: 2009 to 2020

This is the first BSD I tried.  I had heard a lot about it, so I downloaded the 3 or 5 CDs of the release over my 16 kB/s DSL line, burned the CDs and installed it on my computer.  The installer offered to install packages at that time, but it was doing it in a crazy way: you had to switch CDs a lot between the sets, because sometimes a package was on CD 2, then CD 3, and CD 1, and CD 3, and CD 2...  For some reason, I destroyed my system a few times by mixing ports and packages, which ended up dooming the system.  I learned a lot from my destroy and retry method.

For my first job (which I held for 10 years) I switched all the Debian servers to FreeBSD servers and started playing with jails to provide security for web servers.  FreeBSD never let me down on servers.  The biggest pain I had with FreeBSD was freebsd-update updating RCS tags, forcing me to sometimes merge a hundred files manually...  To the point that I preferred reinstalling my servers (with Salt Stack) rather than upgrading.

On my workstation it always worked well.  I regret that package quality can be inconsistent sometimes, but I'm also part of the problem because I don't think I ever reported such issues.

## Frugalware

* Duration: weeks
* Role: workstation
* Opinion: I can't remember
* Date of use: 2006?

I remember I've run a computer with that but that's all...

## Gentoo

* Duration: months
* Role: workstation
* Opinion: i love it
* Date of use: 2005, 2017, 2020 to now

My first encounter with Gentoo was during my early Linux discovery.  I remember following the instructions and compiling X for like A DAY to get a weird result: the resolution was totally wrong and it was in grayscale, so I gave up.

I tried it again in 2017 and successfully installed it with full disk encryption and used it as my work laptop; I don't remember breaking it once.  The only issue was waiting for compilation when I needed a program that wasn't installed.

I'm back on Gentoo regularly for one laptop that requires many tweaks to work correctly and I also use it as my main Linux at home.

## gNewSense

* Duration: months
* Role: workstation
* Opinion: it worked
* Date of use: 2006

It was my first encounter with a 100% free system, I remember it wasn't able to play MP3 files :)  It was an Ubuntu derivative and the community was friendly.  I see the project is abandoned now.

## Guix

* Duration: months
* Role: workstation
* Opinion: interesting ideas but raw
* Date of use: 2016 and 2021

I like Guix a lot, it has very good ideas, and the consistent use of the Scheme language to define the packages and write the tools is something I enjoy a lot.  However, I found the system doesn't feel great for desktop usage with a GUI; it appears quite raw and required many workarounds to work correctly.

Note that Guix is a distribution but also a package manager that can be installed on any Linux distribution alongside the original package manager; in that case we refer to it as Foreign Guix.

## Mandrake

* Duration: weeks?
* Role: workstation
* Opinion: one of my first
* Date of use: 2004 or something

This was one of my first distributions and it came with a graphical installer!  I remember packages had to be installed with the command "urpmi", but that's all.  I think I didn't have Internet access with my USB modem, so I was limited to packages from the CDs I burned.

## NetBSD

* Duration: years
* Role: workstation and server
* Opinion: good
* Date of use: 2009 to 2015

I first used NetBSD on a laptop (in 2009) but it was not very stable and programs were core dumping a lot; I also found the software in pkgsrc not really up to date.  However, I used it for years as my first email server and I never had a single issue.

I haven't tried it seriously on a workstation recently, but from what I've heard it has become a good choice for a daily driver.

## NixOS

* Duration: years
* Role: workstation and server
* Opinion: awesome but different
* Date of use: 2016 to now

I have used NixOS daily on my professional workstation since 2020 and it has never failed me, even when I'm on the development channel.  I already wrote about it: it's an amazing piece of work, but it is radically different from other Linux distributions or Unix-like systems.

I'm using it on my NAS and it has been absolutely flawless since I installed it.  But I am not sure how easy or hard it would be to run a full-featured mail server on it (my best example of a complex setup).

## NuTyX

* Duration: months
* Role: workstation
* Opinion: it worked
* Date of use: 2010

I don't remember much about this distribution, but I remember the awesome community and the creator of the distro, who is a very helpful and committed person.  This is a distribution made from scratch that works very well and is still alive and dynamic, kudos to the team.

## OpenBSD

* Duration: years
* Role: workstation and server
* Opinion: boring because it just works
* Date of use: 2015 to now

I already wrote a few times about why I like OpenBSD, so I will make it short: it just works and it works fine.  Hardware compatibility can be limited, but when the hardware is supported, everything works out of the box without any tweaks.

I've been using it daily for years now; it started when my NetBSD mail server had to be replaced by a newer machine at Online, so I chose to try OpenBSD.  I have been part of the team since 2018, and apart from occasional ports changes my big contribution was to set up the infrastructure to build binary packages for ports changes in the stable branch.

I wish performance were better though.

## OpenIndiana

* Duration: weeks
* Role: workstation
* Opinion: sadness but hope?
* Date of use: 2019

I was a huge fan of OpenSolaris but Oracle killed it.  OpenIndiana is the resurrection of the open source Solaris, but it has lost contributors and the community isn't as dynamic as it used to be.  Hardware support is lagging; however, the system performs very well and all the Solaris features are still there if you know what to do with them.

I really hope this project gets back on track and becomes as dynamic as it used to be!

## OpenSolaris

* Duration: years
* Role: workstation
* Opinion: sadness
* Date of use: 2009-2010

I loved OpenSolaris, it was such an amazing system; every new release had a ton of improvements (package updates, features, hardware support) and I really thought it would compete with Linux at this rate.  It was possible to get free CDs by snail mail and they looked amazing.

It was the main system on my big computer (I built it in 2007 with two Xeon E5420 CPUs, 32 GB of memory and 6x 500 GB SATA drives!!!), and it was totally amazing to play with virtualization on it.  The desktop was super fast, and using Wine I was able to play Windows video games.

## OpenSuse

* Duration: months
* Role: pro workstation
* Opinion: meh
* Date of use: something like 2015

I don't have strong memories of OpenSuse.  I think it worked well on my workstation at first, but after some time the package manager went mad and did weird things like removing half the packages to reinstall them...  I never wanted to give it another try after this few-months experiment.

## Paldo

* Duration: weeks? months?
* Role: workstation
* Opinion: the install was fast
* Date of use: 2008?

I remember having played with it and contributed a bit to packages over IRC; all I remember is the kind community and that it was super fast to install.  It's a distribution made from scratch and it's still alive and updated, bravo!


## PC-BSD

* Duration: months
* Role: workstation
* Opinion: many attempts, too bad
* Date of use: 2005-2017

PC-BSD (and more recently TrueOS) was an attempt to bring FreeBSD to everyone.  Each release was either good or bad; it was possible to use FreeBSD packages but also "pbi" packages that looked like Mac OS installers (a huge file you had to double-click to install).  I definitely liked it because it was my first real success with FreeBSD, but sometimes the tools it proposed were half-baked or badly documented.  The project is dead now.

## PCLinuxOS

* Duration: weeks?
* Role: laptop
* Opinion: it worked
* Date of use: around 2008?

I remember installing it was working fine and I liked it.

## Pop!_OS

* Duration: months
* Role: gaming computer
* Opinion: works!!
* Date of use: 2020-2021

I use this distribution on my gaming computer and I have to admit it can easily replace Windows! :)  Upgrades are painless and everything works out of the box (including the Nvidia driver).

## Scientific Linux

* Duration: months
* Role: workstation
* Opinion: worked well
* Date of use: ??

I remember using Scientific Linux as my main distribution at work for some time; it worked well and reminded me of my old Fedora Core.

## Skywave

* Duration: occasionally
* Role: laptop for listening to radio waves
* Opinion: a must
* Date of use: 2018-now

This distribution is really focused on providing tools for using radio hardware.  I bought a simple and cheap RTL-SDR USB device and was able to use it with the pre-installed software.  Really a plug-and-play experience.  It works as a live CD so you don't even need to install it to benefit from its power.

## Slackware

* Duration: years
* Role: workstation and server
* Opinion: Still Loving You....
* Date of use: multiple times since 2002

It is very hard for me to explain how much and how deeply I love Slackware Linux.  I just love it.  As the date above says, I started with it in 2002; it was my very first encounter with Linux.  A friend bought a Linux magazine with Slackware CDs and explanations about the installation; it worked and many programs were available to play with! (I also erased Windows on the family computer because I had no idea what I was doing.)

Since that time, I have used Slackware multiple times, and I think it's the system that survived the longest every time it got installed; every new Slackware release was a day to celebrate for me.

I can't explain why I like it so much; I guess it's because you get to deeply know how your system works over time.  Packages didn't manage dependencies at that time and it was a real pain to get new programs, but it has improved a lot now.

I really can't wait for Slackware 15.0 to be out!

## Solaris

* Duration: months
* Role: workstation
* Opinion: fine but not open source
* Date of use: 2008

I remember the first time I heard that Solaris was a system I could install on my machine.  I downloaded the two parts of the ISO (which had to be joined using cat) and started installing it on my laptop, then went to school with the laptop on battery, the installation continuing on the way (it was very long) and finishing in class (I was at a computer science university, so it was fine :P ).

I discovered a whole new world with it; I even used it on a netbook to write a Java SCTP university project.  It was also my introduction to ZFS, a brand new filesystem with many features.

## Solus

* Duration: days
* Role: workstation
* Opinion: good job team
* Date of use: 2020

I didn't try Solus much because I'm quite busy nowadays, but it's a good alternative to the major distributions: it's totally independent from the other main projects and they even have their own package manager.  My small experiment went well and it felt like quality work; it follows a rolling release model, but packages are curated for quality before being pushed to the mass of users.

I wish them to live long and prosper.

## Ubuntu

* Duration: months
* Role: workstation and server
* Opinion: it works fine
* Date of use: 2006 to 2014

I used Ubuntu on laptops a lot, and I recommended it to many people who wanted to try Linux.  Whatever we say, they helped get Linux known and brought it to the masses.  Some choices like the non-free integration are definitely not great though.  I started with Dapper Drake (Ubuntu 6.06!) on an old Pentium 1 server I kept under the dresser in my student room.

I used it daily a few times, mainly when the default window manager was Unity.  For some reason, I loved Unity; it's really a pity the project is now abandoned and lost, as it worked very well for me and looked nice.

I don't want to use it anymore because it became very complex internally; for instance, trying to understand how domain names are resolved is quite complicated...

## Void

* Duration: days?
* Role: workstation
* Opinion: interesting distribution, not enough time to try
* Date of use: 2018

Void is an interesting distribution.  I used it a little on a netbook with their musl libc edition and ran into many issues, both at install time and in use.  The glibc version worked a lot better, but I can't remember why it didn't catch me more than that.

I wish I had more time to try it seriously.  I recommend everyone give it a try.

## Windows

* Duration: years
* Role: gaming computer
* Opinion: it works
* Date of use: 1995 to now

My first encounter with a computer was with Windows 3.11 on a 486DX machine, I think I was 6.  Since then I have always had a Windows computer, at first because I didn't know there were alternatives, and then because I always had it as a hard requirement for some hardware, software or video games.  Now my gaming computer runs Windows and is dedicated to games only; I do not trust this system enough to do anything else.  I'm slowly trying to move away from it and the efforts are giving results: more and more games work fine on Linux.

## Zenwalk

* Duration: months
* Role: workstation
* Opinion: it's like slackware but lighter
* Date of use: 2009?

I don't remember much; it was like Slackware but without the giant DVD install that requires 15 GB of space.  It used Xfce by default and looked nice.
  <pubDate>Fri, 02 Jul 2021 00:00:00 GMT</pubDate>
  <title>How to choose a communication protocol</title>
<pre># Introduction

As a human being I have to communicate with other people, and we now have so many ways to speak to each other that it's hard to pick one.  This is a simple list of communication protocols and why you would use them.  This is an opinionated text.

# Protocols

We rely on protocols to speak to each other.  The natural way would be spoken language using our vocal cords, but we could imagine other ways, like emitting sounds in Morse code.  With computers we need to define how to send a message from A to B, and there are many, many possibilities for such a simple task.

* 1. The protocol can be open source, meaning anyone can create a client or a server for it.
* 2. The protocol can be centralized, federated or peer-to-peer.  In a centralized situation, there is only one service provider and people must be on the same server to communicate.  In a federated or peer-to-peer architecture, people can join the communication network with their own infrastructure, without relying on a service provider (federated and peer-to-peer differ in implementation but their end results are very close).
* 3. The protocol can provide many features in addition to contacting someone.

## IRC

The simplest communication protocol and an old one.  It's open source and you can easily host your own server.  It works very well and doesn't require a lot of resources (bandwidth, CPU, memory) to run, although it is quite limited in features.

* you need to stay connected to know what happens
* you can't stay connected if you don't keep a session open 24/7
* multi-device use (computer / phone for instance) is not possible without an extra setup (a bouncer or a tmux session)

I like to use it to communicate with many people on a given topic; I find channels a good equivalent of forums.  IRC has a strong culture and limitations, but I love it.

## XMPP (ex Jabber)

Behind this acronym stands a long-lived protocol that supports many features and has proven to work; unfortunately, XMPP clients never really shone for their user interfaces.  Recently the protocol has seen a good adoption rate: clients are getting better, and servers are easy to deploy and don't draw many resources (I/O, CPU, memory).

XMPP uses a federation model: anyone can host their own server and communicate with people from other servers.  You can share files, create rooms and send private messages; audio and video are supported depending on the client.  It's also able to bridge to IRC or some other protocols using the right software.  Multiple options for end-to-end encryption are available, but the most recent one, named OMEMO, is definitely the best choice.

The free/open source Android client « Conversations » is really good; on a computer you can use Gajim or Dino for a nice graphical interface, and finally profanity or poezio as console clients.

=> https://en.wikipedia.org/wiki/XMPP XMPP on Wikipedia

## Matrix

Matrix is a recent protocol in this list, although it has seen an incredible adoption rate, and since the recent Freenode drama many projects switched to their own Matrix room.  It's fully open source for clients and servers, and it is federated, so anyone can be independent with their own server.

As it's young, Matrix has only one client proposing all the features, Element: a very resource-hungry web program (used as a web page, or run "natively" using Electron, a framework to turn websites into desktop applications), and a Python server named Synapse that requires a lot of CPU to work correctly.

In regards to features, Matrix proposes rooms, direct chat, well-implemented end-to-end encryption, file sharing, audio/video, etc.

While it's a good alternative to XMPP, I prefer XMPP because of the poor choice of clients and servers in Matrix at the moment.  Hopefully it will get better in the future.

=> https://en.wikipedia.org/wiki/Matrix_(protocol) Matrix protocol on Wikipedia

## Email

This one is well known: most people have an email address, and it may have been your first touch with the Internet.  Email works well, it's federated, and anyone can host an email server, although it's not an easy task.

Emails are not instant, but with performant servers it can take only a few seconds for an email to be sent and delivered.  They can support end-to-end encryption using GPG, which is not always easy to use.  You have a huge choice of email clients, and most of them allow an incredible amount of configuration.

I really like emails, it's a very practical way to communicate ideas or thoughts to someone.
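
To give an idea of what GPG end-to-end encryption involves under the hood, here is a minimal sketch with a throwaway key; "alice@example.com" is a placeholder address, and a real mail client automates all of these steps for you:

```shell commands
# use a temporary keyring so the real one is untouched
export GNUPGHOME=/tmp/gpg-demo
mkdir -p "$GNUPGHOME" && chmod 700 "$GNUPGHOME"

# generate a throwaway key without a passphrase (placeholder address)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "alice@example.com" default default never

# encrypt a message to that key, as a mail client would
echo "hello" | gpg --armor --encrypt --trust-model always \
    -r alice@example.com > /tmp/message.asc

# the recipient decrypts it with their private key
gpg --quiet --decrypt /tmp/message.asc
```

The armored /tmp/message.asc is what actually travels in the email body: anyone relaying the mail only sees an opaque PGP blob.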

### Delta Chat

I found a nice program named Delta Chat that is built on top of email to communicate "instantly" with your friends who also use Delta Chat; messages are automatically encrypted.

The client user interface looks like an instant messaging program, but it uses emails to transport the messages.  While the program is open source and free, it requires Electron on the desktop, and I didn't find a way to participate in an encrypted thread from a regular email client (even with the corresponding GPG key).  I really find this software practical because your recipients don't need to create a new account: it reuses an existing email address.  You can also use it without encryption to write to someone who will reply using their own mail client while you use Delta Chat.

=> https://delta.chat/en/ Delta Chat website

## Telegram

Open source client but proprietary server; I don't recommend anyone use a system that locks you to its server.  You have to rely on a company, and you empower them by using their service.

=> https://en.wikipedia.org/wiki/Telegram_(software) Telegram on Wikipedia

## Signal

Open source client and server, but the main server where everybody is doesn't allow federation.  So far, hosting your own server doesn't seem to be a viable solution.  I don't recommend using it because you rely on a company offering a service.

=> https://en.wikipedia.org/wiki/Signal_(software) Signal on Wikipedia

## WhatsApp

Proprietary software and service, please don't use it.

# Conclusion

I use IRC, email and XMPP daily to communicate with friends, family and crews from open source projects, or to meet new people sharing my interests.  My main requirements for private messages are end-to-end encryption and independence, so I absolutely require federated protocols.
  <pubDate>Fri, 25 Jun 2021 00:00:00 GMT</pubDate>
  <title>How to use the Open Graph Protocol for your website</title>
<pre># Introduction

Today I made a small change to my blog, I added some more HTML metadata for the Open Graph protocol.

Basically, when you share a URL on most social networks or instant messengers, if some Open Graph headers are present, the software will display the website name, the page title, a logo and some other information.  Without them, only the link is displayed.

# Implementation

You need to add a few tags to your HTML pages in the "head" tag.

    <meta property="og:site_name" content="Solene's Percent %" />
    <meta property="og:title"     content="How to cook without burning your eyebrows" />
    <meta property="og:image"     content="static/my-super-pony-logo.png" />
    <meta property="og:url"       content="https://dataswamp.org/~solene/some-url.html" />
    <meta property="og:type"      content="website" />
    <meta property="og:locale"    content="en_EN" />

There are more metadata fields than these, but they were enough for my blog.
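
To check which Open Graph tags a page ends up exposing, a simple grep is enough.  A sketch (the sample page below is made up; point grep at a page you saved instead):

```shell commands
# a tiny sample page, standing in for a page you want to inspect
cat <<'EOF' > /tmp/og-sample.html
<head>
  <meta property="og:site_name" content="Example" />
  <meta property="og:title"     content="Hello" />
</head>
EOF

# list the og: properties the page declares
grep -o 'property="og:[a-z_]*"' /tmp/og-sample.html
```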

=> https://ogp.me/ Open Graph Protocol website
  <pubDate>Mon, 21 Jun 2021 00:00:00 GMT</pubDate>
  <title>Using the I2P network with OpenBSD and NixOS</title>
<pre># Introduction

In this text I will explain what the I2P network is, how to provide a service over I2P on OpenBSD, and how to connect to an I2P service from NixOS.

# I2P

This acronym stands for Invisible Internet Project; it is a network over the network (the Internet).  It is quite an old project, started in 2003, and is considered stable and reliable.  The idea of I2P is to build a network of relays (people running an I2P daemon) to make tunnels from a client to a server; a single TCP (or UDP) session between a client and a server can use many tunnels of n hops across relays.  Basically, when you start your I2P service, the program will get some information about the available relays and prepare many tunnels in advance that will be used to reach a destination when you connect.

Some benefits from I2P network:

* your network is reliable because it doesn't depend on operator peering
* your network is secure because packets are encrypted, and you can even use the usual encryption to reach your remote services (TLS, SSH)
* it provides privacy because nobody can tell where you are connecting to
* it can prevent tracking of your habits (if you also relay data to participate in I2P, the allocated bandwidth is used at 100% all the time, and any traffic you generate over I2P can't be discriminated from standard relaying!)
* it can restrict access to a server to declared I2P nodes if you don't want just anyone to connect to the port you expose

It is possible to host a website on I2P (by exposing your web server port); such a site is called an eepsite and can be accessed using the SOCKS proxy provided by your I2P daemon.  I never played with them, but this is a thing and you may be interested in looking at it more in depth.

=> https://geti2p.net/en/ I2P project and I2P implementation (java) page
=> https://i2pd.website/ i2pd project (a recent C++ implementation that I use for this tutorial)
=> https://en.wikipedia.org/wiki/I2P Wikipedia page about I2P

# I2P vs Tor

Obviously, many people question why not use Tor, which seems similar.  While I2P can seem very close to Tor hidden services, the implementations are really different.  Tor is designed to reach the outside world, while I2P is meant to build a reliable and anonymous network.  When started, Tor creates a path of relays named a circuit that remains static for an approximate duration of 12 hours; everything you do over Tor passes through this circuit (usually 3 relays).  On the other hand, I2P creates many tunnels all the time, each with a very short lifespan.  Another small difference: I2P can relay UDP while Tor only supports TCP.

Tor is very widespread, and using a Tor hidden service for hosting a private website (if you don't have a public IP or a domain name, for example) would be better to reach an audience; I2P is not very well known, and that's partially why I'm writing this.  It is a fantastic piece of software that only needs more users.

Relays in I2P don't have any weight, and the network can be seen as a huge P2P network, while the Tor network is built using scores (consensus) of relaying servers depending on their throughput and availability.  The fastest and most reliable relays are elected as "Guard servers", which are entry points to the Tor network.

I ran a test over 10 hours to compare the bandwidth used by I2P and Tor to keep a tunnel / hidden service available (they were not used).  Please note that relaying/transit was deactivated, so this is only the data uploaded to keep the service working.

* I2P sent 55.47 MB of data in 114 430 packets. Total / 10 hours = 1.58 kB/s average.
* Tor sent 6.98 MB of data in 14 759 packets. Total / 10 hours = 0.20 kB/s average.

Tor was a lot more bandwidth efficient than I2P for the same task: keeping the network access (tor or i2p) alive.

# Quick explanation about how it works

There are three components in an I2P usage.

- a computer running an I2P daemon configured with a server tunnel (to expose a TCP/UDP port from this machine, not necessarily from localhost)
- a computer running an I2P daemon configured with a client tunnel (with information matching the server tunnel)
- computers running I2P and allowing relaying; they receive data from other I2P daemons and pass the encrypted packets along.  They are the core of the network.

In this text we will use an OpenBSD system to share its localhost ssh access over I2P and a NixOS client to reach the OpenBSD ssh port.

# OpenBSD

The setup is quite simple; we will use i2pd and not the Java I2P program.

```shell commands
pkg_add i2pd

# read /usr/local/share/doc/pkg-readmes/i2pd for open files limits

cat <<EOF > /etc/i2pd/tunnels.conf
[SSH]
type = server
port = 22
host = 127.0.0.1
keys = ssh.dat
EOF

rcctl enable i2pd
rcctl start i2pd
```

You can edit the file /etc/i2pd/i2pd.conf and uncomment the line "notransit = true" if you don't want to relay.  I would encourage people to contribute to the network by relaying packets, but that would require some explanation about proper tuning to limit the bandwidth correctly.  If you disable transit, you won't participate in the network, but I2P won't use any CPU and virtually no data apart from keeping your tunnel working.

Visit http://localhost:7070/ for the admin interface and check the menu "I2P Tunnels": you should see a line "SSH =>" followed by a long address ending in .i2p with :22 appended.  This is the address of your tunnel on I2P; we will need it (without the :22) to configure the client.

# NixOS

As usual, on NixOS we will only configure the /etc/nixos/configuration.nix file to declare the service and its configuration.

We will name the tunnel "ssh-solene", use the destination seen on the administration interface of the OpenBSD server, and expose that port locally on our NixOS box.

```nixos configuration file
services.i2pd.enable = true;
services.i2pd.notransit = true;

services.i2pd.outTunnels = {
  ssh-solene = {
    enable = true;
    name = "ssh";
    destination = "gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p";
    address = "127.0.0.1";
    port = 2222;
  };
};
```

Now run "nixos-rebuild switch" as root to apply the changes.

Note that the equivalent of this NixOS configuration for any other OS would look like this in the i2pd file "tunnels.conf" (on OpenBSD it would be /etc/i2pd/tunnels.conf).

```i2pd tunnels.conf
[ssh-solene]
type = client
address = 127.0.0.1  # optional, default is 127.0.0.1
port = 2222
destination = gajcbkoosoztqklad7kosh226tlt5wr2srr2tm4zbcadulxw2o5a.b32.i2p
```

# Test the setup

From the NixOS client you should be able to run "ssh -p 2222 localhost" and get access to the OpenBSD ssh server.
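
As a small optional convenience, you can declare an SSH host alias for the local end of the tunnel so you don't have to remember the port; this is a sketch assuming OpenSSH, and "openbsd-i2p" is just a name I made up:

```shell commands
# append a host alias for the local end of the I2P tunnel
mkdir -p ~/.ssh
cat <<'EOF' >> ~/.ssh/config
Host openbsd-i2p
    HostName localhost
    Port 2222
EOF
# afterwards, "ssh openbsd-i2p" is equivalent to "ssh -p 2222 localhost"
```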

Both systems have an interface on http://localhost:7070/ because it's a reasonable default setting (except if multiple people can access the box).

# Conclusion

I2P is a nice way to share services on a reliable and privacy-friendly network; it may not be fast, but it shouldn't drop you when you need it.  Because it easily bypasses NAT or dynamic IPs, it's perfectly fine for a remote system you need to access when you can't use port forwarding or a VPN.
  <pubDate>Sun, 20 Jun 2021 00:00:00 GMT</pubDate>
  <title>Run your Gemini server on Guix with Agate</title>
<pre># Introduction

This article is about deploying the Agate Gemini server on the Guix Linux distribution.

=> https://geminiquickst.art/ Gemini quickstart to explain Gemini to beginners
=> https://guix.gnu.org/ Guix website

# Configuration

=> https://guix.gnu.org/manual/en/html_node/Web-Services.html#Web-Services Guix manual about web services, search for Agate.

Add the Agate service definition to your /etc/config.scm file; we will store the Gemini content in /srv/gemini/content and store the certificate and its private key in the directory above.

```Guix configuration file
(service agate-service-type
         (agate-configuration
          (content "/srv/gemini/content")
          (cert "/srv/gemini/cert.pem")
          (key "/srv/gemini/key.rsa")))
```

If you have something like %desktop-services or %base-services, you need to wrap your services in a list using the "list" function and add the %something-services to it using the "append" function, like this.

```Guix configuration file
(append (list (service openssh-service-type)
              (service agate-service-type
                       (agate-configuration
                        (content "/srv/gemini/content")
                        (cert "/srv/gemini/cert.pem")
                        (key "/srv/gemini/key.rsa"))))
        %base-services)
```


# Generating the certificate

- Create the directory /srv/gemini/content
- Run the following command in /srv/gemini/

openssl req -x509 -newkey rsa:4096 -keyout key.rsa -out cert.pem -days 3650 -nodes -subj "/CN=YOUR_DOMAIN.TLD"

- Apply chmod 400 to both files cert.pem and key.rsa
- Run "guix system reconfigure /etc/config.scm" to install Agate
- Run "chown agate:agate cert.pem key.rsa" to allow the agate user to read the certificate and key
- Run "herd restart agate" to restart the service; you should now have a working Gemini server on port 1965
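
If you want to double-check the certificate before restarting Agate, the same openssl command can be replayed in a scratch directory and the result inspected; "example.com" stands in for your domain:

```shell commands
# work in a scratch directory so nothing in /srv/gemini/ is touched
mkdir -p /tmp/gemini-cert-test
cd /tmp/gemini-cert-test

# same generation command as in the steps above, with a placeholder domain
openssl req -x509 -newkey rsa:4096 -keyout key.rsa -out cert.pem \
    -days 3650 -nodes -subj "/CN=example.com"
chmod 400 cert.pem key.rsa

# print the subject and expiry date to verify them
openssl x509 -in cert.pem -noout -subject -enddate
```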

# Conclusion

You are now ready to publish content on Gemini by adding files in /srv/gemini/content, enjoy!
  <pubDate>Thu, 17 Jun 2021 00:00:00 GMT</pubDate>
  <title>How to use Tor only for onion addresses in a web browser</title>
<pre># Introduction

A while ago I wrote about Tor and Tor hidden services.  As a quick reminder, hidden services are TCP ports exposed on the Tor network using a long .onion address; traffic to them doesn't go through an exit node (it never leaves the Tor network).

If you want to browse .onion websites you should use Tor, but you may not want to use Tor for everything, so here are two solutions to use Tor only for specific domains.  Note that I use Tor here, but this method works for any SOCKS proxy (including ssh dynamic tunneling with ssh -D).

I assume you have Tor running and listening on its SOCKS port (9050 by default), ready to accept connections.

# Firefox extension

The easiest way is to use a web browser extension (I personally use Firefox) that allows defining rules based on the URL to choose a proxy (or no proxy).  I found FoxyProxy to do the job, but there are certainly other extensions that propose the same features.

=> https://addons.mozilla.org/fr/firefox/addon/foxyproxy-standard/ FoxyProxy for Firefox

Install that extension, configure it:

- add a proxy of type SOCKS5 using your Tor daemon's IP and port 9050 (adapt if you have a non-standard setup), enable "Send DNS through SOCKS5 proxy" and give it a name like "Tor"
- click on Save and edit patterns
- replace "*" by "*.onion" and save

In Firefox, click on the extension icon, enable "Proxies by pattern and order" and visit a .onion URL: you should see the extension icon display the proxy name.  Done!

# Using privoxy

Privoxy is a fantastic tool that I had forgotten over time; it's an HTTP proxy with built-in filtering to protect users' privacy.  Marcin Cieślak shared his setup using Privoxy to dispatch between Tor or no proxy depending on the URL.

The setup is quite easy: install Privoxy and edit its main configuration file (on OpenBSD it's /etc/privoxy/config), adding the following line at the end:

```privoxy config line
forward-socks4a .onion 127.0.0.1:9050 .
```

Enable the service and start/reload/restart it.

Configure your web browser to use the HTTP proxy for every protocol (on Firefox you need to check a box to also use the proxy for HTTPS and FTP) and you are done.

=> https://mastodon.social/@saper Marcin Cieślak mastodon account (thanks for the idea!).

# Conclusion

We have seen two ways to use a proxy depending on the destination; this can be quite useful for Tor, but also for some other use cases.  I may write about Privoxy in the future, but it has many options and it will take time to dig into that topic.

# Going further

=> https://3g2upl4pq6kufc4m.onion/ Duckduck Go official Tor hidden service access
=> https://check.torproject.org/ Check if you use Tor, this is a simple but handy service when you play with proxies
=> https://help.duckduckgo.com/duckduckgo-help-pages/privacy/no-tracking/ Official Duckduck Go about their Tor hidden service

# TL;DR on OpenBSD

If you are lazy, here are the instructions, as root, to set up Tor and Privoxy on OpenBSD.

```shell commands
pkg_add privoxy tor
echo "forward-socks4a .onion 127.0.0.1:9050 ." >> /etc/privoxy/config
rcctl enable privoxy tor
rcctl start privoxy tor
```

Tor may take a few minutes the first time to build a circuit (it needs to find other nodes).
  <pubDate>Sat, 12 Jun 2021 00:00:00 GMT</pubDate>
  <title>Guix: easily run Linux binaries</title>
<pre># Introduction

If you have used Guix or NixOS, you may know that running a binary downloaded from the Internet will fail; this is because most of the expected paths differ from the usual Linux distributions.

I wrote a simple utility to help fix that; I called it "guix-linux-run", inspired by the "steam-run" command from NixOS (although it has no relation to Steam).

=> https://tildegit.org/solene/guix-linux-run guix-linux-run git repository

# How to use

Clone the git repository and make the command linux-run executable; install the packages gcc-objc++:lib and gtk+ (more may be required later).

Call "~/guix-linux-run/linux-run ./some_binary" and enjoy.

If you get an error message saying some "libfoobar" is not available, try installing it with the package manager and run the command again; this simply means the binary is trying to use a library that is not available in your library path.
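
To see which libraries a binary requests, and which ones the loader cannot find, ldd helps; the sketch below inspects /bin/sh as a stand-in, replace it with your downloaded binary:

```shell commands
# list the shared libraries the binary wants to load
ldd /bin/sh

# show only the ones that could not be found (empty when all resolve)
ldd /bin/sh | awk '/not found/ {print $1}'
```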

In the project I keep a simple compatibility list from a few experiments; unfortunately it doesn't run everything and I still have to understand why, but it allowed me to play a few games from itch.io, so it's a start.
  <pubDate>Thu, 10 Jun 2021 00:00:00 GMT</pubDate>