Arch Update Woes

The main machine I use for any personal projects is currently a Lenovo ThinkPad X1 Carbon running Arch Linux. I’ve been using this machine for almost exactly 4 years at this point, and although I don’t love it, it’s been a good workhorse for me.

One of the big issues I’ve run into in the past is that an update sometimes breaks the system. I had a few months where the graphical interface would occasionally freeze, which turned out to be caused by a Vulkan driver I’d installed. It took a while to figure out because I’d installed the driver weeks before the issue cropped up, so initially I thought it was a hardware problem.

Most recently, I ran a standard system update, which threw some errors along the way. I was using yay as my update tool since it has good AUR support, and even that stopped working with an error about not being able to open a shared object file. On top of that, after any update I generally reboot because some random stuff gets weird: I lose some of my custom keymappings, my key repeat rate resets, Docker often stops working. Things like that.

So I rebooted, and after typing in the password to decrypt the root filesystem, I got:

Please enter passphrase for disk Linux LVM (cryptlvm): (no echo)
[  OK  ] Finished Cryptography Setup for cryptlvm.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target System Initialization.
[  OK  ] Reached target Basic System.
[**    ] A start job is running for /dev/MyVolGroup/root (31s / 1min 30s)

It then timed out and never completed the boot/login process.

It turned out that this was caused by a deprecated mkinitcpio hook, sd-lvm2, which I found documented here:

I was able to track down the bug by booting off a thumbdrive (luckily I kept really good notes when I set this machine up) and following the Arch Wiki steps for setting up how the machine boots, which produced an error when running mkinitcpio -P. After a little googling on the unknown-hook error, I switched the hook name and the machine rebooted fine.
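For anyone hitting the same wall, the fix boiled down to renaming the hook in the HOOKS array. Here’s a minimal sketch; it operates on a scratch copy with a made-up HOOKS line, since on the real machine the file is /etc/mkinitcpio.conf, your HOOKS will differ, and you’d rerun mkinitcpio -P afterwards:

```shell
# Work on a scratch copy; this HOOKS line is just an illustration.
conf="$(mktemp)"
echo 'HOOKS=(base systemd autodetect modconf block sd-encrypt sd-lvm2 filesystems fsck)' > "$conf"

# The deprecated sd-lvm2 hook was folded into lvm2; rename it in place.
sed -i 's/sd-lvm2/lvm2/' "$conf"

cat "$conf"
```

On the real config you’d follow that with `mkinitcpio -P` to regenerate all initramfs images.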

I was a little frustrated and tired when this originally happened, so I ordered a new M3 MacBook Air, which I’m kinda excited to switch to since I’m tired of things breaking and also not having a good email app. Plus I just want my stuff to work and I want a decent keyboard. I love using Linux, and I’m going to miss having a native Docker system, but in the end, the laptop is a tool and it’s best to have the tools work.

My plan for this ThinkPad is to install NixOS, which comes with the promise of reproducible builds, so we’ll see.

Migrate From Tumblr

I recently set up this new blog for myself. In the past I’ve gone through several blogging platforms. LiveJournal was probably my first, followed by Blogger, an experimental jaunt on WordPress for a weekend, then Tumblr, where I last wrote in 2014. But I really liked some of the content I had on Tumblr, and they have a way to export all of my posts, so I exported everything to JSON using their API guide.

This site is built with Hugo, which uses Markdown files as posts; the build process generates static HTML files that I just rsync up to my server running nginx.

Migrating the posts themselves

To convert the JSON export I got from Tumblr into Hugo posts, I wrote a quick bash script.

The script reads a file outfile.json in the current directory and generates a Markdown file for each post. It requires bash 4.x (so if you’re on macOS, brew install bash and make sure your PATH picks that one up) and jq.

#! /usr/bin/env bash

set -euo pipefail

file="outfile.json"

# output alias URLs as json
get_urls() {
  local id="$1"

  query_with_id "$id" '[.url, .["url-with-slug"]]'
}

# given a post ID, run a jq query against it to read field(s)
query_with_id() {
  local id="$1"; shift

  jq --arg id "$id" '.posts[] | select((.id|tostring) == $id)' "$file" \
    | jq "$@"
}

# get all of the post IDs from the json file
ids="$( jq --raw-output '.posts[].id' "$file" )"
mapfile -t ids <<< "$ids"

echo "got ${#ids[@]} ids"

# iterate over each post and generate the file
for id in "${ids[@]}"; do
  # the sed pattern stripped the scheme and domain; scrubbed here
  aliases="$( get_urls "$id" | sed -E 's|||' )"

  title="$( query_with_id "$id" '.["regular-title"]' )"
  body="$( query_with_id "$id" --raw-output '.["regular-body"]' )"
  slug="$( query_with_id "$id" --raw-output '.slug' )"
  date="$( query_with_id "$id" --raw-output '.["date-gmt"]' )"

  # convert that date into ISO-8601 format (GNU date)
  date="$( date -d "$date" --iso-8601=seconds )"

  outfile="${slug}.md"

  cat <<END > "$outfile"
+++
title = $title
draft = true
date = "$date"
aliases = $aliases
+++

> Note: this post was migrated from my old Tumblr-backed blog

$body
END
done


I did wind up having to massage some of the files. Although I used Markdown for most posts, one was HTML and none of them specified the language for any blocks of code.

Redirecting from the old post URLs

Initially, I had planned on having Hugo handle redirecting from the old post URLs to the new ones, since Tumblr uses a slightly different URL structure. Because I was keeping the same domain, this seemed like it should work pretty well, but Hugo does its redirects client-side and I wanted search engines to pick up the change, so I wrote some nginx config instead.

Tumblr uses a structure like /post/<post-id>/<slug> (with the <slug> being optional), where this site uses /posts/<slug>, so I decided to just handle any URL starting with /post/<post-id> and do the redirect from there.

To do this, I used the following one-liner:

jq --raw-output '.posts[] | "rewrite ^\( .url | gsub(""; "")) /posts/\(.slug) permanent;"' oldblog.json

I take the url field from each post, remove the scheme and domain from it, and output an nginx rewrite directive.
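The scheme-and-domain strip can be sketched with plain sed (using example.com as a stand-in for my actual domain, which I’ve scrubbed from the snippets here):

```shell
# Hypothetical URL; example.com stands in for the real domain.
url="https://example.com/post/74130342713/experiments-with-elixir-and-dynamo-with-otp"

# Strip the scheme and host, leaving just the path for the rewrite rule.
printf '%s\n' "$url" | sed -E 's|^https?://[^/]+||'
```

That prints /post/74130342713/experiments-with-elixir-and-dynamo-with-otp, which is exactly the left-hand side the rewrite directive needs.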

So now when a client requests an old post’s URL, they get a 301 permanent redirect to the updated location. The one-liner produces a series of lines that I pasted into the nginx config:

rewrite ^/post/74130342713 /posts/experiments-with-elixir-and-dynamo-with-otp permanent;
rewrite ^/post/60548255435 /posts/testing-bash-scripts-with-bats permanent;
rewrite ^/post/59389300605 /posts/new-things-learned-from-minecraft-docker-project permanent;

This shows it in action:

$ curl -XGET -I ''
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 26 Feb 2024 05:11:28 GMT
Content-Type: text/html
Content-Length: 194
Connection: keep-alive

My First Programming Project

Last year, my dad was cleaning up and came across my first real programming project. When I was 10 or 11 years old, in sixth grade, my school offered programming classes. The class was taught in BASIC on IBM PCjrs and focused primarily on graphics. This was around 1991-1992.

This is the old-school BASIC, where you had to label each line with a line number, and we had a dot-matrix printer if we wanted to save our programs. The teacher would hand out special grid paper that accounted for the rectangular pixels we were working with, which helped with our assignments. The thing is, this wasn’t even our first real exposure to programming: for the preceding 2 years, we had computer classes where we wrote LOGO (you know, the turtle you’d control with a series of commands) and drew pictures on these Atari computers. We never really did much more than RT 90 and PEN UP/PEN DOWN commands, but for my final project, I drew a simplified GameBoy.

Anyway, I loved this class. It was so cool. Our teacher showed us some projects that the 7th and 8th graders had done, and they did some simple animation by clearing the screen and drawing the next frame, so I asked a lot of questions. We had pretty much only learned how to change color, do fills and draw lines and shapes, but the teacher showed me how to write a FOR loop and I kinda understood what it was doing and I had an idea.

Graphics reference with my original diagonal stick, which was too much work.

For my class project, I was going to draw a stick of TNT and animate the sparks on the fuse by cycling through the colors (the colors were indexed 0-15). I don’t know how long I actually spent on it but I swear it felt like a week. And if you had asked me how much code it was, by my recollection, it was like 100 lines of code.

Some printed source code

So this is all the code. I got an A even though it’s only 16 lines. I had to jam in the FOR loop on line 55, so it doesn’t just count by 10s. This concept of numbering one’s lines is so weird and I can’t imagine how it must feel to see this for folks who only got exposed to software in the 2000s.

I’ll transcribe the source here, for posterity (and to show off cool BASIC syntax highlighting):

10 CLS
40 LINE (90,30)-(55,160),4,BF
50 LINE (70,30)-(70,20),1
55 FOR A= 1 TO 15
70 LINE (79,8)-(71,18),A
80 LINE (74,15)-(72,18),A
90 LINE (67,13)-(69,18),A
100 PSET (66,8),A
110 PSET (63,15),A
120 PSET (73,4),A
130 PSET (76,17),A
140 NEXT A
150 GOTO 55

I’d love to find somewhere that I could run this again.

It’s kinda crazy how low resolution this is, which you can tell by the magnitude of the numbers. I had another version that would fill the screen with white to signify an explosion and then end. This one just loops forever, animating the fuse.

The shitty thing is that the year I took this class was the last year they offered programming in middle school. The following year, the school got new Macs and we were given a typing class instead. For some reason, my town really dumbed down the computer class offerings after that, and I wasn’t able to take another programming class until senior year of high school, when they offered C++ for the first time, but that’s another story.

After taking this class, I discovered we had QBASIC on our home DOS computer, and I picked up some books at the bookstore because I’d been bitten by the programming bug. I do wish I’d taken it more seriously, but I did manage to build a simple Pong game. I had an epiphany one night about how to animate a bouncing ball, and after that it was cake to control a paddle and bounce the ball around, though I had no score and didn’t track when the ball missed.

Hello World

Greetings. This is the first post on my blog, here. I need to post something, so here it is.

Experiments with Elixir and Dynamo with OTP

Note: this post was migrated from my old Tumblr-backed blog

The more I’ve been working with large-scale systems and writing code that I aim to make as fault tolerant as possible, the more I’ve become enamored with Erlang and Elixir’s pattern-matching style for handling data flow. I’ve had several occasions where I needed solutions that both provided real-time responses to the client and could scale. These are two things at which Erlang, and by extension Elixir, excels.

In the past I’d only created some terminal programs in Elixir, and I wanted to experiment with a web application, so I did some investigating and encountered Dynamo, Elixir’s analog to Ruby’s Sinatra. Following the documentation on GitHub, getting up and running is very straightforward, and it’s got support for OTP, which means I can build individually scalable modules that can be supervised however I want.

I thought about making a chat app, but everyone does that and I wanted to do something relatively simple that I could crank out in a night or two of light coding, so I settled on a basic login tracker. Essentially, it would store a list of users (without duplicates) allowing them to “sign in” and “sign out” and then an arbitrary number of subscribers could hook up to an EventSource and receive login/disconnect events.

What I wanted to accomplish

The goals of my exercise were as follows:

  • use an OTP app to persist data while the application is running
  • enable EventSource so I could stream add/remove events to a client in real time by connecting to a URI
  • have clients add/remove “users” in a persistent datastore via an HTTP API
  • have the OTP app send events out to notify clients of changes to the list of users

I’d never written an OTP app, but I had read the OTP chapters of both the O’Reilly Erlang book and PragProg’s Elixir book, so I had a basic grasp of the concepts. I’d also never worked with EventSource before and had no idea about the technical details of its implementation, but I did have a good idea of what it’s used for and how to use it.

After looking online a bit, I found 2 excellent tutorials that really got me started:


I was having a lot of trouble putting everything together until I came across these blog posts, but after seeing their approaches, I was able to get something working and it all began to click.

Creating the OTP UserList

After creating my base Dynamo app, I needed to create an OTP app to persist my data for as long as the app is running. The main Dynamo process would communicate with this app by sending messages via :gen_server.cast/2 and :gen_server.call/2.

This app just needs to persist a list of strings representing connected users, return the list, add to the list (skipping duplicates), and remove users from the list.

To do that, I created my skeleton OTP Server code at lib/elixir_webtest/user_store.ex:

defmodule ElixirWebtest.UserStore do
  use GenServer.Behaviour

  def start_link( users ) do
    :gen_server.start_link({:local, :user_store}, __MODULE__, users, [])
  end

  def init( users ) do
    {:ok, users}
  end
end
That code initializes the UserStore with the supplied default state (an empty List) and registers its name so it can be addressed as :user_store.

I then added 2 functions for working with the state:

defp add_user( users, new_user ) do
  if Enum.any?( users, fn(x) -> x == new_user end ) do
    users
  else
    [ new_user | users ]
  end
end

defp del_user( users, user ) do
  if Enum.any?( users, fn(x) -> x == user end ) do
    List.delete( users, user )
  else
    users
  end
end

The above functions will be called by the OTP cast/call handlers to add or remove users from the list. I didn’t want any user to be able to appear in the list twice, so I first check the list in both functions using Enum.any?/2.

In del_user/2, there technically is no reason to check for the existence of user, since List.delete/2 will just return the original list if user doesn’t exist, but I’m leaving room for broadcasting changes to subscribers and I only want to broadcast the change if a user is actually deleted from the list.

Adding the OTP message handlers

The next step is to add the actual OTP handlers to this module. These are the functions that receive messages from clients and respond to them. We need to create 3 handlers: one handle_call/3, which will respond with the list of users, and 2 handle_cast/2 definitions, which will be used for making changes to the list. Since clients making changes don’t need a response, we use handle_cast rather than handle_call, which would send a reply:

def handle_call( :list_users, _from, users ) do
  { :reply, users, users }
end

def handle_cast( { :add_user, new_user }, users ) do
  { :noreply, add_user( users, new_user ) }
end

def handle_cast( { :del_user, user }, users ) do
  { :noreply, del_user( users, user ) }
end

In OTP, each handle_* function returns a tuple containing a response type (:reply or :noreply) followed by some values. handle_cast just returns the new state of the app (because casts can change the data). handle_call returns a 3-element tuple, where the second element is the response and the third is the updated state of the app.

That is it for the UserStore. Now we need to create an OTP app to store who is subscribing to changes in the UserStore.

Creating the OTP subscriber list

Just like with the UserStore, the SubscriberStore begins its life as an OTP app skeleton. This module will store a list of pids that we’ll message to notify of any changes in the UserStore.

Create a file containing the following at lib/elixir_webtest/subscriber_store.ex:

defmodule ElixirWebtest.SubscriberStore do
  use GenServer.Behaviour

  def start_link( subscribers ) do
    :gen_server.start_link({:local, :subscriber_store}, __MODULE__, subscribers, [])
  end

  def init( subscribers ) do
    { :ok, subscribers }
  end
end

This OTP module is slightly different from the UserStore. Although they both store a list of something, the usage patterns let us make this module a little less complex. The UserStore is modified based on client requests, whereas the pids added to this module will be managed by Dynamo itself. Each request that comes in has a separate pid, so there’s no risk of duplication (provided our app is bug-free).

So with that said, we create 2 functions to manage the state:

defp remove_subscriber( subscribers, subscriber ) do
  List.delete subscribers, subscriber
end

defp add_subscriber( subscribers, new_subscriber ) do
  [ new_subscriber | subscribers ]
end

Everything should be very straightforward there; now we just need to create the handlers:

def handle_cast( { :add, new_subscriber }, subscribers ) do
  { :noreply, add_subscriber(subscribers, new_subscriber) }
end

def handle_cast( { :del, subscriber }, subscribers ) do
  { :noreply, remove_subscriber( subscribers, subscriber ) }
end

There’s one more function we need to add, and that’s the broadcast handler. This will take a supplied message and spit it out to all pids in our subscribers list:

def handle_cast( { :broadcast, event }, subscribers ) do
  Enum.each subscribers, fn(sub) ->
    send( sub, event )
  end

  { :noreply, subscribers }
end

All we’re doing is iterating over subscribers and using send/2 to send the supplied event to that pid. In this case, event will be a tuple containing an action (either :add or :del) and a user. This will be JSONified and sent to the client that’s connected to our EventSource.

Broadcasting changes

We now want to add our call to :broadcast to the UserStore. So update the add_user/2 and del_user/2 functions in UserStore (lib/elixir_webtest/user_store.ex) to the following:

defp add_user( users, new_user ) do
  if Enum.any?( users, fn(x) -> x == new_user end ) do
    users
  else
    :gen_server.cast :subscriber_store, { :broadcast, { :add, new_user } }
    [ new_user | users ]
  end
end

defp del_user( users, user ) do
  if Enum.any?( users, fn(x) -> x == user end ) do
    :gen_server.cast :subscriber_store, { :broadcast, { :del, user } }
    List.delete( users, user )
  else
    users
  end
end

The line containing :gen_server.cast :subscriber_store, { :broadcast, { :del, user } } is actually sending the :broadcast message to our SubscriberStore, which gets picked up by our handler and { :del, user } is sent to each pid in subscribers.

Now we have 2 OTP modules that will persist some data and that we can interact with from our Dynamo.

Getting it up and running

The last step before we can work on our routes is to configure Dynamo to boot our OTP modules when the app comes up. This is an extra layer of customization, so we’ll build a Supervisor and tell Dynamo to boot it first; it will start up the UserStore and the SubscriberStore and bring the Dynamo up, too.

First, create a file in lib/elixir_webtest/supervisor.ex:

defmodule ElixirWebtest.Supervisor do
  use Supervisor.Behaviour

  def start_link( user_list ) do
    :supervisor.start_link(__MODULE__, user_list )
  end

  def init( user_list ) do
    children = [
      worker(ElixirWebtest.SubscriberStore, [[]]),
      worker(ElixirWebtest.UserStore, [ user_list ]),
      supervisor(ElixirWebtest.Dynamo, [])
    ]

    supervise children, strategy: :one_for_one
  end
end

This uses the Supervisor.Behaviour module, which contains helpers for booting our OTP modules as well as some initialization code. The start_link/1 function is how we start the Supervisor; it takes an argument that we pass to :supervisor.start_link/2. After some magic, that triggers our init/1 function, passing it the initialization value. The great thing about this is that we can have a default user_list (or a default anything, for that matter, if we edit the code a bit).

The init/1 function is where we declare our 2 OTP workers, SubscriberStore and UserStore, initializing them with an empty list and user_list respectively, and declare our Dynamo supervisor. We then call supervise/2 to start the whole thing up.

The last bit to change is in lib/elixir_webtest.ex, where we start the app with our custom Supervisor rather than the built-in Dynamo one. We change the start/2 function to the following:

def start(_type, _args) do
  ElixirWebtest.Supervisor.start_link([])
end

That should be it. The application should boot fine, albeit it’ll be kinda boring. We need to add our routes!

Interacting with the UserList

The first routes we create allow us to add and remove users from the UserList via simple HTTP GET requests. I chose GET because it made things easier to test in the browser, but these should probably be POSTs in the future:

get "/api/login/:name" do
  :gen_server.cast( :user_store, { :add_user, conn.params[:name] } )

  redirect conn, to: "/users"
end

get "/api/logout/:name" do
  :gen_server.cast( :user_store, { :del_user, conn.params[:name] } )

  redirect conn, to: "/users"
end

When someone does a GET to /api/login/spike, Dynamo will hit our :user_store OTP module, pass it { :add_user, "spike" } and then redirect to /users. Assuming the app was just started, our UserStore will contain a list with one item: [ "spike" ].

Likewise, if someone does a GET to /api/logout/spike, Dynamo will do the same, only signal to remove “spike” and we’ll be left with an empty list.

This is kinda boring though. Let’s make things more fun. Time to add streaming.

Streaming data to subscribers

What we’re going to do is enable someone to do a GET to /user-stream, where we’ll send them chunked data for an EventSource consumer. We’ll add the pid of the connection to the SubscriberStore, listen for messages, and send JSON over to the connected clients whenever the user list changes.

The way this works is that we’ll define a function event_handler/1 which accepts our conn Dynamo connection object, waits for a message, handles it, then recursively calls itself.

We’ll be using await/4, which is defined inside Dynamo. await/4 lets the application sleep until a message is received or a timeout interval expires, which allows you to control how long to wait for messages and act accordingly.

There are 2 callbacks passed to the function, typically called on_wake_up/2 and on_timeout/1. The on_wake_up/2 function is passed the received message and the Dynamo connection object. We’ll call our on_wake_up/2 function handle_event/2 and define it multiple times to pattern match the message.

For our on_timeout/1 callback, we’ll just reply with a tuple containing :timeout so we can easily ignore it. The idea is that we’ll wait only 5 seconds for a message, ignore the fact that we timed out, and recursively call the function to wait another 5 seconds.

Then, we’ll pattern match the result and do something with it, and recursively call event_handler/1 again.

The route and function will look like the following:

get "/user-stream" do
  conn = conn.resp_content_type("text/event-stream")
  conn = conn.send_chunked(200)

  # add this process to the subscribers
  :gen_server.cast( :subscriber_store, { :add, self } )

  event_handler conn
end

defp event_handler( conn ) do
  # wait for up to 5 seconds for a message
  result = await( conn, 5000, &handle_event(&1, &2), &on_time_out(&1) )

  case result do
    { :timeout } ->
      # this is returned from the on_time_out/1 function below;
      # ignore timeouts for now and keep recursing.
      event_handler( conn )
    { :ok, _ } ->
      # normal operation; conn.chunk returns { :ok, _something }
      event_handler( conn )
    { :error, :closed } ->
      # the event stream connection closed,
      # so delete self from the SubscriberStore and terminate
      :gen_server.cast( :subscriber_store, { :del, self } )
    _ ->
      # anything else, just ignore it and recurse
      event_handler( conn )
  end
end


defp handle_event( { :add, user }, conn ) do
  send_chunk conn, [ action: "add", user: user ]
end

defp handle_event( { :del, user }, conn ) do
  send_chunk conn, [ action: "del", user: user ]
end

defp handle_event( msg, _conn ) do
  # anything unrecognized falls through to the catch-all
  # clause in event_handler/1, which just recurses
  { :unknown, msg }
end

defp on_time_out( _a ) do
  { :timeout }
end

That’s a lot to take in. I tried to keep comments in there to explain what’s going on, but it should all be pretty straightforward. The return value of the handle_event/2 functions is returned by await/4, and that’s what gets pattern matched.

In the case where conn.chunk/1 sends data to a disconnected client, it returns { :error, :closed }. When that happens, we remove the subscriber from the SubscriberStore so it won’t be broadcast to anymore, and stop recursing. Under every other circumstance, we continue recursing and sending updates.

There is a function above that we haven’t defined yet, and that’s send_chunk/2. It’s a convenience function which I’ll show you in a second. It takes care of sending the EventSource events and encoding JSON. For that we’ll use the awesome elixir-json library.

First, let’s add the send_chunk/2 function to our file:

defp send_chunk( conn, data ) do
  result = JSON.encode(data)

  case result do
    { :ok, json } ->
      conn.chunk "data: #{ json }\n\n"
    _ ->
      # if encoding failed, just send nothing
      nil
  end
end
What this does is convert the given data to JSON and, assuming it converted correctly, kick it over the wire. According to the EventSource spec, each chunk needs to contain data: followed by the payload, followed by 2 newlines. When prototyping this app, I kept forgetting the newlines and wondered why the app wasn’t sending anything, so I created this function to idiot-proof it.
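As a quick sanity check on that framing, here’s what a single frame looks like on the wire, with a plain printf standing in for conn.chunk:

```shell
# Emit one EventSource frame: "data: " + payload + two newlines.
printf 'data: %s\n\n' '{"action":"add","user":"spike"}'
```

If you pipe a stream of these into a terminal, each frame shows up as a data: line followed by a blank line, which is exactly what an EventSource consumer expects as an event boundary.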

The last step to get this working is to update the deps in mix.exs to use the elixir-json library. Make your deps function match the following:

defp deps do
  [ { :cowboy, github: "extend/cowboy" },
    { :dynamo, "~> 0.1.0-dev", github: "elixir-lang/dynamo" },
    { :json,   github: "cblage/elixir-json", tag: "v0.2.7" } ]
end
Now you should be able to run mix deps.get and then mix server and be good to go.

To test this out, open up one terminal window and execute:

curl localhost:4000/user-stream

This will spit out events in real-time to your terminal.

In another window, run:

curl localhost:4000/api/login/yay

You should see data: {"action":"add","user":"yay"} appear in the curl output.

Next, run:

curl localhost:4000/api/logout/yay

And you should instantly see data: {"action":"del","user":"yay"} appear in the curl output.

Cool, huh?


I’ve got the source to the demo app on my GitHub at the following location:

This source also contains some HTML frontend stuff with JS goodness for realtime DOM updates and additional comments and documentation.

If anyone follows along and feels that I missed out on anything or anything wasn’t clear, let me know and I’ll make any necessary corrections.
