Experiments with Elixir and Dynamo with OTP

The more I’ve been working with large-scale systems and writing code that I aim to make as fault tolerant as possible, the more I’ve become enamored with Erlang and Elixir’s pattern-matching style of handling data flow. I’ve had several occasions where I needed solutions that both provided real-time responses to the client and could scale, two things at which Erlang, and by extension Elixir, excels.

In the past I’d only created some terminal programs in Elixir and I wanted to experiment with a web application, so I did some investigation and encountered Dynamo, Elixir’s analog to Ruby’s Sinatra. Following the documentation on GitHub, getting up and running is very straightforward, and it has support for OTP, which means that I can build individually scalable modules that can be supervised however I want.

I thought about making a chat app, but everyone does that and I wanted to do something relatively simple that I could crank out in a night or two of light coding, so I settled on a basic login tracker. Essentially, it would store a list of users (without duplicates) allowing them to “sign in” and “sign out” and then an arbitrary number of subscribers could hook up to an EventSource and receive login/disconnect events.

What I wanted to accomplish

The goals of my exercise were as follows:

  • use an OTP app to persist data while the application is running
  • enable EventSource so I could stream add/remove events to a client in real-time by connecting to a URI
  • have clients add/remove “users” to a persistent datastore via an HTTP API
  • have the OTP app send events out to notify clients of changes to the list of users.

I’d never written an OTP app, but I had read the OTP chapter of both the O’Reilly Erlang book and PragProg’s Elixir book, so I had a basic grasp of the concepts. I’d also never worked with EventSource before and had no idea about the technical details of its implementation, but I did have a good idea of what it’s used for and how to use it.

After looking online a bit, I found 2 excellent tutorials that really got me started:

  1. http://miguelcamba.com/blog/2013/04/29/tutorial-build-a-web-app-using-elixir-and-dynamo-with-streaming-and-concurrency/
  2. http://benjamintanweihao.github.io/blog/2013/08/14/elixir-for-the-lazy-impatient-and-busy-part-5-streams-streaming-dynamo/

I was having a lot of trouble putting everything together until I came across these blog posts, but after seeing their approaches, I was able to get something working and it all began to click.

Creating the OTP UserList

After creating my base Dynamo app, I needed to create an OTP app to persist my data for as long as the app is running. The main Dynamo process would communicate with this app by sending messages via :gen_server.cast/2 and :gen_server.call/2.

This app just needed to persist a list of strings representing connected users, return the list, add users to the list (skipping duplicates) and remove users from the list.

To do that, I created my skeleton OTP Server code at lib/elixir_webtest/user_store.ex:

defmodule ElixirWebtest.UserStore do
  use GenServer.Behaviour

  def start_link( users ) do
    :gen_server.start_link({:local, :user_store}, __MODULE__, users, [])
  end

  def init( users ) do
    {:ok, users}
  end
end

That code initializes the UserStore with the supplied default state (an empty List) and defines its name so it can be called as :user_store.

I then added 2 functions for working with the state:

defp add_user( users, new_user ) do
  if Enum.any?( users, fn(x) -> x == new_user end ) do
    users
  else
    [ new_user | users ]
  end
end

defp del_user( users, user ) do
  if Enum.any?( users, fn(x) -> x == user end ) do
    List.delete( users, user )
  else
    users
  end
end

The above functions will be called by the OTP cast/call handlers to add or remove users from the list. I didn’t want any user to appear in the list twice, so in both functions I first check the list using Enum.any?/2.

In del_user/2, there technically is no reason to check for the existence of user, since List.delete/2 will just return the original list if user doesn’t exist, but I’m leaving room for broadcasting changes to subscribers and I only want to broadcast the change if a user is actually deleted from the list.

Adding the OTP message handlers

The next step is to add the actual OTP handlers to this module. These are the functions that receive messages from clients and respond to them. We need to create 3 handlers: one handle_call/3, which will respond with the list of users, and 2 handle_cast/2 definitions which will be used for making changes to the list. Since clients making changes don’t need a response, we use handle_cast (which doesn’t reply) rather than handle_call (which does):

def handle_call( :list_users, _from, users ) do
  { :reply, users, users }
end

def handle_cast( { :add_user, new_user }, users ) do
  { :noreply, add_user( users, new_user ) }
end

def handle_cast( { :del_user, user }, users ) do
  { :noreply, del_user( users, user ) }
end

In OTP, each handle_* function returns a tuple whose first element is a response type (:reply or :noreply), followed by some values. handle_cast just returns the new state of the app (because casts can change the data), while handle_call returns a 3-element tuple, where the second element is the response and the third is the updated state of the app.

That is it for the UserStore. Now we need to create an OTP app to store who is subscribing to changes in the UserStore.

Creating the OTP subscriber list

Just like with the UserStore, the SubscriberStore begins its life as an OTP app skeleton. This module will store a list of pids that we’ll message to notify of any changes in the UserStore.

Create a file containing the following at lib/elixir_webtest/subscriber_store.ex:

defmodule ElixirWebtest.SubscriberStore do
  use GenServer.Behaviour

  def start_link( subscribers ) do
    :gen_server.start_link({:local, :subscriber_store}, __MODULE__, subscribers, [])
  end

  def init( subscribers ) do
    { :ok, subscribers }
  end
end

This OTP module is slightly different from the UserStore. Although they both store a list of something, the usage patterns let us make this module a little less complex. The UserStore is modified based on client requests, whereas the pids added to this module are managed by Dynamo itself. Each request that comes in will have a separate pid, so there’s no risk of duplication (provided our app is bug-free).

So with that said, we create 2 functions to manage the state:

defp remove_subscriber( subscribers, subscriber ) do
  List.delete subscribers, subscriber
end

defp add_subscriber( subscribers, new_subscriber ) do
  [ new_subscriber | subscribers ]
end

Everything there should be very straightforward; now we just need to create the handlers:

def handle_cast( { :add, new_subscriber }, subscribers ) do
  { :noreply, add_subscriber( subscribers, new_subscriber ) }
end

def handle_cast( { :del, subscriber }, subscribers ) do
  { :noreply, remove_subscriber( subscribers, subscriber ) }
end

There’s one more function we need to add, and that’s the broadcast handler. This will take a supplied message and spit it out to all pids in our subscribers list:

def handle_cast( { :broadcast, event }, subscribers ) do
  Enum.each subscribers, fn(sub) ->
    send( sub, event )
  end

  { :noreply, subscribers }
end

All we’re doing is iterating over subscribers and using send/2 to send the supplied event to that pid. In this case, event will be a tuple containing an action (either :add or :del) and a user. This will be JSONified and sent to the client that’s connected to our EventSource.

Broadcasting changes

We now want to add our call to :broadcast to our UserStore, so update the add_user and del_user functions in UserStore (lib/elixir_webtest/user_store.ex) to the following:

defp add_user( users, new_user ) do
  if Enum.any?( users, fn(x) -> x == new_user end ) do
    users
  else
    :gen_server.cast :subscriber_store, { :broadcast, { :add, new_user } }
    [ new_user | users ]
  end
end

defp del_user( users, user ) do
  if Enum.any?( users, fn(x) -> x == user end ) do
    :gen_server.cast :subscriber_store, { :broadcast, { :del, user } }
    List.delete( users, user )
  else
    users
  end
end

The line containing :gen_server.cast :subscriber_store, { :broadcast, { :del, user } } is actually sending the :broadcast message to our SubscriberStore, which gets picked up by our handler and { :del, user } is sent to each pid in subscribers.

Now we have 2 OTP modules that will persist some data and that we can interact with from our Dynamo.

Getting it up and running

The last step before we can work on our routes is to configure Dynamo to boot our OTP modules when the app comes up. This is an extra layer of customization, so we’ll build a Supervisor and then tell Dynamo to boot this first, which will start up UserStore and SubscriberStore and also get the Dynamo up, too.

First, create a file in lib/elixir_webtest/supervisor.ex:

defmodule ElixirWebtest.Supervisor do
  use Supervisor.Behaviour

  def start_link( user_list ) do
    :supervisor.start_link(__MODULE__, user_list)
  end

  def init( user_list ) do
    children = [
      worker(ElixirWebtest.SubscriberStore, [ [] ]),
      worker(ElixirWebtest.UserStore, [ user_list ]),
      supervisor(ElixirWebtest.Dynamo, [])
    ]

    supervise children, strategy: :one_for_one
  end
end

This is using the Supervisor.Behaviour module which contains helpers for booting our OTP modules as well as some initialization code. The start_link/1 function is how we start up the Supervisor, which takes an argument that we’ll pass to :supervisor.start_link/2. After some magic, it triggers our init/1 function, passing it the initialization value. The great thing about this is that we can have a default user_list (or default anything for that matter if we edit the code a bit).

The init/1 function is where we declare our 2 OTP workers, SubscriberStore and UserStore, initializing them with an empty list and user_list, respectively, as well as declare our Dynamo supervisor. We then call supervise/2 to start the whole thing up.

The last bit to change is in lib/elixir_webtest.ex, where we start the app with our custom Supervisor rather than the built-in Dynamo one. We change the start/2 function to the following:

def start(_type, _args) do
  ElixirWebtest.Supervisor.start_link([])
end

That should be it to get this working. The application should boot fine, albeit, it’ll be kinda boring. We need to add our routes!

Interacting with the UserList

The first routes we will create allow us to add and remove users from the UserList via a simple HTTP GET request. I chose GET for this because it made it easier to test in the browser, but this should probably be a POST in the future:

get "/api/login/:name" do
  :gen_server.cast( :user_store, { :add_user, conn.params[:name] } )

  redirect conn, to: "/users"
end

get "/api/logout/:name" do
  :gen_server.cast( :user_store, { :del_user, conn.params[:name] } )

  redirect conn, to: "/users"
end

When someone does a GET to /api/login/spike, Dynamo will hit our :user_store OTP module, pass it { :add_user, "spike" } and then do a redirect to /users. Assuming the app was just started, our UserStore should contain a list with one item: [ "spike" ].

Likewise, if someone does a GET to /api/logout/spike, Dynamo will do the same, only signal to remove “spike” and we’ll be left with an empty list.

This is kinda boring though. Let’s make things more fun. Time to add streaming.

Streaming data to subscribers

What we’re going to do is enable someone to do a GET to /user-stream and we’ll send them chunked data for an EventSource consumer. We’ll add the pid of the connection to SubscriberStore and listen for messages and send JSON over to the clients advertising this fact.

The way this will happen is that we’ll define a function event_handler/1 which will accept our conn Dynamo connection object and wait for a message, handle the message, then recursively call itself.

We’ll be using await/4 which is defined inside Dynamo. await/4 lets the application sleep for a little bit until a message is received or a timeout interval expires. This allows you to control how long to wait for messages and act accordingly.

There are 2 callbacks that are passed to the function, typically called on_wake_up/2 and on_timeout/1. The on_wake_up/2 function is passed the received message and the Dynamo connection object. We’ll call our on_wake_up/2 function handle_event/2 and define it multiple times to pattern match the response.

For our on_timeout/1 callback, we’ll just reply with a tuple containing :timeout so we can easily ignore it. The idea is that we’ll only wait 5 seconds for a message, ignore the fact we timed out and recursively call the function again to wait 5 seconds for a message again.

Then, we’ll pattern match the result and do something with it, and recursively call event_handler/1 again.

The route and function will look like the following:

get "/user-stream" do
  conn = conn.resp_content_type("text/event-stream")
  conn = conn.send_chunked(200)

  # add that handler to the subscribers
  :gen_server.cast( :subscriber_store, { :add, self } )

  event_handler conn
end

defp event_handler( conn ) do
  # wait for up to 5 seconds for a message
  result = await( conn, 5000, &handle_event(&1, &2), &on_time_out(&1) )

  case result do
    { :timeout } ->
      # this is returned from the on_time_out/1 function below
      # ignore timeouts for now and keep recursing.
      event_handler( conn )
    { :ok, _ } ->
      # normal operation; conn.chunk returns { :ok, _something }
      event_handler( conn )
    { :error, :closed } ->
      # my event stream connection closed,
      # so delete self from the SubscriberStore and terminate
      :gen_server.cast( :subscriber_store, { :del, self } )
    _ ->
      # anything else, just ignore it and recurse
      event_handler( conn )
  end
end


defp handle_event( { :add, user }, conn ) do
  send_chunk conn, [ action: "add", user: user ]
end

defp handle_event( { :del, user }, conn ) do
  send_chunk conn, [ action: "del", user: user ]
end

defp handle_event( msg, _conn ) do
  # ignore anything we don't recognize
  msg
end

defp on_time_out( _a ) do
  { :timeout }
end

That’s a lot to take in. I tried to keep comments in there to explain what’s going on, but it should all be pretty straightforward. The return value of the handle_event/2 functions will be returned by await/4, and that will be pattern matched.

In the case that conn.chunk/1 is sending data to a disconnected client, it returns a tuple containing { :error, :closed }. In this case, we remove that subscriber from SubscriberStore so it won’t be broadcast to anymore and stop recursing. Under every other circumstance, we continue recursing and sending updates.

There is a function above that we haven’t defined yet, and that’s send_chunk/2. This is a convenience function which I’ll show you in a second. It takes care of sending the EventSource events and encoding JSON. For that we’ll use the awesome elixir-json library (https://github.com/cblage/elixir-json).

First, let’s add the send_chunk/2 function to our file:

defp send_chunk( conn, data ) do
  result = JSON.encode(data)

  case result do
    { :ok, json } ->
      conn.chunk "data: #{ json }\n\n"
    _ ->
      # encoding failed; skip sending this event
      { :error, :encoding }
  end
end

What that’s doing is converting the given data to JSON and, assuming it converted correctly, kicking it over the wire. According to the EventSource spec, the chunk needs to contain data: followed by the data, followed by 2 newlines. When prototyping this app, I kept forgetting the newlines and wondered why the app wasn’t sending anything, so I created this function to idiot-proof it.
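The framing is easy to check by hand; this standalone shell snippet (not part of the app) produces exactly one well-formed frame:

```shell
# One EventSource frame: the "data:" field, the JSON payload, then a
# blank line. The second newline is what terminates the event.
payload='{"action":"add","user":"yay"}'
printf 'data: %s\n\n' "$payload"
```

Forget either trailing newline and a browser’s EventSource will sit on the frame indefinitely, which is exactly the failure mode described above.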

The last step to get this working is to update the deps in mix.exs to use the elixir-json library. Make your deps function match the following:

defp deps do
  [ { :cowboy, github: "extend/cowboy" },
    { :dynamo, "~> 0.1.0-dev", github: "elixir-lang/dynamo" },
    { :json,   github: "cblage/elixir-json", tag: "v0.2.7" } ]
end

Now, you should be able to mix deps.get and then mix server and be good to go.

To test this out, open up one terminal window and execute:

curl localhost:4000/user-stream

This will spit out events in real-time to your terminal.

In another window, run:

curl localhost:4000/api/login/yay

You should see data: {"action":"add","user":"yay"} appear in the curl output.

Next, run:

curl localhost:4000/api/logout/yay

And you should instantly see data: {"action":"del","user":"yay"} appear in the curl output.

Cool, huh?


I’ve got the source to the demo app on my GitHub at the following location: https://github.com/spikegrobstein/elixir_webtest

This source also contains some HTML frontend stuff with JS goodness for realtime DOM updates and additional comments and documentation.

If anyone follows along and feels that I missed out on anything or anything wasn’t clear, let me know and I’ll make any necessary corrections.

Testing Bash scripts with Bats

In recent years, I’ve become completely dependent on automating the tests for the code I write. Once it hits a certain size and complexity, the tedium of constantly launching the app and going through the features and checking the outputs can waste time and demotivate, so when I found Bats (the Bash Automated Testing System) and was finally able to write some real tests for mcwrapper, I was ecstatic.

I’d considered rewriting mcwrapper in another language on multiple occasions, with my main purpose being the ability to write tests and push releases without worrying that I’d broken some key feature, which had happened several times. Sometimes, as in the case of mcwrapper, Bash is the best tool for the job, and the lack of a testing framework is no longer a valid excuse to choose another language.

Bats, itself written in Bash, allows you to write a series of tests against not only your shell scripts, but any commandline program that you write. If it doesn’t share a language with your testing target, then you’re limited to testing the commandline interface, which isn’t a total loss, but the real power comes when testing Bash shell scripts.

A Quick Rundown

A basic Bats test looks like the following:

@test "test for equality" {
  [ "$A" -eq "$B" ]
}

The above file should be saved as equality.bats and run with the following command:

bats equality.bats

Looking at the syntax of the example, each test block is prefixed with the @test macro and the name of the test. The contents of the block are executed, and if any line fails, the entire test fails with output describing the failing line and its output. Because the tests are written in Bash, you can make assertions inside [] or using the test command, which leads to pretty readable tests, for the most part.
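Since a Bats assertion is just a command whose exit status decides pass or fail, the two forms are interchangeable; a standalone shell illustration (not a Bats file):

```shell
# [ is itself a command whose exit status drives pass/fail;
# these two assertions are equivalent.
test "$(echo hello | tr a-z A-Z)" = "HELLO" && echo "test form passed"
[ "$(echo hello | tr a-z A-Z)" = "HELLO" ] && echo "bracket form passed"
```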

For testing expected failure, Bats ships with the run function, which always returns true but sets the global variables $status and $output, plus an array $lines: the exit code, the full text output and the individual lines of output, respectively, for you to run your assertions against. An example follows:

# this function returns some text and a non-zero status
failing_function() {
  echo "not good"
  return 1
}

@test "test for failure" {
  run failing_function

  [ $status -ne 0 ]
  [ "$output" == "not good" ]
}

It also has the ability to include helpers, which are just Bash scripts full of functions that you can use in your tests, and it supports setup() and teardown() functions that you can define in individual test files to be executed before and after each test, respectively. setup() is handy for ensuring a consistent operating environment for each test, such as creating a directory, cd'ing into it and maybe copying some test files, while teardown() can be used for cleaning up after the fact.
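The per-test lifecycle is easy to picture in plain Bash; this sketch only mimics what Bats does around each test, it isn’t Bats itself:

```shell
# Mimic the Bats per-test lifecycle: setup, test body, teardown.
setup() {
  TEST_DIR="$(mktemp -d)"    # a consistent environment for each test
  cd "$TEST_DIR" || exit 1
}

teardown() {
  cd / && rm -rf "$TEST_DIR" # clean up after the fact
}

setup
touch fixture.txt
[ -f fixture.txt ] && echo "fixture created"
teardown
[ ! -e "$TEST_DIR" ] && echo "scratch dir removed"
```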

I’d give more examples, but the documentation for the project is more than ample and I’d basically just be duplicating that effort.

More Advanced Usage

In writing the test suite for mcwrapper, I ran into a few cases where I had trouble figuring out how to test certain things. For instance, testing internal functions and verifying that the proper files were created during the backup process.

Testing Internal Bash Functions

In order to test an internal function, one must source the script containing the functions. If your Bash script breaks functions into separate libraries, testing them becomes easier, since you can source them individually. But, in the case of mcwrapper, all the functions and the program itself, including all the commandline parsing code, are in a single file.

While poking around at the Bats source code, I discovered how to detect if the script is being sourced and skip some functionality in those cases:

if [ "$BASH_SOURCE" == "$0" ]; then
  # code in here only gets executed if
  # script is run directly on the commandline
fi

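To see the guard in action, here’s a standalone illustration using a hypothetical throwaway script (not mcwrapper itself):

```shell
# Write a tiny script that uses the same BASH_SOURCE guard.
script="$(mktemp)"
cat > "$script" <<'EOF'
greet() { echo "greet is defined"; }
if [ "$BASH_SOURCE" == "$0" ]; then
  echo "executed directly"
fi
EOF

# Sourcing it defines greet() but skips the guarded block:
. "$script"
greet

# Executing it runs the guarded block too:
bash "$script"
rm -f "$script"
```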
From there, my tests get simple:

setup() {
  # source the file; now all mcwrapper functions
  # are available in all my tests
  . "$BATS_TEST_DIRNAME/../libexec/mcwrapper"
}

@test "start_restore sets DOING_RESTORE" {
  [ -z "$DOING_RESTORE" ]
  start_restore
  [ ! -z "$DOING_RESTORE" ]
}

Testing The Result of a Piped Function

When testing the backup functionality of mcwrapper, there was a need to run a series of backups and verify that mcwrapper only kept the N most recent backups. This got tricky since I was calling ls on the directory, grepping the output for backup files, then calling wc -l on it to count them.

I needed to be able to assert that the output of the wc command matched what I expected, so the trick is to use run, but test in the context of that level of the pipeline.

ls "$backup_dir" | grep ^backup | {
  run wc -l
  [ $output -eq 5 ]
}
By moving the wc test into a command group using curly braces ({}), I was able to isolate that test and keep everything readable (parentheses would have a similar effect, but would create a subshell). Also, if either the ls or grep commands fail, the whole test will still fail.
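Outside of Bats, the same trick can be sketched with fake data (a standalone illustration, not the actual mcwrapper test):

```shell
# Simulate a directory listing of backups, then count the matches at
# the end of the pipeline inside a command group. The braces keep the
# count and the assertion at the same stage of the pipeline.
printf 'backup-1\nbackup-2\nbackup-3\nother-file\n' | grep '^backup' | {
  count=$(wc -l)
  [ "$count" -eq 3 ] && echo "kept 3 backups"
}
```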

Some Shortcomings

Bats is awesome, but there are places where it can be improved. For one, I’d love customizability of the output. RSpec has some nice options to either print out a . for a passing test or a F for a failed test, and then spit out a dump of the failures at the end. When you’ve got more than a handful of tests, Bats’s output can be difficult to visually parse for errors.

I recently ran tests on a project at work and saw that they broke up the test output by file and also spit out checks for passing tests and Xs for failed ones, which also made things pretty easy to visually parse.

The addition of colour would also aid greatly in visual parsing.

One thing that was missing was the ability to print debugging output when building the tests. I have a pull request that I sent in the other day that adds a decho function which prints things to the terminal prefixed with # DEBUG:, but it has yet to be merged.

Lastly, coming from the RSpec world, I’ve gotten used to being able to group my tests into logical units of similar tests. Breaking Bats tests into separate files helps, and each file can share the same setup() and teardown() functions, but having a way to just group them in the file would be cool.

I’ve gone through the source quite a bit and I could probably add some of these features, but because Bats is trying to be TAP compliant, I’m not sure how some of this would affect that. My plan is to organize my thoughts a little better, get a little more experience with using the project and submit some issues.

New things learned from Minecraft Docker project

After pondering for the better part of a day since building my Minecraft Docker image, I’ve come to realize some poor assumptions that I’ve made and now see a more correct way of doing things.

The primary thing that I learned is that I need to treat Docker containers more like a standard unix process with a slightly different interface rather than like a Virtual Machine, like I had been [mentally] treating it.

The image that I built uses mcwrapper inside the container to manage the Minecraft Server process, rather than using mcwrapper to manage the container itself. Doing it this new way requires some changes to mcwrapper, too, as it needs to inspect the container’s ID rather than the pid, and needs to call docker run rather than calling java with the necessary configurations.

In the coming week, I’m planning to make these changes and kick out another release of the Docker image along with a new mcwrapper. I just need to figure out exactly how to approach this. Currently, my TODO list is:

  • use container_id instead of pid for determining if the server is running
  • install/upgrade minecraft_server.jar inside the image
  • use docker run to launch the process
  • make sure the FIFO works between the host machine and the Docker image
  • because docker run requires root privileges on the host, mcwrapper needs to support that (it’s partially implemented now, albeit, a little half-assed).
  • add mcwrapper config options for memory and cpu shares of the Docker container
  • add mcwrapper config options for filesystem locations of the world data and server config

I don’t necessarily want mcwrapper to be married to Docker, so I may have to make changes so mcwrapper can be extended to support any target platform, be it AWS, DigitalOcean or Docker. It should be possible, but may be quite an undertaking.

Minecraft server Docker image

For the last couple of weeks, I’ve been playing with Docker and it is really, really cool. The fact that I can run an application that’s completely encapsulated from the rest of the system with limited access to memory, CPU, the filesystem and other processes is damn cool.

For part of my learning process, I decided to create a Minecraft Server image and, boy, has it been a real learning process. I’ve wrestled with features in Docker that are published but not yet released in their apt repository. I’ve built dozens of images that were either incomplete or just didn’t work. It’s a completely different way of thinking about applications. I was familiar with chroot, but Docker only has similarities on its façade.

If you want to play with my Minecraft Server Docker image, you can run the following command after installing Docker:

# docker run spikegrobstein/minecraft_server start-foreground

From there, you should be able to see what port it’s running on (via docker ps) and connect to it.

My image is missing a couple of features, but hopefully I’ll be able to work those out as I learn more about how Docker works. This is my todo list:

  • add the ability to customize RAM usage (currently it’s hard-coded at 512MB for the Minecraft server process)
  • add ability to start the Minecraft server in the background (like mcwrapper was built for) so one could do backups and asynchronous server commands

I’ve got the Dockerfile repository on GitHub if anyone wants to see how it was built or make changes and send me pull requests:

mcwrapper 1.6.5 released

I just pushed a minor update to mcwrapper that fixes the minecraft_server.jar download bug.

Mojang updated their download link so that it’s no longer the same link for every version. The new version of mcwrapper will parse the HTML of their download page, extract the Linux server download link and download it.

There was a previous release 1.6.4 that didn’t work correctly in Linux (idiot me only tested in OSX before release. doh.) If you are running that one, please upgrade to 1.6.5.

You can update either via gem:

gem install mcwrapper

or by fetching it directly from github:

Batt 0.2.1 released!

I just pushed version 0.2.1 of batt to rubygems.

This includes a new meter action which will draw some ascii art of a meter of your battery capacity. It supports tmux colour as well as customizable size.

For example:

$ batt capacity
$ batt meter --size 10
[||||||||  ]

If you add color, the pipes above are replaced with spaces with the background-colour set to the capacity color.

This version also includes a new function for calculating what colour the capacity translates to; rather than dividing into even thirds (0-33 red, 34-66 yellow, 67+ green), I break it up as 0-19, 20-74 and 75-100 for red, yellow and green, respectively.
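batt itself is a Ruby gem, but the bucketing is easy to sketch in shell; this is a hypothetical re-implementation (capacity_colour is a made-up name, not batt’s API):

```shell
# Map a capacity percentage to a colour using the 0-19 / 20-74 /
# 75-100 buckets instead of even thirds.
capacity_colour() {
  local pct="$1"
  if [ "$pct" -lt 20 ]; then
    echo red
  elif [ "$pct" -lt 75 ]; then
    echo yellow
  else
    echo green
  fi
}

capacity_colour 10   # red: nearly empty
capacity_colour 50   # yellow: the wide middle band
capacity_colour 80   # green
```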

Because of some refactoring with regard to using tmux colours, I should be able to do more with it in the future for some interesting results.

So, if you want to have your battery status in your tmux status bar, this is probably your best bet!

Install it!

gem install batt

That’s it! Now you can run batt help and get started!

Project page:

Conway’s Game of Life in Elixir

For the last week or so, I’ve been trying to get a better handle on Elixir, a new language that compiles down to bytecode for BEAM, Erlang’s weapons-grade VM (think Scala -> JVM). To get a handle on it, I’ve built an implementation of Conway’s Game of Life, since each generational calculation can easily be broken up into many small jobs, which is great for a platform like BEAM.

My first build didn’t use multiple processes at all, but rather stuck to a single-process, serial iterator. It wasn’t very fast, but it worked. Then, as I progressed through the book, I learned how to go multi-process with it and I managed to not only improve performance, but also take better advantage of my multi-core laptop.

I ran the single-process version through the time command and had it calculate 500 generations, which gave me the following results:

real    2m11.806s
user    2m8.933s
sys     0m2.998s

Then, when I timed the multi-process version (still a work in progress), I was able to achieve the following:

real    1m27.609s
user    4m10.059s
sys     0m36.311s

That means the multi-process version finishes in roughly 66% of the single-process wall-clock time, at the cost of nearly doubling the user CPU time.
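The ratios are easy to double-check from the time output above with a quick awk one-liner (just arithmetic on the numbers already shown, nothing from the project):

```shell
# Convert the `time` results to seconds and compare the two runs.
awk 'BEGIN {
  single_real = 2*60 + 11.806; multi_real = 1*60 + 27.609
  single_user = 2*60 + 8.933;  multi_user = 4*60 + 10.059
  printf "wall clock: %.0f%% of the single-process time\n", 100 * multi_real / single_real
  printf "user CPU: %.1fx the single-process time\n", multi_user / single_user
}'
```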

Unfortunately, when building and iteratively refactoring the multi-process version, I discovered some bugs in my initial logic for wrapping the field (so I effectively have an unlimited-sized field with the left/right and top/bottom sides wrapping), but I don’t believe this impacts the overall results.


This code isn’t as optimized as it could be. I update the state of the cells (passing them the number of neighbors they have, so they can decide if they should be alive or dead) and then ask for the state afterwards, which should probably be returned to the caller when the update is sent.

I believe I can also speed this up by only taking into account live cells and their neighbors, since any dead cell with no live neighbors will remain dead. Ensuring that no cell gets an update more than once under this optimization would be difficult and time consuming, so I’ve already made the update call monotonic by including the current generation in the call: a cell only applies an update if the generation included in the call is newer than the last one it received.

Future Benchmarks

I’m not going to implement any of the above optimizations in the single-process implementation, so any future benchmarks will only be comparable to pre-optimized code.

Get the code

My code, which is still a work in progress, is available on GitHub:

The master branch currently contains the multi-process implementation, while the single-process one is in the single-process branch.

I’m still learning Elixir and will be continually updating the code until I become comfortable with the coding style and conventions of the language, but if you see me doing anything terribly wrong, let me know.

Macbook battery status in tmux

For the last couple of weeks, I’ve been using tmux more and more and I have to say that I really like it. For those that don’t know, tmux is a very scriptable “terminal multiplexer.” What this means is that you can create multiple virtual terminals in a single session and even arbitrarily split the window to run multiple applications in the same session. I like it a lot more than screen in that it just has a better overall feel and I don’t think I can live without it.

After discovering iTerm2, moving away from MacVim to straight-up vim, and going full-screen with my terminal, I noticed one thing I was missing: my battery status. When iTerm2 goes full-screen, I lose my menu bar, and once I start worrying about my battery, I find myself obsessively moving the mouse to the top of the screen to reveal the menu bar and check it.

In walks batt.

batt is a command-line script that displays information about your MacBook’s battery: whether it’s plugged in, how much time is left, and its capacity percentage. I packaged it as a gem for easy distribution and released version 0.1.0 today.

Currently, batt only supports OS X and has only been tested on 10.8 Mountain Lion.

To install batt, you can run the following in your terminal:

gem install batt

If you’re not running rvm, you may have to use sudo to install it.

From there, you now have access to batt and can get all kinds of information about your battery; for example:

batt all

will spit out:

source: Battery Power
capacity: 30%
status: discharging
remaining: 1:16 remaining


batt capacity

will spit out:

30%
For tmux, I added a --tmux switch to the capacity action to add some colour to the output, so you can use it in your status line. My current tmux.conf contains the following lines:

set-option status-right-length 120
set-option status-right "[ #(batt source) #(batt capacity --tmux) ] #(date \"+%Y-%m-%d %H:%M\")"

You can read more in the readme on the project page:


Please, let me know what you think of the project and I’m open to any kind of code contributions or ideas.

Plex Media Server web proxy fix

A recent update to Plex Media Server broke the proxy I had set up for the web dashboard, and after about a week or so of pulling my hair out, I discovered a fix.

A little background: I have a server that proxies requests to all of my services that run on their own ports, so I don’t need to remember which port each thing is running on. This lets me go to ‘http://sickbeard.example.com’ rather than ‘http://myserver.example.com:8081’, and the same goes for sabnzbd, couchpotato and even Plex. This proxying is done with virtual hosts configured in nginx, and for Plex, I had a convenience redirect to bring me to the main dashboard when going to the main page (rather than having to go to ‘/web/index.html’ manually every time).

Anyway, it turns out that the JavaScript in the dashboard that populates the screen with your recently watched media and your media types was attempting to retrieve metadata about Plex by hitting ‘/’, but that triggered my redirect to ‘/web/index.html’ rather than letting the dashboard get its data. This caused an error that manifested itself as a “This media server is unavailable to you” screen.

The fix I put in place checks for the existence of the X-Plex-Device-Name HTTP header on requests to /: if it exists, don’t redirect; otherwise, redirect as normal.

A gist of an example nginx config follows:

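Since the gist isn’t inlined here, a minimal sketch of the idea follows (the server name and the Plex port are assumptions; adapt them to your own setup):

```nginx
server {
    listen 80;
    server_name plex.example.com;

    location / {
        # Only redirect bare browser hits on the root. Requests from the
        # dashboard's JavaScript carry the X-Plex-Device-Name header and
        # must reach Plex directly, untouched.
        if ($http_x_plex_device_name = '') {
            rewrite ^/$ /web/index.html redirect;
        }

        proxy_pass http://127.0.0.1:32400;
        proxy_set_header Host $host;
    }
}
```

The key detail is that nginx exposes any request header as an `$http_*` variable (empty when the header is absent), which makes the presence check a one-liner.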

I’ve been staring at a lot of Wireshark output for the last few hours, so if this doesn’t make any sense or you have any questions, let me know and I’ll try to help you out.

TL;DR: if you’re running an nginx proxy for Plex and it broke in the last week or so, I found a fix; see the linked gist for an example nginx config.

Route definitions matter

Since this past Friday, I’ve been dedicating the majority of my time to getting our RSpec tests to pass on our API orders controller. Since I was tasked with adding some features to this endpoint, I wanted to make sure the existing tests were passing before writing new ones. This would give me a good baseline to begin work and ensure that my code functions as required.

Now, this particular component of our system has gone through a lot of refactoring in the last 6 months and the tests haven’t been kept up, so there were a bunch of failures that were just the result of out of date tests and were very easy to get working again. Then came this doozy.

First some background on the system:

When creating the order in the system, we have 2 endpoints that actually point to the same create action in our orders controller. This is defined in our routes.rb file as follows:

resources :orders, :only => [:index, :show, :create, :update] do
  post :another_action, :on => :collection

  # the problem action is right here:
  post :action_in_question, :on => :collection, :action => :create

  member do
    post :a
    post :b
    post :c
  end
end
With this, one is able to post to /orders or /orders/action_in_question and both will hit our create action, but behave slightly differently.

We do this with code like the following:

if request.path.include?('action_in_question')
  # special treatment
end
The problem arose when we would call post :create, post_params on the orders controller. The URI exposed to the controller always included action_in_question.

When running rake routes, I could see that the POST /orders/action_in_question route was always listed before the POST /orders route as well, so I focused my energy there.

The solution was to ensure that the action_in_question route appeared lower down in the list, so RSpec wouldn’t pick it when generating the URI for :create. This was accomplished by changing the above routes to look like:

resources :orders, :only => [:index, :show, :create, :update] do
  post :another_action, :on => :collection

  member do
    post :a
    post :b
    post :c
  end
end

# re-open this resource and declare action_in_question here, after the
# default routes, so it appears lower in the route list (:only => []
# avoids redefining the routes declared above)
resources :orders, :only => [] do
  post :action_in_question, :on => :collection, :action => :create
end

That successfully solved our issue and the tests now generate the correct URI.