eschew obfuscation (and espouse elucidation)

Developing Clojure in Vim (2018 Edition)

When I wrote about developing Clojure in Vim for the first time, I was still early in my journey. For years, I’d only been able to tinker with Clojure in my free time and I was never able to really use it for anything large. Well, now I’m 5 or so months into using it full time and I’m really enjoying the development experience. So I thought I’d update my previous post with what my Vim configuration looks like now.

First of all, I should point out that while I’ve switched over to using Neovim, all of my setup works with both it and Vim 8.x. Neovim has some cool advantages (like inccommand), but they’re orthogonal to my Clojure dev workflow.


Here are the Clojure-related plugins I use:

  • vim-clojure-static - This is still the way to go for base syntax highlighting and indentation.
  • vim-fireplace - Still the way to go for repl integration and code reloading.
  • rainbow - I previously used rainbow_parentheses.vim, but found that this one is simpler and more stable.
  • vim-sexp - This plugin lets me manipulate code as a tree, and it’s wonderful.
  • vim-sexp-mappings-for-regular-people - Tim Pope’s riff on the above, minus the meta key.
  • cljfold - I’m a compulsive code-folder, so after a few weeks I went to find a good folding plugin for Clojure and this is what I settled on. It’s old, but it still works, which is a testament to the stability of the language.


Most of the above plugins “just work” when you install them (hopefully via Pathogen or one of its newer workalikes). There are a few bits in my vimrc that tweak settings.

First is rainbow. I turn it on with a few rotating colors, and only for Clojure files:

" clojure rainbow parens
let g:rainbow_active = 1
let g:rainbow_conf = {
      \  'guifgs': ['royalblue3', 'darkorange3', 'seagreen3', 'firebrick'],
      \  'ctermfgs': ['lightblue', 'lightyellow', 'lightcyan', 'lightmagenta'],
      \  'parentheses': ['start=/(/ end=/)/ fold', 'start=/\[/ end=/\]/ fold', 'start=/{/ end=/}/ fold'],
      \  'separately': {
      \      '*': 0,
      \      'clojure': {},
      \  }
      \}

Next is cljfold. I like to fold more than the default:

" configure clojure folding
let g:clojure_foldwords = "def,defn,defmacro,defmethod,defschema,defprotocol,defrecord"

The final tweak is to add a couple of mappings for fireplace. The first is so that I can quickly evaluate a top level form (usually #_(...) when developing) without having to move my cursor. The second is for pulling up the result of the last evaluation in a vim buffer, which is super useful for referencing and copy/paste, especially now that evaluation output is pretty printed by default.

" a few extra mappings for fireplace
" evaluate top level form
au BufEnter *.clj nnoremap <buffer> cpt :Eval<CR>
" show last evaluation in temp file
au BufEnter *.clj nnoremap <buffer> cpl :Last<CR>

Development workflow

These plugins have enabled me to settle into a very productive workflow, being able to leverage the power of Clojure’s dynamism while editing it all with Vim. I’ll outline the flow in a future post.


Obsessing Over Multiple Projects

In my new job, I’ve switched from each project being a unique combination of git repositories1 to all projects living in just a few repositories.

For instance, my primary codebase consists of two repositories, one for the frontend and one for the backend. As time progresses, I work on multiple (mostly) independent projects in each repo, each one on its own branch. Each project requires a different constellation of files, sometimes organized in radically different ways in my Vim tabs.

So, to cope with this, I obsess about it.

I’ve long had Tim Pope’s awesome Obsession plugin installed, but had only rarely used it. Now, each time I start a new project, I begin with a fresh Vim (well, Neovim) instance and start loading up the files I need, organizing them as I see fit with splits and tabs. I often get to the point where I’m humming along after loading 5 to 15 files. At this point, I run :Obsess Session-something.vim, where the ‘something’ is a memorable tag for what I’m working on. That saves (and continues to save) my session so that I can :qa at any point in time and my setup can be easily restored with vim -S Session-something.vim.

What this allows me to do is switch between projects in an instant. If I have one branch that is close to being merged, I can let the PR simmer while I load up the session from another branch and keep on working. Also, when it comes time to commit, I like to quit out and start a brand new Vim session, so that I can use Fugitive to browse, stage, and write my commit.

One side effect of this workflow is that I end up with several Session-*.vim files in the root of my repository. So then, how do I pick which one to load up without scanning and copy/pasting? I turn to one of my other favorites, fzf:

vi -S `find . -name 'Session*' | xargs ls -t | fzf`

This presents a list of sessions, in reverse chronological order. I can just hit enter for the most recent one, or navigate and filter to the one I want, which is usually not far away.
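For convenience, that one-liner can live in a small shell function. Here’s a hypothetical sketch (the function name is invented for illustration):

```shell
# List Session-*.vim files in a directory, newest first; pipe the
# result to fzf (as in the one-liner above) for an interactive picker.
pick_session() {
  ls -t "${1:-.}"/Session-*.vim 2>/dev/null
}

# interactive use, assuming fzf and Vim are installed:
#   vim -S "$(pick_session | fzf)"
```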

I continue to iterate on how to best manage my work environment, and I have a few ideas on how to improve it (e.g. worktrees), but for now, this is quite usable.

  1. More on this in another post.

Outputdiff: Easily Spot Differences in Command Output

When I’m in front of a computer, I spend much of my time at the command line, logged into various systems, running commands.

Now, there are two basic categories of commands:

  1. Commands that change - install a package, add a firewall rule, restart a service
  2. Commands that report - list installed packages, list firewall rules, list running services

This post is about making the second category of commands more useful. Quite often, those commands have copious amounts of output. This is usually fine, but sometimes, I just want to see how the output has changed, because that indicates a change in the state of the system.

Firewall changes with Puppet

Not very long ago, I would manage firewall rules with Puppet, and I usually wanted to see if my code change had the correct effect on the system. Here’s what I used to do:

$ iptables -L -n > before.txt
$ sudo puppet agent ...
$ iptables -L -n > after.txt
$ diff before.txt after.txt

This turned out to be a very effective way of determining if my intended changes had been applied, because the diff only shows changes and I didn’t have to hunt through dozens of lines of output. It also helped me see if my puppet run inadvertently removed a firewall rule that I didn’t want to remove.

The problem was that I would end up with before.txt and after.txt files lying around. I also invariably would need to do another puppet run, so I’d end up with after2.txt and so on.

Itch scratched

So, I wrote outputdiff. Outputdiff takes the output of your command and uses git to maintain a history of changes. With outputdiff, the above turns into this:

$ iptables -L -n | outputdiff --new
$ sudo puppet agent ...
$ iptables -L -n | outputdiff --compare

Upon running that third command, either a diff or a message indicating there is no change is shown. No temporary files are created, and if I make another change, I simply re-run the compare command to see what changed.
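The git-backed mechanism is simple enough to sketch in a few lines of shell. This hypothetical snapdiff function is not the real outputdiff (which has more features, like the undo and vimdiff modes below), but it shows the core idea:

```shell
# Hypothetical sketch of the mechanism: each snapshot is committed to a
# scratch git repository, and git produces the diff between runs.
snapdiff() {
  repo="${SNAPDIFF_DIR:-$HOME/.cache/snapdiff}"
  [ -d "$repo/.git" ] || git init -q "$repo"
  cat > "$repo/snapshot.txt"                    # new output from stdin
  git -C "$repo" add snapshot.txt
  if git -C "$repo" diff --cached --quiet; then
    echo "no change"
  else
    git -C "$repo" diff --cached -- snapshot.txt
    git -C "$repo" -c user.name=snapdiff -c user.email=snapdiff@localhost \
      commit -qm "snapshot $(date)"
  fi
}

# usage: iptables -L -n | snapdiff
```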

I also created several aliases to make using outputdiff easier (and because I’m lazy):

alias odn="outputdiff --new"
alias odc="outputdiff --compare"
alias odv="outputdiff --compare --no-diff && outputdiff --last --vimdiff"
alias odu="outputdiff --undo"

Now, all I need to do is append | odn or | odc to a command to start or update a comparison.

Other use cases

Once I had outputdiff in my tool belt, I started finding uses for it everywhere:

  • Check two different generated CloudFormation JSON documents to make sure my generating code is properly triggered
  • Check differences in contents of two zip files to ensure a new file is present
  • Check for updates in the output of a large aws CLI call to see if a database snapshot is complete

I use outputdiff all the time, and I find it incredibly useful.


Envbox: Keeping Secret Environment Variables Secure

In my day to day work and evening and weekend side work, I do almost all of my development on remote systems. This has a number of advantages that are for another post, but this post is about one of the limitations.

Most developers have a tool belt that they’re continually improving, and as I work on mine I come across commands - like hub - that require1 putting a secret value into an environment variable, usually for authentication.

For instance, to use hub, I need to do something like this:

$ export GITHUB_TOKEN=ba92810bab08av0ab0157028bb
$ alias git=hub
$ git create username/repo
$ git pull-request -o

If I were only running git/hub commands on my local desktop, I could put the environment variable export into my shell and be done with it. But on any remote system, I only have these options:

  1. Run export GITHUB_TOKEN=.. in my shell before any command that requires it. This isn’t good because the token is now in my history, and any command that I run has access to the value.
  2. Run each command that needs the token like this: GITHUB_TOKEN=... git create .... This solves the access issue, but it still pollutes my history. It’s also cumbersome to deal with when running many commands.
  3. Add the export to my dotfiles. This solves the history problem (and the “remembering to enter the variable” problem), but then my token is available to anyone that I share my dotfiles with.
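Option 2 can be made a bit less painful by keeping the token in a permission-restricted file and injecting it per command. A quick sketch (these helper names are invented for illustration):

```shell
# Store a secret in a mode-600 file and expose it only to the one
# command that needs it. This keeps it out of history and dotfiles,
# but still leaves it in plain text on disk.
store_secret() {
  umask 077
  printf '%s' "$2" > "$1"
  chmod 600 "$1"
}

with_secret() {
  name="$1"; file="$2"; shift 2
  env "$name=$(cat "$file")" "$@"
}

# usage, assuming hub is installed:
#   store_secret ~/.github_token "$token"
#   with_secret GITHUB_TOKEN ~/.github_token hub create username/repo
```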

I wanted something that I could use to securely manage these kinds of environment variables while making it convenient to expose them to specific commands. So I wrote envbox.

Envbox is written in Go, primarily because the language is quite suitable for these sorts of tools, but also because there’s a NaCl secretbox implementation in the Go “Sub Repositories”, which I thought was a good fit for this problem.


After installation (instructions in the README), the first step is to set up envbox by generating a key:

$ envbox key generate -s

This key is used to encrypt each of the environment variables. Next up is to add a new environment variable:

$ envbox add -n GITHUB_TOKEN
value: aeijfalsjiegliasjefliajsefljaef
$ envbox list

Then, when running a command that needs the variable:

$ envbox run -e GITHUB_TOKEN -- bash -c 'echo $GITHUB_TOKEN'

Or, more apropos for the above example:

alias git='envbox run -e GITHUB_TOKEN -- hub'


Envbox stores each variable in its own file on disk:

$ hexdump -C ~/.local/share/envbox/7ebac232c337c78af91cc4341d650a90a9044d0b259059e8.envenc
00000000  79 80 8b 0d e2 9c c1 85  0c 36 1c bb 6c 94 f6 3c  |y........6..l..<|
00000010  25 55 fb c1 00 3a 6c 3e  e4 b7 ad c3 bc cf a5 75  |%U...:l>.......u|
00000020  76 57 cb 23 c2 91 13 20  79 df 9d d8 72 89 05 26  |vW.#... y...r..&|
00000030  90 d5 f1 9e 05 26 51 fb  f5 fd 3d d9 65 fa 3d b9  |.....&Q...=.e.=.|
00000040  79 ee 35 7e 6a 83 8e fd  32 56 9e f1 f7 1d ef 23  |y.5~j...2V.....#|
00000050  05 03 a2 3c cc f0 6b 8d  cc 08 31 8c f2 d2 c1 a1  |...<..k...1.....|
00000060  72 33 6e 48 59 87 b5 8b  82 b3 1a b3 e3 d7 98 8c  |r3nHY...........|
00000070  d8 a3 c0 04 f0 f5 c1 53  06 84 14 b7 ee 45 c0 de  |.......S.....E..|
00000080  82 a2                                             |..|

Currently, the key is stored in a permission-restricted file in your home directory so that envbox can decrypt the files, but the plan is to move to a credential cache system like the one git uses, so that the key is only held in memory for a configurable time. That would strike a better tradeoff between security and convenience.


There are a few other things that envbox can do, such as accepting multi-line variables and differentiating the envbox name from the variable name, so that several of the same variable (e.g. two different GITHUB_TOKENs) can be tracked.

I’ve found it to be incredibly useful, allowing me to version and distribute my secret variables while keeping them secure.


  1. hub doesn’t actually require the environment variable, but logging in for every push and pull seems a bit inefficient.

My Note-taking Workflow

A few people have asked about my note-taking workflow. It’s been quite useful to me, so I thought I would describe what works for me.

I’ve tried several of the popular note-taking tools out there and found them overbearing or over-engineered. I just wanted something simple, without lock-in or a crazy data format.

So my notes are just a tree of files. Yup, just directories and files. It isn’t novel or revolutionary. It doesn’t involve a fancy application or Web 2.0 software. It also works surprisingly well.


I’d taken notes in plain-text files for a while, but what really made my notes more useful was that I switched to Markdown a few years ago. Markdown is one of the best text formatting languages out there1, and many sites use it as their markup language.

So, any time I take notes, I write in Markdown. It took a little while to get used to the syntax, but thankfully the basics are straightforward and sensible. It also looks great without any processing. I can share it with others without reformatting. Or, if I need a fancier presentation, I can use pandoc to transform it into almost any other format imaginable.


There are a plethora2 of tools that understand a tree of files. I can use find, ack, vim, and any other command line tools to manage my personal knowledge base. Not only does this make my notes more accessible, but it also means that I develop greater competency in the tools I also use for everyday development.
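For instance, here are a couple of everyday searches over a notes tree, wrapped as shell functions (the function names and the ~/notes path are just examples):

```shell
# The whole knowledge base is just files, so ordinary tools apply.
notes_mentioning() {          # files under $2 that mention pattern $1
  grep -rl -- "$1" "$2"
}

recent_notes() {              # Markdown files edited in the last week
  find "$1" -name '*.md' -mtime -7
}

# e.g. notes_mentioning datomic ~/notes
#      recent_notes ~/notes
```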

I originally used Notational Velocity (and then nvALT) for note taking. I really liked the quick searchability that it provides. After a buddy suggested that Vim would be able to do the same, I switched over immediately. For filename searching, I use ctrlp.vim (custom config) and for content searching I use ack.vim.

As far as rendering to other formats, I use the most excellent pandoc. In my vimrc, I have a mapping for converting the current file to html with pandoc:

nmap <leader>vv :!pandoc -t html -T 'Pandoc Generated - "%"' --smart --standalone --self-contained --data-dir %:p:h -c ~/.dotfiles/css/pandoc.css "%" \|bcat<cr><cr>

It generates a self-contained html page (with images embedded as data urls) and then opens the output in a web browser (thanks to this bcat script).


Universal access is incredibly important for note taking. Without it, your distilled knowledge is locked inside your computer.

To make my notes available wherever I go, I keep them in Dropbox3. Dropbox does a decent job of synchronizing, but its best feature is its integration into so many iOS apps. Almost every app that supports remote file access integrates with Dropbox.

I’d love to use BitTorrent Sync, but its developer API was only recently released and it’s going to take time for apps to support it.

Mobile Application

For mobile access I used to use Notesy. I appreciated its simple interface and quick rendering preview. It recently gained a few keyboard helpers for frequently used markdown characters.

However, once Editorial was released as a universal application, I switched over immediately. Not only is its main editing interface more pleasant to use, with better helpers and inline markdown rendering previews, it also sports the ability to add snippets via abbreviations and a phenomenally powerful workflow system that can orchestrate inter-app automation.

Use cases

There are many situations where my system is useful. Here are a few.

Notes instead of bookmarks

I used to save bookmarks on Delicious as I found interesting URLs online. I found, however, that over time I never went back and looked at those bookmarks because they weren’t coherently organized. There’s something about tags that just doesn’t help when it comes to searching for information.

Now, instead of saving bookmarks, I create notes based on particular topics and add links I find to those files. The fact that it’s a regular text file means that I can not only use Markdown sections to organize links into headings, but I can also include sample code blocks or images from the local directory.

Talk notes

Since I take notes in Markdown and my blog is written in Markdown, it’s extremely easy to publish my notes on talks. I just copy them over and add the right Octopress YAML header.

Conference notes

When I’m at a conference, I can choose to take notes on my phone or my laptop depending on the type of content. One time I was taking notes on my laptop during a late-in-the-day session and noticed that my battery was getting low. I didn’t need to have the laptop out for any other reason, so I closed it up, opened my phone, and continued taking notes where I’d left off.

Sermon notes

I take notes every Sunday and keep them in a sub-folder. It’s easy to keep types of notes separate by just using regular folders.

Blog post editing

This one is a little meta, for sure. I’ve edited this blog post over the course of a few weeks, sometimes on a computer and sometimes on either my iPad or iPhone. I keep a clone of this blog’s source in Dropbox as well, so I can do most of my editing wherever I happen to be. After that, a few quick commands over ssh and this post will be live.


That pretty much covers my note-taking system. If you’d like to adopt a similar system, let me know how it goes and any cool tools that you discover.


  1. I tried RST too, but I found it to be too prickly for my note taking needs. However, it’s awesome for software documentation.

  2. What is a plethora?

  3. Oh oh, guess I do use a Web 2.0 tool.

My Tmux Configuration, Refined

When I wrote about tmux for the first time, I was just getting into the idea of nesting sessions. I ran a local tmux session that wrapped remote tmux sessions for more than a year before I switched it up again.

I added another level.


I originally started nesting tmux sessions so that I wouldn’t have to use tabs in Terminal to keep track of different remote tmux sessions. This allowed me to connect to my work machine from home and get my entire working session instantly. While that worked well, I began to see a few issues with that approach:

  1. At work, I ran my top level tmux session on my work laptop. The downside of this is that I had to leave my laptop open and running all the time to be able to access it remotely. This also necessitated some tricky SSH tunnels that I wasn’t entirely comfortable leaving open.
  2. The top level tmux session at home was on my home server, and so it was convenient to connect to from work, but if I connected to that session from my top level work session, the key bindings would end up conflicting.


I solved the first issue by running my top level work session on a server at work. This allowed me to close my laptop when I wasn’t in the office and it afforded me a location to run things that weren’t specific to a particular system but that I didn’t want to live and die with my laptop.

I solved the second issue by adding a new level of tmux. I called this new level uber and assigned it the prefix C-q to differentiate it from the other levels1.

With that in place, I would start the uber session on my laptop and then connect to both my home and work mid-level sessions, and via those, the leaf tmux sessions. Then, I could choose what level I wanted to operate on just by changing the prefix that I used.

Multiple sockets

Another thing that I wanted to do from time to time was run two independent tmux sessions on my local laptop. I could have used the built-in multi-session support in tmux, but I also wanted the ability to nest sessions locally, and tmux doesn’t support that natively. In looking for a solution, I stumbled on the idea of running each level on its own server socket. With that in place, I can run all three levels on the same system, and running two independent tmux sessions is as easy as running two different levels in separate windows. Plus, I can still use the native multi-session support within each level.

Sharing sessions

The most recent modification I made was to add easy support for sharing a tmux session between two Terminal windows. This allows me to treat my local Terminal windows as viewports into my tmux session tree, attaching wherever I need without necessarily detaching another Terminal window.

To enable this, I added an optional command line flag to the session start scripts that makes tmux start a new view of the session instead of detaching other clients. I also enabled ‘aggressive-resize’ so that the size of a tmux session isn’t limited to the smallest Terminal window unless more than one is looking at the exact same tmux window.

How it all looks

(screenshot: my nested tmux sessions)

It can look a little overwhelming, but in reality it’s quite simple to use. Most of my time is spent in the leaf node sessions, and that interaction is basically vanilla tmux.

Installing this for yourself


The configuration for my setup is available in my dotfiles repository on GitHub:

  1. .tmux.shared - contains shared configuration and bindings that are common to all levels
  2. .tmux.uber - configuration unique to the top-level session
  3. .tmux.master - configuration unique to mid-level tmux sessions
  4. .tmux.conf - configuration unique to the lowest-level (leaf) sessions

Wrapper scripts

The heart of the wrapper scripts is tmux-sess. It holds all the logic for setting the socket and sharing sessions.

The rest of the scripts are thin wrappers around tmux-sess. For instance, here is tmux-uber:


#!/bin/sh
exec tmux-sess -s uber -f ~/.tmux.uber "$@"

The other level scripts are tmux-home for the mid-level session and tmux-main for the lowest-level.
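In outline, a script like tmux-sess boils down to something like the following sketch. This is not the actual script (the flag names and defaults here are illustrative); it just shows the shape of the logic, with the command echoed rather than executed:

```shell
# Simplified sketch of the idea behind tmux-sess: pick a session name,
# a config file, and whether to share. Each session name gets its own
# server socket via -L, which is what makes nesting possible.
tmux_sess() {
  OPTIND=1
  sess="main"; conf="$HOME/.tmux.conf"; detach="-D"
  while getopts 's:f:x' opt; do
    case "$opt" in
      s) sess="$OPTARG" ;;
      f) conf="$OPTARG" ;;
      x) detach="" ;;        # share: leave other clients attached
    esac
  done
  # -A attaches if the session already exists; -D (when present)
  # detaches other clients, mirroring the optional sharing flag.
  # Echoed here for illustration; drop the echo to actually run it.
  echo tmux -L "$sess" -f "$conf" new-session -A $detach -s "$sess"
}

tmux_sess -s uber -f ~/.tmux.uber   # what tmux-uber would run
```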

Wrapping up

I hope that this information is helpful. If you have any questions, please ask me on twitter.


  1. I also quickly decided that this uber level didn’t need to have its own status line. That would be crazy.

Talk Notes: February 2014

I was out of town two of the Fridays this month, so I was only able to get two talks in:

  • Clojure core.async - Continuing in my fascination with Clojure, I picked this talk to explore the non-Java techniques for handling concurrency. I’m familiar with CSP from my Go experience, and it was interesting to hear Clojure’s take on the same foundation. Clojure also implements a macro that turns the spaghetti code that is callbacks into a sequential function that still operates asynchronously.
  • Inventing on Principle - Several people have recommended this talk to me, and I finally got around to watching it. It’s worth the watch just for the amazing demos that he built, but the deeper notion that there could be an underlying principle that guides your life is thought provoking. It also makes me want to play with Light Table.



Setting Up Vim for Clojure

I’ve been experimenting with Clojure lately. A few of my coworkers had begun the discovery process as well, so I suggested that we have a weekly show-and-tell, because a little accountability and audience can turn wishes into action.

Naturally, I looked around for plug-ins that would be of use in my editor of choice. Here’s what I have installed:

These are all straightforward to install, as long as you already have a Pathogen or Vundle setup going. If you don’t, you really should, because nobody likes a messy Vim install.

All of these plug-ins automatically work when a Clojure file is opened, with the exception of rainbow parentheses. To enable those, a little .vimrc config is necessary:

au BufEnter *.clj RainbowParenthesesActivate
au Syntax clojure RainbowParenthesesLoadRound
au Syntax clojure RainbowParenthesesLoadSquare
au Syntax clojure RainbowParenthesesLoadBraces

Now, once that’s all set up, it’s time to show a little bit of what this setup can do. I have a little Clojure test app over here on GitHub. After cloning it (and assuming you’ve already installed leiningen):

  1. Open up dev.clj and follow the instructions to set up the application in a running repl.
  2. Then open testclj/core.clj and make any modification, such as changing “Hello” to “Hi”.
  3. Then after a quick cpr to reload the namespace in the repl, you can reload your web browser to see the updated code.

This setup makes for a quick dev/test cycle, which is quite useful for experimentation. Of course, there are many more features of each of the above plugins. I’ve barely scratched the surface and I’m already very impressed.


Introducing Talk Notes

In the course of my work and my online reading and research, I often come across videos of talks that I want to watch. I rarely take the time to watch those videos, mostly because of the time commitment; I usually only have a few minutes to spare.

Lately, I’ve done something to change that. I’m taking a little bit of time out of my Friday schedule each week to watch a talk that looks interesting. I also try and focus on the talk. Rather than checking my email or chatting while the talk is playing, I take notes, sometimes including screenshots of important slides.

Over the course of the past month, I’ve had some success with this strategy, and I was able to watch three talks. Here are links to my notes:

  • Using Datomic With Riak - I picked this talk because we’ve used a bit of Riak at work and a buddy of mine keeps raving about Datomic. This talk is actually a great overview of the philosophy and design behind Datomic.
  • Raft - the Understandable Distributed Protocol - CoreOS’s etcd has been getting some mention lately, and Raft is the consensus algorithm used to keep all of its data consistent. At the end of watching this talk, I found another one (by one of the Raft authors), and it balanced the practicality of the first with some more of the theory.
  • React - Rethinking Best Practices - The functional programming paradigm is gathering steam, and Facebook’s React JavaScript library is a fascinating take on building modern web UIs in a functional manner.

I really enjoyed the process of taking notes in this way, and I hope to continue this as the year progresses.

Oh, and if you know of a good talk, please let me know on twitter.