endot

eschew obfuscation (and espouse elucidation)

A Script to Ease SCP Use

Since I work on remote systems all the time, I use SCP repeatedly to transfer files around. One of the more cumbersome tasks is specifying the remote file or directory location.

So I wrote a helper script to make it easier. It’s called scptarget, and it generates targets for SCP, either the source or the destination.

For instance, if I want to copy a file down from a remote server, I run scptarget like this and copy the output:

$ scptarget file.pl
endot.org:/home/nate/file.pl

Then it’s easy to paste it into my SCP command on my local system:

$ scp endot.org:/home/nate/file.pl .
...

I usually use remotecopy (specifically remotecopy -c) to copy it so that I don’t even have to touch my mouse.

Examples

Here are a few example uses.

First, without any arguments, it targets the current working directory. This is useful when I want to upload something from my local system to where I’m remotely editing files.

$ scptarget
endot.org:/home/nate

Specifying a file targets the file directly.

$ scptarget path/to/file.pl
endot.org:/home/nate/path/to/file.pl

Absolute paths are handled correctly:

$ scptarget /usr/local/bin/file
endot.org:/usr/local/bin/file

Vim SCP targets

Vim supports editing files over SCP, so passing -v generates a target that Vim can use:

$ scptarget -v file.pl
scp://endot.org//home/nate/file.pl

And to edit, just pass that in to Vim:

$ vim scp://endot.org//home/nate/file.pl

IP based targets

Sometimes I need the target to use the IP of the server instead of its hostname. This usually happens with development VMs (à la Vagrant), which are only addressable via IP. Passing -i to scptarget causes it to behave this way. Under the hood, it uses getip, a script I wrote that prints the first non-private IP of the current host; if there is none, it falls back to the first private IP. (I am fully aware that there may be better ways of doing the above. Let me know if you have a better script.)

$ scptarget -i path/to/file.pl
64.13.192.60:/home/nate/path/to/file.pl
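Putting the pieces together, here is a minimal sketch of what scptarget might look like. This is an assumption, not the actual script; getip is the helper mentioned above for the -i case.

```shell
# A sketch of scptarget, not the author's actual script.
# Assumes `getip` exists and prints this host's IP.
scptarget() {
    local OPTIND=1 opt vim_style= host path
    host=$(hostname -f)

    while getopts "vi" opt; do
        case $opt in
            v) vim_style=1 ;;     # emit a Vim-style scp:// target
            i) host=$(getip) ;;   # use the host's IP instead of its name
        esac
    done
    shift $((OPTIND - 1))

    # Resolve the optional path argument to an absolute path.
    if [ $# -eq 0 ]; then
        path=$PWD
    elif [ "${1#/}" != "$1" ]; then
        path=$1                   # already absolute
    else
        path=$PWD/$1
    fi

    if [ -n "$vim_style" ]; then
        echo "scp://$host/$path"
    else
        echo "$host:$path"
    fi
}
```

The core of it is just building an absolute path from the optional argument and prefixing it with either host: or scp://host/, which is all the examples above rely on.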

That’s it. I find it incredibly useful and I hope you do too.

Enjoy.

Seeing the Shuttle

Launch

A little over thirteen years ago, I embarked on a cross-country trip with one of my college buddies. I’ll elaborate more on the trip in another post, but the pertinent part of that story is that we happened to be in Florida in late May, 2000.

We’d originally planned to see certain sights along the way, but by the time we reached the east coast we had grown quite good at adding extra stops to the itinerary. When we stopped in Orlando, we quickly added a trip to the Kennedy Space Center, as we are both great fans of NASA. While we were there, we learned that in a few days a shuttle (Atlantis) was going to launch, so we quickly rearranged the next leg of our trip so that we could be back in the area and then purchased tickets.

Since it was an early AM launch window, they let us into the main building of the space center just before three in the morning. Most of the exhibits were open and since the only people there were the ones going to see the launch, there were no crowds. We’d spent most of our previous visit in the other buildings on site, so it was quite a treat to wander around uninhibited. One of the theaters that usually shows documentary style films was showing live video of the close out crew getting the astronauts into the shuttle while a staff person up in front answered questions from the dozen or so people in the audience. I remember sitting in that room for some time, intently watching the video and enjoying every minute.

When the time came for us to head out to the launch site, we loaded into shuttles that took us out to where NASA Parkway East crosses the Banana River. The causeway over the river is the closest the public can get to a shuttle launch at just over six miles away. We waited out there for about two hours before the final nine minute countdown began, and when the clock struck zero it lifted off, almost effortlessly. From our vantage point it was silent until a few seconds later when the shock wave rolled across the water and hit us. It was an experience like none other.

Retirement

Shortly before the shuttle program ended a couple years ago, NASA announced which museums around the country would receive a retired orbiter and we were lucky enough to get the Endeavour for the California Science Center.

Over the holiday break, I was able to visit it with my family. It’s on display in a purpose-built hangar while they work on a permanent home. It was great to see it up close, but the hangar and the pre-exhibit room were packed with holiday crowds.

Then, this past week, I was able to return for a second visit with another college friend and his family. This time, there were only a few schoolchildren to maneuver around while looking up at the orbiter. While my friend and his family wandered around, I was able to just sit and study the vehicle itself.

When I saw it thirteen years ago, it was a speck on the horizon. This time it was so big that I couldn’t take it all in at once. I noticed where the black heat tiles begin and the other locations (beside the underbelly) where they’ve been placed. I could appreciate the enormity of the engine nozzles at the back and the texture of the thermal blankets that cover most of the top half. I counted the maneuvering thrusters on the nose and tail and could see the backwards flag on the right side. Again, it was an experience like none other.

There’s a lot to learn about the shuttle program and about Endeavour in particular. For instance, I learned that the reason for Endeavour’s British spelling is that it was named for the HMS Endeavour, the ship that Captain Cook explored Australia and New Zealand with. Also, I learned that Endeavour was built as the replacement for Challenger, and 22 years after the Challenger disaster it was Endeavour who took the first teacher into space.

If you’re in the LA area and are a fan of space flight, then don’t miss seeing the Endeavour. I’ll definitely be going back.


Managing Backups With Git-annex

My Situation

I have backups. Many backups. Too many backups.

I use time machine to back up my macs, but that only covers the systems that I currently run. I have archives of older systems, some for nostalgic reasons, some for reference. I also have a decent set of digital artifacts (pictures, videos and documents) that I’d rather not lose.

So I keep backups.

Unfortunately, I’m not very organized. When I encounter data that I want to keep, I usually rsync it onto one or another external drive or server. However, since the data is not organized, I can’t tell how much of it can simply be deleted instead of backed up again. The actual amount of data that should be backed up is probably less than half of the amount of data that exists on the various internal and external drives both at home and at work. This also means that most of my hard drives are at 90% capacity and I don’t know what I can safely delete.

I really needed a way of organizing the data and getting it somewhere that I can trust.

git-annex

I initially heard of git-annex a while ago, when I was perusing the git wiki. It seemed like an interesting extension, but I didn’t take another look at it until its creator started a Kickstarter project to extend it into a Dropbox replacement.

git-annex is great. It’s an extension to git that allows managing files with git without actually checking them in. git-annex does this by replacing each file with a symlink that points to the real content in the .git/annex directory (named after a checksum of the file’s contents). Only the symlink gets checked into git.

To illustrate, here’s how to get from nothing to tracking a file with git-annex:

$ mkdir repo && cd repo
$ git init && git commit -m initial --allow-empty
Initialized empty Git repository in /Users/nate/repo/.git/
[master (root-commit) c8562e6] initial
$ git annex init main
init main ok
(Recording state in git...)
$ mv ~/big.tar.gz .
$ ls -lh
-rw-r--r--  1 nate  staff    10M Dec 23 15:31 big.tar.gz
$ git annex add big.tar.gz
add big.tar.gz (checksum...) ok
(Recording state in git...)
$ ls -lh
lrwxr-xr-x  1 nate  staff   206B Dec 23 15:32 big.tar.gz -> .git/annex/objects/PP/wZ/SHA256E-s10485760--7c8fdf649d2b488cc6c545561ba7b9f00c52741a5db3b0130a8c9de8f66ff44f.tar.gz/SHA256E-s10485760--7c8fdf649d2b488cc6c545561ba7b9f00c52741a5db3b0130a8c9de8f66ff44f.tar.gz
$ git commit -m 'adding big tarball'
...

When the repository is cloned, only the symlink exists. To get the file contents, run git annex get:

$ cd .. && git clone repo other && cd other
Cloning into 'other'...
done.
$ git annex init other
init other ok
(Recording state in git...)
$ file -L big.tar.gz
big.tar.gz: broken symbolic link to .git/annex/objects/PP/wZ/SHA256E-s10485760--7c8fdf649d2b488cc6c545561ba7b9f00c52741a5db3b0130a8c9de8f66ff44f.tar.gz/SHA256E-s10485760--7c8fdf649d2b488cc6c545561ba7b9f00c52741a5db3b0130a8c9de8f66ff44f.tar.gz
$ git annex get big.tar.gz
get big.tar.gz (merging origin/git-annex into git-annex...)
(Recording state in git...)
(from origin...) ok
(Recording state in git...)
$ file -L big.tar.gz
big.tar.gz: data

By using git-annex, not every clone has to have the data for every file. git-annex keeps track of which repositories contain each file (in a separate git branch that it maintains) and provides commands to move file data around. Every time file content is moved, git-annex updates the location information. This information can be queried to figure out where a file’s content is and to limit the data manipulation commands.

There is (much) more info in the walkthrough on the git-annex site.

My Setup

What I have is a set of git repositories that are linked like this:

git annex map

[git-annex has a subcommand to generate a map, but it requires that all hosts are reachable from where it’s run, and that’s not possible for me. I quickly gave up when trying to make my own Graphviz chart and ended up using Lekh Diagram on my iPad (thanks Josh).]

My main repository is on a machine at home (which started life as a mini thumper and is now an Ubuntu box), and there are clones of that repository on various remote machines. To add a new one, all I need to do is clone an existing repository and run git annex init <name> in that repository to register it in the system.

This has allowed me to start organizing my backup files in a simple directory structure. Here is a sampling of the directories in my repository:

  • VMs - VM images that I don’t want to (or can’t) recreate.
  • funny - Humorous files that I want to keep a copy of (as opposed to trusting the Internet).
  • media - Personal media archives, currently mostly tarballs of pictures going back ten years.
  • projects - Archives of inactive projects.
  • software - Downloaded software for which I’ve purchased licenses.
  • systems - Archives of files from systems I no longer access.

There are other directories, and these directories may change over time as I add more data. I can move the symlinks around, even without having the actual data on my system, and when I commit, git-annex will update its tracking information accordingly. Every time I add data or move things around, all I need to do is run git annex sync to synchronize the tracking data.

Here is the simple workflow that I go through when changing data in any git-annex managed repository:

$ git annex sync
$ # git annex add ...
$ # git annex get ...
$ # git annex drop ...
$ git annex sync

With this in place, it’s easy to know where to put new data, since everything is just directories in a git repo. I can access files from anywhere because my home backup server is available as an ssh remote. More importantly, I can grab just what I want from there, because git-annex can fetch the content of a single file.

One caveat to this system is that using git and git-annex means that certain file attributes, like permissions and create/modify/access times, are not preserved. To work around this, for files that I want to preserve completely, I tarball them up and add the tarball to the annex.
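For example, a sketch of that workaround, using stand-in names (the sample directory here takes the place of a real archive):

```shell
# Sketch of the tarball workaround: create sample data with specific
# permissions, then archive it with -p so the attributes are preserved.
mkdir -p demo/projects-2012
echo 'notes' > demo/projects-2012/notes.txt
chmod 600 demo/projects-2012/notes.txt

tar -czpf projects-2012.tar.gz -C demo projects-2012

# In the real annex repository, the tarball would then be added:
#   git annex add projects-2012.tar.gz
#   git commit -m 'archive projects-2012 with attributes intact'
```

Extracting with `tar -xzpf` later restores the permissions and timestamps that a plain annexed file would have lost.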

Installing git-annex

git-annex is written in Haskell. Installing the latest version on OS X is not the most repeatable process, and the version that comes with most linux distributions is woefully out of date. So I’ve opted for using the prebuilt OS X app (called beta) or linux tarball.

After copying the OS X app into Applications or unpacking the linux tarball, I run the included runshell script to get access to git-annex:

$ /home/nate/git-annex.linux/runshell bash                      # on linux
$ /Applications/git-annex.app/Contents/MacOS/runshell bash      # on OS X
$ git annex version
git-annex version: 3.20121211

I’ll share more scripts and tips in future blog posts.

Enjoy.

Dfm Graduates to Its Own Repository and Learns How to Import Files

I recently split dfm out into its own git repository. This should make it easier to add new features and grow the test suite without cluttering up the original dotfiles repository. I’ll sync dfm over at regular intervals, so anyone who wants to keep up to date by merging with master will be ok.

I also just finished up a major new feature: dfm can now import files. So instead of:

$ cp .vimrc .dotfiles
$ dfm install
$ dfm add .vimrc
$ dfm ci -m 'adding .vimrc'

There is an import subcommand that accomplishes all of this:

$ dfm import .vimrc
INFO: Importing .vimrc from /home/user into /home/user/.dotfiles
INFO:   Symlinking .vimrc (.dotfiles/.vimrc).
INFO: Committing with message 'importing .vimrc'
[personal 8dbf30d] importing .vimrc
 1 file changed, 46 insertions(+)
 create mode 100644 .vimrc

There’s a smattering of other new features as well, like having dfm execute a script or fix up permissions on install. These are listed in the changelog for v0.6 and documented in the wiki.

To update to the latest, just run these commands:

$ dfm remote add upstream git://github.com/justone/dotfiles.git
$ dfm pull upstream master

Or, grab dfm from its repository.

Enjoy.

Extending Svn, à La Git

Subversion is a useful tool. It does most of what I need it to do, but sometimes there are missing features. Sometimes, it’s something that git does natively. Other times, it’s a repeated command sequence. It’s easy to write small scripts to do these new things, but they never feel like they fit in with the rest of the commands.

I’ve always been fond of the way that git can be extended by simply creating a script with the right name; git-foo [args] becomes git foo [args]. I wanted that same level of extensibility with subversion, so I decided to write a little wrapper called svn. It sits in my PATH ahead of /usr/bin and detects whether the given subcommand exists as svn-$subcommand somewhere in my path. If it’s found, it is executed; otherwise the real svn binary is executed.

I originally wrote svn in perl, but the other day a friend of mine 1 rewrote it in shell, cutting its length by more than half and making it easier to understand. Here it is:

#!/usr/bin/env bash

## If there is a svn-${COMMAND}, try that.
## Otherwise, assume it is a svn builtin and use the real svn.

COMMAND=$1
shift

SUB_COMMAND=$(type -path svn-${COMMAND})
if [ -n "$SUB_COMMAND" -a -x "$SUB_COMMAND" -a "${COMMAND}" != "upgrade" ]; then
    exec $SUB_COMMAND "$@"
else
    command -p svn $COMMAND "$@"
fi

Once I had the wrapper, I started creating little extensions to subversion. Here are the ones I’ve created.

svn url

This prints out the URL of the current checkout.

I frequently need to have the same checkout on multiple machines, so grabbing the URL quickly is essential. All this script does is pull the URL out of the svn info output, but it makes the following possible:

$ svn url | remotecopy

Which means no mouse is needed.
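A guess at the implementation, sketched here as a function for illustration (the real svn-url is a standalone script): the checkout URL appears on the `URL:` line of svn info’s output.

```shell
# Sketch of svn-url: print the URL of the current checkout by
# extracting the `URL:` line from `svn info`.
svn_url() {
    svn info "$@" | awk '/^URL: / { print $2 }'
}
```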

svn vd

This shows the uncommitted differences with vimdiff.

Since subversion doesn’t have native support for using external diff tools, this script uses vimdiff.pl to add that in.

I used to have my subversion configuration set so that vimdiff was always used, but decided to add this script so that I could choose at the prompt which one I wanted (svn di for native, svn vd for vimdiff).

svn clean

This is the analog to git-clean. It removes any untracked or ignored files.

This is indispensable for projects that generate a lot of build artifacts, or for times when there are several untracked items to delete. Running it without additional options will show which files would be removed, and adding the -f flag will do the deleting.
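Here’s a sketch of how such a script might work (an assumption about the implementation, not the actual svn-clean): untracked files show up with a `?` and ignored files with an `I` in svn status --no-ignore output.

```shell
# Sketch of svn clean: report untracked/ignored paths, delete with -f.
# (Paths containing spaces are not handled by this simple version.)
svn_clean() {
    local delete=
    if [ "$1" = "-f" ]; then
        delete=1
    fi
    svn status --no-ignore | awk '/^[?I]/ { print $2 }' | while read -r path; do
        if [ -n "$delete" ]; then
            rm -rf "$path"
        else
            echo "would remove $path"
        fi
    done
}
```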

svn fm (fm = ‘fix merge’)

This makes it easy to fix merge conflicts by loading up the right files in vimdiff.

When a conflict exists during a merge, subversion dumps several files in the local directory to help you figure out how the conflict occurred.

nate@laptop:~/test1
$ svn st
 M      .
?       file.merge-left.r23262
?       file.merge-right.r23265
?       file.working
C       file

I can never remember which file is which, so running svn fm conflictedfile runs vimdiff like this:

On the left is the file before the merge and on the right is the new file being merged. The middle has the merged file with conflict markers.

If all the conflicts are resolved, the conflict is marked as resolved.
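A guess at the core of svn fm, sketched as a function (the real script may differ): open the three files in vimdiff in the order described above, then mark the conflict resolved if no markers remain.

```shell
# Sketch of svn fm: left = pre-merge file, middle = working file with
# conflict markers, right = incoming file.
svn_fm() {
    local file=$1
    vim -d "$file".merge-left.* "$file" "$file".merge-right.*
    # If no conflict markers remain afterwards, mark it resolved.
    if ! grep -q '^<<<<<<<' "$file"; then
        svn resolved "$file"
    fi
}
```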

All done

That’s it for now. Enjoy.

Update 2012-09-17: Updated wording about svn clean behavior. Default changed from deleting to showing what would be deleted and the option -n changed to -f.

  1. The above links to his github, but I like the picture on his homepage better.

Host-specific Bash Configuration

Well, I was going to write about a nifty bit of bash to help with ssh-agent in tmux, but someone beat me to it, so I’ll just write up his idea instead.

Every once in a while, it’s nice to have a bit of bash initialization that only runs on one system. You could just throw that at the end of the .bashrc for that system, but that’s not very persistent. It would be better to have, in the spirit of dotjs, a directory where you drop files with the same name as the host and they get run.

So, here’s a bit of bash initialization that does that and a bit more.

HN=$( hostname -f )
HOST_DIR=$HOME/.bashrc.d/host.d

# split hostname
HN_PARTS=($(echo $HN | tr "." "\n"))

TEST_DOMAIN_NAME=
for (( c = ${#HN_PARTS[@]} - 1; c >= 0; c-- )); do
    if [[ -z $TEST_DOMAIN_NAME ]]; then
        TEST_DOMAIN_NAME="${HN_PARTS[$c]}"
    else
        TEST_DOMAIN_NAME="${HN_PARTS[$c]}.$TEST_DOMAIN_NAME"
    fi

    if [[ -f $HOST_DIR/$TEST_DOMAIN_NAME ]]; then
        source $HOST_DIR/$TEST_DOMAIN_NAME
    elif [[ -d $HOST_DIR/$TEST_DOMAIN_NAME ]]; then
        for file in $HOST_DIR/$TEST_DOMAIN_NAME/*; do
            source $file
        done
    fi
done

One additional bit is that it uses successively longer segments of the hostname, so for the hostname foo.bar.domain.com, the following names are checked, in order: com, domain.com, bar.domain.com, foo.bar.domain.com. Doing this means that domain-specific initialization is easy and that more specific filenames can override their general counterparts.
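To make the lookup order concrete, here is a hypothetical helper (not part of the snippet above) that prints the names checked for a given hostname, most general first:

```shell
# Print the successively longer hostname segments checked by the
# initialization snippet, one per line.
domain_chain() {
    local hn=$1 chain= c
    local -a parts
    IFS=. read -ra parts <<< "$hn"
    for (( c = ${#parts[@]} - 1; c >= 0; c-- )); do
        if [ -z "$chain" ]; then
            chain=${parts[c]}
        else
            chain=${parts[c]}.$chain
        fi
        echo "$chain"
    done
}
```

Running `domain_chain foo.bar.domain.com` prints com, domain.com, bar.domain.com, and foo.bar.domain.com, one per line.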

The other extra is that if the name exists as a directory, all the files in that directory are sourced. So the full list of checked locations for the above hostname would be:

  • com
  • com/*
  • domain.com
  • domain.com/*
  • bar.domain.com
  • bar.domain.com/*
  • foo.bar.domain.com
  • foo.bar.domain.com/*

It works pretty well, but I’m sure it could be better written. I’m not very proficient with bash, so if you have any suggestions for improving it, let me know.

Enjoy.

Easy Development With Git_backup

I’ve been using git_backup to back up the websites I run for quite a while now. It works well and I only need to scan the daily cron emails to see if the backup went well or if there were any odd files changed the day before.

One thing that I didn’t expect when I started using it was how it would enable developing those websites in a sandbox without any danger of affecting the production instances.

Local development

Back when my blog was powered by Wordpress, I would do most of my major modifications on a copy of my blog that ran on my local desktop.

Setup

First, to provide the LAMP stack, I downloaded MAMP.

Then, I cloned my blog from my backup server and modified the code to point at a local database.

$ git clone nate@mybackupserver.com:endot.org.git
Cloning into 'endot.org'...
remote: Counting objects: 2661, done.
remote: Compressing objects: 100% (1321/1321), done.
remote: Total 2661 (delta 1157), reused 2538 (delta 1098)
Receiving objects: 100% (2661/2661), 3.82 MiB | 342 KiB/s, done.
Resolving deltas: 100% (1157/1157), done.
$ cd endot.org && vi html/wp-config.php  # edit to point at local database
$ git ci -am 'modifying to point at local database'

And finally, I imported the live database data:

$ for dbfile in db/*; do echo "processing $dbfile"; mysql --defaults-file=.my.cnf endot_dev < $dbfile; done

At this point, I was free to try out new plugins or do any wide-reaching change without worrying that I might break something permanently. Then, when I was confident about the change I wanted, I would change the live site.

Synchronizing

When I wanted to update my local working copy, all I needed to run was a few git commands:

$ git reset --hard HEAD
$ git clean -fd
$ git pull --rebase

The first cleared out any changes I had made, the second removed any untracked files (new plugins, etc.), and the third grabbed upstream changes while preserving my commit that pointed the config at my local database.

Then, I re-imported the live database data.

Once this was done, I had an up-to-date copy of my blog to play around with.

Server-side development

For one of the Drupal installations I ran, I used a scripted version of the above technique to keep a development copy up to date on the server. This gave the site’s admins the same safety net I had locally for trying out new things, but without having to set up a local database and web server.

Here is the cron script:

#!/bin/sh

cd /var/www/dev.domain.com

echo -e "\nCleaning up extra files."
echo "=================================================================================="
chmod u+w html/sites/default/
git reset --hard HEAD
git clean -df

echo -e "\nSynchronizing with live site backup."
echo "=================================================================================="
git pull --rebase

echo -e "\nLoading database."
echo "=================================================================================="
for dbfile in db/*; do echo "processing $dbfile"; mysql --defaults-file=/root/.my.cnf site_dev < $dbfile; done

echo -e "\nDone."

The only odd line is the chmod. That was necessary because Drupal itself made that directory unwritable and that prevented the git pull command from working.

Cron ran this script at 1am every night, so each morning the development site would be an up-to-date copy of the previous day’s content and code. This was frequent enough for the site’s owner and when he wanted it reset in the middle of the day I would just manually run the script.

Enjoy.

Graphing Presentation Times With R/ggplot2

I’ve been kicking around this bit of R code for the last couple of months, and so I thought I would share it.

First off, a little background. At work, we have a noon meeting every Tuesday where each team in the engineering department gets up in front of the rest of us and gives a little update on their progress from the last week. Well, over time this meeting grew in length, so a couple of months ago the suggestion was made to limit the length of the meeting to five minutes, which would give each team about 45 seconds to speak. I took the opportunity to capture how long each team took, and so now I have a bit of data to play with.

These graphs show how each individual team is doing 1. Click for larger version.

And this graph shows how the total meeting time looks. Click for larger version.

While we have never cleared the five minute goal, the meeting is still much more efficient than it was before.

The R code for this can be found on github. Each time I use R, I’m a little more comfortable with it, but I still struggle with simple tasks for too long.

Enjoy.

  1. Team names changed to protect the innocent. Why clown names? It just came to me.

Meeting Michael Connelly and Robert Crais

Tonight, I had the privilege of seeing one of my favorite authors, Michael Connelly, at a session where he and Robert Crais discussed the effect that Raymond Chandler had on their careers and on the crime fiction genre. It was a part of Santa Monica’s Citywide Reads 2012 program celebrating Chandler’s work. My sister, who introduced me to Michael Connelly in the first place, found out about it on his facebook fan page and came all the way out here to go with me.

They started off with a short back and forth time, chatting about how they each were influenced and what they thought of Chandler’s work. It quickly became evident that the two of them are good friends. Robert is the laid back, easy going one and Michael plays the straight man.

One interesting thing that came up was that Chandler is sometimes criticized for not having strong plots. Robert pointed out that he (Chandler) was more interested in how the scenes were received than in the plot of the entire book. To reinforce this, he read a short excerpt where Philip Marlowe (Chandler’s protagonist) pours himself a drink and then looks out his window at the city of Los Angeles. It mentioned nothing about what was happening at that part of the book, but the description of Los Angeles and how individual perspective influences one’s perception of the city was captivating. Michael mentioned chapter thirteen of The Little Sister, where Marlowe drives around the city describing what he sees, as something he used to read before starting a new novel.

Following the chat was a question and answer period, but not before each one asked a trivia question about their work. I knew I had no chance with Robert’s question as I’ve only read one of his books, but I thought I had a chance with Michael’s. That is, until I heard it. He asked if anyone knew the intersection where Harry Bosch (his character) had met Elvis Cole (one of Robert’s characters) in the book Lost Light. I haven’t read that one yet, but when I do, I’ll be sure to look for the reference. 1

One of the questions was about the fact that both of the authors’ characters have homes on Woodrow Wilson Drive in the Hollywood Hills. Both found the ability to stand on a back patio and gaze at the city integral to their characters’ psyche. Michael said that he’d actually been sent up into the hills as a reporter to write about a murder and while he was waiting around, he wandered over to a burnt out cantilevered platform where he caught a view of the city. It was there that he decided to give Bosch a home in the hills. He also mentioned that, even after twenty years (his first novel came out in 1992), no one built a house at that address. 2

Another interesting question asked each author to name their favorite and least favorite book that they’d created. Robert replied that his favorite is usually the most recent one. When he starts a book, he loves it. By the time he finishes it, he hates it, but then a few months later he starts to love it again.3 Furthermore, there are things he likes about each book, so it’s hard to pick an overall favorite. Michael felt similarly, and added that he likes his early books the least and has thought about what he would do differently if he were to rewrite them. I found this interesting because I just finished his first book and I rather enjoyed it.

After the Q&A session, they cleared off the stage and everyone lined up to walk up and get their books signed. We each picked up a book and patiently waited our turn. My sister was more nervous than I was initially, but as we drew closer I could definitely feel my anxiety increase. At these things, you have much more time to think about what you will say than time to actually say it. So, I carefully prepared a few sentences explaining how much I liked his work and that, even though I had purchased a book, I’d rather he sign my ticket, as I tend to give away or sell books after I read them. Then, suddenly I was next and I can’t even remember what I said, but I’m sure it wasn’t eloquent and it probably wasn’t even grammatically sound. He graciously signed my ticket after shaking my hand and then he signed my book too. He even let me get a picture with him. I know he puts his pants on one leg at a time, but it was still surreal to meet him in person.

Michael Connelly and me

We then hopped into the line for Robert. This time, I was much more relaxed, probably because I have only read one of his books so far. It was fun to watch my sister get all starstruck, and it was great to meet him as well. Plus, he was nice enough to sign my ticket too.

Signed ticket

My sister and I share a love of reading, and tonight was a special night for both of us. I’m glad that I had the opportunity to attend tonight and I won’t soon forget it.

Oh, and one cool thing that I found out about Michael tonight: he’s left handed.

Good night.

  1. Only one street had to be named, so one answer is Mulholland. I don’t know what the other one is.

  2. Yes, he gave the address. Yes, I’ll probably drive up there. No, not tonight.

  3. I find that I have a similar reaction to things I write, albeit on a much shorter time scale.

Git-walklog

Most of the time, when looking at history in a git repository, I am most interested in changes at a higher level than an individual commit. From time to time, however, I really want to look at each commit on its own. So, I created git-walklog. For each commit in the range specified, it:

  1. Shows the standard log format: author, date, and commit message. Then it waits for input.
  2. Hitting enter then runs git difftool on just that commit, showing you any differences in your configured difftool 1.

If you want to skip a commit, all you need to do is type ‘n’ or ‘no’.
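The behavior above can be sketched roughly like this (an assumption; the actual script may differ). It is written as a function here, with the answer read from standard input:

```shell
# Sketch of git-walklog: for each commit in the given range, show the
# log entry, wait for input, and diff unless the answer is n/no.
git_walklog() {
    local commit answer
    for commit in $(git rev-list "$@"); do
        git log -1 "$commit"
        printf 'View diff? [Y/n] '
        read -r answer
        case $answer in
            n|no) continue ;;
        esac
        # "commit^!" limits the diff to just this one commit
        git difftool "$commit^!"
    done
}
```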

I usually use git log with different options till I get it to just show the entries I’m interested in and then replace log with walklog to cruise through the commits.

Examples

To see the last three commits:

git walklog -3 --reverse

To see the changes for a particular branch:

git walklog master..branch --reverse

To see what came in the last git pull:

git walklog master@{1}.. --reverse

I usually put --reverse in there, because I want to see the commits in the same order as they were created.

Enjoy.

  1. You do have a difftool configured, don’t you? Run git config --global diff.tool vimdiff and then use git difftool instead of git diff and all your diffs will show up in vimdiff. It works for other diffing tools too; look for “Valid merge tools” in man difftool.