I wrote a few months ago about envbox, and I can say that it has proved useful to me many times over.
One thing that felt unfinished was the way that envbox stores the key that it uses to encrypt all of the environment variables. It did move the problem from the shell/history to one of system security, which was acceptable. But there are better ways of storing credentials on most systems.
When I’m in front of a computer, I spend much of my time at the command line, logged into various systems, running commands.
Now, there are two basic categories of commands:
- Commands that change: install a package, add a firewall rule, restart a service
- Commands that report: list installed packages, list firewall rules, list running services

This post is about making the second category of commands more useful. Quite often, those commands have copious amounts of output.
In my day-to-day work, and in my evening and weekend side work, I do almost all of my development on remote systems. That has a number of advantages that are a topic for another post; this post is about one of the limitations.
Most developers have a tool belt that they're continually improving, and as I work on mine I come across commands - like hub - that require putting a secret value into an environment variable, usually for authentication.
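As a sketch of the pattern in question (the variable name and token value below are made-up examples, not anything a particular tool mandates):

```shell
# Exporting a secret inline at an interactive prompt leaves it in plain
# text in the shell history file, visible to anyone who reads it later.
export GITHUB_TOKEN='hunter2-not-a-real-token'
# The tool you meant to authenticate can read it now -- but so can any
# other process started from this shell.
echo "token is ${#GITHUB_TOKEN} characters long"
# -> token is 24 characters long
```

Anything that keeps the literal secret out of the command line (and therefore out of the history file) is an improvement over this.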
A few people have asked about my note-taking workflow and it’s been quite useful to me, so I thought I would describe what works for me.
I’ve tried several of the popular note-taking tools out there and found them overbearing or over-engineered. I just wanted something simple, without lock-in or a crazy data format.
So my notes are just a tree of files. Yup, just directories and files. It isn’t novel or revolutionary.
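One nice consequence of plain files is that ordinary Unix tools do all the searching. A minimal sketch, assuming the notes live under a hypothetical ~/notes as Markdown files:

```shell
# Full-text search across every note, case-insensitively, with plain grep.
# -r recurses through the tree; --include limits the search to notes.
grep -ri --include='*.md' 'backup' ~/notes
```

No index to rebuild, no export step, and the same command works on any machine the notes are synced to.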
When I originally set up Octopress, I set it up on my Mac laptop using rvm, as recommended at the time. It worked very well for me until just a few minutes after my last post, when I decided to sync with the upstream changes.
After merging in the changes, I tried to generate my blog again, just to make sure everything worked. Well, it didn’t, and things went downhill from there.
Last time I posted about git-annex, I introduced it and described the basics of my setup. Over the past year, I've added quite a bit of data to my main git-annex. It manages just over 100G of data for me across 9 repositories. Here are a few bits of information that may be useful to others considering git-annex (or who are already knee-deep in).
Archive, not backup: The website for git-annex explicitly states that it is not a backup system.
My Situation: I have backups. Many backups. Too many backups.
I use Time Machine to back up my Macs, but that only covers the systems that I currently run. I have archives of older systems, some for nostalgic reasons, some for reference. I also have a decent set of digital artifacts (pictures, videos, and documents) that I'd rather not lose.
So I keep backups.
Unfortunately, I’m not very organized.
Last year, when I made my list of pros and cons comparing git subtrees with submodules, one of the downsides listed for subtrees was that it’s hard to figure out where the code came from originally.
Well, it seems that the internet hasn't been sitting on its hands. While the main repository remained stable, a couple of forks took it upon themselves to teach git-subtree to keep a record of what it merges.
After a few months of managing my dotfiles with git, I felt the need to organize my vim plugins a little better. I chose to use pathogen (created by Tim Pope), which allows me to keep each plugin in its own subdirectory. It fits well with using git to manage dotfiles because git has two ways of tracking content from other repositories. The first is submodules, which keep a remote URL and commit SHA1 so that the other content can be pulled in after cloning.
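Under that scheme, each plugin's submodule entry lands in .gitmodules, recording the path and remote URL. A sketch, assuming a pathogen-style bundle directory and tpope's pathogen repository as the example plugin:

```ini
[submodule "vim/bundle/vim-pathogen"]
	path = vim/bundle/vim-pathogen
	url = https://github.com/tpope/vim-pathogen.git
```

The pinned commit SHA1 isn't stored in this file; git records it in the tree itself, which is why a fresh clone of the dotfiles needs a `git submodule update --init` before the plugin content actually appears.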
I have quite a few dotfiles. I have so many that keeping them in sync is impossible with conventional methods. So, I turned to my old friend: version control. For a while, I kept them in subversion at work. This worked well as that was where I spent most of my time. Recently, however, I’ve wanted those same dotfiles to be available at home and other non-work areas. So, I investigated moving them over to a git repository.
I’ve been using git for a while now, and I’m just getting to the point where I can think in it.
It’s the same as learning a new spoken language. I took three years of Spanish in high school, so I knew most of the rules and could translate back and forth to English, but I never really learned to think in Spanish (as opposed to thinking in English and then quickly translating).
[Update] It looks like this only really applies to USB flash drives. When I mounted my actual backup drive, it showed up in prtpart. This post was written using the root drive on my old backup server, which is a SanDisk Cruzer flash drive.
Now that I finally got my mini thumper up and online, it's time to pull everything from my previous backup drive. The problem is that it's a USB drive with an ext3 partition on it.
After basically copying my friend’s exact specifications, I now have a little server at home with 1.5T of mirrored disk space. By and large it was a straightforward process, with the following interesting tidbits.
Most of the assembly went smoothly. You do have to pull the motherboard out to get the CF drive into its slot. In order to maneuver it out, you have to unclip the SATA cables and unscrew the VGA connector.