Quickly post gists to GitHub Enterprise and github.com

The gist gem from GitHub allows you to quickly post text into a GitHub gist. You can use it with the public github.com site but you can also configure it to work with a GitHub Enterprise installation.

To get started, add two aliases to your ~/.bashrc:

alias gist="gist -c"
alias workgist="GITHUB_URL=https://github.mycompany.com gist -c"

The -c flag copies the gist's link to your clipboard whenever you use the gist tool on the command line. Now, go through the login process with each command after sourcing your updated ~/.bashrc:

source ~/.bashrc
gist --login
(follow the prompts to auth and get an oauth token from github.com)
workgist --login
(follow the prompts to auth and get an oauth token from GitHub Enterprise)

You’ll now be able to use both aliases quickly from the command line:

cat boring_public_data.txt | gist
cat super_sensitive_company_script.sh | workgist
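
The workgist alias works because prefixing a command with GITHUB_URL=… sets that variable for just that invocation. This hypothetical helper (gist_target is not part of the gist gem; it's only here to illustrate the variable's behavior) echoes the endpoint such an invocation would target:

```shell
# Hypothetical helper (not part of the gist gem): print the endpoint a
# gist invocation would target, using the same GITHUB_URL variable the
# aliases above rely on. Defaults to github.com when the variable is unset.
gist_target() {
  echo "${GITHUB_URL:-https://github.com}"
}

gist_target                                          # https://github.com
GITHUB_URL=https://github.mycompany.com gist_target  # https://github.mycompany.com
```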

icanhazip.com blocked by Websense

UPDATE 2014-08-07: Websense emailed me to say that the site has been reviewed and found to be safe. It may take some time for all of their products to receive the updated classification.

Quite a few emails and IRC messages hit my screen today about icanhazip.com being blocked by Websense products. The report on Websense's site claims that the site is part of a bot network: "The URL analyzed is currently compromised to serve malicious content to visitors."

Here are some screenshots from the report:

icanhazip blocked by websense

I reached out to Websense on Twitter and via their site. In the report I sent to them, I explained how the site works, gave them a link to the FAQ, and directed them to several blog posts from this site about icanhazip.com. This response from Websense hit my inbox late today:


The site you submitted has been reviewed and determined to pose security risk. At this time, the site is not safe for browsing and is appropriately classified under the following category:

hxxp://icanhazip.com/ – Bot Networks

Researcher Notes: according to our findings, this site in question is embedded with Dyzap campaign malware.

For additional details related to this threat, please refer to the following source: https://www.bluecoat.com/security-blog/2014-08-01/dyzap-campaign-employs-freshly-minted-domains-and-other-tricks

The site will resume its content-based categorization, once it has been determined to no longer be a security risk.

For further investigation, please contact the website administrator.

If you have any questions and/or need any additional information, please let us know.

Thank you for your inquiry,

Websense Labs

Here’s what I know:

  • The application that serves up the icanhazip services is not compromised
  • The virtual machine on which the application resides is not compromised
  • The application is returning valid data with no evidence of serving malware

If Websense wishes to claim that the site is being used by malware, I can certainly believe that. However, if they claim the site is serving malicious content or actively participating in attacks in any way, I’ve found no evidence that supports that claim.

I’ll be reaching out to Websense again for additional details and to clear up the report listing on the website. If anyone knows of a way for me to identify this malware traffic and block it from accessing icanhazip.com, please let me know. My GPG key is available.

Unexpected predictable network naming with systemd

While using a Dell R720 at work today, we stumbled upon a problem where the predictable network device naming with systemd gave us some unpredictable results. The server has four onboard network ports (two 10GbE and two 1GbE) and an add-on 10GbE card with two additional ports.

Running lspci gives this output:

# lspci | grep Eth
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
01:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
08:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
08:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
42:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
42:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)

If you’re not familiar with that output, it says:

  • Two 10GbE ports on PCI bus 1 (ports 0 and 1)
  • Two 1GbE ports on PCI bus 8 (ports 0 and 1)
  • Two 10GbE ports on PCI bus 42 (ports 0 and 1)

When the system boots up, the devices are named based on systemd-udevd’s criteria. Our devices looked like this after boot:

# ip addr | egrep ^[0-9]
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
2: enp8s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
3: enp8s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
4: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
5: enp1s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
6: enp66s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
7: enp66s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

Devices 2-5 make sense since they're on PCI buses 1 and 8. However, our two-port NIC on PCI bus 42 has suddenly been named 66. We rebooted the server with rd.udev.debug on the kernel command line to display debug messages from systemd-udevd during boot. That gave us this:

# journalctl | grep enp66s0f0
systemd-udevd[471]: renamed network interface eth0 to enp66s0f0
systemd-udevd[471]: NAME 'enp66s0f0' /usr/lib/udev/rules.d/80-net-setup-link.rules:13
systemd-udevd[471]: changing net interface name from 'eth0' to 'enp66s0f0'
systemd-udevd[471]: renamed netif to 'enp66s0f0'
systemd-udevd[471]: changed devpath to '/devices/pci0000:40/0000:40:02.0/0000:42:00.0/net/enp66s0f0'

So the system sees that the enp66s0f0 device is actually on PCI bus 42. What gives? A quick trip to #systemd on Freenode caused a facepalm:

mhayden | weird, udev shows it on pci bus 42 but yet names it 66
    jwl | 0x42 = 66

I didn’t expect to see hex. Sure enough, converting 42 in hex to decimal yields 66:

$ printf "%d\n" 0x42
66

That also helps to explain why the devices on buses 1 and 8 were unaffected. Converting 1 and 8 in hex to decimal gives 1 and 8. If you’re new to hex, this conversion table may help.
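
Putting it together, the predictable name is just en + p<bus> + s<slot> + f<function>, with the hex PCI bus and slot numbers rendered in decimal. Here's a quick shell sketch of that conversion (pci_to_ifname is a made-up helper, and this is a simplification of what udev's net_id builtin actually does):

```shell
# Made-up helper: derive the enp*s*f* name from a PCI address like
# 0000:42:00.0 (domain:bus:slot.function). The bus and slot are hex in
# the PCI address but decimal in the interface name, hence the "0x".
pci_to_ifname() {
  local addr=$1
  local bus=$(echo "$addr" | cut -d: -f2)        # e.g. 42 (hex)
  local slot_func=$(echo "$addr" | cut -d: -f3)  # e.g. 00.0
  local slot=${slot_func%.*}                     # e.g. 00 (hex)
  local func=${slot_func#*.}                     # e.g. 0
  printf 'enp%ds%df%d\n' "0x$bus" "0x$slot" "$func"
}

pci_to_ifname 0000:42:00.0   # enp66s0f0
pci_to_ifname 0000:08:00.1   # enp8s0f1
```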

Photo Credit: mindfieldz via Compfight cc

Play/pause button stopped working in OS X Mavericks

My play/pause button mysteriously stopped working in iTunes and VLC this week on my laptop. It affected the previous track and next track buttons as well. It turns out that my Google Music extension in Chrome stole the keyboard bindings after it updated this week.

If your buttons stopped working as well, follow these steps to check your keyboard shortcuts in Chrome:

  • Choose Preferences in the Chrome menu in the menu bar
  • Click Extensions in the left sidebar
  • Scroll all the way to the bottom of the page
  • Click Keyboard Shortcuts
  • Look at the key bindings in the Google Play Music section

Your shortcuts might look like the ones shown here in an Apple support forum. Click each box with the X to clear each key binding or click on the key binding box itself to bind it to another key combination. If you do that, it should end up like this:


You also have the option of limiting the shortcuts to work only within Chrome by using the drop-down menus to the right of the key binding boxes.

Photo Credit: Andrew* via Compfight cc

Adventures in live booting Linux distributions

We’re all familiar with live booting Linux distributions. Almost every Linux distribution under the sun has a method for making live CDs, writing live USB sticks, or booting live images over the network. For some distributions (like KNOPPIX), the live medium is the primary use case.

However, I embarked on an adventure to look at live booting Linux for a different use case. Sure, many live environments are used for demonstrations or installations — temporary activities for a desktop or a laptop. My goal was to find a way to boot a large fleet of servers with live images. These would need to be long-running, stable, feature-rich, and highly configurable live environments.

Finding off-the-shelf solutions wasn’t easy. Finding cross-platform off-the-shelf solutions for live booting servers was even harder. I worked on a solution with a coworker to create a cross-platform live image builder that we hope to open source soon. (I’d do it sooner but the code is horrific.) ;)

Debian jessie (testing)

First off, we took a look at Debian’s Live Systems project. It consists of two main parts: something to build live environments, and something to help live environments boot well off the network. At the time of this writing, the live build process leaves a lot to be desired. There’s a peculiar tree of directories that are required to get started and the documentation isn’t terribly straightforward. Although there’s a bunch of documentation available, it’s difficult to follow and it seems to skip some critical details. (In all fairness, I’m an experienced Debian user but I haven’t gotten into the innards of Debian package/system development yet. My shortcomings there could be the cause of my problems.)
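
For reference, the basic live-build workflow looks roughly like this (a sketch based on live-build's documentation; flag names and defaults vary between live-build versions, so treat the details as assumptions):

```shell
# Rough live-build workflow sketch; flags differ between versions.
mkdir jessie-live && cd jessie-live
lb config --distribution jessie   # generates the config/ directory tree
lb build                          # builds the live image (requires root)
```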

The second half of the Live Systems project consists of multiple packages that help with the initial boot and configuration of a live instance. These tools work extremely well. Version 4 (currently in alpha) has tools for doing all kinds of system preparation very early in the boot process and it’s compatible with SysVinit or systemd. The live images boot up with a simple SquashFS (mounted read-only) and they use AUFS to layer a writeable filesystem on top that stays in RAM. Reads and writes to the RAM-backed filesystem are extremely quick and you don’t run into a brick wall when the filesystem fills up (more on that later with Fedora).

Ubuntu 14.04

Ubuntu uses casper, which seems to either precede Debian’s Live Systems project or be a fork of it (please correct me if I’m wrong). Either way, it seemed a bit less mature than Debian’s project and left a lot to be desired.

Fedora and CentOS

Fedora 20 and CentOS 7 are very close in software versions and they use the same mechanisms to boot live images. They use dracut to create the initramfs, and a set of dmsquash modules handles the setup of the live image. The livenet module allows the live images to be pulled over the network during the early part of the boot process.
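
A livenet-based network boot hinges on dracut's root=live: kernel command line syntax. As a sketch, a PXE configuration might look like this (boot.example.com and the paths are placeholders; root=live:<URL> and rd.live.image come from dracut's dmsquash/livenet options):

```shell
# PXE config sketch (pxelinux syntax); the server URL is a placeholder.
# root=live:<URL> tells dracut to fetch the squashfs over the network,
# and rd.live.image enables the live image handling in the initramfs.
LABEL live
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=live:http://boot.example.com/LiveOS/squashfs.img rd.live.image
```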

Building the live images is a little tricky. You’ll find good documentation and tools for standard live bootable CDs and USB sticks, but booting a server isn’t as straightforward. Dracut expects to find a squashfs that contains a filesystem image. When the live image boots, that filesystem image is connected to a loopback device and mounted read-only. A snapshot is made via device mapper that gives you a small overlay for adding data to the live image.

This overlay comes with some caveats. Keeping tabs on how quickly the overlay is filling up can be tricky. Using tools like df is insufficient since device mapper snapshots are concerned with blocks. As you write 4k blocks in the overlay, you’ll begin to fill the snapshot, just as you would with an LVM snapshot. When the snapshot fills up and there are no blocks left, the filesystem in RAM becomes corrupt and unusable. There are some tricks to force it back online but I didn’t have much luck when I tried to recover. The only solution I could find was to hard reboot.
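
One way to keep tabs on the overlay is dmsetup status, whose output for a snapshot target reports allocated versus total sectors. A small parsing sketch (snapshot_pct is a made-up helper, the status line below is sample data, and live-rw is the dm device name these live images typically use):

```shell
# Made-up helper: turn a "dmsetup status" line for a dm snapshot target
# into a fill percentage. The snapshot status format is:
#   <start> <length> snapshot <allocated>/<total> <metadata>
snapshot_pct() {
  echo "$1" | awk '{ split($4, a, "/"); printf "%.1f\n", 100 * a[1] / a[2] }'
}

# On a live system you might feed it real data, e.g.:
#   snapshot_pct "$(dmsetup status live-rw)"
snapshot_pct "0 8388608 snapshot 342784/8388608 1344"   # prints 4.1
```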


Arch Linux

The Arch Linux live boot environments seem very similar to the ones I saw in Fedora and CentOS. All of them use dracut and systemd, so this makes sense. Arch once used a project called Larch to create live environments but it’s fallen out of support due to AUFS2 being removed (according to the wiki page).

Although I didn’t build a live environment with Arch, I booted one of their live ISOs and found their live environment to be much like Fedora’s and CentOS’s. There was a device mapper snapshot available as an overlay and once it’s full, you’re in trouble.


OpenSUSE

The path to live booting an OpenSUSE image seems quite different. The live squashfs is mounted read-only onto /read-only. An ext3 filesystem is created in RAM and is mounted on /read-write. From there, overlayfs is used to lay the writeable filesystem on top of the read-only squashfs. You can still fill up the overlay filesystem and cause some temporary problems, but you can back out those errant files and still have a usable live environment.
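
The layout described above can be sketched as a handful of mount commands (the devices and mount points here are illustrative, and this uses the pre-mainline overlayfs mount syntax that out-of-tree kernels carried at the time):

```shell
# Illustrative sketch of the OpenSUSE live layout; devices/paths are made up.
mount -t squashfs /dev/loop0 /read-only       # read-only lower layer
mount -t ext3 /dev/ram0 /read-write           # writeable ext3 upper layer in RAM
mount -t overlayfs overlayfs \
      -o lowerdir=/read-only,upperdir=/read-write /mnt/root
```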

Here’s the problem: overlayfs was given the green light for consideration in the Linux kernel by Linus in 2013. It has been proposed for several kernel releases, but it didn’t make it into 3.16 (which will be released soon). OpenSUSE has wedged overlayfs into their kernel tree just as Debian and Ubuntu have wedged AUFS into theirs.


Building highly customized live images isn’t easy and running them in production makes it more challenging. Once the upstream kernel has a stable, solid, stackable filesystem, it should be much easier to operate a live environment for extended periods. There has been a parade of stackable filesystems over the years (remember funion-fs?) but I’ve been told that overlayfs seems to be a solid contender. I’ll keep an eye out for those kernel patches to land upstream but I’m not going to hold my breath quite yet.