Etsy reminds us that information security is an active process

I’m always impressed with the content published by folks at Etsy and Ben Hughes’ presentation from DevOpsDays Minneapolis 2014 is no exception.

Ben adds some levity to the topic of information security with some hilarious (but relevant) images and reminds us that security is an active process that everyone must practice. Everyone plays a part — not just traditional corporate security employees.

I’ve embedded the presentation here for your convenience:

Here’s a link to the original presentation on SpeakerDeck:

icanhazip and CORS

I received an email from a user last week about enabling cross-origin resource sharing (CORS). He wanted to make AJAX calls from a different site to pull data from icanhazip and display it to his visitors.

Those headers are now available for all requests to the services provided by icanhazip! Here’s what you’ll see:

$ curl -i icanhazip.com
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
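If you want to see the headers the way a browser would trigger them, you can send an Origin header with curl. This is just a sketch — with a wildcard `Access-Control-Allow-Origin: *` the response doesn’t actually vary by origin, and `example.com` here is an arbitrary stand-in:

```shell
# Simulate a browser's cross-origin request by sending an Origin header;
# the wildcard Access-Control-Allow-Origin in the response means any
# site may read the body via AJAX.
curl -i -H "Origin: https://example.com" https://icanhazip.com/
```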

AVC: denied dyntransition from sshd

I’ve been working with some Fedora environments in chroots and I ran into a peculiar SELinux AVC denial a short while ago:

avc:  denied  { dyntransition } for  pid=809 comm="sshd" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process

The ssh daemon is running on a non-standard port but I verified that the port is allowed with semanage port -l. The target context of sshd_net_t from the AVC seems sensible for the ssh daemon. I started to wonder if a context wasn’t applied correctly to the sshd executable itself, so I checked within the chroot:
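For reference, the port check (and the fix, had the port labeling been the problem) looks something like this — port 2222 is a hypothetical example, since the actual non-standard port isn’t named here:

```shell
# Show which ports currently carry the ssh_port_t label
semanage port -l | grep ssh_port_t

# If a non-standard port (2222 as an example) were missing from that
# list, it could be added to the SELinux policy like this:
semanage port -a -t ssh_port_t -p tcp 2222
```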

# ls -alZ /usr/sbin/sshd
-rwxr-xr-x. 1 root root system_u:object_r:sshd_exec_t:SystemLow 652816 May 15 03:56 /usr/sbin/sshd

That’s what it should be. I double-checked my running server (which booted a squashfs containing the chroot) and saw something wrong:

# ls -alZ /usr/sbin/sshd
-rwxr-xr-x. root root system_u:object_r:file_t:s0      /usr/sbin/sshd

How did file_t get there? It turns out that I was using rsync to drag data out of the chroot and I forgot to use the --xattrs argument with rsync.
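A sketch of the rsync invocation that would have preserved the contexts (the paths are hypothetical):

```shell
# -a preserves permissions, ownership, and timestamps; -A keeps ACLs;
# -X (--xattrs) keeps extended attributes, which is where SELinux
# stores the security.selinux context.
rsync -aAX /srv/chroot/ /srv/image/

# If the data was already copied without xattrs, relabeling from the
# active policy repairs the contexts in place:
restorecon -Rv /srv/image
```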

Install Debian packages without starting daemons

My work at Rackspace has involved a bunch of Debian chroots lately. One problem I had was that daemons tried to start in the chroot as soon as I installed them. That created errors and made my ansible output look terrible.

If you’d like to prevent daemons from starting after installing a package, just toss a few lines into /usr/sbin/policy-rc.d:

cat > /usr/sbin/policy-rc.d << EOF
#!/bin/sh
echo "All runlevel operations denied by policy" >&2
exit 101
EOF
chmod +x /usr/sbin/policy-rc.d

Now, install any packages that you need and the daemons will remain stopped until you start them (or reboot the server). Be sure to remove the policy file you added once you’re done installing your packages.
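The mechanism behind this is simple: invoke-rc.d consults /usr/sbin/policy-rc.d before acting, and an exit status of 101 means the requested action is denied. Here’s a self-contained sketch of that exit-code contract, written to a temporary file so it doesn’t touch the real policy path:

```shell
# Write the policy script to a temp file and show that it exits with
# status 101, the code invoke-rc.d interprets as "action not allowed".
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
#!/bin/sh
echo "All runlevel operations denied by policy" >&2
exit 101
EOF
chmod +x "$tmp"
"$tmp" 2>/dev/null
echo "exit status: $?"
rm -f "$tmp"
```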

This seems like a good opportunity to get on a soapbox about automatically starting daemons. ;)

I still have a very difficult time understanding why Debian-based distributions start daemons as soon as the package is installed. Having an option to enable this might be useful for some situations, but this shouldn’t be the default.

You end up with situations like the one in this puppet bug report. The daemon shouldn’t start until you’re ready to configure it and use it. However, the logic is that the daemon is so horribly un-configured that it shouldn’t hurt anything if it starts immediately. So why start the daemon at all?

When I run the command apt-get install or yum install, I expect that packages will be installed to disk and nothing more. Even the definition of the English word “install” talks about “preparing” something for use, not actually using it:

To connect, set up or prepare something for use

If I install an electrical switch at home, I don’t install it in the ON position with my circuit breaker in the ON position. I install it with everything off, verify my work, ensure that it fits in place, and then I apply power. The installation and actual use of the new switch are two completely separate activities with additional work required in between.

I strongly urge the Debian community to consider switching to a mechanism where daemons don’t start until the users configure them properly and are ready to use them. This makes configuration management much easier, improves security, and provides consistency with almost every other Linux distribution.

Get colorful ansible output in Jenkins

Working with ansible is enjoyable, but it’s a little bland when you use it with Jenkins. Jenkins doesn’t spawn a TTY and that causes ansible to skip over the code that outputs status lines with colors. The fix is relatively straightforward.

First, install the AnsiColor Plugin on your Jenkins node.

Once that’s done, edit your Jenkins job so that you export ANSIBLE_FORCE_COLOR=true before running ansible:

export ANSIBLE_FORCE_COLOR=true
ansible-playbook -i hosts site.yml

If your ansible playbook requires sudo to run properly on your local host, be sure to use the -E option with sudo so that your environment variables are preserved when your job runs. For example:

sudo -E ansible-playbook -i hosts site.yml

HOLD UP: As Sam Sharpe reminded me, the better way to handle environment variables with sudo is to add them to env_keep in your sudoers file (use visudo to edit it):

Defaults        env_reset
Defaults        env_keep += "ANSIBLE_FORCE_COLOR"

Adding it to env_keep is a more secure method and you won’t need the -E any longer on the command line.
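Once the env_keep change is in place, a quick check confirms that the variable survives sudo’s environment reset (this assumes your user has sudo rights on the box):

```shell
export ANSIBLE_FORCE_COLOR=true
# With env_keep configured, this prints "true"; without it, env_reset
# scrubs the variable and printenv prints nothing.
sudo printenv ANSIBLE_FORCE_COLOR
```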

While you’re on the configuration page for your Jenkins job, look for Color ANSI Console Output under the Build Environment section. Enable it and ensure xterm is selected in the drop-down box.

Save your new configuration and run your job again. You should have some awesome colors in your console output when your ansible job runs.