DevOps and enterprise inertia

As I wait in the airport to fly back home from this year’s Red Hat Summit, I’m thinking back over the many conversations I had over breakfast, over lunch, and during the events. One common theme that kept cropping up was around bringing DevOps to the enterprise. I stumbled upon Mathias Meyer’s post, The Developer is Dead, Long Live the Developer, and I was inspired to write my own.

Before I go any further, here’s my definition of DevOps: it’s a mindset shift where everyone is responsible for the success of the customer experience. The success (and failure) of the project rests on everyone involved. If it goes well, everyone celebrates and looks for ways to highlight what worked well. If it fails, everyone gets involved to bring it back on track. Doing this correctly means that your usage of “us” and “them” should decrease sharply.

The issue at hand

One of the conference attendees told me that he and his technical colleagues are curious about trying DevOps but their organization isn’t set up in a way to make it work. On top of that, very few members of the teams knew about the concept of continuous delivery and only one or two people knew about tools that are commonly used to practice it.

I dug deeper and discovered that they have outages just like any other company, but they treat outages primarily as an operations problem. Operations teams don't get much sleep, and they get frustrated with poorly written code that is difficult to deploy, upgrade, and maintain. Feedback loops with the development teams are essentially non-existent since the development teams report into a different part of the business. His manager knows that something needs to change but isn't sure how to change it.

His company certainly isn't unique. My advice for him was to start a three-step process:

Step 1: Start a conversation around responsibility.

Leaders need to understand that the customer experience is key and that experience depends on much more than just uptime. This applies to products and systems that support internal users within your company and those that support your external customers.

Imagine if you called for pizza delivery and received a pizza without any cheese. You drive back to the pizza place to show the manager the partial pizza you received. The manager turns to the employees and they point to the person assigned to putting toppings on the pizza. They might say: “It’s his fault, I did my part and put it in the oven.” The delivery driver might say: “Hey, I did what I was supposed to and I delivered the pizza. It’s not my fault.”

All this time, you, the customer, are stuck holding a half made pizza. Your experience is awful.

Looking back, the person who put the pizza in the oven should have asked why it was only partially made. The delivery driver should have asked about it when it was going into the box. Most important of all, the manager should have turned to the employees and put the responsibility on all of them to make it right.

Step 2: Foster collaboration via cross-training.

Once responsibility is shared, everyone within the group needs some knowledge of what other members of the group do. This is most obvious with developers and operations teams. Operations teams need to understand what the applications do and where their weak points are. Developers need to understand resource constraints and how to deploy their software. They don't need to become experts, but they need enough overlapping knowledge to build a strong, healthy feedback loop.

This cross-training must include product managers, project managers, and leaders. Feedback loops between these groups will only be successful if they can speak some of the language of the other groups.

Step 3: Don’t force tooling.

Use the tools that make the most sense to the groups that need to use them. Just because a particular software tool helps another company collaborate or deploy software more reliably doesn’t mean it will have a positive impact on your company.

Watch out for the "sunk cost" fallacy as well. Neal Ford discussed this during a talk at the Red Hat Summit and explained how it can really stunt the growth of a high-performing team.


The big takeaway from this post is that making the mindset shift is the first and most critical step if you want to use the DevOps model in a large organization. The first results you’ll see will be in morale and camaraderie. That builds momentum faster than anything else and will carry teams into the idea of shared responsibility and ownership.

openssl heartbleed updates for Fedora 19 and 20

The openssl heartbleed bug has made the rounds today and there are two new testing builds of openssl out for Fedora 19 and 20:

Both builds are making their way from updates-testing into the stable repository thanks to some quick testing and karma from the Fedora community.

If the stable updates haven’t made it into your favorite mirror yet, you can live on the edge and grab the koji builds:

For Fedora 19 x86_64:

yum -y install koji
koji download-build --arch=x86_64 openssl-1.0.1e-37.fc19.1
yum localinstall openssl-1.0.1e-37.fc19.1.x86_64.rpm

For Fedora 20 x86_64:

yum -y install koji
koji download-build --arch=x86_64 openssl-1.0.1e-37.fc20.1
yum localinstall openssl-1.0.1e-37.fc20.1.x86_64.rpm

Be sure to replace x86_64 with i686 for 32-bit systems or armv7hl for ARM systems (Fedora 20 only). If your system has openssl-libs or other openssl subpackages installed, be sure to install those updated packages with yum as well.
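To make the architecture substitution concrete, here's a quick sketch (the helper name and structure are my own for illustration; the build names come from the testing builds above) that prints the same command sequence for any release and architecture:

```python
# Hypothetical helper for illustration: generate the download and install
# commands shown above for a given Fedora release and architecture.
# The build names come from the testing builds mentioned in this post.
BUILDS = {19: "openssl-1.0.1e-37.fc19.1", 20: "openssl-1.0.1e-37.fc20.1"}

def install_commands(release, arch):
    # ARM builds are only available for Fedora 20
    if arch == "armv7hl" and release != 20:
        raise ValueError("armv7hl builds are only available for Fedora 20")
    build = BUILDS[release]
    return [
        "yum -y install koji",
        "koji download-build --arch={0} {1}".format(arch, build),
        "yum localinstall {0}.{1}.rpm".format(build, arch),
    ]

# The 32-bit Fedora 19 variant of the commands above:
for line in install_commands(19, "i686"):
    print(line)
```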

Kudos to Dennis Gilmore for the hard work and to the Fedora community for the quick tests.

Detect proxies with icanhazproxy

You can already detect proxy servers by accessing the service on ports 80, 81, and 443. If you compare your results and you see different IP addresses, there's most likely a proxy in the way.
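The comparison itself is simple; here's a minimal sketch (the function name and inputs are hypothetical) of the check, given the IP address each port reported back:

```python
# Hypothetical sketch: a proxy that intercepts only some ports will make
# the remote service see different client addresses on different ports.
def proxy_suspected(ips_by_port):
    """ips_by_port maps a port number (80, 81, 443) to the client IP
    address the remote service reported on that port."""
    return len(set(ips_by_port.values())) > 1

# All ports agree -- probably no proxy:
print(proxy_suspected({80: "203.0.113.7", 81: "203.0.113.7", 443: "203.0.113.7"}))
# Port 80 traffic is being rewritten -- probably a proxy:
print(proxy_suspected({80: "198.51.100.4", 81: "203.0.113.7", 443: "203.0.113.7"}))
```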

To make things easier, I've launched icanhazproxy. It's available on ports 80, 81 and 443 as well. If you choose to access it on port 443, you'll get a certificate error that you'll need to ignore.

You’ll get one of two possible responses:

200/OK with JSON output: When a proxy is detected, you’ll receive a 200 response and any proxy-related headers will be returned in JSON format. Here’s a sample:

$ curl
{"via": "1.1 0A065C93"}

You may receive multiple headers via JSON, so please be prepared for that.

204/NO CONTENT with an empty response: No common proxy headers were detected. Here's an example:

$ curl -si | head -n1

If you get this response but you know there’s a proxy in the way, let me know. I may need to look for extra headers or other items in the request.
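Server-side, the decision between the two responses can be sketched like this (the header list and function are assumptions for illustration, not the service's actual code):

```python
import json

# Headers that proxies commonly add along the way. This set is an
# assumption for illustration; the real service may check a different one.
PROXY_HEADERS = {"via", "x-forwarded-for", "forwarded", "x-proxy-id"}

def check_for_proxy(headers):
    """Return (status, body): 200 with any proxy-related headers as JSON
    when one or more are present, or 204 with an empty body otherwise."""
    found = {name.lower(): value for name, value in headers.items()
             if name.lower() in PROXY_HEADERS}
    if found:
        return 200, json.dumps(found)
    return 204, ""

# A request that came through a proxy:
print(check_for_proxy({"Via": "1.1 0A065C93", "Accept": "*/*"}))
# A direct request:
print(check_for_proxy({"Accept": "*/*"}))
```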

Go try it out and send me your feedback!

Xen hackathon coming up in London

Rackspace London office
If you enjoy using Xen, join members of the Xen Project community and Rackspace at the Xen Hackathon in London. The two-day event starts on May 29th.

Use these links to get more information:

You don’t need to be a developer to join the event. It’s a great networking opportunity and you can take time to learn more about virtualization and how Xen works under the hood.