I am here in Las Vegas for IBM Edge 2016 to learn about the latest developments in POWER, machine learning, and OpenStack. It isn't just about learning: I'm also sharing some of our own use cases and challenges from my daily work at Rackspace.
I kicked off the day with a great run down the Las Vegas strip. There are many more staircases and escalators than I remember, but it was still a fun run. The sunrise was awesome to watch as all of the colors began to change:
Without further ado, let’s get to the recap.
Two of the talks in the general session caught my attention: OpenPOWER and Red Bull Racing.
The OpenPOWER talk showcased the growth in the OpenPOWER ecosystem. It started with only five members, but it’s now over 250. IBM is currently offering OpenPOWER-based servers and Rackspace’s Barreleye design is available for purchase from various vendors.
Red Bull Racing kicked off with an amazing video about the car itself, the sensors, and what’s involved with running in 21 races per year. The highlight of the video for me was seeing the F1 car round corners on a mountain while equipped with snow chains.
The car has 100,000 components and is disassembled and reassembled for each race based on the race conditions. Because restrictions limit how often the team can practice, they run over 50,000 virtual simulations per year to test different configurations and parts. Each race generates 8 petabytes of data, which is live-streamed to the engineers at the track as well as an engineering team in the UK. They can make split-second decisions during the race based on this data.
They gave an example of a situation where something was wrong with the car and the driver needed to make a pit stop. The engineers looked over the data that was coming from the car and identified the problem. Luckily, the driver could fix the issue by flipping a switch on the steering wheel. The car won the race by less than a second.
My first breakout session was Trends and Directions in IBM Power Systems. We had a high-level look at some of the advancements in POWER8 and OpenPOWER. Two customers shared their stories about why POWER was a better choice for them than other platforms, and everyone made sure to beat up on Moore's Law at every available opportunity. Rackspace was applauded for its leadership on Barreleye!
The most interesting session of the day was the IBM POWER9 Technology Advanced Deep Dive. Jeff covered the two chips in detail and talked about some of the new connections between the CPU and various components. I’m interested in the hardware GZIP acceleration, NVLINK, and CAPI advancements. The connections to CAPI will be faster, thanks to the Power Service Layer (PSL) moving from the CAPI chip to the CPU itself. This reduces latency when communicating with the accelerator chip.
POWER9 has 192GB/sec of duplex bandwidth on the PCIe Gen4 bus (48 lanes), and there's 300GB/sec (48 lanes at 25Gbit/sec, counting both directions) of duplex bandwidth available for what's called Common Link. Common Link is used to communicate with accelerators or for remote SMP, and it will likely be called "Blue Link" at a later date. Very clever, IBM.
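Those figures line up if you treat them as aggregate bandwidth across both directions. Here's a quick sanity check of the arithmetic (my own back-of-the-envelope math, not from IBM's slides):

```python
# Sanity-check the quoted POWER9 bandwidth figures.
# Assumption: the quoted GB/sec numbers are duplex (both directions summed).

# Common Link ("Blue Link"): 48 lanes at 25 Gbit/sec each, per direction.
lanes = 48
gbit_per_lane = 25
per_direction_gb = lanes * gbit_per_lane / 8   # Gbit/sec -> GB/sec
duplex_gb = per_direction_gb * 2               # count both directions

print(per_direction_gb)  # 150.0 GB/sec each way
print(duplex_gb)         # 300.0 GB/sec duplex, matching the quoted figure

# PCIe Gen4: 16 GT/sec per lane with 128b/130b encoding works out to
# roughly 2 GB/sec of usable bandwidth per lane, per direction.
pcie_duplex_gb = 48 * 2 * 2
print(pcie_duplex_gb)    # 192 GB/sec duplex across 48 lanes
```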
I wrapped up the day with Calista Redmond's OpenPOWER Revolution in the Datacenter. She talked about where the OpenPOWER Foundation is today and where it's going in the future.
As you might expect, IBM has most of the EXPO floor set aside for themselves and they’re showing off new advances in POWER, System z, and LinuxONE. I spent a while looking at some of the new POWER8 chassis offerings and had a good conversation with some LinuxONE experts about some blockchain use cases.
IBM hired DJ Andrew Hypes and DJ Tim Exile to make original music by sampling sounds in a datacenter. They recorded IBM servers and storage devices and turned the samples into some genuinely unique tracks. It doesn't sound anything like a datacenter, though (thank goodness for that).
The Red Bull Racing booth drew a fairly large crowd throughout the evening. They had one of their F1 cars on site with its 100+ sensors:
The big emphasis for the first day was on using specialized hardware for specialized workloads. Moore’s Law took a beating throughout the day as each presenter assured the audience that 2x performance gains won’t come in the chip itself for much longer.
It won’t be possible to achieve the performance we want in the future on the backs of software projects alone. We will need to find ways to be smarter about how we run software on our servers. When something is ripe for acceleration, especially CPU-intensive, repetitive workloads, we should find a way to accelerate it in hardware. There are tons of examples of this already, like AES encryption acceleration, but we will need more acceleration capabilities soon.
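On the AES example: you can check whether a CPU advertises hardware crypto acceleration by looking at its feature flags. Here's a minimal sketch that parses `/proc/cpuinfo`-style text on Linux (my own illustration; note the flag names vary by architecture, e.g. `aes` on x86, different names under `Features` on ARM and POWER):

```python
def has_flag(cpuinfo_text, flag):
    """Return True if any flags/Features line in cpuinfo-style text
    lists the given CPU feature flag."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        # x86 uses a "flags" line; ARM uses "Features".
        if key.strip().lower() in ("flags", "features"):
            if flag in value.split():
                return True
    return False

# Example with a fabricated x86-style cpuinfo snippet:
sample = "processor : 0\nflags : fpu aes avx2\n"
print(has_flag(sample, "aes"))  # True
```

On a real Linux box you'd pass in `open("/proc/cpuinfo").read()`; libraries like OpenSSL do an equivalent check at startup and transparently switch to the AES-NI instructions when they're available.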