If you weren’t able to make it, don’t fret! This post will cover some of the main points of the talk and link to the video and slides.
There are lots of efforts underway to get students (young and old) to learn to write code. There are far-reaching efforts, like the Hour of Code, and plenty of smaller, more focused projects, such as the Design and Technology Academy (part of Northeast ISD here in San Antonio, Texas). These are excellent programs that enrich the education of many students.
I often hear a question from various people about these programs:
Why should a student learn to write code if they aren’t going to become a software developer or IT professional?
It’s a completely legitimate question and I hope to provide a helpful response in this post.
Welcome to the fourth post in the series of What’s Happening in OpenStack-Ansible (WHOA) posts that I’m assembling each month. OpenStack-Ansible is a flexible framework for deploying enterprise-grade OpenStack clouds. In fact, I use OpenStack-Ansible to deploy the OpenStack cloud underneath the virtual machine that runs this blog!
My goal with these posts is to inform more people about what we’re doing in the OpenStack-Ansible community and bring on more contributors to the project.
There are plenty of updates since the last post from August. The race is on to finish up the Newton release and start new developments for the Ocata release! We hope to see lots of contributors in Barcelona!
The OpenStack-Ansible releases are announced on the OpenStack development mailing list. Here are the things you need to know:
The OpenStack-Ansible Newton release is still being finalized this week. The stable/newton branches were created yesterday and stabilization work is ongoing.
The latest Liberty release, 12.2.4, contains lots of updates and fixes. The updates include a fix for picking up where you left off on a failed upgrade and a fix for duplicated log lines. The security role received some updates to improve performance and reduce unnecessary logging.
This section covers discussions from the OpenStack-Ansible weekly meetings, IRC channels, mailing lists, or in-person events.
As mentioned earlier, the stable/newton branches have arrived for OpenStack-Ansible! This will allow us to finish stabilizing the Newton release and look ahead to Ocata.
Michael Johnson and Jorge Miramontes stopped by our weekly meeting to talk about how Octavia could be implemented in OpenStack-Ansible. Recent Octavia releases have some new features that should be valuable to OpenStack-Ansible deployers.
There is a spec from the Liberty release for deploying Octavia, but we were only able to get LBaaSv2 with the agent deployed. Jorge and Michael are working on a new spec to get Octavia deployed with OpenStack-Ansible.
There’s now a centralized testing repository for all OpenStack-Ansible roles. This allows the developers to share variables, scripts, and test cases between multiple roles. Developers can begin testing new roles with much less effort since the scaffolding for a basic test environment is available in the repository.
You can follow along with the development by watching the central-test-config topic in Gerrit.
The OpenStack-Ansible tag was fairly quiet on the OpenStack Development mailing list during the time frame of this report, but there were a few threads:
- cinder volume lxc and iscsi
- Blueprint discussion (for the Ocata OpenStack Summit)
- Newton RC2 available
This section covers some of the improvements coming to Newton, the upcoming OpenStack release.
Thanks to Florian Haas and Adolfo Brandes for assembling this course!
OpenStack-Ansible powers the OSIC cloud
One of the clouds operated by the OpenStack Innovation Center (OSIC) is powered by OpenStack-Ansible. It’s a dual-stack (IPv4 and IPv6) environment and it provides the most nodes for the OpenStack CI service! If you need to test an application on a large OpenStack cloud, apply for access to the OSIC cluster.
The backbone of OpenStack-Ansible is its inventory. The dynamic inventory defines where each service should be deployed, configured and managed. Some recent improvements include exporting inventory for use by other scripts or applications. Ocata should bring even more improvements to the dynamic inventory.
Thanks to Nolan Brubaker for leading this effort!
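To show what consuming an exported inventory might look like, here's a small sketch that reads Ansible's standard dynamic inventory JSON layout (groups mapping to host lists, plus host variables under `_meta.hostvars`). The group and host names below are hypothetical, and I'm assuming the export follows that standard layout:

```python
import json

# Hypothetical exported inventory in Ansible's dynamic inventory JSON format.
exported = json.loads("""
{
    "galera_all": {"hosts": ["infra1_galera_container-abc123"]},
    "_meta": {
        "hostvars": {
            "infra1_galera_container-abc123": {"ansible_host": "172.29.236.10"}
        }
    }
}
""")

def hosts_in_group(inventory, group):
    """Return the list of hosts in a group, or an empty list."""
    return inventory.get(group, {}).get("hosts", [])

def host_address(inventory, host):
    """Look up a host's address in _meta.hostvars."""
    return inventory["_meta"]["hostvars"][host].get("ansible_host")

print(hosts_in_group(exported, "galera_all"))
print(host_address(exported, "infra1_galera_container-abc123"))
```

An export like this lets monitoring scripts or other tooling answer questions such as "which containers run galera?" without touching the deployment itself.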
The installation guide has been completely overhauled! It has a more concise, opinionated approach to deployments and this should make the first deployment a little easier for newcomers. OpenStack can be a complex system to deploy and our goal is to provide the cleanest path to a successful deployment.
Thanks to Alex Settle for leading this effort!
The goal of this newsletter is threefold:
- Keep OpenStack-Ansible developers updated with new changes
- Inform operators about new features, fixes, and long-term goals
- Bring more people into the OpenStack-Ansible community to share their use cases, bugs, and code
Please let me know if you spot any errors, areas for improvement, or items that I missed altogether. I'm mhayden on Freenode IRC and you can find me on Twitter anytime.
Photo credit: Mattia Felice Palermo (Own work) CC BY-SA 3.0 es, via Wikimedia Commons
IBM Edge 2016 is almost over and I've learned a lot about Power 8 this week. I've talked about some of what I learned in my recaps of days one and two. The performance arguments sound really interesting and some of the choices in AIX's design seem to make a lot of sense.
However, there’s one remaining barrier for me: Power 8 isn’t really accessible for a tinkerer.
Google defines tinkering as:
attempt to repair or improve something in a casual or desultory way, often to no useful effect.
“he spent hours tinkering with the car”
When I come across a new piece of technology, I really enjoy learning how it works. I like to find its strengths and its limitations. I use that information to figure out how I might use the technology later and when I might recommend it to someone else.
To me, tinkering is simply messing around with something until I have a better understanding of how it works. Tinkering doesn’t have a finish line. Tinkering may not have a well-defined goal. However, it’s tinkering that leads to a more robust community around a particular technology.
For example, take a look at the Raspberry Pi. There were plenty of other ARM systems on the market before the Pi and there are still a lot of them now. What makes the Pi different is that it’s highly accessible. You can get the newest model for $35 and there are tons of guides for running various operating systems on it. There are even more guides for how to integrate it with other items, such as sprinkler systems, webcams, door locks, and automobiles.
Another example is the Intel NUC. Although the NUC isn’t the most cost-effective way to get an Intel chip on your desk, it’s powerful enough to be a small portable server that you can take with you. This opens up the door for software developers to test code wherever they are (we use them for OpenStack development), run demos at a customer location, or make multi-node clusters that fit in a laptop bag.
What makes Power 8 inaccessible to tinkerers?
One of the first aspects that most people notice is the cost. The S821LC currently starts at around $6,000 on IBM’s site, which is a bit steep for someone who wants to learn a platform.
I’m not saying this server should cost less — the pricing seems quite reasonable when you consider that it comes with dual 8-core Power 8 processors in a 1U form factor. It also has plenty of high speed interconnects ready for GPUs and CAPI chips. With all of that considered, $6,000 for a server like this sounds very reasonable.
There are other considerations as well. A stripped down S821LC with two 8-core CPUs will consume about 406 watts at 50% utilization. That's a fair amount of power draw for a tinkerer and I'd definitely think twice about running something like that at home. When you consider the cooling that's required, it's even more difficult to justify.
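To put that draw in perspective, here's a back-of-the-envelope estimate of the annual electricity cost. The 406-watt figure comes from above; the residential rate of $0.12/kWh is an assumption for illustration:

```python
# Rough yearly electricity cost of running an S821LC continuously at home.
draw_watts = 406             # measured draw at 50% utilization (from above)
rate_per_kwh = 0.12          # USD, assumed residential electricity rate
hours_per_year = 24 * 365

kwh_per_year = draw_watts / 1000 * hours_per_year
cost_per_year = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.0f}/year")
```

That works out to well over $400 a year before you even account for cooling, which is real money for a hobby box.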
What about AIX?
AIX provides some nice benefits on Power 8 systems, but it’s difficult to access as well. Put “learning AIX” into a Google search and look at the results. The first link is a thread on LinuxQuestions.org where the original poster is given a few options:
- Buy some IBM hardware
- Get in some legal/EULA gray areas with VMware
- Find an old Power 5/6 server that is coming offline at a business that is doing a refresh
Having access to AIX is definitely useful for tinkering, but it's even more useful for software developers. For example, if I write a script in Python and I want to add AIX support, I'll need access to a system running AIX. It wouldn't necessarily need to be a system with tons of performance, but it would need the functionality of a basic AIX environment.
I’d suggest two solutions:
- Get AIX into an accessible format, perhaps on a public cloud
- Make a more tinker-friendly Power 8 hardware platform
Let’s start with AIX. I’d gladly work with AIX in a public cloud environment where I pay some amount for the virtual machine itself plus additional licensing for AIX. It would still be valuable even if the version of AIX had limiters so that it couldn’t be used for production workloads. I would be able to access the full functionality of a running AIX environment.
The hardware side is more challenging. However, if a single Power 8 SMT2 CPU could be offered in a smaller form factor, it might work. Perhaps these could even be CPUs with some type of defect where one or more cores are disabled. That could reduce cost while still providing the full functionality to someone who wants to tinker with Power 8.
Some might argue that this defeats the point of Power 8 since it’s a high performance, purpose-built chip that crunches through some of the world’s biggest workloads. That’s a totally valid argument.
However, that’s not the point.
The point is to get a fully-functional Power 8 CPU — even if it has serious performance limitations — into the hands of developers who want to do amazing things with it. My hope would be that these small tests will later turn into new ways to utilize POWER systems.
It could also be a way for more system administrators and developers to get experience with AIX. Companies would be able to find more people with a base level of AIX knowledge as well.
IBM has something truly unique with Power 8. The raw performance of the chip itself is great and the door is open for even more performance through NVlink and CAPI accelerators. These features are game changers for businesses that are struggling to keep up with customer demands. A wider audience could learn about this game-changing technology if it becomes more accessible for tinkering.
Photo credit: Wikipedia
Day two of IBM Edge 2016 is all done, and the focus has shifted to the individual. Let’s get right to the recap:
One of the more memorable talks during the general session came from Hortonworks. They've helped a transport company do more than simply track drivers: they assemble and analyze lots of information about each driver, the truck, the current road conditions, and other factors. From there, they apply a risk rating to that particular truck and provide updates to the driver about potential hazards. The system reduced the company's insurance costs by 10%.
Florida Blue shared some insights from their POWER deployments and how they were able to get customers serviced faster. One of the more memorable quotes was:
The best way to get a customer happy is to get them off the phone.
They were able to rework how the backend systems retrieved data for their customer service personnel and cut average phone call durations from 9 minutes to 6.
Jason Pontin came on stage with three technology innovators under 35. They shared some of their latest work with the audience and it was amazing to see the problems they’re trying to solve. Lisa DeLuca introduced her new children’s book that helps to explain technology in new ways:
— Lisa Seacat DeLuca (@LisaSeacat) September 20, 2016
My first breakout session was Getting Started with Linux Performance on IBM POWER8 from Steve Nasypany. This was a highly informative session and you’ll definitely want to grab the slides from this talk whether you use POWER or not.
Steve dove into how to measure and adjust performance on POWER systems. He also gave some insight on how AIX and Linux differ when it comes to performance measurements. There are quite a few differences in how AIX and Linux refer to processors and how they measure memory usage. He took quite a bit of time to explain not only the what, but the why. It was a great session.
My second breakout was Bringing the Deep Learning Revolution into the Enterprise from Michael Gschwind. He kicked off with the basics of machine learning and how it matches up with the functions of a human brain. He provided some examples of objects that the human brain can quickly identify but a computer cannot.
The math is deep. Really deep. One of the interesting topics was stochastic gradient descent (warning: highly nerdy territory). It measures how well the computer has been trained on a particular machine learning task. The goal is to reduce errors and do less brute-force training with the computer so it can begin working independently. It’s oddly similar to raising children.
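To make the idea concrete, here's a minimal stochastic gradient descent sketch fitting a single weight to noisy data. This is my own illustration of the technique, not code from the talk:

```python
import random

# Fit y = w*x on noisy samples using stochastic gradient descent.
random.seed(0)
data = [(x, 3.0 * x + random.uniform(-0.1, 0.1)) for x in range(1, 21)]

w = 0.0                      # initial guess for the weight
learning_rate = 0.001

for epoch in range(200):
    random.shuffle(data)     # "stochastic": visit samples in random order
    for x, y in data:
        error = w * x - y                    # prediction error for one sample
        w -= learning_rate * 2 * error * x   # gradient of (w*x - y)**2

print(f"learned w = {w:.2f}")  # converges close to the true slope of 3.0
```

The "stochastic" part is that each update uses a single randomly chosen sample rather than the whole dataset, which is what makes the approach scale to the enormous training sets used in deep learning.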
My breakouts were cut a little short because I was invited to be on theCUBE! It was completely nerve-wracking, but I had a great time. The hosts were fun to work with and the conversation seemed to flow quite well.
We talked about OpenStack, OpenPOWER, and Rackspace. You can watch my interview below if you can put up with my Texas accent:
We headed outside in the evening for a poolside reception. The weather was in the 80s and it felt great outside!
— Major Hayden (@majorhayden) September 21, 2016
Everyone made their way inside to see Train perform live!
— Major Hayden (@majorhayden) September 21, 2016
The concert was great. They played plenty of their older hits and shared a new single that hasn’t been released yet. We even heard some covers of Led Zeppelin and Rolling Stones songs! Some attendees were dragged up on stage to help with the singing and they loved it.