Stumbling into the world of 4K displays [UPDATED]

Woot suckered me into buying a 4K display at a fairly decent price and now I have a Samsung U28D590D sitting on my desk at home. I ordered a mini-DisplayPort to DisplayPort cable from Amazon and it arrived just before the monitor hit my doorstep. It’s time to enter the world of 4K displays.

The unboxing of the monitor was fairly uneventful and it powered up after a small amount of assembly. I plugged my mini-DP to DP cable into the monitor and then into my X1 Carbon 3rd gen. After a bunch of flickering, the display sprang to life, but the image looked fuzzy. After some hunting, I found that the resolution wasn’t at the monitor’s maximum:

$ xrandr -q
DP1 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 607mm x 345mm
   2560x1440     59.95*+
   1920x1080     60.00    59.94  
   1680x1050     59.95  
   1600x900      59.98

I bought this thing because it does 3840×2160. How confusing. After searching through the monitor settings, I found an option for “DisplayPort version”. It was set to version 1.1 but version 1.2 was available. I selected version 1.2 (which appears to come with something called HBR2) and then the display flickered for 5-10 seconds. There was no image on the display.

I adjusted GNOME’s Display settings back down to 2560×1440. The display sprang back to life, but it was fuzzy again. I pushed the settings back up to 3840×2160. The flickering came back and the monitor went to sleep.

My laptop has an HDMI port and I gave that a try. I had a 3840×2160 display up immediately! Hooray! But wait: that resolution runs at 30Hz over HDMI 1.4. HDMI 2.0 promises faster refresh rates, but neither my laptop nor the display supports it. After trying to use the display at maximum resolution with a 30Hz refresh rate, I realized that it wasn’t going to work.

The adventure went on and I joined #intel-gfx on Freenode. This is apparently a common problem: many onboard graphics chips simply can’t drive a 4K display at 60Hz. It turns out that the i5-5300U (that’s a Broadwell) can do it.

One of the knowledgeable folks in the channel suggested a new modeline. That had no effect. The monitor flickered and went back to sleep as it did before.
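
For reference, the usual recipe looks something like this. The exact modeline suggested in the channel wasn’t saved, so the timings below are simply whatever cvt generates for 3840×2160 at 60Hz, applied to the DP1 output from the xrandr listing above:

# Generate a candidate modeline (these values come from cvt, not the
# exact modeline suggested in #intel-gfx)
$ cvt 3840 2160 60
# Register the mode, attach it to the DisplayPort output, and switch to it
$ xrandr --newmode "3840x2160_60.00" 712.75 3840 4160 4576 5312 2160 2163 2168 2237 -hsync +vsync
$ xrandr --addmode DP1 3840x2160_60.00
$ xrandr --output DP1 --mode 3840x2160_60.00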

I picked up some education on the difference between SST and MST displays. MST displays essentially have two chips inside the monitor, each handling half of the panel; together they drive the entire display. SST monitors (the newer variety, like the one I bought) take a single stream, and one chip in the monitor figures out how to display the content.

At this point, I’m stuck with a non-working display at 4K resolution over DisplayPort. I can get lower resolutions working via DisplayPort, but that’s not ideal. 4K works over HDMI, but only at 30Hz. Again, not ideal. I’ll do my best to update this post as I come up with some other ideas.

UPDATE 2015-07-01: Thanks to Sandro Mathys for spotting a potential fix.

I found BIOS 1.08 waiting for me on Lenovo’s site. One of the last items fixed in the release notes was:

(New) Supported the 60Hz refresh rate of 4K (3840 x 2160) resolution monitor.

After a quick flash of a USB stick and a reboot to update the BIOS, the monitor sprang to life after logging into GNOME. It looks amazing! The graphics performance still isn’t stellar (but hey, this is Broadwell graphics we’re talking about), but it does 3840×2160 at 60Hz without a hiccup. I tried unplugging and replugging the DisplayPort cable several times and it never flickered.

Fedora 22 and rotating GNOME wallpaper with systemd timers

The approach from my older post about rotating GNOME’s wallpaper with systemd timers doesn’t seem to work in Fedora 22. The DISPLAY=:0 environment variable isn’t sufficient to allow systemd to use gsettings.

Instead, the script run by the systemd timer must know a little bit more about dbus. More specifically, the script needs to know the address of the dbus session so it can communicate on the bus. That’s normally kept within the DBUS_SESSION_BUS_ADDRESS environment variable.

Open a shell and you can verify that yours is set:

$ env | grep ^DBUS_SESSION
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-xxxxxxxxxx,guid=fa6ff8ded93c1df77eba3fxxxxxxxxxx

That is actually set when gnome-session starts as your user on your machine. For the script to work, we need to add a few lines at the top:

#!/bin/bash

# These three lines are new: find the current user's gnome-session
# process and borrow its dbus session address
USER=$(whoami)
PID=$(pgrep -u "$USER" gnome-session)
export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/$PID/environ | cut -d= -f2-)

# These lines are from the original script (with the find expression
# grouped so that both jpg and png files are matched)
walls_dir=$HOME/Pictures/Wallpapers
selection=$(find "$walls_dir" -type f \( -name "*.jpg" -o -name "*.png" \) | shuf -n1)
gsettings set org.gnome.desktop.background picture-uri "file://$selection"

Let’s look at what the script is doing:

  • First, we get the username of the user running the script
  • Next, we look for the gnome-session process running as that user
  • Finally, we pull the dbus address out of the environment that gnome-session had when it first started (via /proc/$PID/environ)

Go ahead and adjust your script. Once you’re done, test it by simply running the script manually and then using systemd to run it:

$ bash ~/bin/rotate_bg.sh
$ systemctl --user start gnome-background-change

Both of those commands should now rotate your GNOME wallpaper in Fedora 22.
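
For reference, here’s roughly what the user units behind that systemctl command look like. The unit names come from the command above, but the file layout, the interval, and the contents here are a sketch based on the older post rather than an exact copy:

# ~/.config/systemd/user/gnome-background-change.service (sketch)
[Unit]
Description=Rotate the GNOME wallpaper

[Service]
Type=oneshot
ExecStart=/bin/bash %h/bin/rotate_bg.sh

# ~/.config/systemd/user/gnome-background-change.timer (sketch)
[Unit]
Description=Rotate the GNOME wallpaper every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target

Enable the timer with systemctl --user enable gnome-background-change.timer and start it with systemctl --user start gnome-background-change.timer.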

Book Review: Linux Kernel Development

I picked up a copy of Robert Love’s book, Linux Kernel Development, earlier this year and I’ve worked my way through it over the past several weeks. A few people recommended the book to me on Twitter and I’m so glad they did. This book totally changed how I look at a system running Linux.

You must be this tall to ride this ride

I’ve never had formal education in computer science or software development. After all, my degree was in Biology and I was on the path to becoming a physician when this other extremely rewarding career came into play. (That’s a whole separate blog post in itself.)

Just to level-set: I can read C and make small patches when I spot problems, but I’ve never started a project in C on my own and I haven’t made any large contributions to projects written in C. I’m well-versed in Perl, Ruby, and Python, mainly from job experience and from leaning on some much more skilled colleagues.

The book recommends that you have a basic grasp of C and some knowledge around memory management and process handling. I found that I was able to fully understand about 70% of the book immediately, another 20% or so required some additional research and practice, while about 10% was mind-blowing. Obviously, that leaves me with plenty of room to grow.

Honestly, if you understand how most kernel tunables work and you know at least one language that runs on your average Linux box, you should be able to understand the majority of the material. Some sections might require re-reading, and you may need to revisit an earlier section once a later chapter sheds more light on the subject.

Moving through the content

I won’t go into a lot of detail around the content itself other than to say it’s extremely comprehensive. After all, you wouldn’t be reading a book about something as complex as the Linux kernel if you weren’t ready for an onslaught of information.

The information is organized in an effective way. The initial concepts will be familiar to anyone who has worked in user space for quite some time. If you’ve dealt with the oom-killer, loaded kernel modules, or written some horrible code that later needed to be optimized, you’ll find the beginning of the book very useful. Robert draws plenty of distinctions between kernel space and user space and explains how they interact. He takes special care to cover SMP-safe code and how to improve code that isn’t SMP-safe.

I found a ton of value in the memory management, locking, and I/O chapters. I didn’t fully understand every block of C code within the text, but the deep explanations of how data flows (and doesn’t flow) from memory to disk and back again were worth it on their own.

The best part

If I had to pick one thing to entice more people to read the book, it would be the way Robert explains every concept. He has a good formula that helps you understand the how, the what, and the why. So many books forget the why.

He takes the time to explain what frustrated the kernel developers enough to write a feature in the first place and then goes into detail about how they fixed it. He also talks about how Linux differs from other operating systems (like Unix and Windows) and how it behaves on different hardware architectures (like ARM and Alpha). So many books leave this part out, but it’s often critical for understanding difficult topics. I learned this the hard way in my biology classes when I tried to memorize concepts rather than understand the evolutionary or chemical reasons behind them.

Robert also rounds out the book with plenty of debugging tips that allow readers to trudge through bug hunts with better chances of success. He helps open the doors to the Linux kernel community and gives tips on how to get the best interactions from the community.

Wrap-up

This book is worth it for anyone who wants to learn more about how their Linux systems operate or who wants to actually write code for the kernel. Much of the kernel’s inner workings were a mystery to me before; I really only knew how to interact with a few of its interfaces.

Reading this book was like watching a cover being taken off of a big machine and listening to an expert explain how it works. It’s definitely worth reading.

Improving LXC template security

I’ve been getting involved with the Fedora Security Team lately and we’re working as a group to crush security bugs that affect Fedora, CentOS (via EPEL), and Red Hat Enterprise Linux (via EPEL). During some of this work, I stumbled upon a group of Red Hat Bugzilla tickets talking about LXC template security.

The gist of the problem is that there’s a wide variance in how users and user credentials are handled by the different LXC templates. An inventory of the current situation revealed some horrifying problems with many OS templates.

Many of the templates set an awful default root password, like rooter, toor, or root. Some of the others create a regular user with sudo privileges and give it a default, predictable password unless the user specifies otherwise.

There are some bright spots, though. Fedora and CentOS templates will accept a root password from the user during the build and set a randomized password for the root user if a password isn’t specified. Ubuntu Cloud takes another approach by locking out the root user and requiring cloud-init configuration data to configure the root account.

I kicked off a mailing list thread and wrote a terrible pull request to get things underway. Stéphane Graber requested that all templates use a shared script to handle users and credentials via standardized environment variables and command line arguments. In addition, all passwords for users (regular or root) should be empty with password-less logins disabled. Those are some reasonable requests and I’m working on a shell script that’s easy to import into LXC templates.
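
I’m still working out the details, but here’s a rough sketch of the direction that shared script could take. The argument handling, variable names, and optional user handling below are my own placeholders, not anything agreed upon on the mailing list:

#!/bin/sh
# Sketch of a shared credential helper for LXC templates.
# The rootfs path and username arguments here are placeholders.
rootfs="${1:?usage: $0 /path/to/container/rootfs [username]}"
user="$2"

# Leave the root password empty, then lock the account so the empty
# password can't actually be used to log in.
chroot "$rootfs" passwd -d root
chroot "$rootfs" passwd -l root

# Optionally create a regular user with the same treatment: no default
# password and no password-less logins.
if [ -n "$user" ]; then
    chroot "$rootfs" useradd -m "$user"
    chroot "$rootfs" passwd -l "$user"
fi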

There’s also a push to remove sshd from all LXC templates by default, but I’m hoping to keep that one tabled until the credentials issue is solved.

If you’d like to help out with the effort, let me know! I’ll probably get some code up onto Github soon and ask for comments.

Time for a new GPG key

After the unfortunate death of my Yubikey NEO and a huge mistake with backups, I’ve come to realize that it’s time for a new GPG key. My new one is already up on Keybase and there’s a plain text copy on my resume site.

Action required

If you’re using a key for me with a fingerprint of 6DC99178, that one is no longer valid. My new one is C1011FB1.

For the impatient, here’s the easiest way to retrieve my new key:

gpg2 --keyserver pgp.mit.edu --recv-key C1011FB1
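
Once it’s imported, it’s worth confirming that the fingerprint matches the copy on Keybase or my resume site:

gpg2 --fingerprint C1011FB1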

Lessons learned

Always ensure that you have complete backups of all of your keys. I made a mistake and forgot to back up my original signing subkey before I moved that key to my Yubikey. When the NEO died, so did the last copy of the most important subkey. It goes without saying but I don’t plan on making that mistake again.

Always make a full backup of every key, and generate a revocation certificate that gets backed up alongside it. There’s a good guide on this topic if you’re new to the process.
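
If you’re setting up a new key yourself, the basics look something like this (the key ID is mine and the output file names are just examples):

# Export the full key, including all secret subkeys, before moving
# anything to a smartcard like the Yubikey
gpg2 --export-secret-keys --armor C1011FB1 > secret-key-backup.asc

# Generate a revocation certificate and store it with the backup
gpg2 --output revoke-C1011FB1.asc --gen-revoke C1011FB1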

Wait. A Yubikey stopped working?

This is the first Yubikey failure that I’ve ever experienced. I’ve had two regular Yubikeys that are still working but this is my first NEO.

I emailed Yubico support earlier today about the problem and received an email back within 10-15 minutes. They offered me a replacement NEO with free shipping. It’s still a bummer about the failure but at least they worked quickly to get me a free replacement.