2019-02-08

i3 is a fast tiling window manager that helps you keep all of
your applications in the right place. It automatically tiles windows and can
manage those tiles across multiple virtual desktops.
However, there are certain applications that I really prefer in a floating
window. Floating windows do not get tiled and they can easily be dragged
around with your mouse. They’re the type of windows you expect to see on
other non-tiling desktops such as GNOME or KDE.
Convert a window to floating temporarily
If you have an existing window that you prefer to float, select that window
and press Mod + Shift + Space bar. The window will pop up in front of the
tiled windows and you can easily move it with your mouse.
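That shortcut works because of a floating toggle binding. If it does nothing on
your system, check that something like the stock bindings below (these come
from i3's default config) are present in your configuration file:
# toggle tiling / floating for the focused window
bindsym $mod+Shift+space floating toggle
# change focus between tiling / floating windows
bindsym $mod+space focus mode_toggle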
Depending on your configuration, you may be able to resize it by grabbing a
corner of the window with your mouse. You can also assign a key combination
for resizing in your i3 configuration file (usually ~/.config/i3/config):
# resize window (you can also use the mouse for that)
mode "resize" {
bindsym Left resize shrink width 10 px or 10 ppt
bindsym Down resize grow height 10 px or 10 ppt
bindsym Up resize shrink height 10 px or 10 ppt
bindsym Right resize grow width 10 px or 10 ppt
bindsym Return mode "default"
bindsym Escape mode "default"
bindsym $mod+r mode "default"
}
bindsym $mod+r mode "resize"
With this configuration, simply press Mod + r and use the arrow keys to
grow or shrink the window’s borders.
Always float certain windows
For those windows that you always want to be floating no matter what, i3 has
a solution for that, too. Just tell i3 how to identify your windows and ensure
floating enable appears in the i3 config:
for_window [window_role="About"] floating enable
for_window [class="vlc"] floating enable
for_window [title="Authy"] floating enable
In the example above, I have a few windows always set to be floating:
- [window_role="About"]: any of the “About” windows in various applications
that are normally opened by Help -> About.
- [class="vlc"]: the VLC media player can be a good one to float if you need
to stuff it away in a corner.
- [title="Authy"]: Authy’s Chrome extension looks downright silly as a tiled
window.
Any time these windows are spawned, they will automatically appear as
floating windows. You can always switch them back to tiled manually by
pressing Mod + Shift + Space bar.
Identifying windows
Identifying windows in the way that i3 cares about can be challenging.
Knowing when to use window_role or class for a window isn’t very intuitive.
Fortunately, there’s a great script, i3-get-window-criteria, from an archived
i3 FAQ thread that makes this easy.
Download this script to your system, make it executable (chmod +x
i3-get-window-criteria), and run it. As soon as you do that, a plus (+) icon
will replace your normal mouse cursor. Click on the window you care about and
look for the output in your terminal where you ran the i3-get-window-criteria
script.
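If you can’t track down the script, xprop can pull out most of the same
information by hand; this is not the script itself, just a rough equivalent.
Run it, click the window you care about, and read the instance, class, role,
and title from the properties it prints:
# click a window after running this; WM_CLASS lists the instance first,
# then the class
xprop WM_CLASS WM_WINDOW_ROLE _NET_WM_NAME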
On my system, clicking on a terminator terminal window gives me:
[class="Terminator" id=37748743 instance="terminator" title="major@indium:~"]
If I wanted to float all terminator windows, I could add this to my i3
configuration file:
for_window [class="Terminator"] floating enable
Float in a specific workspace
Do you need a window to always float on a specific workspace? i3 can do that,
too!
Let’s go back to the example with VLC. Let’s consider that we have a really
nice 4K display where we always want to watch movies and that’s where
workspace 2 lives. We can tell i3 to always float the VLC window on workspace
2 with this configuration:
set $ws1 "1: main"
set $ws2 "2: 4kdisplay"
for_window [class="vlc"] floating enable
for_window [class="vlc"] move to workspace $ws2
Restart i3 to pick up the new changes (usually Mod + Shift + R) and start
VLC. It should appear on workspace 2 as a floating window!
2019-01-31
DevConf.CZ 2019 wrapped up last weekend and it was a great event packed
with lots of knowledgeable speakers, an engaging hallway track, and delicious
food. This was my first trip to any DevConf and it was my second trip to
Brno.
Lots of snow showed up on the second day and more snow arrived later in the
week!

First talk of 2019
I co-presented a talk with one of my teammates, Nikolai, about some of the
fun work we’ve been doing at Red Hat to improve the quality of the Linux
kernel in an automated way. The room was full and we had lots of good
questions at the end of the talk. We also received some feedback that we
could take back to the team to change how we approached certain parts of the
kernel testing.

Our project, called Continuous Kernel Integration (CKI), has a goal of
reducing the number of bugs that are merged into the Linux kernel. This
requires lots of infrastructure, automation, and testing capabilities. We
shared information about our setup, the problems we’ve found, and where we
want to go in the future.
Feel free to view our slides and watch the video (which should be up soon).
Great talks from DevConf
My favorite talk of the conference was Laura Abbott’s “Monsters, Ghosts, and
Bugs.”

It’s the most informative, concise, and sane review of how all the Linux
kernels on the planet fit together. From the insanity of linux-next to the
wild world of being a Linux distribution kernel maintainer, she helped us all
understand the process of how kernels are maintained. She also took time to
help the audience understand which kernels are most important to them and how
they can make the right decisions about the kernel that will suit their
needs. There are plenty of good points in my Twitter thread about her talk.
Dan Walsh gave a detailed overview of how to use Podman instead of Docker. He
talked about the project’s origins and some of the incorrect assumptions that
many people have (that running containers means only running Docker). Running
containers without root has plenty of benefits. In addition, a significant
amount of work has been done to speed up container pulls and pushes in
Podman. I took some notes on Dan’s talk in a thread on Twitter.
The firewalld package has gained some new features recently and it’s poised
to fully take advantage of nftables in Fedora 31! Using nftables means that
firewall updates are done faster with fewer hiccups in busy environments
(think OpenStack and Kubernetes). In addition, nftables can apply rules to
IPv4 and IPv6 simultaneously, depending on your preferences. My firewalld
Twitter thread has more details from the talk.
The cgroups v2 subsystem was a popular topic in a few of the talks I attended,
including the lightning talks. There are plenty of issues to get it working
with Kubernetes and container management systems. It’s also missing the
freezer capability from the original cgroups implementation. Without that,
pausing a container, or using technology like CRIU, simply won’t work.
Nobody could name a Linux distribution that has cgroups v2 enabled at the
moment, and that’s not helping the effort move forward. Look for more news on
this soon.

OpenShift is quickly moving towards offering multiple architectures as a
first-class product feature. That would include aarch64, ppc64le, and s390x
in addition to the existing x86_64 support. Andy McCrae and Jeff Young had a
talk detailing many of the challenges along with lots of punny references to
various “arches”. I made a Twitter thread of the main points from the
OpenShift talk.
Some of the other news included:
- Real-time Linux patches are likely going to be merged into mainline (only
15 years in the making!).
- Fedora, CentOS, RHEL and EPEL communities are eager to bring more of their
processes together and make it easier for contributors to join in.
- Linux 5.0 is no more exciting than 4.20. It would have been 4.21 if Linus
had an extra finger or toe.
DevConf.US Boston 2019
The next DevConf.US is in Boston, USA this summer. I hope to see you there!
2019-01-27

Fedora 29 now has kernel 4.20 available and it has lots of new features.
One of the more interesting and easy-to-use features is the pressure stall
information (PSI) interface.
Load average
We’re all familiar with the load average measurement on Linux machines,
even if the numbers do seem a bit cryptic:
$ w
10:55:46 up 11 min, 1 user, load average: 0.42, 0.39, 0.26
The numbers denote how many processes were active, on average, over the last one, five, and
15 minutes. In my case, I have a system with four cores. My numbers above
show that less than one process was active in the last set of intervals. That
means that my system isn’t doing very much and processes are not waiting in
the queue.
However, if I begin compiling a kernel with eight threads (double my core
count), the numbers change dramatically:
$ w
11:00:28 up 16 min, 1 user, load average: 4.15, 1.89, 0.86
The one minute load average is now over four, which means some processes are
waiting to be served on the system. This makes sense because I am using eight
threads to compile a kernel on a system with four cores.
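For reference, the command that creates this load is just an ordinary kernel
build with twice as many jobs as cores:
# run inside an already-configured kernel source tree
make -j8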
More detail
We assume that the CPU is the limiting factor in the system since we know
that compiling a kernel takes lots of CPU time. We can verify (and quantify) that with the pressure stall information available in 4.20.
We start by taking a look in /proc/pressure:
$ head /proc/pressure/*
==> /proc/pressure/cpu <==
some avg10=71.37 avg60=57.25 avg300=23.83 total=100354487
==> /proc/pressure/io <==
some avg10=0.17 avg60=0.13 avg300=0.24 total=8101378
full avg10=0.00 avg60=0.01 avg300=0.16 total=5866706
==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0
But what do these numbers mean? The shortest explanation is in the patch
itself:
PSI aggregates and reports the overall wallclock time in which the
tasks in a system (or cgroup) wait for contended hardware resources.
The numbers here are percentages, not time itself:
The averages give the percentage of walltime in which one or more
tasks are delayed on the runqueue while another task has the
CPU. They’re recent averages over 10s, 1m, 5m windows, so you can tell
short term trends from long term ones, similarly to the load average.
We can try to apply some I/O pressure by making a big tarball of a kernel
source tree:
$ head /proc/pressure/*
==> /proc/pressure/cpu <==
some avg10=1.33 avg60=10.07 avg300=26.83 total=262389574
==> /proc/pressure/io <==
some avg10=40.53 avg60=13.27 avg300=3.46 total=20451978
full avg10=37.44 avg60=12.40 avg300=3.21 total=16659637
==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0
The CPU is still under some stress here, but the I/O is now the limiting
factor.
The output also shows a total= number, and that is explained in the patch as
well:
The total= value gives the absolute stall time in microseconds. This
allows detecting latency spikes that might be too short to sway the
running averages. It also allows custom time averaging in case the
10s/1m/5m windows aren’t adequate for the usecase (or are too coarse
with future hardware).
The total numbers can be helpful for machines that run for a long time,
especially when you graph and monitor them for trends.
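If you want to keep an eye on these numbers over time, a small shell loop is
enough to flag pressure spikes. This is only a minimal sketch; the 50%
threshold and the 10 second interval are arbitrary choices:
# warn whenever the 10-second "some" CPU pressure average goes above 50%
while sleep 10; do
    avg10=$(awk -F'[= ]' '/^some/ {print $3}' /proc/pressure/cpu)
    if awk -v p="$avg10" 'BEGIN {exit !(p > 50)}'; then
        echo "$(date -Is) CPU pressure is high: some avg10=${avg10}"
    fi
done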
2019-01-14
The Home Assistant project provides a great open source way to get started
with home automation that can be entirely self-contained within your home. It
already has plenty of integrations with external services, but it can also
monitor Z-Wave devices at your home or office.
Here are my devices:
- Aeotec Z-Stick Gen5 (the Z-Wave USB controller)
- Vision ZG8101 Garage Door Detector (a tilt sensor for the garage door)
Install the Z-Wave stick
Start by plugging the Z-Stick into your Linux server. Run lsusb and it should
appear in the list:
# lsusb | grep Z-Stick
Bus 003 Device 006: ID 0658:0200 Sigma Designs, Inc. Aeotec Z-Stick Gen5 (ZW090) - UZB
The system journal should also tell you which TTY is assigned to the USB
stick (run journalctl --boot and search for ACM):
kernel: usb 3-3.2: USB disconnect, device number 4
kernel: usb 3-1: new full-speed USB device number 6 using xhci_hcd
kernel: usb 3-1: New USB device found, idVendor=0658, idProduct=0200, bcdDevice= 0.00
kernel: usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
kernel: cdc_acm 3-1:1.0: ttyACM0: USB ACM device
kernel: usbcore: registered new interface driver cdc_acm
kernel: cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
In my case, my device is /dev/ttyACM0. If you have other serial devices
attached to your system, your Z-Stick may show up as ttyACM1 or ttyACM2.
Using Z-Wave in the Docker container
If you use docker-compose, simply add a devices section to your existing
YAML file:
version: '2'
services:
  home-assistant:
    ports:
      - "8123:8123/tcp"
    network_mode: "host"
    devices:
      - /dev/ttyACM0
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/raid/hass/:/config:Z
    image: homeassistant/home-assistant
    restart: always
You can add the device to manual docker run commands by adding
--device /dev/ttyACM0 to your existing command line.
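For example, a docker run equivalent of the compose file above might look
something like this (just a sketch reusing the same image, device, and volume
paths; adjust them for your setup):
docker run -d --name home-assistant \
  --restart=always \
  --net=host \
  --device /dev/ttyACM0 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /mnt/raid/hass/:/config:Z \
  homeassistant/home-assistant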
Pairing
For this step, always refer to the instructions that came with your Z-Wave
device since some require different pairing steps. In my case, I installed
the battery, pressed the button inside the sensor, and paired the device:
- Go to the Home Assistant web interface
- Click Configuration on the left
- Click Z-Wave on the right
- Click Add Node and follow the steps on screen
Understanding how the sensor works
Now that the sensor has been added, we need to understand how it works. One
of the entities the sensor provides is an alarm_level. It has two possible
values:
- 0: the sensor is tilted vertically (garage door is closed)
- 255: the sensor is tilted horizontally (garage door is open)
If the sensor changes from 0 to 255, then someone opened the garage door.
Closing the door would result in the sensor changing from 255 to 0.
Adding automation
Let’s add automation to let us know when the door is open:
- Click Configuration on the left
- Click Automation on the right
- Click the plus (+) at the bottom right
- Set a good name (like “Garage door open”)
- Under triggers, look for Vision ZG8101 Garage Door Detector Alarm Level and select it
- Set From to 0
- Set To to 255
- Leave the For spot empty
Now that we can detect the garage door being open, we need a notification
action. I love PushBullet and I have an action set up for PushBullet
notifications already. Here’s how to use an action:
- Select Call Service for Action Type in the Actions section
- Select a service to call when the trigger occurs
- Service data should contain JSON with the notification message and title
Here’s an example of my service data:
{
  "message": "Someone opened the garage door at home.",
  "title": "Garage door opened"
}
Press the orange and white save icon at the bottom right and you are ready to
go! You can tilt the sensor in your hand to test it or attach it to your
garage door and test it there.
If you want to know when the garage door is closed, follow the same steps
above, but use 255 for From and 0 for To.
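If you prefer keeping automations in configuration.yaml instead of building
them in the UI, the same automation might look roughly like the sketch below.
The entity_id and the notify service name are assumptions based on my sensor
and my PushBullet setup; check Developer Tools -> States for the real names
on your system:
automation:
  - alias: "Garage door open"
    trigger:
      # entity_id is hypothetical; yours depends on how the node was named
      - platform: state
        entity_id: sensor.vision_zg8101_garage_door_detector_alarm_level
        from: '0'
        to: '255'
    action:
      # notify.pushbullet assumes a PushBullet notifier is already configured
      - service: notify.pushbullet
        data:
          title: "Garage door opened"
          message: "Someone opened the garage door at home."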
2019-01-04
Managing iptables gets a lot easier with firewalld. You can manage rules for
the IPv4 and IPv6 stacks using the same commands and it provides fine-grained
controls for various “zones” of network sources and destinations.
Quick example
Here’s an example of allowing an arbitrary port (for netdata) through the
firewall with iptables and firewalld on Fedora:
## iptables
iptables -A INPUT -j ACCEPT -p tcp --dport 19999
ip6tables -A INPUT -j ACCEPT -p tcp --dport 19999
service iptables save
service ip6tables save
## firewalld
firewall-cmd --add-port=19999/tcp --permanent
In this example, firewall-cmd lets us open a TCP port through the firewall
with a much simpler interface, and the change is made permanent with the
--permanent argument.
You can always test a change with firewalld without making it permanent:
firewall-cmd --add-port=19999/tcp
## Do your testing to make sure everything works.
firewall-cmd --runtime-to-permanent
The --runtime-to-permanent argument tells firewalld to write the currently
active firewall configuration to disk.
Adding a port range
I use mosh with most of my servers since it allows me to reconnect to an
existing session from anywhere in the world and it makes higher latency
connections less painful. Mosh requires a range of UDP ports (60000 to 61000)
to be opened.
We can do that easily in firewalld:
firewall-cmd --add-port=60000-61000/udp --permanent
We can also see the rule it added to the firewall:
# iptables-save | grep 61000
-A IN_public_allow -p udp -m udp --dport 60000:61000 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
# ip6tables-save | grep 61000
-A IN_public_allow -p udp -m udp --dport 60000:61000 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
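You can also ask firewalld itself which ports it has opened in the active
zone. With the examples from this post applied, the output should include
both the netdata port and the mosh range, something like:
# firewall-cmd --list-ports
19999/tcp 60000-61000/udp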
If you haven’t used firewalld yet, give it a try! There’s a lot more documentation on common use cases in the Fedora firewalld documentation.