I have a CyberPower CP1350AVRLCD under my desk at home and I use it to run my computer, monitors, speakers, and a lamp. My new computer is a little more power-hungry than my old one since I just moved to a Ryzen 3700x and Nvidia GeForce 2060, and I like to keep tabs on how much energy it is consuming. Some power supplies offer a monitoring interface where you can watch your power consumption in real time, but I’m not willing to spend that much money.
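CyberPower’s PowerPanel for Linux ships a `pwrstat` CLI whose `-status` output includes the UPS’s current load. As a rough sketch, a few lines of Python can pull the wattage out of that text — the exact line format here is an assumption, so adjust the regex if your output differs:

```python
import re


def load_watts(status_text: str) -> int:
    """Pull the load in watts out of `pwrstat -status` output.

    Assumes a line shaped like "Load......... 121 Watt(18 %)",
    which is what PowerPanel for Linux typically prints.
    """
    match = re.search(r"Load\.+\s*(\d+)\s*Watt", status_text)
    if match is None:
        raise ValueError("no load line found in pwrstat output")
    return int(match.group(1))


if __name__ == "__main__":
    # In practice, feed this the stdout of `pwrstat -status`
    # (run as root); a canned sample keeps the sketch self-contained.
    sample = "Load......................... 121 Watt(18 %)"
    print(f"UPS load: {load_watts(sample)} W")
```

Polling this on a timer and shipping the numbers to a time-series database is a cheap substitute for a power supply with a built-in monitoring interface.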
UPDATE: The chromium-vaapi package is now chromium-freeworld. This post was updated on 2019-11-06 to include the change. See the end of the post for the update steps. If you use a web browser to watch videos on a laptop, you’ve probably noticed that some videos play without much impact on the battery, while others cause the fans to spin wildly and your battery life plummets. Intel designed a specification called VA API, often written VAAPI (without the space), which exposes hardware video acceleration to applications on your system through device drivers.
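The usual way to confirm a working VAAPI setup is `vainfo` from libva-utils, which lists the codec profiles your driver supports. As an even lighter sketch (the helper name and default path are mine, not from the post), you can check for the DRM render nodes that VA-API drivers open to reach the GPU:

```python
from pathlib import Path


def render_nodes(dri_dir: str = "/dev/dri") -> list[str]:
    """List DRM render nodes (renderD*) under dri_dir.

    VA-API drivers talk to the GPU through these nodes; an empty
    list usually means no GPU driver is loaded at all.
    """
    base = Path(dri_dir)
    if not base.is_dir():
        return []
    return sorted(p.name for p in base.glob("renderD*"))


if __name__ == "__main__":
    print(render_nodes() or "no render nodes found")
```

If this comes back empty, installing chromium-freeworld won’t help — the driver layer underneath VAAPI has to be working first.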
i3 has been my window manager of choice for a while and I really enjoy its simplicity and ease of use. I use plenty of gtk applications, such as Firefox and Evolution, and configuring them within i3 can be confusing. This post covers a few methods to change configurations for GNOME and gtk applications from i3.

lxappearance

Almost all of the gtk theming settings are available in lxappearance. You can change fonts, mouse cursors, icons, and colors.
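Under the hood, lxappearance writes ordinary gtk configuration; for gtk3 that lands in `~/.config/gtk-3.0/settings.ini` as plain key/value pairs. The theme and font values below are just examples of what it produces:

```ini
# ~/.config/gtk-3.0/settings.ini -- written by lxappearance
[Settings]
gtk-theme-name = Adwaita
gtk-icon-theme-name = Adwaita
gtk-font-name = Cantarell 11
gtk-cursor-theme-name = Adwaita
```

Knowing where these settings live means you can edit them by hand, or keep the file in your dotfiles repository alongside your i3 config.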
Monit is a tried-and-true method for monitoring all kinds of systems, services, and network endpoints. Deploying monit is easy. There’s only one binary daemon to run and it reads monitoring configuration from files in a directory you specify. Most Linux distributions have a package for monit and the package usually contains some basic configuration along with a systemd unit file to run the daemon reliably. However, this post is all about how to deploy it inside OpenShift.
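To make those configuration files concrete before moving to OpenShift: a minimal check dropped into monit’s include directory looks like this (the service name and paths are examples, not from the post):

```
# /etc/monit.d/nginx.conf -- restart nginx if the process or port dies
check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/bin/systemctl start nginx"
  stop program = "/usr/bin/systemctl stop nginx"
  if failed port 80 protocol http then restart
```

The monit daemon re-reads everything in that directory on reload, so each service gets its own small file.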
When you build tons of kernels every day like my team does, you look for speed improvements anywhere you can. Caching repositories, artifacts, and compiled objects makes kernel builds faster and it reduces infrastructure costs.

Need for speed

We use GitLab CI in plenty of places, and that means we have a lot of gitlab-runner configurations for OpenShift (using the kubernetes executor) and AWS (using the docker-machine executor). The runner’s built-in caching makes it easy to upload and download cached items from object storage repositories like Google Cloud Storage or Amazon S3.
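Pointing a runner’s built-in cache at S3 is a short block in the runner’s `config.toml`; the bucket name and region below are placeholders:

```toml
# config.toml -- per-runner cache backed by Amazon S3
[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "s3.amazonaws.com"
    BucketName = "example-kernel-ci-cache"
    BucketLocation = "us-east-1"
```

With `Shared = true`, runners using different executors can pull from the same cache, which is exactly what you want when the same kernel builds run on both OpenShift and AWS.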