[{"content":"","date":null,"permalink":"/tags/fedora/","section":"Tags","summary":"","title":"Fedora"},{"content":"Scroll through the list of Wayland posts posts on the blog and you\u0026rsquo;ll see that I\u0026rsquo;ve solved plenty of weird problems with Wayland and the Sway compositor. Most are pretty easy to fix but some are a bit trickier.\nJava applications are notoriously unpredictable and Wayland takes unpredictability to the next level. One particular application on my desktop always seems to start with massive cursors.\nThis post is about how I fixed and then discovered something interesting along the way.\nFixing big cursors #I recently moved some investment and trading accounts from TD Ameritrade to Tastytrade. Both offer Java applications that make trading easier, but Tastytrade\u0026rsquo;s application always started with massive cursors.\nTo make matters worse, sometimes the cursor looked lined up on the screen but then the click landed on the wrong buttons in the application! Errors are annoying. Errors that cost you money and time must be fixed. 😜\nSome web searches eventually led me to Arch Linux\u0026rsquo;s excellent Wayland wiki page. None of the adjustments or environment variables there had any effect on my cursors.\nI eventually landed on a page that suggested setting XCURSOR_SIZE. I don\u0026rsquo;t remember ever setting that, but it was being set by something:\n$ echo $XCURSOR_SIZE 24 One of the suggestions was to decrease it, so I decided to give 20 a try. That was too big, but 16 was perfect and it matched all of my other applications:\n$ export XCURSOR_SIZE=20 # /opt/tastytrade/bin/tastytrade That works fine when I start my application via the terminal, but how do I set it for the application when I start it from ulauncher in sway? 🤔\nDesktop file #The Tastytade RPM comes with a .desktop file for launching the application. I copied that over to my local applications directory:\ncp /opt/tastytrade/lib/tastytrade-tastytrade.desktop \\ ~/.local/share/applications/ Then I opened the copied ~/.local/share/applications/tastytrade-tastytrade.desktop file in a text editor:\n[Desktop Entry] Name=tastytrade Comment=tastytrade Exec=/opt/tastytrade/bin/tastytrade Icon=/opt/tastytrade/lib/tastytrade.png Terminal=false Type=Application Categories=tastyworks MimeType= I changed the Exec line to be:\nExec=env XCURSOR_SIZE=16 /opt/tastytrade/bin/tastytrade I launched the application again after making that change, but the cursors were still huge! There has to be another way. 🤔\nsystemd does everything 😆 #After more searching and digging, I discovered that systemd has a capability to set environment variables for user sessions:\nConfiguration files in the environment.d/ directories contain lists of environment variable assignments passed to services started by the systemd user instance. systemd-environment-d-generator(8) parses them and updates the environment exported by the systemd user instance. 
See below for an discussion of which processes inherit those variables.\nIt is recommended to use numerical prefixes for file names to simplify ordering.\nFor backwards compatibility, a symlink to /etc/environment is installed, so this file is also parsed.\nLet\u0026rsquo;s give that a try:\n$ mkdir -p ~/.config/environment.d/ $ vim ~/.config/environment.d/wayland.conf In the file, I added one line with a comment (because you will soon forget why you added it 😄):\n# Fix big cursors in Java apps in Wayland XCURSOR_SIZE=16 After a reboot, I launched my Java application and boom \u0026ndash; the cursors were perfect! 🎉\nI went back and cleaned up some other hacks I had applied and added them to that wayland.conf file:\n# This was important at some point but I\u0026#39;m afraid to remove it. # Note to self: make detailed comments when adding lines here. SDL_VIDEODRIVER=wayland QT_QPA_PLATFORM=wayland # Reduce window decorations for VLC QT_WAYLAND_DISABLE_WINDOWDECORATION=\u0026#34;1\u0026#34; # Fix weird window handling when Java apps do certain pop-ups _JAVA_AWT_WM_NONREPARENTING=1 # Ensure Firefox is using Wayland code (not needed any more) MOZ_ENABLE_WAYLAND=1 # Disable HiDPI GDK_SCALE=1 # Fix big cursors in Java apps in Wayland XCURSOR_SIZE=16 I\u0026rsquo;m told there are some caveats with this solution, especially if your Wayland desktop doesn\u0026rsquo;t use systemd to start. This is working for me with GDM launching Sway on Fedora 40.\n","date":"26 April 2024","permalink":"/p/java-big-cursors-wayland/","section":"Posts","summary":"Java applications under Wayland seemed to have all different sizes of cursors, but\nsome were way, way, too big. 🐘","title":"Fix big cursors in Java applications in Wayland"},{"content":"","date":null,"permalink":"/tags/java/","section":"Tags","summary":"","title":"Java"},{"content":"","date":null,"permalink":"/tags/linux/","section":"Tags","summary":"","title":"Linux"},{"content":"","date":null,"permalink":"/","section":"Major Hayden","summary":"","title":"Major Hayden"},{"content":"","date":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"","date":null,"permalink":"/tags/sway/","section":"Tags","summary":"","title":"Sway"},{"content":"","date":null,"permalink":"/tags/systemd/","section":"Tags","summary":"","title":"Systemd"},{"content":"","date":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":null,"permalink":"/tags/wayland/","section":"Tags","summary":"","title":"Wayland"},{"content":"","date":null,"permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud"},{"content":"We\u0026rsquo;re all familiar with the trusty old dhclient on our Linux systems, but it went end-of-life in 2022:\nNOTE: This software is now End-Of-Life. 4.4.3 is the final release planned. We will continue to keep the public issue tracker and user mailing list open. You should read this file carefully before trying to install or use the ISC DHCP Distribution. Most Linux distributions use dhclient along with cloud-init for the initial dhcp request during the first part of cloud-init\u0026rsquo;s work. I set off to switch Fedora\u0026rsquo;s cloud-init package to dhcpcd instead.\nWhat\u0026rsquo;s new with dhcpcd? 
#There are some nice things about dhcpcd that you can find in the GitHub repository:\nVery small footprint with almost no dependencies on Fedora It can do DHCP and DHCPv6 It can also be a ZeroConf client The project had its last release back in December 2023 and had commits as recently as this week.\nBut I use NetworkManager #That\u0026rsquo;s great! A switch from dhclient to dhcpcd for cloud-init won\u0026rsquo;t affect you.\nWhen cloud-init starts, it does an initial dhcp request to get just enough networking to reach the cloud\u0026rsquo;s metadata service. This service provides all kinds of information for cloud-init, including network setup instructions and initial scripts to run.\nNetworkManager doesn\u0026rsquo;t start taking action until cloud-init has written the network configuration to the system.\nBut I use systemd-networkd #Same as with NetworkManager, this change applies to the very early boot and you won\u0026rsquo;t notice a different when deploying new cloud systems.\nHow can I get it right now? #If you\u0026rsquo;re using a recent build of Fedora rawhide (the unstable release under development), you likely have it right now on your cloud instance. Just run journalctl --boot, search for dhcpcd, and you should see these lines:\ncloud-init[725]: Cloud-init v. 24.1.4 running \u0026#39;init-local\u0026#39; at Wed, 17 Apr 2024 14:39:36 +0000. Up 6.13 seconds. dhcpcd[727]: dhcpcd-10.0.6 starting kernel: 8021q: 802.1Q VLAN Support v1.8 dhcpcd[730]: DUID 00:01:00:01:2d:b2:9b:a9:06:eb:18:e7:22:dd dhcpcd[730]: eth0: IAID 18:e7:22:dd dhcpcd[730]: eth0: soliciting a DHCP lease dhcpcd[730]: eth0: offered 172.31.26.195 from 172.31.16.1 dhcpcd[730]: eth0: leased 172.31.26.195 for 3600 seconds avahi-daemon[706]: Joining mDNS multicast group on interface eth0.IPv4 with address 172.31.26.195. avahi-daemon[706]: New relevant interface eth0.IPv4 for mDNS. avahi-daemon[706]: Registering new address record for 172.31.26.195 on eth0.IPv4. dhcpcd[730]: eth0: adding route to 172.31.16.0/20 dhcpcd[730]: eth0: adding default route via 172.31.16.1 dhcpcd[730]: control command: /usr/sbin/dhcpcd --dumplease --ipv4only eth0 There\u0026rsquo;s also an update pending for Fedora 40, but it\u0026rsquo;s currently held up by the beta freeze. That should appear as an update as soon as Fedora 40 is released.\nKeep in mind that if you have a system deployed already, cloud-init won\u0026rsquo;t need to run again. Updating to Fedora 40 will update your cloud-init and pull in dhcpcd, but it won\u0026rsquo;t need to run again since your configuration is already set.\n","date":"18 April 2024","permalink":"/p/fedora-cloud-init-dhcpcd/","section":"Posts","summary":"Fedora\u0026rsquo;s cloud-init package now uses dhcpcd in place of dhclient, which went end of life in 2022. 💀","title":"cloud-init and dhcpcd"},{"content":"","date":null,"permalink":"/tags/centos/","section":"Tags","summary":"","title":"Centos"},{"content":"","date":null,"permalink":"/tags/presentation/","section":"Tags","summary":"","title":"Presentation"},{"content":"The 2024 Texas Linux Festival just ended last weekend and it was a fun event as always. It\u0026rsquo;s one my favorite events to attend because it\u0026rsquo;s really casual. You have plenty of opportunities to see old friends, meet new people, and learn a few things along the way.\nI was fortunate enough to have two talks accepted for this year\u0026rsquo;s event. 
One was focused on containers while the other was a (very belated) addition to my impostor syndrome talk from 2015.\nThis was also my first time building slides with reveal-md, a \u0026ldquo;batteries included\u0026rdquo; package for making reveal.js slides. Nothing broke too badly and that was a relief.\nContainers talk #I\u0026rsquo;ve wanted to share more of what I\u0026rsquo;ve done with CoreOS in low-budget container deployments and this seemed like a good time to share it with the world out loud. My talk, Automated container updates with GitHub and CoreOS, walked the audience through how to deploy containers on CoreOS, keep them updated, and update the container image source.\nMy goal was to keep it as low on budget as possible. Much of it was centered around a stack of caddy, librespeed, and docker-compose. All of it was kept up to date with watchtower.\nMy custom Caddy container needed support for Porkbun\u0026rsquo;s DNS API and I used GitHub Actions to build that container and serve it to the internet using GitHub\u0026rsquo;s package hosting. This also gave me the opportunity to share how awesome Porkbun is for registering domains, including their customized pig artwork for every TLD imaginable. 🐷\nWe had a great discussion afterwards about how CoreOS does indeed live on as Fedora CoreOS.\nTech career talk #This talk made me nervous because it had a lot of slides to cover, but I also wanted to leave plenty of time for questions. Five tips for a thriving technology career built upon my old impostor syndrome talk by sharing some of the things I\u0026rsquo;ve learned over the year that helped me succeed in my career.\nI managed to end early with time for questions, and boy did the audience have questions! 📣 Some audience members helped me answer some questions, too!\nWe talked a lot about office politics, tribal knowledge, and toxic workplaces. The audience generally agreed that most businesses tried to rub copious amounts of Confluence on their tribal knowledge problem, but it never improved. 😜\nThe room was full with people standing in the back and I\u0026rsquo;m tremendously humbled by everyone who came. I received plenty of feedback afterwards and that\u0026rsquo;s the best gift I could ever get. 🎁\nOther talks #Anita Zhang had an excellent keynote talk on the second day about her unusual path into the world of technology. Her slides were pictures of her dog that lined up with various parts of her story. That was a great idea.\nKyle Davis offered talks on valkey and bottlerocket. There was plenty about the redis and valkey story that I didn\u0026rsquo;t know and the context was useful. It looks like you can simply drop valkey into most redis environments without much disruption.\nThomas Cameron talked about running OKD on Fedora CoreOS in his home lab. There were quite a few steps, but he did a great job of connecting the dots between what needed to be done and why.\nAround the exhibit hall #I helped staff the Fedora/CoreOS booth and we had plenty of questions. Most questions were around the M1 Macbook running Asahi Linux that was on the table. 😉\nThere were still quite a few misconceptions around the CentOS Stream changes, as well as how AlmaLinux and Rocky Linux fit into the picture. Our booth was right next to the AlmaLinux booth and I had the opportunity to meet Jonathan Wright. 
That was awesome!\nI can\u0026rsquo;t wait for next year\u0026rsquo;s event.\n","date":"16 April 2024","permalink":"/p/texas-linux-fest-2024-recap/","section":"Posts","summary":"I gave two talks at this year\u0026rsquo;s event and ran into lots of old friends and colleagues. 🐧","title":"Texas Linux Fest 2024 recap 🤠"},{"content":"","date":null,"permalink":"/tags/amd/","section":"Tags","summary":"","title":"Amd"},{"content":"","date":null,"permalink":"/tags/laptop/","section":"Tags","summary":"","title":"Laptop"},{"content":"Static blogs come with tons of advantages. They\u0026rsquo;re cheap to serve. You store all your changes in git. People with spotty internet connections can clone your blog and run it locally.\nHowever, one of the challenges that I\u0026rsquo;ve run into over the years is around analytics.\nI could quickly add Google Analytics to the site and call it a day, but is that a good idea? Many browsers have ad blocking these days and the analytics wouldn\u0026rsquo;t even run. For those that don\u0026rsquo;t have an ad blocker, do I want to send more data about them to Google? 🙃\nHow about running my own self-hosted analytics platform? That\u0026rsquo;s pretty easy with containers, but most ad blockers know about those, too.\nThis post talks about how to host a static blog in a container behind a Caddy web server. We will use goaccess to analyze the log files on the server itself to avoid dragging in an analytics platform.\nWhy do you need analytics? #Yes, yes, I know this comes from the guy who wrote a post about writing for yourself, but sometimes I like to know which posts are popular with other people. I also like to know if something\u0026rsquo;s misconfigured and visitors are seeing 404 errors for pages which should be working.\nIt can also be handy to know when someone else is writing about you, especially when those things are incorrect. 😉\nSo my goals here are these:\nGet some basic data on what\u0026rsquo;s resonating with people and what isn\u0026rsquo;t Find configuration errors that are leading visitors to error pages Learn more about who is linking to the site Do all this without impacting user privacy through heavy javascript trackers What are the ingredients? #There are three main pieces:\nCaddy, a small web server that runs really well in containers This blog, which is written with Hugo and stored in GitHub Goaccess, a log analyzer with a capability to do live updates via websockets Caddy will write logs to a location that goaccess can read. In turn, goaccess will write log analysis to an HTML file that caddy can serve. The HTML file served by caddy will open a websocket to goaccess for live analytics.\nA static blog in a container? #We can pack a static blog into a very thin container with an extremely lightweight web server. After all, caddy can handle automatic TLS certificate installation, logging, and caching. That just means we need the most basic webserver in the container itself.\nI was considering a second caddy container with the blog content in it until I stumbled upon a great post by Florin Lipan about The smallest Docker image to serve static websites. He went down a rabbit hole to make the smallest possible web server container with busybox.\nHis first stop led to a 1.25MB container, and that\u0026rsquo;s tiny enough for me.1 🤏\nI built a container workflow in GitHub Actions that builds a container, puts the blog in it, and stores that container as a package in the GitHub repository. 
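The workflow file itself is not reproduced in this post, but it boils down to something like this rough sketch (the Hugo setup action and the exact step layout are approximations on my part; the image tag matches the compose file shown later):\nname: build blog container\non:\n  push:\n    branches: [main]\njobs:\n  build:\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      packages: write\n    steps:\n      - uses: actions/checkout@v4\n      # Build the static site so ./public exists for the Dockerfile to copy\n      - uses: peaceiris/actions-hugo@v2\n        with:\n          hugo-version: latest\n      - run: hugo --minify\n      # Build the image and push it to the GitHub container registry\n      - uses: docker/login-action@v3\n        with:\n          registry: ghcr.io\n          username: ${{ github.actor }}\n          password: ${{ secrets.GITHUB_TOKEN }}\n      - uses: docker/build-push-action@v5\n        with:\n          context: .\n          push: true\n          tags: ghcr.io/major/major.io:main\n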
It all starts with a brief Dockerfile:\nFROM docker.io/library/busybox:1.36.1 RUN adduser -D static USER static WORKDIR /home/static COPY ./public/ /home/static CMD [\u0026#34;busybox\u0026#34;, \u0026#34;httpd\u0026#34;, \u0026#34;-f\u0026#34;, \u0026#34;-p\u0026#34;, \u0026#34;3000\u0026#34;] We start with busybox, add a user, put the website content into the user\u0026rsquo;s home directory, and start busybox\u0026rsquo;s httpd server. The container starts up and serves the static content on port 3000.\nCaddy logs #Caddy writes its logs in a JSON format and goaccess already knows how to parse caddy logs. Our first step is to get caddy writing some logs. In my case, I have a directory called caddy/logs/ in my home directory where those logs are written.\nI\u0026rsquo;ll mount the log storage into the caddy container and mount one extra directory to hold the HTML file that goaccess will write. Here\u0026rsquo;s my docker-compose.yaml excerpt:\ncaddy: image: ghcr.io/major/caddy:main container_name: caddy ports: - \u0026#34;80:80/tcp\u0026#34; - \u0026#34;443:443/tcp\u0026#34; - \u0026#34;443:443/udp\u0026#34; restart: unless-stopped volumes: - ./caddy/Caddyfile:/etc/caddy/Caddyfile:Z - caddy_data:/data - caddy_config:/config # Caddy writes logs here 👇 - ./caddy/logs:/logs:z # This is for goaccess to write its HTML file 👇 - ./storage/goaccess_major_io:/var/www/goaccess_major_io:z Now we need to update the Caddyfile to tell caddy where to place the logs and add a reverse_proxy configuration for our new container that serves the blog:\nmajor.io { # We will set up this container in a moment 👇 reverse_proxy major_io:3000 { lb_try_duration 30s } # Tell Caddy to write logs to `/logs` which # is `storage/logs` on the host: log { output file /logs/major.io-access.log { roll_size 1024mb roll_keep 20 roll_keep_for 720h } } } Great! We now have the configuration in place for caddy to write the logs and the caddy container can mount the log and analytics storage.\nEnabling analytics #We\u0026rsquo;re heading back to the docker-compose.yml file once more, this time to set up a goaccess container:\ngoaccess_major_io: image: docker.io/allinurl/goaccess:latest container_name: goaccess_major_io restart: always volumes: # Mount caddy\u0026#39;s log files 👇 - \u0026#34;./caddy/logs:/var/log/caddy:z\u0026#34; # Mount the directory where goaccess writes the analytics HTML 👇 - \u0026#34;./storage/goaccess_major_io:/var/www/goaccess:rw\u0026#34; command: \u0026#34;/var/log/caddy/major.io-access.log --log-format=CADDY -o /var/www/goaccess/index.html --real-time-html --ws-url=wss://stats.major.io:443/ws --port=7890 --anonymize-ip --ignore-crawlers --real-os\u0026#34; This gets us a goaccess container to parse the logs from caddy. We need to update the caddy configuration so that we can reach the goaccess websocket for live updates:\nstats.major.io { root * /var/www/goaccess_major_io file_server reverse_proxy /ws goaccess_major_io:7890 } At this point, we have caddy writing logs in the right place, goaccess can read them, and the analytics output is written to a place where caddy can serve it. We\u0026rsquo;ve also exposed the websocket from goaccess for live updates.\nServing the blog #We\u0026rsquo;ve reached the most important part!\nWe added the caddy configuration to reach the blog container earlier, but now it\u0026rsquo;s time to deploy the container itself. 
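Before handing it to Caddy, a quick smoke test of the published image is worth a moment (blog-test is just a throwaway container name for this example):\n$ docker run --rm -d -p 3000:3000 --name blog-test ghcr.io/major/major.io:main $ curl -s http://localhost:3000/ | head $ docker stop blog-test If the first few lines of the homepage HTML come back, the image is good to go.\n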
As a reminder, this is the container with busybox and the blog content that comes from GitHub Actions.\nThe docker-compose.yml configuration here is very basic:\nmajor_io: image: ghcr.io/major/major.io:main container_name: major_io restart: always Caddy will connect to this container on port 3000 to serve the blog. (We set port 3000 in the original Dockerfile).\nAt this point, everything should be set to go. Make it live with:\ndocker-compose up -d This should bring up the goaccess and blog containers while also restarting caddy. The website should be visible now at major.io (and that\u0026rsquo;s how you\u0026rsquo;re reading this today).\nWhat about new posts? #I\u0026rsquo;m glad you asked! That was something I wondered about as well. How do we get the new blog content down to the container when a new post is written? 🤔\nAs I\u0026rsquo;ve written in the past, I like using watchtower to keep containers updated. Watchtower offers an HTTP API interface for webhooks to initiate container updates. We can trigger that update via a simple curl request from GitHub Actions when our container pipeline runs.\nMy container workflow has a brief bit at the end that does this:\n- name: Update the blog container if: github.event_name != \u0026#39;pull_request\u0026#39; run: | curl -s -H \u0026#34;Authorization: Bearer ${WATCHTOWER_TOKEN}\u0026#34; \\ https://watchtower.thetanerd.com/v1/update env: WATCHTOWER_TOKEN: ${{ secrets.WATCHTOWER_TOKEN }} You can enable this in watchtower with a few new environment variables in your docker-compose.yml:\nwatchtower: # New environment variables 👇 environment: - WATCHTOWER_HTTP_API_UPDATE=true - WATCHTOWER_HTTP_API_TOKEN=SUPER-SECRET-TOKEN-PASSWORD - WATCHTOWER_HTTP_API_PERIODIC_POLLS=true WATCHTOWER_HTTP_API_UPDATE enables the updating via API and WATCHTOWER_HTTP_API_TOKEN sets the token required when making the API request. If you set WATCHTOWER_HTTP_API_PERIODIC_POLLS to true, watchtower will still periodically look for updates to containers even if an API request never appeared. By default, watchtower will stop doing periodic updates if you enable the API.\nThis is working on my site right now and you can view my public blog stats on stats.major.io. 🎉\nFlorin went all the way down to 154KB and I was extremely impressed. However, I\u0026rsquo;m not too worried about an extra megabyte here. 😉\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"4 April 2024","permalink":"/p/static-blog-analytics/","section":"Posts","summary":"Static blogs are easy to serve, but so many of the free options have no analytics whatsoever.\nThis post talks about how to serve your own blog from a container with live updating analytics","title":"Roll your own static blog analytics"},{"content":"","date":null,"permalink":"/tags/thinkpad/","section":"Tags","summary":"","title":"Thinkpad"},{"content":"","date":null,"permalink":"/tags/caddy/","section":"Tags","summary":"","title":"Caddy"},{"content":"I recently told a coworker about Caddy, a small web and proxy server with a very simple configuration. It also has a handy feature where it manages your TLS certificate for you automatically.\nHowever, one problem I had at home with my CoreOS deployment is that I don\u0026rsquo;t have inbound network access to handle the certificate verification process. 
Most automated certificate vendors need to reach your web server to verify that you have control over your domain.\nThis post talks about how to work around this problem with domains registered at Porkbun.\nDNS validation #Certificate providers usually default to verifying domains by making a request to your server and retrieving a validation code. If your systems are all behind a firewall without inbound access from the internet, you can use DNS validation instead.\nThe process looks something like this:\nYou tell the certificate provider the domain names you want on your certificate The certificate provider gives you some DNS records to add wherever you host your DNS records You add the DNS records You get your certificates once the certificate provider verifies the records. You can do this manually with something like acme.sh today, but it\u0026rsquo;s painful:\n# Make the initial certificate request acme.sh --issue --dns -d example.com \\ --yes-I-know-dns-manual-mode-enough-go-ahead-please # Add your DNS records manually. # Verify the DNS records and issue the certificates. acme.sh --issue --dns -d example.com \\ --yes-I-know-dns-manual-mode-enough-go-ahead-please --renew # Copy the keys/certificates and configure your webserver. We don\u0026rsquo;t want to live this way.\nLet\u0026rsquo;s talk about how Caddy can help.\nAdding Porkbun support to Caddy #Caddy is a minimal webserver and Porkbun support doesn\u0026rsquo;t get included by default. However, we can quickly add it via a simple container build:\nFROM caddy:2.7.6-builder AS builder RUN xcaddy build \\ --with github.com/caddy-dns/porkbun FROM caddy:2.7.6 COPY --from=builder /usr/bin/caddy /usr/bin/caddy This is a two stage container build where we compile the Porkbun support and then use that new caddy binary in the final container.\nWe\u0026rsquo;re not done yet!\nAutomated Caddy builds with updates #I created a GitHub repository that builds the Caddy container for me and keeps it updated. There\u0026rsquo;s a workflow to publish a container to GitHub\u0026rsquo;s container repository and I can pull containers from there on my various CoreOS machines.\nIn addition, I use Renovate to watch for Caddy updates. New updates come through a regular pull request and I can apply them whenever I want.\nExample pull request from Renovate Connecting to Porkbun #We start here by getting an API key to manage the domain at Porkbun.\nLog into your Porkbun dashboard. Click Details to the right of the domain you want to manage. Look for API Access in the leftmost column and turn it on. At the top right of the dashboard, click Account and then API Access. Add a title for your new API key, such as Caddy, and click Create API Key. Save the API key and secrey key that are displayed. Open up your Caddy configuration file (the Caddyfile) and add some configuration:\n{ email me@example.com # Uncomment this next line if you want to get # some test certificates first. # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory acme_dns porkbun { api_key pk1_****** api_secret_key sk1_****** } } example.com { handle { respond \u0026#34;Hello world!\u0026#34; } } Save the Caddyfile and restart your Caddy server or container. Caddy will immediately begin requesting your TLS certificates and managing your DNS records for those certificates. This normally finishes in less than 30 seconds or so during the first run.\nIf you don\u0026rsquo;t see the HTTPS endpoint working within a minute or two, be sure to check the Caddy logs. 
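Since my Caddy runs as a container, the fastest way to see those logs is docker logs (the container name caddy is an assumption here; substitute whatever your container is called):\n$ docker logs --since 10m caddy Look for lines that mention acme, tls, or porkbun.\n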
You might have a typo in a Porkbun API key or the domain you\u0026rsquo;re trying to modify doesn\u0026rsquo;t have the API Access switch enabled.\nRemember that Porkbun requires you to enable API access for each domain. API access is disabled at Porkbun by default. That\u0026rsquo;s it! 🎉\nRenewals #Caddy will keep watch over the certificates and begin the renewal process as the expiration approaches. It has a very careful retry mechanism that ensures your certificates are updated without tripping any rate limits at the certificate provider.\nFurther reading #Caddy\u0026rsquo;s detailed documentation about Automatic HTTPS and the tls configuration directive should answer most questions about how the process works.\n","date":"29 February 2024","permalink":"/p/caddy-porkbun/","section":"Posts","summary":"Caddy offers a great web and proxy server experience with minimal configuration and automated TLS certificates. Learn how to connect Caddy to Porkbun to get TLS certificates by managing your DNS records for you automatically. 🐷","title":"Connect Caddy to Porkbun"},{"content":"","date":null,"permalink":"/tags/containers/","section":"Tags","summary":"","title":"Containers"},{"content":"","date":null,"permalink":"/tags/coreos/","section":"Tags","summary":"","title":"Coreos"},{"content":"","date":null,"permalink":"/tags/dns/","section":"Tags","summary":"","title":"Dns"},{"content":"","date":null,"permalink":"/tags/ssl/","section":"Tags","summary":"","title":"Ssl"},{"content":"","date":null,"permalink":"/tags/tls/","section":"Tags","summary":"","title":"Tls"},{"content":"AMD\u0026rsquo;s new Zen 4 processors started rolling out in 2022 and I\u0026rsquo;ve been watching for the mobile CPUs to reach laptops. I like where AMD is going with these chips and how they provide lots of CPU power without eating up the battery.\nI recently ordered a ThinkPad Z13 Gen 2 with an AMD Ryzen 7. As you might expect, I loaded it up with Fedora Linux and set out to ensure that everything works.\nThis post includes all of the configurations and changes I added along the way.\nPower management #I removed the power profiles daemon that comes with Fedora by default. and replaced it with tlp. This is a great package for ThinkPad laptops as it takes care of most of the power management configuration for you with sane defaults. It also offers an easy to read configuration file where you can make adjustments.\nThe defaults seem to be working well so far, but my only complaint is that the power management for amdgpu seems to be really aggressive. Graphics performance on battery power is okay, but I\u0026rsquo;m told this improves in kernel 6.7. I\u0026rsquo;m on 6.6.11 in Fedora 39 right now.\nI\u0026rsquo;ll wait to see if this new kernel makes any improvements.\nTouchpad #The ELAN touchpad in the Z13 is a bit different. It\u0026rsquo;s a haptic touchpad. It doesn\u0026rsquo;t push down with a click like the other thinkpads. It provides haptic feedback, much like a mobile phone does when you tap on the screen. (I usually turn this off on my phone, but it feels good on the laptop.)\nThe touchpad works right out of the box without any additional configuration. I made a basic Sway configuration stanza to get it configured with my preferences:\n# ThinkPad Z13 Gen 2 AMD Touchpad input \u0026#34;11311:40:SNSL0028:00_2C2F:0028_Touchpad\u0026#34; { drag disabled tap enabled dwt enabled natural_scroll disabled } The configuration above enables tap to click and dragging with taps. 
I like the old school scrolling style and I\u0026rsquo;ve disabled the natural scroll.\nYou can always get a list of your input devices in Sway with swaymsg:\nswaymsg -t get_inputs Display #The display worked right out of the box but the UI elements were scaled up far too large for me. I typically value screen real estate over all other aspects, but my usual default of scaling to 1.0 made the UI far too small.\nI set my output scaling to 1.2:\n# Disable HiDPI output * scale 1.2 I also enabled the RPM Fusion repos to get the freeworld AMD Mesa drivers:\nsudo dnf install \\ https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \\ https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm sudo dnf swap mesa-va-drivers mesa-va-drivers-freeworld sudo dnf swap mesa-vdpau-drivers mesa-vdpau-drivers-freeworld Audio #Sound worked right out of the box, but I found that the loudness preset from easyeffects made the speakers sound a little bit better:\nsudo dnf install easyeffects Everything else #Everything else just worked!\nI\u0026rsquo;m really pleased with the performance and the battery life so far. My only complaint is that the OLED screen can be a battery hog at times.\nFor more details, check out the Arch Linux wiki page for the Z13. They documented lots of the function keys if you want to create keyboard shortcuts and they link to some downloadable monitor profiles.\n","date":"14 January 2024","permalink":"/p/linux-thinkpad-z13-amd/","section":"Posts","summary":"Now that AMD\u0026rsquo;s Zen 4 CPUs landed in lots of laptops, I picked up a ThinkPad Z13 G2\nwith an AMD Ryzen CPU. Did I put Linux on it? Of course I did. 🐧","title":"Linux on the AMD ThinkPad Z13 G2"},{"content":"Ah, dark mode! I savor my dark terminals, window decorations, and desktop wallpapers. It\u0026rsquo;s so much easier on my eyes on those long work days. 😎\nHowever, I think the author Mary Oliver said it best:\nSomeone I loved once gave me a box full of darkness. It took me years to understand that this too, was a gift.\nIn most window managers, such as GNOME or KDE, switching to dark mode involves a simple trip to the settings panels and clicking different themes. Sway doesn\u0026rsquo;t offer us those types of comforts, but we can get dark mode there, too!\nGTK applications #If you happen to have GNOME on your system alongside sway, go into Settings, then Appearance and select Dark. You can also get dark mode by applying a setting in ~/.config/gtk-3.0/settings.ini:\n[Settings] gtk-application-prefer-dark-theme=1 Restart whichever application you were using and it should pick up the new configuration.\nFirefox, for example, ships with an automatic appearance setting that follows the OS. That should be reflected immediately upon restart. If not, go into Firefox\u0026rsquo;s settings, and look for dark mode under the Language and Appearance section of the general settings.\nQT applications #Most of my applications are GTK-based, but I have one or two which use QT. Again, just like the GTK example, if you have KDE installed along side Sway, you can configure dark mode there easily. Just open the system settings and look for Breeze Dark in the Plasma Style section.\nYou don\u0026rsquo;t have KDE? Don\u0026rsquo;t worry! There are a couple of commands which should work:\n# This should work for all QT/KDE apps # if you have the Breeze Dark theme installed. 
lookandfeeltool -platform offscreen \\ --apply \u0026#34;org.kde.breezedark.desktop\u0026#34; # You can set the theme for GTK apps here as well # if you run into problems. dbus-send --session --dest=org.kde.GtkConfig \\ --type=method_call /GtkConfig org.kde.GtkConfig.setGtkTheme \\ \u0026#34;string:Breeze-dark-gtk\u0026#34; Alternate dark mode based on time #Many window managers offer a method for adjusting dark and light modes based on the time of day. For example, some people love brighter interfaces during the day and darker ones at night. There\u0026rsquo;s a great tool called darkman that makes this easier. 🤓\nThe darkman service runs in the background and runs various commands to change dark mode settings for all kinds of window managers. It also speaks to dbus directly to set the configurations if needed.\nIt also has a directory full of user contributed scripts to change dark and light modes for various environments. You might be able to pull some commands from these files to test which configurations might work best on your system.\n","date":"9 January 2024","permalink":"/p/sway-dark-mode/","section":"Posts","summary":"Dark mode lovers rejoice! It\u0026rsquo;s possible to get (most) applications to show up\nin dark mode in the Sway window manager. 😎","title":"Dark mode in Sway"},{"content":"","date":null,"permalink":"/tags/firefox/","section":"Tags","summary":"","title":"Firefox"},{"content":"","date":null,"permalink":"/tags/gnome/","section":"Tags","summary":"","title":"Gnome"},{"content":"","date":null,"permalink":"/tags/kde/","section":"Tags","summary":"","title":"Kde"},{"content":"","date":null,"permalink":"/tags/advice/","section":"Tags","summary":"","title":"Advice"},{"content":"","date":null,"permalink":"/tags/career/","section":"Tags","summary":"","title":"Career"},{"content":"","date":null,"permalink":"/tags/diversity/","section":"Tags","summary":"","title":"Diversity"},{"content":"️ 👋 This post represents my own views on the topic of diversity and it doesn\u0026rsquo;t represent the views of my employer or any professional group I belong to.\nI\u0026rsquo;ve written a post on diversity and deleted it several times. It remains a sensitive topic for different people for different reasons. My gut feeling is that no matter how you frame a post on diversity, some group of people will be upset about it1.\nThere was a great speaker who came and spoke to us at my last job and she made an excellent point that I remember today:\nYour experiences are yours. Nobody can take them away from you.\nNobody can say that your experiences do not matter.\nNobody can tell you that you didn\u0026rsquo;t experience what you experienced.\nSharing these experiences with others allows us to grow and understand more about the world around us.\nThat speech entirely changed my way of thinking about interactions with other people at work and at home. There are two main benefits here:\nIt\u0026rsquo;s incredibly freeing for someone who has experienced something to be able to share it with others and not be told that their experience was wrong or misguided. It\u0026rsquo;s also freeing for the listener to take in someone else\u0026rsquo;s experience and be able to ask clarifying questions so they get a better understanding of how something felt for someone else. With that in mind, here\u0026rsquo;s we go with the rest of the post. I\u0026rsquo;m not deleting it this time.\nI promise. 
😉\nMy first experience #I\u0026rsquo;ve written in the past about my unexpected leap to lead an information security architecture team in a previous role. Being a director was a new world unto itself, but then I found that my team wasn\u0026rsquo;t performing well. To make matters worse, our success was critical to the ongoing work of the security department as a whole.\nWe had three members of the team that all brought something unique to the team\u0026rsquo;s perspective. All three were men of different races, but each had a different approach to security based on their backgrounds and experiences. One left the team due to some interpersonal issues that eventually boiled over.\nI was suddenly down to two people and our team needed to hire two as soon as possible.\nRecruiting teams started putting feelers out into the market to find talented people and I was poking several friends for referrals. A colleague in another department reached out and really wanted to join the team. I knew about her experience from several previous interactions and she was highly recommended from her peers. She joined the team and hit the ground running.\nAs applicants began trickling in through the recruiting team, I started with screening calls for each. We really needed someone with skills in a few key areas:\nGreat communicator with empathy Knowledge of secure development and operations practices Someone who could be trusted to operate independently and work on team projects Most of the applicants I screened were male and that wasn\u0026rsquo;t a surprise at the time. We brought five through the screening into interviews and it was down to four males and one female. Three of them turned out to be great and they all had deep knowledge of security architecture. After another round of interviews, we began to realize that the female matched the other applicants, but her communication skills were stronger, especially under pressure.\nNeedless to say, we sent her the offer and she accepted! We were thrilled! Our team was full!\nGetting underway #We began chipping away at the mountain of projects set aside for our team and started making progress. Our new team member was struggling to move from the rigidity of her previous employer to our new way of working, but she adjusted well over time.\nI sat down in one of our weekly leadership meetings some time later. These meetings usually involved a round-the-horn of what\u0026rsquo;s working well for each team, the threats on the board for the next few months, and our plans.\nWe usually had an attendee from HR in the meetings for various reasons and she asked me how our new team member was doing. I said:\nOh, she\u0026rsquo;s doing a good job. Her last company was pretty rigid and things are different here, but she\u0026rsquo;s figuring it out. She knows her stuff and she\u0026rsquo;s a team player. We\u0026rsquo;re working through some small things here and there.\nThen the HR representative said:\nWell, I have to commend you for building out a such a diverse team. It\u0026rsquo;s much more so than the other teams. That\u0026rsquo;s really great work and I want to make sure you\u0026rsquo;re recognized for it.\nI smiled and thanked her (because that\u0026rsquo;s my usual response), but then I almost felt sick.\nDifferent view #I left that meeting and went back to my desk to think.\nHad I assembled a diverse team intentionally? No, I didn\u0026rsquo;t. I looked for people who had the qualities we desperately needed, gave them guardrails, and got out of the way. 
That\u0026rsquo;s what you do with smart people, right?\nThen I wondered, \u0026ldquo;Is my team really diverse?\u0026rdquo;\nThere were two men and two women. Two were on the younger end of a generation and two were on the older end. One was of mixed Asian descent, one was Hispanic, and two were what most people would likely refer to as \u0026ldquo;white.\u0026rdquo; So maybe my team is diverse.\nThen I realized that most of the people on the team had the same certifications, all had at least an undergraduate degree, and all were married. All of them were in heterosexual relationships and all dressed in a way that aligned with their gender.\nDoes that mean they\u0026rsquo;re not diverse?\nIs my team more or less diverse than other teams?\nDoes any of this even matter?\nDid I do a good or a bad thing?\nDiversity challenges #This brings me to the two problems I struggle with most around diversity, especially when people talk about increasing or improving diversity on their team or within their company:\nQuantifying diversity is highly subjective and in the eye of the beholder. Challenges arise when you apply diversity requirements to real world situations. I\u0026rsquo;ll break down both of these now.\nIn the eye of the beholder #You can choose how you want to measure diversity on all kinds of factors. Depending on the factors, a team can look more or less diverse. Also, your experiences often define how you judge the diversity of people and teams.\nOne could argue that a team made up entirely of white males is likely not very diverse. The majority of people would likely agree with that statement.\nHowever, what if those males vary in their sexual orientations, educational backgrounds, and socioeconomic status? Is that diverse?\nIf you have a team of people made up of various genders with various sexual orientations from all continents on the planet, but they all went to Ivy League schools and they\u0026rsquo;re all wealthy \u0026ndash; is that diverse?\nAre any of these examples diverse enough? Does the answer to that question even matter?\nIn my experience, assembling a team of people with different backgrounds and approaches to problems is incredibly valuable. That type of diversity led to some incredible innovation in the past.\nHowever, these diverse backgrounds and approaches don\u0026rsquo;t always line up with differences in gender identity, socioeconomic status, sexual orientation, or other factors. This is why I find it really challenging to quantify the level of diversity within a company or in individual teams.\nRubber meets the road #There\u0026rsquo;s a common phrase in English: \u0026ldquo;when the rubber meets the road.\u0026rdquo;\nIn a literal sense, it\u0026rsquo;s referring to when car tires move on pavement during a race. What it really means is, when it comes time to do something for real and the stakes are high, what happens?\nHere\u0026rsquo;s another example. Let\u0026rsquo;s say you lead an engineering team that is all males and your company says that diversity must be a priority in hiring decisions.\nSo you take your job requisition and send it through the recruiting team. They work hard to remove any gender-specific language or anything else that might turn an applicant away. You put the job on the internet, talk to your friends about people they know, and then wait for the responses.\nLet\u0026rsquo;s assume you get ten male applicants.\nDo you proceed with screening and interviewing them while you try harder to drum up more female applicants? 
If a female applicant never appears, do you pause the hiring process while you try to find one? What if your existing applicants find other roles in the meantime and suddenly your applicant pipeline is empty?\nSome might say \u0026ldquo;Yes, of course you wait until you can find a female applicant!\u0026rdquo; In that case, your team is still short-staffed and likely not performing as well as it could. Would that be good for your customers? How about your shareholders?\nOthers might say \u0026ldquo;No, go ahead and complete the hiring process but you should search harder for women for future roles.\u0026rdquo; In this case, you\u0026rsquo;ll have a fully staffed team and hopefully be delivering more value quickly. However, you haven\u0026rsquo;t improved the diversity on your team and that could come back to be a problem if you\u0026rsquo;re asked about it later.\nGo backwards a bit with the same example and assume you get a split of ten applicants: half male and half female. That\u0026rsquo;s awesome because now you have a diverse talent pool, right?\nHere\u0026rsquo;s where it gets challenging.\nIf you interview them all and make an offer to the female applicant because she has the skills and qualifications needed, you now have a more diverse team (on one measure) and you\u0026rsquo;re fully staffed! Great!\nIf you interview them all and it turns out one of the men has the best skills and qualifications, what do you do? Your company made diversity a priority, but you\u0026rsquo;re also trying to assemble a strong team.\nDo you take a less qualified applicant that improves the team\u0026rsquo;s diversity?\nOr, do you take a more qualified applicant that leaves the team\u0026rsquo;s diversity unchanged?\nThis is where diversity breaks down: when you have to really sit down and compare outcomes, there\u0026rsquo;s not a right answer.\nAnother viewpoint #My wife constantly points out things to me that I completely missed and we\u0026rsquo;ve talked about this topic many times. She has asked me the same thing in the past:\nWhy do people in your field care so much about getting women into technology? I hate technology. Maybe other women hate technology, too. If I knew I was hired someplace because they wanted a woman for the role and they weren\u0026rsquo;t looking at how well someone could do the job, I\u0026rsquo;d be pretty upset.\nShe\u0026rsquo;s a medical professional and she\u0026rsquo;s happy to remind me about this:\nI went to PA (physician assistant) school and most people there were women. All the nurses at my office are women. All the front office staff are women. We\u0026rsquo;re not out there trying to get male nurses or male front office staff in here all the time. We just find people who do their job well and hire them.\nOur conversations really make me stop and think.\nMy goals #I\u0026rsquo;d also like to see more people from underrepresented communities across the globe break into the world of technology and really change things. This means empowering a wider array of people with varying gender, education, nationality, wealth, and opportunities to join a field of work which they thought might be inaccessible to them.\nThis is why I try to volunteer as much as possible to inspire young people of all backgrounds to set goals for themselves and look at the world as if nothing is out of reach.\nIt\u0026rsquo;s one of the reasons I write this blog and put everything out there for free. 
Democratizing access to learning (and my mediocre blog posts) is key to leveling the playing field.\nThese are some of those pieces of work that are never finished.\nHowever, I really worry that quantifying diversity or forcing one\u0026rsquo;s definition of diversity onto someone else could lead us to a bad place where no result is satisfactory. It\u0026rsquo;s much more subjective than some would like to admit and that becomes a problem when you directly apply it to specific situations.\nIn the meantime, I\u0026rsquo;ll keep writing these posts, mentoring others, and lifting people up to do things they never imagined they could do. ️♥️\nThen again, we live in a world where someone can say \u0026ldquo;Puppies are cute\u0026rdquo; and the first reply would be \u0026ldquo;Why do you hate cats so much?\u0026rdquo; 😄\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"16 December 2023","permalink":"/p/on-diversity/","section":"Posts","summary":"Diverse teams lead to great outcomes, but how we measure diversity\nremains a challenge. Enforcing it is even more challenging. 🌎","title":"On diversity"},{"content":"","date":null,"permalink":"/tags/books/","section":"Tags","summary":"","title":"Books"},{"content":"Reading allows me to travel to other places and times while also reducing my stress and helping me to think more creatively. Sometimes this leads me to wild fictional stories or takes me on a learning journey into history.\n(I track my reading lists on Goodreads if you want to see what I\u0026rsquo;m reading.)\nIn this post, I\u0026rsquo;ll list the spooky books I read this October and hopefully you\u0026rsquo;ll find at least one them interesting!\nThe Cabin at the End of the World #My first book of the month was Paul Tremblay\u0026rsquo;s The Cabin at the End of the World. It\u0026rsquo;s centered around a family with a small child that goes on a relaxing vacation in a lakefront cabin.\nAll is well until a friendly stranger named Leonard befriends the child, Wen, and his three fellow travelers appear. The travelers hold the family hostage and tell them that they have the key to save the world, but it\u0026rsquo;s not as easy as it seems. There seems to be an unseen force that has some sort of control over the travelers. 🤔\nMy thoughts #I thought I had this story figured out several times only to be proven wrong just as many times. There are so many themes in this book that tug at your emotions, including family, racism, homophobia, and socioeconomic differences. This book packs plenty of suspense, but the most frightening parts aren\u0026rsquo;t supernatural or ghostly. The scariest parts feel centered around the essence of human nature.\nAlthough this felt like a quick read, it was intense in many places. I definitely enjoyed it and I was looking forward to seeing the movie adaptation, Knock at the Cabin. The movie was done by M. Night Shyamalan (of The Sixth Sense fame) and I read that he changed the entire ending of the movie. 😞\nThe Ruins #I moved onto something completely different next with Scott Smith\u0026rsquo;s The Ruins. The story takes place in Mexico with several people on a beach vacation. One of the tourists notes that his brother went to check out an archeological dig further into the countryside but never returned. The group eventually decides to go search for the missing brother as some sort of vacation adventure.\nThe adventure turns ugly as they make their way to the country\u0026rsquo;s interior. 
😬\nMy thoughts #You\u0026rsquo;ll have a tough time finding the antagonist in this story until it\u0026rsquo;s too late, but that\u0026rsquo;s part of the fun. Every character in the group brings personality quirks and old baggage with them that impairs their judgement in different ways. This works well for some but not for others.\nThis book had several scary moments, but most of the horror came again from how humans interact with one another. As soon as someone (or something) else figures out how to exploit those against them, bad things happen.\nThis book was difficult to read because much of it was quite gruesome and brutal. It\u0026rsquo;s definitely a book for adults only and you should be prepared to work your way through it.\nThe Troop #Everything is fine with a scout trip to an island in Canada in Nick Cutter\u0026rsquo;s The Troop. Well, it\u0026rsquo;s fine until a mysterious and incredibly hungry man suddenly shows up on the island. He looks a lot like he\u0026rsquo;s dead already and the leader of the scout troop, a medical doctor, is completely mystified by the man\u0026rsquo;s condition.\nIt goes downhill from there in a story told mostly through diary entries, newspaper clippings, and court testimony.\nMy thoughts #Of all the books I read in October, this one scared me the most because it felt like it was entirely possible. There wasn\u0026rsquo;t any part of the book that I looked at and said: \u0026ldquo;Oh, that could never happen.\u0026rdquo; That\u0026rsquo;s what makes this one so good.\nIt\u0026rsquo;s suspenseful enough to keep you turning the pages but it\u0026rsquo;s also plausible enough that you might find yourself wanting to wash your hands a little longer the next time you eat a meal. It feels a bit like Lord of the Flies mixed with a pandemic novel like Station Eleven or The Stand.\nThis one is also very gruesome in parts with some very difficult scenes to read.\nThis one was my favorite of the group by far.\nDevolution: A Firsthand Account of the Rainier Sasquatch Massacre #Sasquatch is back in Max Brooks\u0026rsquo; Devolution! Told through the journals of a woman living in a remote town in Washington, this book covers a modern time where Mount Rainier erupted and caused lots of species to get on the move from their habitats around the mountain. Some of those creatures are the ones you\u0026rsquo;d expect, but some are ones you won\u0026rsquo;t expect.\nThe small community is an experiment in green, off the grid living, and they are quickly tested by just about everything mother nature can throw at them. This includes some rather tall, furry, human-like creatures.\nMy thoughts #This one felt scary due to the remoteness of the village and the unprepared people involved. Also, whether you believe in Sasquatch or not, this felt a bit more plausible than I expected.\nThere were plenty of difficult to read scenes in here, but the gore was reduced compared to other books listed here. Much of the suspense came from how humans interact with one another, especially when faced with an adversary that presents a unique set of challenges.\nThis book was difficult to get into (it starts slow), but stick with it. It\u0026rsquo;s a wild ride.\nTender is the Flesh #Be prepared when you crack open Agustina Bazterrica\u0026rsquo;s Tender is the Flesh. Imagine a world where animals somehow contract a virus that they quickly spread to each other and to humans as well. Pets are banned and animals near cities are killed. 
The pandemic puts a significant dent in the human population.\nHowever, with all of the animals either infected or gone, where do people get meat to eat? 🤔\nMy thoughts #This was by far the most challenging book to read out of the group because I honestly felt like I was going to be sick in places. The author doesn\u0026rsquo;t set out for cheap scares or basic unsettling events \u0026ndash; there\u0026rsquo;s something much deeper. It really makes you question how humanity operates and how quickly boundaries can shift when hunger becomes a problem.\nI had to take a lot of breaks with this book. It has some suspenseful parts but this book has a slow-burn horror feel that gives you hope, crushes that hope, and then starts the cycle once more.\nI strongly recommend this book for adults only but I feel terrible recommending it at the same time. 😄\nWhat\u0026rsquo;s next? #After all of that horror and unsettling fiction, I\u0026rsquo;m shaking things up for November. My current book is Larry McMurtry\u0026rsquo;s Lonesome Dove. So many people have recommended it to me as a great book about the American west and I\u0026rsquo;m enjoying it so far.\n","date":"19 November 2023","permalink":"/p/horror-book-reviews/","section":"Posts","summary":"October brings us the Halloween holiday here in the US and I set off on an adventure into some spooky and unsettling books. 👻","title":"Horror book reviews from October 2023"},{"content":"","date":null,"permalink":"/tags/reading/","section":"Tags","summary":"","title":"Reading"},{"content":"Much of my work at Red Hat revolves around the RHEL experience in public clouds. I thrive on input from customers, partners, and coworkers about how they consume public clouds and why they made decisions to deploy there.\nThroughout this process, I run into some wild misconceptions about public clouds and what makes them useful. One that I hear most often is:\nBusinesses are moving to the cloud to reduce cost and improve efficiency. It\u0026rsquo;s mainly just a purchasing exercise.\nThis couldn\u0026rsquo;t be further from the truth.\nCloud offers a chance to start over #Sometimes businesses find themselves in an IT quagmire1. No matter what they do to improve their situation, it just gets worse. Capital expenditures grow and grow, datacenter space gets more expensive, and companies spend more time focusing on IT rather than their core business.\nDeploying in clouds offers that chance to break the capital expense cycle and gradually improve infrastructure. The key word here is gradual.\nBusinesses can choose how much they want to deploy and when without worrying about expensive servers in the datacenter waiting to be used. Some deployments are greenfield, or entirely net new applications. Some are basic migrations of applications from servers or virtual machines directly to the cloud.\nEither way, businesses have the freedom to deploy as little or as much as they want on their own schedule.\nCloud offers a chance to software-define (nearly) anything #Anyone who has worked in a large organization before knows the pain of change management. Sure, it ticks a box on that yearly compliance program, but it also ensures that everyone is aligned on the plan.\nOne of the greatest aspects of cloud is that you can define almost everything in software. This makes changes easier to apply, easier to roll back, and easier to track.\nTools like Terraform or Ansible allow developers and operations team to work from the same playbook. 
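As a small, generic illustration (these commands are a sketch, not pulled from any real deployment of ours), the whole review loop is a handful of commands run against whatever sits in git:\n$ terraform fmt -check $ terraform plan -out=change.tfplan $ terraform show change.tfplan The plan output gets attached to the pull request, reviewers see exactly what will change, and the apply only happens after the merge.\n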
My team enjoys using Infracost to track how much a particular Terraform change might cost us under different scenarios.\nOnce teams set a policy of \u0026ldquo;we define our changes in git, and that\u0026rsquo;s it\u0026rdquo;, you can rely on a git history for change management. This avoids drift in production environments and it also ensures that changes made in development environments make it into staging and then into production. The days of \u0026ldquo;it worked on my system, what\u0026rsquo;s wrong with production?\u0026rdquo; slowly fade away.\nLess than ideal architectural decisions can also be adjusted over time to fit the applications being deployed. Did you set up a network incorrectly? Did you choose an instance type without enough RAM?\nThat\u0026rsquo;s okay!\nJust adjust the deployment in git, test it in staging, and push it to production.\nCloud offers managed services #One thing I tell people constantly is that if you bend the cloud to fit your application, you will almost always pay more. You get cost and performance efficiencies if you bend your application to fit the cloud. Confused? I\u0026rsquo;ll explain.\nWhen I talk to people doing their first cloud deployments, they deploy everything into VMs, much as they would in a local virtualized environment.\nYou need a database server? Make a couple of VMs and set up replication. You need to run a batch job via cron? Deploy a VM and add it to the crontab. You need a server to export an NFS share? Deploy a VM with lots of storage and export it to other instances. Do you see a pattern here?\nMost public clouds offer tons of services that lift the management burden from engineering teams and offload into a managed service. For example, that cron job might be able to move into a \u0026ldquo;serverless\u0026rdquo;2 service, such as AWS Lambda. It\u0026rsquo;s critical to check the pricing here to ensure you\u0026rsquo;re not headed down a bad path, but you have one less VM to maintain, one less IPv4 address to pay for, and a greatly reduced risk of configuration drift. That reduction in stress and risk might be worth any additional costs.\nDeployment decisions become much easier and lower stress when you consume services offered by the provider. There are those situations where deploying a whole VM is needed, but I\u0026rsquo;ve managed to avoid that for some of my team\u0026rsquo;s recent deployments.\nOur last deployment uses GitHub Actions, S3, and CloudFront and costs us about $6.50 per month to run. There are no virtual machines. There\u0026rsquo;s nothing to patch.\nThis blog runs on a similar stack and costs me about $0.25 per month to run.\nCloud offers geographic distribution #Nearly every public cloud, even the smallest ones, offer you the same or similar services in a wide variety of geographic regions. Disaster recovery feels more attainable when you can easily deploy to multiple regions with the same software-defined infrastructure.\nData sovereignty continues to grow in importance around the world as more countries demand that their data remains within their borders. As long as your cloud offers a region in that country, you can deploy there. There\u0026rsquo;s no challenging legal issues with finding datacenter space or getting hardware delivered. You just change your region and deploy.\nCloud regions also allow you to bring your applications much closer to the people who use them. 
Reduced latency delivers content faster to customers and provides a responsive experience.\nClouds offer purchasing efficiency #Wait a minute! Didn\u0026rsquo;t I say that moving to cloud isn\u0026rsquo;t just a purchasing exercise? 🤔\nYour move to cloud should not be solely based on cutting costs or making purchasing IT more efficient. Most teams find that moving to cloud is more expensive than they anticipated because they\u0026rsquo;re finally able to get access to the right amount of resources that they need. (Also, they usually go with some more expensive options up front until they figure out how to optimize for cost.)\nFirst off, it\u0026rsquo;s much easier to budget and pay one vendor for multiple services than deal with multiple independent vendors. Instead of paying for datacenter space, then paying for servers, then paying for network equipment, then paying for people to set it up, and so on, you pay the cloud provider for all of it.\nThis also extends to other purchases on the cloud, such as products from certain vendors. For example, you can buy Red Hat products directly from some cloud providers and that gets added onto your cloud invoice. You can even deploy your own Cisco ASA in the cloud if you feel so inclined.\nWith all of these purchases going through one vendor, you can also negotiate discounts if you set a spending commitment. Discounts depend on your committed spend, of course, and the term that you agree to spend it. There\u0026rsquo;s a whole industry around financial operations in the cloud, called FinOps, and this is one of many things that factors into it.\nWrapping up #Public clouds offer an incredible amount of opportunity to get your IT deployments into better shape with better change control and a solid software-defined workflow. They also offer the ability to \u0026ldquo;write one check\u0026rdquo; to consume infrastructure via utility billing.\nHowever, public clouds are not ideal for every application or situation.\nDo I think that every company in the world could benefit from getting some part of their IT deployments into a public cloud platform? Yes, I do.\nWould every company benefit from putting most of their infrastructure into public clouds? Very unlikely.\nSome applications still benefit from being on purpose-built hardware or in certain locations where a cloud might not exist today. Clouds can also be extremely expensive if you run large workloads around the clock. They can also be painful for applications with very strict or special requirements that don\u0026rsquo;t fit a cloud deployment model well.\nThe vendors that will succeed the most in the cloud space are the ones that look beyond purchasing efficiencies and IT acquisition concerns. Simply dragging the old world of physical servers or virtual machines into cloud won\u0026rsquo;t lead anywhere.\nThose companies that help their customers benefit from the best of what public clouds have to offer in the most secure, reliable, and simple ways will be in the driver\u0026rsquo;s seat.\nA quagmire is something that gets worse no matter how you try to improve it. The only way to win is to avoid it entirely.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nBoy, I still dislike that serverless term so much. 🤦‍♂️\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"27 October 2023","permalink":"/p/cloud-more-than-purchasing-exercise/","section":"Posts","summary":"Moving to cloud is about much more than just capital efficiency. 
It enables your teams to do more if they\u0026rsquo;re willing to adopt some new practices.","title":"Moving to cloud is more than just a purchasing exercise"},{"content":"","date":null,"permalink":"/tags/terraform/","section":"Tags","summary":"","title":"Terraform"},{"content":"","date":null,"permalink":"/tags/docker/","section":"Tags","summary":"","title":"Docker"},{"content":" It\u0026rsquo;s quite clear that I\u0026rsquo;ve been on a CoreOS blogging streak lately. I keep getting asked by people inside and outside my company about what makes CoreOS special and why I\u0026rsquo;ve switched over so many workloads to it.\nThe answer is pretty basic. It makes my life easier.\nI\u0026rsquo;m a Dad. I\u0026rsquo;m on the PTC (Parent Teacher Club) at one of my children\u0026rsquo;s schools. I volunteer as an IT person for a non-profit. I write software. I have other time-consuming hobbies, such as ham radio, reading, and becoming a longer distance runner1.\nMy available time for my own IT projects is extremely limited and CoreOS plays a part in keeping that part of my life as efficient as possible.\nThat\u0026rsquo;s what this blog post is about!\nUpdates #First and foremost, I love how CoreOS does updates. I encourage you to read the docs on this topic, but here\u0026rsquo;s a short explanation:\nUpdates are automatically retrieved and they\u0026rsquo;re loaded into a slot. Your system reboots into the new update but your original OS tree remains in place. Did the update boot? Awesome. You\u0026rsquo;re good to go. Did something break? The system reverts back to the known good tree. In this way, it\u0026rsquo;s a lot like your smartphone.\nYou have full control over when a node looks for an update and how often it checks for them. Check out the Zincati docs for tons of controls over updates and reboots.\nSome of mine are timed so well that I set maintenance windows with my monitoring provider when I know an update might take place. The updates come through, monitoring shuts off, the node reboots, and monitoring comes back. The nodes almost always come back before the monitoring even alerts me.\nIt also removes the reminders I would set for myself to update packages and run reboots. I know that my CoreOS nodes will do this automatically, so I don\u0026rsquo;t need to think about it.\nAlso, updates are rarely ever impactful to my workloads since all of them are running inside containers. My containers come right back up as soon as the node finishes its reboot.\nToolbox #To be fair, you can get toolbox running on lots of different Linux distributions outside of CoreOS, but that\u0026rsquo;s the first place I ever used it. Toolbox, also called toolbx, gives you a utility container on your CoreOS node for all kinds of administrative and diagnostic capabilities.\nYou might need a certain package for diagnosing a hardware issue or you might want to install some helpful utilities for the command line. Do that in a toolbox container. Just run toolbox enter and if you\u0026rsquo;ve never created a toolbox container before, you\u0026rsquo;ll get a Fedora container that matches your CoreOS release.\nBut it gets better.\nToolbox automatically saves your container when you\u0026rsquo;re done with it so all of your installed packages stay there for next time. Also, these containers have seamless access to anything you have in your home directory, including sockets. 
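Here is a rough sketch of that workflow (the packages are just examples I might reach for, not anything toolbox requires):
$ toolbox enter
$ sudo dnf -y install htop strace
$ exit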
You\u0026rsquo;re running inside a container, but it\u0026rsquo;s almost like you\u0026rsquo;re running on the host itself inside your home directory. You get the best of both worlds.\nDon\u0026rsquo;t want Fedora? You have lots of distribution options through toolbox. Read the details on custom images to create your own!\nLayering #Okay, there are those situations where you really want a package on CoreOS and toolbox might not be sufficient. My muscle memory for vim is so strong and CoreOS only comes with vi.\nYou have a couple of options here:\nRun sudo rpm-ostree install vim, reboot, and you have vim Run sudo rpm-ostree install --apply-live vim and you have vim right now! (And it\u0026rsquo;s there after a reboot as well.) When a new update comes down for the base OS from CoreOS, any packages you\u0026rsquo;ve added will be layered on the base image and available after a reboot. Layering is generally chosen as a last resort option for adding packages to the system but you shouldn\u0026rsquo;t run into issues if you\u0026rsquo;re installing small utilities or command line tools.\nDeclarative provisioning #If you\u0026rsquo;ve provisioned Linux distributions on cloud instances in the past, you\u0026rsquo;ve likely provided metadata that cloud-init uses to provision your system. CoreOS has something that acts a lot earlier in the boot process and has more power to get things done: ignition.\nThere\u0026rsquo;s a handy butane file format that you use for writing your configuration. You use the butane utility to get it into ignition format. The ignition format is highly compressed to ensure you can fit your configuration into most cloud providers\u0026rsquo; metadata fields.\nFor a real example of what you can do with ignition, check out my quadlets post where I provisioned an entire Wordpress container stack using a single ignition file.\nThere\u0026rsquo;s lots of documentation for writing butane configuration files for common situations. It\u0026rsquo;s easy to add files, configure Wireguard, set up users, and launch containers immediately on the first boot.\nPets and cattle #CoreOS works well for systems that I only need online for a short time. These might be situations where I need to test a few containers and throw them away. There\u0026rsquo;s no OS to mess with and no updates to worry about.\nIt also works well for systems that I keep online for a long time. I have a few physical systems at home that run CoreOS and they\u0026rsquo;ve been extremely stable. I also have cloud instances on Hetzner, VULTR, and Digital Ocean that have run CoreOS for months without issues.\nMore questions? #Feel free to send me an email or drop me a toot on Mastodon. I\u0026rsquo;ll update this post if I get some good ones!\nCompleting a half marathon without keeling over is the current goal! 👟\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"13 October 2023","permalink":"/p/why-coreos/","section":"Posts","summary":"Here\u0026rsquo;s a blog post to answer the question: Why do you write so much about CoreOS? 📦","title":"How I learned to stop worrying and love the CoreOS"},{"content":"","date":null,"permalink":"/tags/podman/","section":"Tags","summary":"","title":"Podman"},{"content":" I\u0026rsquo;ve written a lot about containers on this blog. 
Why do I love containers so much?\nThey start quickly They make your workloads portable They disconnect your application stack from the OS that runs underneath You can send your application through CI as a single container image You can isolate workloads on the network and limit their resource usage much like a VM However, I\u0026rsquo;m still addicted to docker-compose. Can podman\u0026rsquo;s quadlets change that?\nYes, I think they can.\nWhat\u0026rsquo;s a quadlet? #Podman introduced support for quadlets in version 4.4 and it\u0026rsquo;s a simpler way of letting systemd manage your containers. There was an option in the past to have podman generate systemd unit files, but those were unwieldy and full of podman command line options inside a unit file. These unit files weren\u0026rsquo;t easy to edit or even parse with eyeballs.\nQuadlets make this easier by giving you a simple ini-style file that you can easily read and edit. This blog post will include some quadlets later, but here\u0026rsquo;s an example one for Wordpress:\n[Unit] Description=Wordpress Quadlet [Container] Image=docker.io/library/wordpress:fpm ContainerName=wordpress AutoUpdate=registry EnvironmentFile=/home/core/.config/containers/containers-environment Volume=wordpress.volume:/var/www/html Network=wordpress.network [Service] Restart=always TimeoutStartSec=900 [Install] WantedBy=caddy.service multi-user.target default.target Lots of the lines under [Container] should look familiar to most readers who have worked with containers before. However, there\u0026rsquo;s something new here.\nCheck out the AutoUpdate=registry line. This tells podman to keep your container updated on a regular basis with the upstream container registry. I\u0026rsquo;ve used watchtower in the past for this, but it requires a privileged container and it\u0026rsquo;s yet another external dependency.\nAlso, at the very end, you\u0026rsquo;ll see a WantedBy line. This is a great place to set up container dependencies. In this example, the container that runs caddy (a web server) can\u0026rsquo;t start until Wordpress is up and running.\nSo why not stick with docker-compose? #There\u0026rsquo;s no denying that docker-compose is an awesome tool. You specify the desired outcome, tell it to bring up containers, and it gets containers into the state you specified. It handles volumes, networks, and complicated configuration without a lot of legwork. The YAML files are pretty easy to read, too.\nHowever, as with watchtower, that\u0026rsquo;s another external dependency.\nMy container deployments are often done at instance boot time and I don\u0026rsquo;t make too many changes afterwards. I found myself using docker-compose for the initial deployment and then I didn\u0026rsquo;t really use it again.\nWhy not remove it entirely and use what\u0026rsquo;s built into CoreOS already?\nQuaint quadlets quickly! #Before we start, we\u0026rsquo;re going to need a few things:\nAn easy to read butane configuration which gets transformed into a tiny ignition configuration for CoreOS Some quadlets Extra system configuration A cloud provider with CoreOS images (using VULTR for this) I\u0026rsquo;ve packed all of these items into my quadlets-wordpress repository to make it easy. Start by looking at the config.butane file.\nLet\u0026rsquo;s break it down here. 
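If you want to see the ignition JSON that this butane file produces before launching anything, you can render it locally with the butane utility (the same tool the launch script uses later; config.ign is just an example output name):
$ sudo dnf -y install butane
$ butane --pretty --files-dir . config.butane > config.ign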
First up, we add an ssh key for the default core user.\nvariant: fcos version: 1.5.0 passwd: users: - name: core ssh_authorized_keys: - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDyoH6gU4lgEiSiwihyD0Rxk/o5xYIfA3stVDgOGM9N0 Next up, we enable the podman-auto-update.timer so we get container updates automatically:\nstorage: links: - path: /home/core/.config/systemd/user/timers.target.wants/podman-auto-update.timer target: /usr/lib/systemd/user/podman-auto-update.timer user: name: core group: name: core Next is the long files section:\nfiles: # Ensure the `core` user can keep processes running after they\u0026#39;re logged out. - path: /var/lib/systemd/linger/core mode: 0644 # Allow caddy to listen on 80 and 443. # Allow it to ask for bigger network buffers, too. - path: /etc/sysctl.d/90-caddy.conf contents: inline: | net.ipv4.ip_unprivileged_port_start = 80 net.core.rmem_max=2500000 net.core.wmem_max=2500000 # Set up an an environment file that containers can read to configure themselves. - path: /home/core/.config/containers/containers-environment contents: inline: | MYSQL_DATABASE=wordpress MYSQL_USER=wordpress MYSQL_ROOT_PASSWORD=mariadb-needs-a-secure-password MYSQL_PASSWORD=wordpress-needs-a-secure-password WORDPRESS_DB_HOST=mariadb WORDPRESS_DB_USER=wordpress WORDPRESS_DB_PASSWORD=wordpress-needs-a-secure-password WORDPRESS_DB_NAME=wordpress mode: 0644 # Deploy the caddy configuration file from the repository. - path: /home/core/.config/caddy/Caddyfile contents: local: caddy/Caddyfile mode: 0644 user: name: core group: name: core # Add some named volumes for caddy and wordpress. - path: /home/core/.config/containers/systemd/caddy-config.volume contents: inline: | [Volume] user: name: core group: name: core - path: /home/core/.config/containers/systemd/caddy-data.volume contents: inline: | [Volume] user: name: core group: name: core - path: /home/core/.config/containers/systemd/wordpress.volume contents: inline: | [Volume] user: name: core group: name: core # Create a network for all the containers to use and enable the # DNS plugin. This allows containers to find each other using # the container names. - path: /home/core/.config/containers/systemd/wordpress.network contents: inline: | [Network] DisableDNS=false Internal=false user: name: core group: name: core # Add the wordpress container. - path: /home/core/.config/containers/systemd/wordpress.container contents: local: quadlets/wordpress.container mode: 0644 user: name: core group: name: core # Add the MariaDB container. - path: /home/core/.config/containers/systemd/mariadb.container contents: local: quadlets/mariadb.container mode: 0644 user: name: core group: name: core # Add the caddy container. - path: /home/core/.config/containers/systemd/caddy.container contents: local: quadlets/caddy.container mode: 0644 user: name: core group: name: core The Caddyfile is also in the repository and will be deployed by the butane configuration shown above.\nWe can go through each quadlet in detail. First up is MariaDB. 
We tell systemd that the wordpress container will want to have this one started first.\n[Unit] Description=MariaDB Quadlet [Container] Image=docker.io/library/mariadb:11 ContainerName=mariadb AutoUpdate=registry EnvironmentFile=/home/core/.config/containers/containers-environment Volume=mariadb.volume:/var/lib/mysql Network=wordpress.network [Service] Restart=always TimeoutStartSec=900 [Install] WantedBy=wordpress.service multi-user.target default.target The wordpress quadlet is much the same as the MariaDB one, but we tell systemd that caddy will want wordpress started first.\n[Unit] Description=Wordpress Quadlet [Container] Image=docker.io/library/wordpress:fpm ContainerName=wordpress AutoUpdate=registry EnvironmentFile=/home/core/.config/containers/containers-environment Volume=wordpress.volume:/var/www/html Network=wordpress.network [Service] Restart=always TimeoutStartSec=900 [Install] WantedBy=caddy.service multi-user.target default.target Finally, the caddy quadlet contains four volumes and some published ports. These ports will be published to the container host. Also, you\u0026rsquo;ll note that the wordpress volume is mounted here, too. This is because caddy can serve static files much faster than wordpress can.\n[Unit] Description=Caddy Quadlet [Container] Image=docker.io/library/caddy:latest ContainerName=caddy AutoUpdate=registry EnvironmentFile=/home/core/.config/containers/containers-environment Volume=caddy-data.volume:/data Volume=caddy-config.volume:/config Volume=/home/core/.config/caddy/Caddyfile:/etc/caddy/Caddyfile:Z Volume=wordpress.volume:/var/www/html PublishPort=80:80 PublishPort=443:443 Network=wordpress.network [Service] Restart=always TimeoutStartSec=900 [Install] WantedBy=multi-user.target default.target Launch the quadlets #There\u0026rsquo;s a launch script that ships this configuration to VULTR and launches a CoreOS instance:\n#!/bin/bash # This command starts up a CoreOS instance on Vultr using the vultr-cli vultr-cli instance create \\ --os 391 \\ --plan vhp-1c-1gb-amd \\ --region dfw \\ --notify true \\ --ipv6 true \\ -u \u0026#34;$(butane --files-dir . config.butane)\u0026#34; \\ -l \u0026#34;coreos-$(date \u0026#34;+%s\u0026#34;)\u0026#34; To launch an instance, get your VULTR API key first. Then install vultr-cli and butane:\n$ sudo dnf -y install butane vultr-cli After launch, check to see what your containers are doing:\n[core@vultr ~]$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES afa2d6501593 docker.io/library/caddy:latest caddy run --confi... 
54 seconds ago Up 53 seconds 0.0.0.0:80-\u0026gt;80/tcp, 0.0.0.0:443-\u0026gt;443/tcp caddy 460426f39e6c docker.io/library/mariadb:11 mariadbd 35 seconds ago Up 35 seconds mariadb 92ece6538d5a docker.io/library/wordpress:fpm php-fpm 28 seconds ago Up 29 seconds wordpress We should be able to talk to wordpress through caddy on port 80:\n[core@vultr ~]$ curl -si http://localhost/wp-admin/install.php | head -n 25 HTTP/1.1 200 OK Cache-Control: no-cache, must-revalidate, max-age=0 Content-Type: text/html; charset=utf-8 Expires: Wed, 11 Jan 1984 05:00:00 GMT Server: Caddy X-Powered-By: PHP/8.0.30 Date: Mon, 25 Sep 2023 21:43:40 GMT Transfer-Encoding: chunked \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en-US\u0026#34; xml:lang=\u0026#34;en-US\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta name=\u0026#34;viewport\u0026#34; content=\u0026#34;width=device-width\u0026#34; /\u0026gt; \u0026lt;meta http-equiv=\u0026#34;Content-Type\u0026#34; content=\u0026#34;text/html; charset=utf-8\u0026#34; /\u0026gt; \u0026lt;meta name=\u0026#34;robots\u0026#34; content=\u0026#34;noindex,nofollow\u0026#34; /\u0026gt; \u0026lt;title\u0026gt;WordPress \u0026amp;rsaquo; Installation\u0026lt;/title\u0026gt; \u0026lt;link rel=\u0026#39;stylesheet\u0026#39; id=\u0026#39;dashicons-css\u0026#39; href=\u0026#39;http://localhost/wp-includes/css/dashicons.min.css?ver=6.3.1\u0026#39; type=\u0026#39;text/css\u0026#39; media=\u0026#39;all\u0026#39; /\u0026gt; \u0026lt;link rel=\u0026#39;stylesheet\u0026#39; id=\u0026#39;buttons-css\u0026#39; href=\u0026#39;http://localhost/wp-includes/css/buttons.min.css?ver=6.3.1\u0026#39; type=\u0026#39;text/css\u0026#39; media=\u0026#39;all\u0026#39; /\u0026gt; \u0026lt;link rel=\u0026#39;stylesheet\u0026#39; id=\u0026#39;forms-css\u0026#39; href=\u0026#39;http://localhost/wp-admin/css/forms.min.css?ver=6.3.1\u0026#39; type=\u0026#39;text/css\u0026#39; media=\u0026#39;all\u0026#39; /\u0026gt; \u0026lt;link rel=\u0026#39;stylesheet\u0026#39; id=\u0026#39;l10n-css\u0026#39; href=\u0026#39;http://localhost/wp-admin/css/l10n.min.css?ver=6.3.1\u0026#39; type=\u0026#39;text/css\u0026#39; media=\u0026#39;all\u0026#39; /\u0026gt; \u0026lt;link rel=\u0026#39;stylesheet\u0026#39; id=\u0026#39;install-css\u0026#39; href=\u0026#39;http://localhost/wp-admin/css/install.min.css?ver=6.3.1\u0026#39; type=\u0026#39;text/css\u0026#39; media=\u0026#39;all\u0026#39; /\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body class=\u0026#34;wp-core-ui language-chooser\u0026#34;\u0026gt; \u0026lt;p id=\u0026#34;logo\u0026#34;\u0026gt;WordPress\u0026lt;/p\u0026gt; Awesome! 🎉\nManaging containers #Containers will automatically update on a schedule and you can check the timer:\n[core@vultr ~]$ systemctl status --user podman-auto-update.timer ● podman-auto-update.timer - Podman auto-update timer Loaded: loaded (/usr/lib/systemd/user/podman-auto-update.timer; enabled; preset: disabled) Active: active (waiting) since Mon 2023-09-25 21:41:31 UTC; 3min 14s ago Trigger: Tue 2023-09-26 00:04:46 UTC; 2h 20min left Triggers: ● podman-auto-update.service Sep 25 21:41:31 vultr.guest systemd[1786]: Started podman-auto-update.timer - Podman auto-update timer. 
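If you do not want to wait for the timer, you can ask podman what an update pass would do right now; the dry run makes no changes, and starting the service kicks off a real update pass (output depends on your images):
$ podman auto-update --dry-run
$ systemctl start --user podman-auto-update.service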
Quadlets are just regular systemd units:\n[core@vultr ~]$ systemctl list-units --user | grep -i Quadlet caddy.service loaded active running Caddy Quadlet mariadb.service loaded active running MariaDB Quadlet wordpress.service loaded active running Wordpress Quadlet As an example, you can make changes to caddy\u0026rsquo;s config file and restart it easily:\n[core@vultr ~]$ systemctl restart --user caddy [core@vultr ~]$ systemctl status --user caddy ● caddy.service - Caddy Quadlet Loaded: loaded (/var/home/core/.config/containers/systemd/caddy.container; generated) Drop-In: /usr/lib/systemd/user/service.d └─10-timeout-abort.conf Active: active (running) since Mon 2023-09-25 21:46:28 UTC; 5s ago Main PID: 2652 (conmon) Tasks: 18 (limit: 1023) Memory: 15.1M CPU: 207ms If you need to change a quadlet\u0026rsquo;s configuration, just open up the configuration file in your favorite editor under ~/.config/containers/systemd, reload systemd, and restart the container:\n$ vi ~/.config/containers/systemd/caddy.container --- make your edits and save the quadlet configuration --- $ systemctl daemon-reload --user $ systemctl restart --user caddy Enjoy!\n","date":"25 September 2023","permalink":"/p/quadlets-replace-docker-compose/","section":"Posts","summary":"Sure, docker-compose is great, but could we get similar functionality using just the tools that are built into CoreOS? Can we get automatic updates, too? Yes we can! 📦","title":"Quadlets might make me finally stop using docker-compose"},{"content":"","date":null,"permalink":"/tags/wordpress/","section":"Tags","summary":"","title":"Wordpress"},{"content":"","date":null,"permalink":"/tags/aws/","section":"Tags","summary":"","title":"Aws"},{"content":" I package a few things here and there in Fedora and one of my latest packages is efs-utils. AWS offers a mount helper for their Elastic File System (EFS) product on GitHub.\nIn this post, I\u0026rsquo;ll explain how to:\nLaunch a Fedora instance on AWS EC2 Install efs-utils and launch the watchdog service Create an EFS volume in the AWS console Mount the EFS volume inside the Fedora instance Always check the pricing for any cloud service before you use it! EFS pricing is based on how much you store and how often you access it. Backups are also enabled by default and they add to the monthly charges. Let\u0026rsquo;s go! 🚀\nWait, what is EFS? #When you launch a cloud instance (virtual machine) on most clouds, you have different storage options available to you:\nBlock storage: You can add partitions to this storage, create filesystems, or even use LVM. It looks like someone plugged in a disk to your instance. You get full control over every single storage block on the volume. An example of this is Elastic Block Storage (EBS) on AWS.\nObject storage: Although you can\u0026rsquo;t mount object storage (typically) within your instance, you can read/write objects to this storage via an API. You can upload nearly any type of file you can imagine as an object and then download it later. Objects can also have little bits of metadata attached to them and some of the metadata include prefixes which give a folder-like experience. AWS S3 is a good example of this.\nShared filesystems: This storage shows up in the instance exactly as it sounds: you get a shared filesystem. If you\u0026rsquo;re familiar with NFS or Samba (SMB), then you\u0026rsquo;ve used shared filesystems already. They give you much better performance than object storage but offer less freedom than block storage. 
They\u0026rsquo;re also great for sharing the same data between multiple instances.\nUsing EFS is almost like having someone else host a network accessible storage (NAS) device within your cloud deployment.\nLaunching Fedora #Every image in AWS has an AMI ID attached to it and you need to know the ID for the image you want in your region. You can find these quickly for Fedora by visiting the Fedora Cloud download page. Look for AWS in the list, click the button on that row, and you\u0026rsquo;ll see a list of Fedora AMI IDs. Click the rocket (🚀) for your preferred region and you\u0026rsquo;re linked directly to launch that instance in AWS!\nI\u0026rsquo;m clicking the launch link for us-east-2 (Ohio)1. To finish quickly, I\u0026rsquo;m choosing all of the default options and using a spot instance (look inside Advanced details at the bottom of the page).\nWait for the instance to finish intializing and access it via ssh:\n$ ssh fedora@EXTERNAL_IP [fedora@ip-172-31-2-38 ~]$ cat /etc/fedora-release Fedora release 38 (Thirty Eight) Success! 🎉\nPrepare your security group #Before leaving the EC2 console, you need to make a note of the security group that you used for this instance. That\u0026rsquo;s because EFS uses security groups to guard access to volumes. Follow these steps to find it:\nClick Instances on the left side of the EC2 console. Click on the row showing the instance we just created. In the bottom half of the screen, click the Security tab. Look for Security groups in the security details and copy the security group ID for later. It should be in the format sg-[a-f0-9]*.\nIf you click the security group name (after saving it), you\u0026rsquo;ll see the inbound rules associated with that security group. By default, items in the same security group can\u0026rsquo;t talk to each other. We need to allow that so our EFS mount will work later.\nClick Edit inbound rules and do the following:\nClick Add rule. Choose All traffic in the Type column. (You can narrow this down further later.) In the source box, look for the security group you just created along with your EC2 instance. If you took the default during the EC2 launch process, it might be named launch-wizard-[0-9]+. Click Save rules. Installing efs-utils #Let\u0026rsquo;s start by getting the efs-utils package onto our new Fedora system:\n$ sudo dnf -qy install efs-utils Installed: efs-utils-1.35.0-2.fc38.noarch The package includes some configuration, a watchdog, and a mount helper:\n$ rpm -ql efs-utils /etc/amazon /etc/amazon/efs /etc/amazon/efs/efs-utils.conf /etc/amazon/efs/efs-utils.crt /usr/bin/amazon-efs-mount-watchdog /usr/lib/systemd/system/amazon-efs-mount-watchdog.service /usr/sbin/mount.efs /usr/share/doc/efs-utils /usr/share/doc/efs-utils/CONTRIBUTING.md /usr/share/doc/efs-utils/README.md /usr/share/licenses/efs-utils /usr/share/licenses/efs-utils/LICENSE /usr/share/man/man8/mount.efs.8.gz /var/log/amazon/efs Let\u0026rsquo;s get the watchdog running so we have that ready later. The watchdog helps to build and tear down the encrypted connection when you mount and unmount an EFS volume:\n$ sudo systemctl enable --now amazon-efs-mount-watchdog.service Created symlink /etc/systemd/system/multi-user.target.wants/amazon-efs-mount-watchdog.service → /usr/lib/systemd/system/amazon-efs-mount-watchdog.service. 
$ systemctl status amazon-efs-mount-watchdog.service ● amazon-efs-mount-watchdog.service - amazon-efs-mount-watchdog Loaded: loaded (/usr/lib/systemd/system/amazon-efs-mount-watchdog.service; enabled; preset: disabled) Drop-In: /usr/lib/systemd/system/service.d └─10-timeout-abort.conf Active: active (running) since Wed 2023-09-13 18:43:46 UTC; 5s ago Main PID: 1258 (amazon-efs-moun) Tasks: 1 (limit: 4385) Memory: 13.3M CPU: 76ms CGroup: /system.slice/amazon-efs-mount-watchdog.service └─1258 /usr/bin/python3 /usr/bin/amazon-efs-mount-watchdog Sep 13 18:43:46 ip-172-31-2-38.us-east-2.compute.internal systemd[1]: Started amazon-efs-mount-watchdog.service - amazon-efs-mount-watchdog. Setting up an EFS volume #Start by going over to the EFS console and do the following:\nClick File systems in the left navigation bar\nClick the orange Create file system button at the top right\nA modal appears with a box for the volume name and a VPC selection. Select an easy to remember name (I\u0026rsquo;m using testing-efs-for-blog-post) and select a VPC. If you\u0026rsquo;re not sure what a VPC is or which one to use, use the default VPC since that\u0026rsquo;s likely where your instance landed as well.\nClick Create.\nThere\u0026rsquo;s a delay while the filesystem initializes and you should see the filesystem show Available with a green check mark after about 30 seconds. Click on the filesystem you just created from the list and you\u0026rsquo;ll see the details page for the filesystem.\nSecurity setup #EFS volumes come online with the default security group attached and that\u0026rsquo;s not helpful. From the EFS filesystem details page, click the Network tab and then click Manage.\nFor each availability zone, go to the Security groups column and add the security group that your instance came up with in the first step. In my case, I accepted the defaults from EC2 and ended up with a launch-wizard-1 security group. Remove the default security group from each. Click Save.\nMounting time #You should still be on the filesystem details page from the previous step. Click Attach at the top right and a modal will appear with mount instructions. The first option should use the EFS mount helper!\nFor me, it looks like sudo mount -t efs -o tls fs-0baabc62763375bb1:/ efs\nGo back to your Fedora instance, create a mount point, and create the volume:\n$ sudo mkdir /mnt/efs $ sudo mount -t efs -o tls fs-0baabc62763375bb1:/ /mnt/efs $ df -hT | grep efs 127.0.0.1:/ nfs4 8.0E 0 8.0E 0% /mnt/efs We did it! 🎉\nWe see 127.0.0.1 here because efs-utils uses stunnel to handle the encryption between your instance and the EFS storage system.\nThe disk was mounted by root, so we can add a -o user=fedora to give our Fedora user permissions to write files:\n$ umount /mnt/efs $ sudo mount -t efs -o user=fedora,tls fs-0baabc62763375bb1:/ /mnt/efs $ touch /mnt/efs/test2.txt $ stat /mnt/efs/test2.txt File: /mnt/efs/test2.txt Size: 0 Blocks: 8 IO Block: 1048576 regular empty file Device: 0,54\tInode: 17657675890899444015 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 1000/ fedora) Gid: ( 1000/ fedora) Context: system_u:object_r:nfs_t:s0 Access: 2023-09-13 19:14:23.308000000 +0000 Modify: 2023-09-13 19:14:23.308000000 +0000 Change: 2023-09-13 19:14:23.308000000 +0000 Birth: - Also, efs-utils uses encrypted communication by default, which is great. There may be some situations where you don\u0026rsquo;t need encrypted communications or you don\u0026rsquo;t want the overhead. 
In that case, drop the -o tls option from the mount command and you\u0026rsquo;ll mount the volume unencrypted.\n$ sudo umount /mnt/efs $ sudo mount -t efs -o user=fedora fs-0baabc62763375bb1:/ /mnt/efs $ df -hT | grep efs fs-0baabc62763375bb1.efs.us-east-2.amazonaws.com:/ nfs4 8.0E 0 8.0E 0% /mnt/efs Extra credit #You can get fancy with access points that allow you to carve up your EFS storage and only let certain instances mount certain parts of the filesystem. So instance A might only be able to mount /files/hr while instance B can only mount /documents.\nIt would also be a good idea to take an inventory of your security groups and ensure the least amount of instances can reach your EFS volume as possible. Much of the work I did in this post was just for testing. A good plan might be to make a security group for your EFS volume and only allow inbound traffic from security groups which should access it. That would allow you to gather up all of your instances into different security groups and limit access.\nAlso, be aware of the EFS pricing! 💸\nYou are billed not only for how much storage you use, but also on requests. Different requests are priced differently depending on access frequency. Backups are also enabled by default at $0.05/GB-month!\nWhy Ohio? I\u0026rsquo;m mainly doing it to irritate Corey Quinn. 🤭 Any region you prefer should be fine.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"13 September 2023","permalink":"/p/aws-elastic-file-system-fedora/","section":"Posts","summary":"Fedora now has the AWS Elastic File Store (EFS) mount helper available for Fedora 38 and newer releases! It chooses optimized NFS mount options for you and makes mounting and unmounting a breeze.","title":"Mounting the AWS Elastic File Store on Fedora"},{"content":"","date":null,"permalink":"/tags/python/","section":"Tags","summary":"","title":"Python"},{"content":" I love optimizing nearly everything in my life. Sometimes it means saving money. Other times it means squeezing every bit of performance out of a server.\nBut let\u0026rsquo;s try optimizing something I\u0026rsquo;ve never done on this blog before: Buying a car.\nAlthough most of this information applies best to people in the USA, there are several things I\u0026rsquo;ve learned over the years that might benefit people in other places. Car purchases are often the second largest purchase for most Americans after buying a house. Why not optimize it as much as possible?\nMy family is in the market for new vehicle and I\u0026rsquo;ve immersed myself in learning more about the whole process. There are plenty of legal issues in play that many buyers don\u0026rsquo;t know about and there are tons of ways to walk into a dealership fully prepared.\nWithout further ado, I\u0026rsquo;ll share what I\u0026rsquo;ve learned with you!\nShopping #Shopping and buying are two different things. You must keep them separate. If you\u0026rsquo;re not sure on the exact model of car you want, go shopping and don\u0026rsquo;t commit to buy anything.\nIf you have an idea of a particular car you want, go to Google and search for 5-10 competitors to that car. It\u0026rsquo;s as easy as searching \u0026ldquo;top competitors Camry\u0026rdquo; to find other cars that compete with a Toyota Camry. Go to those dealers and take a good look at the cars to see if they have the features you want. 
Test drive each and see if you still like them.\nThe really important thing here is to separate shopping from buying.\nSearch for prices #Once you narrow your search down a bit, start to compare prices between dealers for different trim levels. I love using CarEdge for this since you can examine dealer inventory across the country and see how dealer prices stack up against each other.\nAlso check how long the vehicle has been sitting on the lot. CarEdge offers this data and it can give you an idea of how desperate a dealer is to get rid of a particular vehicle. If the total supply of a model is really high and the vehicle has been sitting on the lot longer than 90 days, you have the ability to negotiate for that car. Cars with a very low supply or cars that just arrived to the lot will likely be priced higher and dealers are less likely to budge on price.\nFor used cars, this is even more important since you don\u0026rsquo;t have an MSRP to work with as you do for new cars. Comparing prices for used cars is critical.\nKeep in mind that some cars are in higher demand in some areas than others, too. You might be able to drive a few hours and save quite a bit. A friend of mine drove from Texas to Colorado to buy a car and saved $3,500.\nWatch out for arbitrary markups! All dealers in the US are required to display a Monroney label and this shows the manufacturer\u0026rsquo;s suggested retail price. Dealers might add an addendum sticker somewhere else that show accessories they added. Sometimes these stickers show arbitrary markups from the dealer.\nSince COVID, many dealers have been stacking massive fees on top of car purchases. Some of these fees exceed $10,000, $25,000 or more! This should be a massive red flag from the start of a negotiation with any dealer. If they aren\u0026rsquo;t willing to budge from their arbitrary markups, look for another dealer. 🚩\nYou found the car you want #Awesome! 🥳\nWhether it\u0026rsquo;s new or used, take the VIN number and dig up information about the car. You can learn a lot from just a quick Google search from the VIN!\nOther than that, CarEdge has some good tools for digging up data on individual cars without too much expense. CarFax is the gold standard used and it offers some fairly inexpensive options.\nSome might be saying: \u0026ldquo;Isn\u0026rsquo;t this a waste for a brand new car?\u0026rdquo; No, it\u0026rsquo;s not.\nI was once buying a pickup truck that was advertised as one model year, but the truck was actually a year older. The dealer even switched the paper tags attached to the keys so I wouldn\u0026rsquo;t notice. I didn\u0026rsquo;t notice this until months later when I found it buried in my paperwork. 😡\nThere have also been situations where stolen cars are sitting on dealer lots. 😱\nRemember, this is a big investment. Spending $20 on a CarFax report as you purchase a $30,000 car should be worth it.\nFinancing #Take care of your financing before visiting a dealer. Local credit unions are often great for this but they\u0026rsquo;ve been under a squeeze lately with an increase in reposessions. Work with the credit union to get a good rate for the type of car you want.\nSome people have had good luck financing through a dealership. However, getting financing set up ahead of time gives you the upper hand with financing negotiations. For example, when you sit down with in the finance office, you could say: \u0026ldquo;I have 5.9% through my local credit union. 
If you can beat that, I\u0026rsquo;ll finance with you.\u0026rdquo;\nIf you choose to go through financing with the dealer, here\u0026rsquo;s what I recommend:\nDo not allow the dealer to run your credit report until the last minute. If you let them run it, they know that you\u0026rsquo;ll get a hard hit on your credit report and you\u0026rsquo;re less likely to look at other dealers. You want to leave your options open. Don\u0026rsquo;t let them run your credit until you are 100% sure you\u0026rsquo;re ready to buy from them. Ask to see the \u0026ldquo;call sheet\u0026rdquo; and see what the bank offered the dealer for financing (the \u0026ldquo;buy rate\u0026rdquo;). It\u0026rsquo;s very likely that the dealer gets one rate from the bank and then marks it up for you. For example, the dealer might say \u0026ldquo;Oh, your rate is 8%.\u0026rdquo; Then you ask for the call sheet or the buy rate and see that they got 5.9% from the bank. You\u0026rsquo;re getting a huge markup. That\u0026rsquo;s another negotiation point. If the dealer says they can lower your interest rate if you buy an add-on from them, such as extended warranty, STOP IMMEDIATELY. 🚨 This is called \u0026ldquo;tied selling\u0026rdquo; and is almost always illegal in the USA. This is a good moment to stop and re-evaluate the whole transaction. You are not required to buy any of the add-ons that the finance manager offers you. Extended warranties, dent and ding protection, and tire/wheel protection are common items. There\u0026rsquo;s no requirement to purchase these. If you do decide to purchase one, be sure they can show you the actual amount that it will cost you. Don\u0026rsquo;t let them tell you how much it adds to your monthly payment. That allows them to hide the cost of certain add-ons. You are required to pay reasonable fees for tax, title, and license. Sometimes documentation fees are rolled up into this, too. Every state is a little different on how much these cost, but you can Google \u0026ldquo;tax title and license\u0026rdquo; for your state and get a good estimation. Demand to see the \u0026ldquo;out the door price.\u0026rdquo; If a dealer asks how much you can afford per month, don\u0026rsquo;t answer. You are interested in the full price of the car plus fees. Most states require that this comes out on a single sheet of paper with each expense clearly labeled with what you will owe for the car. If you tell a dealer \u0026ldquo;I can\u0026rsquo;t affort more than $500 per month\u0026rdquo;, then they will tinker with various parts of the deal to ensure you pay more in the long term without exceeding $500 per month. If the dealer won\u0026rsquo;t budge, keep asking them \u0026ldquo;If I was getting you a cashier\u0026rsquo;s check today, how much needs to be on the check?\u0026rdquo; They will eventually get the idea. Dealers make the most money by far not on the lot itself, but in the finance office. Don\u0026rsquo;t get swindled there.\nGet copies and read them #You are entitled to a copy of everything that shows up in the finance office. Don\u0026rsquo;t get up from the chair until you have a copy of everything and you\u0026rsquo;ve examined each page.\nDealers commonly adjust numbers or conveniently leave out \u0026ldquo;We Owe\u0026rdquo; sheets (see the next section) in the finance office. Sometimes it\u0026rsquo;s an honest mistake. 
Sometimes it\u0026rsquo;s not.\nThe dreaded \u0026ldquo;We Owe\u0026rdquo; #If a dealer doesn\u0026rsquo;t have something in stock that they promised you, such as an accessory or add-on, ensure it lands on the \u0026ldquo;We Owe\u0026rdquo; sheet in your paperwork.\nThere should be page somewhere in your sale paperwork that shows anything that the dealer owes you. In many states, this sheet must be in your paperwork even if it\u0026rsquo;s blank or zeroed out!\nIf a dealer promised you something and it didn\u0026rsquo;t land on the \u0026ldquo;We Owe\u0026rdquo; sheet, stop immediately and ask for that to be corrected right then.\nTrade-ins #Any time you trade in a vehicle, make it a different transaction. Allowing a dealer to add your trade-in with the current deal for purchase allows them to hide money for themselves in the deal.\nFirst, get multiple offers from various sites that buy cars all day long. I recommend Carvana, Driveway, and Vroom. They will give you an immediate estimate online with a little bit of information. The CarEdge site I mentioned earlier also has some tools that allow you to look up your car in the Black Book, which is what dealers use to appraise cars.\nNext, when you go to the dealer and they ask if you\u0026rsquo;re trading in a car, tell them you haven\u0026rsquo;t decided yet. Your best bet is to act as if you want them to talk you into it. Either way, keep the trade-in as a separate transaction.\nWhen it comes time to talk about your trade-in, you should already have an out the door price on your car purchase. Scroll up if you\u0026rsquo;re not sure about this. 😉\nLet the dealer know you have some other offers on your trade and let them know you\u0026rsquo;ll do the trade there if they can beat the offers. If they offer to beat the other deals, that\u0026rsquo;s awesome! That\u0026rsquo;s less work for you!\nIf they can\u0026rsquo;t, don\u0026rsquo;t worry. You can handle the trade-in separately with those other companies.\nBuying online #Finally, you can buy a car almost entirely online these days. Carvana, Driveway, and Vroom offer a fully online experience, but traditional dealers often have salespeople focused on internet deals.\nThis is a good way to sort out dealers who want to work with you and those that don\u0026rsquo;t. Let dealers know that:\nYou know what vehicle you want. You\u0026rsquo;re comparing the offers from multiple dealers. You\u0026rsquo;re looking to purchase within the next few weeks. You want an out the door price on the vehicle so you know how much to get on the cashier\u0026rsquo;s check. If they\u0026rsquo;re willing to deal with you via email or phone, that\u0026rsquo;s great! If not, there\u0026rsquo;s plenty of other dealers out there.\nFurther learning #I\u0026rsquo;ve learned a lot from several YouTube channels over the years. Here are my favorites:\nCar Questions Answered: Brandon runs a small used car dealership in North Carolina and gives lots of insights on what is happening the used and new car markets. He has lots of good advice on when to buy a car and which cars are the ones to avoid at certain times. If you ever wanted to go behind the scenes to see how a small used car dealership works, his channel is great. Deshone The Auto Advisor: Deshone has lots of helpful short videos. He does offer a membership program that comes with a fee, but his short videos cover a ton of car buying and leasing recommendations. If you\u0026rsquo;re interested in leasing a car, be sure to watch some of his leasing videos. 
CarEdge: CarEdge offers tons of services to help with car buying including 1:1 coaching once you have a deal sheet from a dealer. However, they also have tons of videos that explain how to buy a car effectively. They do role playing for finance managers and salespeople that highlight certain areas where buyers usually lose. Their role play videos are highly recommended. Lucky Lopez: Lucky has tons of insight from a dealer perspective and he follows lots of trends around pricing, supply, and reposessions. Most of his content might be too much for car buyers, but it\u0026rsquo;s good information to know. Chevy Dude: The Chevy Dude used to work for multiple dealers but now it running his own. He shares lots of sales tricks and secrets that help you prepare for making your next deal on a car \u0026ndash; new or used. Did I miss something? #Let me know if I missed something and I\u0026rsquo;ll come back and edit this post! Just contact me via any of the methods below in the author block. ️⬇️\n","date":"4 September 2023","permalink":"/p/car-buying-guide/","section":"Posts","summary":"If you love to nerd out on just about anything, give it a try the next time you buy a car.","title":"Car buying guide"},{"content":"","date":null,"permalink":"/tags/finance/","section":"Tags","summary":"","title":"Finance"},{"content":"I love learning about the behind the scenes aspects of just about everything. I do ham radio, I self-host lots of my personal infrastructure, and I\u0026rsquo;ve been learning more about the math behind the stock market for the last year or two.\nThat led me to start a blog on Ghost to share my findings with others. I started Theta Nerd1 earlier this summer.\nMy deployment looked great when I started! Everything was automatically updated with watchtower and running with docker-compose on Fedora CoreOS. (Click these links to read the posts on both topics!)\nHowever, I woke up one morning to my monitoring going off and my site was down. 😱\nWhy is the site down? #Anyone who has worked in IT knows this sinking feeling. Something is down, you don\u0026rsquo;t know why, and you suspect the worst possible scenarios.\nThe instance hosting the blog was online and responsive, so I started digging into the logs with docker-compose logs. I suddenly found a wall of text in the logs for the Ghost container:\n[2023-08-03 11:10:16] INFO Adding members.email_disabled column [2023-08-03 11:10:16] INFO Setting email_disabled to true for all members that have their email on the suppression list [2023-08-03 11:10:16] INFO Setting nullable: stripe_products.product_id [2023-08-03 11:10:16] INFO Adding table: donation_payment_events [2023-08-03 11:10:16] INFO Rolling back: alter table `donation_payment_events` add constraint `donation_payment_events_member_id_foreign` foreign key (`member_id`) references `members` (`id`) on delete SET NULL - Referencing column \u0026#39;member_id\u0026#39; and referenced column \u0026#39;id\u0026#39; in foreign key constraint \u0026#39;donation_payment_events_member_id_foreign\u0026#39; are incompatible.. [2023-08-03 11:10:16] INFO Dropping table: donation_payment_events [2023-08-03 11:10:16] INFO Dropping nullable: stripe_products.product_id with foreign keys disabled [2023-08-03 11:10:16] INFO Setting email_disabled to false for all members [2023-08-03 11:10:16] INFO Removing members.email_disabled column [2023-08-03 11:10:16] INFO Rollback was successful. 
[2023-08-03 11:10:16] ERROR alter table `donation_payment_events` add constraint `donation_payment_events_member_id_foreign` foreign key (`member_id`) references `members` (`id`) on delete SET NULL - Referencing column \u0026#39;member_id\u0026#39; and referenced column \u0026#39;id\u0026#39; in foreign key constraint \u0026#39;donation_payment_events_member_id_foreign\u0026#39; are incompatible. alter table `donation_payment_events` add constraint `donation_payment_events_member_id_foreign` foreign key (`member_id`) references `members` (`id`) on delete SET NULL - Referencing column \u0026#39;member_id\u0026#39; and referenced column \u0026#39;id\u0026#39; in foreign key constraint \u0026#39;donation_payment_events_member_id_foreign\u0026#39; are incompatible. {\u0026#34;config\u0026#34;:{\u0026#34;transaction\u0026#34;:false},\u0026#34;name\u0026#34;:\u0026#34;2023-07-27-11-47-49-create-donation-events.js\u0026#34;} \u0026#34;Error occurred while executing the following migration: 2023-07-27-11-47-49-create-donation-events.js\u0026#34; Error ID: 300 Error Code: ER_FK_INCOMPATIBLE_COLUMNS ---------------------------------------- Error: alter table `donation_payment_events` add constraint `donation_payment_events_member_id_foreign` foreign key (`member_id`) references `members` (`id`) on delete SET NULL - Referencing column \u0026#39;member_id\u0026#39; and referenced column \u0026#39;id\u0026#39; in foreign key constraint \u0026#39;donation_payment_events_member_id_foreign\u0026#39; are incompatible. at /var/lib/ghost/versions/5.57.2/node_modules/knex-migrator/lib/index.js:1032:19 at Packet.asError (/var/lib/ghost/versions/5.57.2/node_modules/mysql2/lib/packets/packet.js:728:17) at Query.execute (/var/lib/ghost/versions/5.57.2/node_modules/mysql2/lib/commands/command.js:29:26) at Connection.handlePacket (/var/lib/ghost/versions/5.57.2/node_modules/mysql2/lib/connection.js:478:34) at PacketParser.onPacket (/var/lib/ghost/versions/5.57.2/node_modules/mysql2/lib/connection.js:97:12) at PacketParser.executeStart (/var/lib/ghost/versions/5.57.2/node_modules/mysql2/lib/packet_parser.js:75:16) at Socket.\u0026lt;anonymous\u0026gt; (/var/lib/ghost/versions/5.57.2/node_modules/mysql2/lib/connection.js:104:25) at Socket.emit (node:events:513:28) at addChunk (node:internal/streams/readable:315:12) at readableAddChunk (node:internal/streams/readable:289:9) at Socket.Readable.push (node:internal/streams/readable:228:10) at TCP.onStreamRead (node:internal/stream_base_commons:190:23) Ah, so a failed database migration in the upgrade to 5.57.2 is the culprit! 👏\nI brought the site back online quickly by changing the container version for Ghost back to the previous version (5.55.2).\nWhy did the database migration fail? #The error message from above boils down to this:\nError: alter table `donation_payment_events` add constraint `donation_payment_events_member_id_foreign` foreign key (`member_id`) references `members` (`id`) on delete SET NULL - Referencing column \u0026#39;member_id\u0026#39; and referenced column \u0026#39;id\u0026#39; in foreign key constraint \u0026#39;donation_payment_events_member_id_foreign\u0026#39; are incompatible. Adjusting the donation_payment_events.member_id column to be a foreign key of members.id is failing because they are incompatible types. 
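A hedged tip before comparing the table definitions: describe does not show character sets or collations, so two columns can look identical while MySQL still refuses the foreign key. Asking information_schema directly will reveal any mismatch (ghostdb is the database name used later in this post):
mysql> SELECT COLUMN_NAME, COLLATION_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'ghostdb' AND TABLE_NAME = 'members' AND COLUMN_NAME = 'id';
mysql> SELECT DEFAULT_COLLATION_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME = 'ghostdb';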
However, as I examined both tables, both were regular varchar(24) columns without anything special attached to them:\nmysql\u0026gt; describe members; +------------------------------+---------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +------------------------------+---------------+------+-----+---------+-------+ | id | varchar(24) | NO | PRI | NULL | | | uuid | varchar(36) | YES | UNI | NULL | | | email | varchar(191) | NO | UNI | NULL | | | status | varchar(50) | NO | | free | | | name | varchar(191) | YES | | NULL | | | expertise | varchar(191) | YES | | NULL | | | note | varchar(2000) | YES | | NULL | | | geolocation | varchar(2000) | YES | | NULL | | | enable_comment_notifications | tinyint(1) | NO | | 1 | | | email_count | int unsigned | NO | | 0 | | | email_opened_count | int unsigned | NO | | 0 | | | email_open_rate | int unsigned | YES | MUL | NULL | | | last_seen_at | datetime | YES | | NULL | | | last_commented_at | datetime | YES | | NULL | | | created_at | datetime | NO | | NULL | | | created_by | varchar(24) | NO | | NULL | | | updated_at | datetime | YES | | NULL | | | updated_by | varchar(24) | YES | | NULL | | +------------------------------+---------------+------+-----+---------+-------+ 18 rows in set (0.00 sec) Going upstream #I went to Ghost\u0026rsquo;s GitHub repository and opened an issue with as much data as I can find.\nOne of the first replies mentioned something about database collations. Long story short, collations describe how databases handle sorting and comparing data for different languages. Comparing some languages to other languages can be particularly challenging and this can lead to problems.\nI made a switch from MariaDB to MySQL recently for the blog. Could that be related?\nMore searching #I figured that I wasn\u0026rsquo;t the first one to stumble into this problem, and sure enough \u0026ndash; I wasn\u0026rsquo;t! There\u0026rsquo;s a great blog post about a broken migration from MySQL 5 to 8 with Ghost.\nIn short, it required several steps to fix it:\nStop the Ghost container Back up the database first (always a good idea) Do a quick find/replace on the dumped database to change the collations Drop the ghost database from the database 😱 Import the database back into MySQL Start Ghost again Dropping databases always makes me pause, but that\u0026rsquo;s what backups are for! 😉\nHow I fixed it #In my case, my MySQL container is called ghostmysql and my Ghost database is ghostdb. Then I made a backup of the database using mysqldump:\nsudo docker-compose exec ghostmysql mysqldump \\ -u root -psuper-secret-password ghostdb \u0026gt; backup-ghost-db.sql Next, I copied the SQL file to another directory just in case I accidentally deleted this backup with an errant command.\ncp backup-ghost-db.sql ../ Then I made a copy of the SQL file in the current directory and ran the find and replace on that copy. This changes the collations from the wrong one, utf8mb4_general_ci, to the right one, utf8mb4_0900_ai_ci2:\ncp backup-ghost-db.sql backup-ghost-db-new.sql sed -i \u0026#39;s/utf8mb4_general_ci/utf8mb4_0900_ai_ci/g\u0026#39; \\ backup-ghost-db-new.sql Now I have the collations right for importing the database back into MySQL. But first, I have to drop the existing database. 
This is a good time to double check your backups!\nsudo docker-compose exec ghostmysql mysql -u root \\ -psuper-secret-password mysql\u0026gt; DROP DATABASE ghostdb; Now we can import the modified backup:\ncat backup-ghost-db-new.sql | sudo docker-compose exec -T \\ ghostmysql mysql -u root -psuper-secret-password ghostdb Start all the containers:\nsudo docker-compose up -d Ghost was back online with the older version and everything looked good! I updated my docker-compose.yaml back to use latest for the Ghost version and ran sudo docker-compose up -d once more.\nWithin seconds, the new container image was in place and the container was running! Both migrations completed in seconds and the blog was back online with the newest version. 🎉\nTheta is one of many financial Greeks that measure certain aspects of options contracts in the market. It\u0026rsquo;s also a letter in the Greek alphabet.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe default collation in MySQL 8 is utf8mb4_0900_ai_ci.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"31 August 2023","permalink":"/p/ghost-db-migration-failure/","section":"Posts","summary":"I woke up one morning to find my Ghost blog unresponsive. It required an unexpected fix. 🔧","title":"Fixing a ghost database migration failure"},{"content":"","date":null,"permalink":"/tags/ghost/","section":"Tags","summary":"","title":"Ghost"},{"content":"","date":null,"permalink":"/tags/mariadb/","section":"Tags","summary":"","title":"Mariadb"},{"content":"","date":null,"permalink":"/tags/mysql/","section":"Tags","summary":"","title":"Mysql"},{"content":"","date":null,"permalink":"/tags/development/","section":"Tags","summary":"","title":"Development"},{"content":"","date":null,"permalink":"/tags/open-source/","section":"Tags","summary":"","title":"Open Source"},{"content":"I had a great time on the Fedora Podcast yesterday to talk about Fedora cloud! We talked about all kinds of Fedora-related topics, but a couple of questions came up around how to contribute, especially when there\u0026rsquo;s not a lot of structure in place for a particular type of contributions. Here\u0026rsquo;s the full video if you\u0026rsquo;re interested:\nThat made me think about a post that deserves to be written: How do you get started with open source contributions in a new project? 🤔\nMy answer is pretty simple: Just do it. 🚀\nJust do what? #In the late 1980\u0026rsquo;s, one of Nike\u0026rsquo;s ad agencies came up with the phrase as a way to push through uncertainty. I was pretty young when this campaign started, but the general idea was this:\nAnyone can achieve what they want Stop worrying about whether you can actually do something Try something new Just do it Simple, right?\nThis works for open source contributions, too. I often have conversations with people inside and outside of work where they identify a problem or an improvement in an open source project. My customary response is \u0026ldquo;Let\u0026rsquo;s go upstream and make this better!\u0026rdquo;\nHowever, what I hear back most often is \u0026ldquo;I don\u0026rsquo;t know how.\u0026rdquo; This is where the whole just do it part comes in.\nI found a bug #Nearly every open source project wants to know about bugs that users experience. Start by finding out the best way to communicate with the people working on a particular project.\nFor projects on GitHub or GitLab, you can open up an issue and describe your problem. 
Some repositories have a template generated for bugs that ask you several important questions, so be sure to follow those templates. If there isn\u0026rsquo;t a template, I usually follow this format:\nWhat happened that was unexpected? What were you doing right before that unexpected event happened? What did you expect to happen instead? What else is nearby in the environment that might have an impact? For example, the versions of Python might be important for Python-based projects. What log files or other diagnostic materials exist? What\u0026rsquo;s the goal? We want to give maintainers enough information for a quick diagnosis in the best case. If it\u0026rsquo;s not obvious, then they need enough information to try to reproduce it on their own machine for debugging.\nMaintainers might come back with additional questions about your environment or the events just before the bug occurred. Be sure to respond in a timely way while the information is top of mind for them.\nAlways remember that these maintainers are real people who are likely not being paid for their work. Assume the best of intentions (unless proven otherwise) and stay focused on the solution. There might always be the chance that the maintainers are not interested in your use case and might not be interested in solving it.\nThat leads me to the next step.\nI found a bug and I want to fix it #Start by opening an issue or a bug report first (see the previous section).\nThis ensures that maintainers get a full picture of the problem you\u0026rsquo;re trying to solve. Also, I\u0026rsquo;ve had maintainers immediately reply and tell me that it\u0026rsquo;s a known issue already being solved in another issue. That could save you some work.\nIf you have a patch that fixes the issue, go through the following steps before submitting the fix upstream:\nEnsure your fix references the issue or bug report that you opened Use a very clear first line in your commit message, such as parser: Fix emoji handling in YAML rather than Fix YAML bug Include a very brief explanation of the bug you\u0026rsquo;re fixing in the commit message Extra credit: Add or update existing tests so they catch the bug you just found Extra credit: Add or update the project documentation for your change if necessary These extra credit items often make it easier to review your patch. Maintainers love extra test coverage, too.\nSubmit your change in a pull request or merge request and watch for updates. Be patient with replies from the maintainers, but be timely in your replies. Remember that your use case might be an edge case for the upstream project and you might need to explain your fix (or the original bug) in more detail.\nI want to improve something #Improving an open source project could involve several things, such as:\nEnhancing by adding a new feature Optimizing an existing feature Creating documentation Building integrations I strongly recommend opening an issue first with the project maintainers to explain your enhancement. These Requests for Enhancements, or RFEs, should include several things:\nYour use case that made you think of the enhancement in the first place What you plan to add, substract, or change How the changes might affect different users, especially as they upgrade from older versions How the changes might affect testing or release processes Any changes in dependencies required Before going down the road of enhancements, always bring up these ideas with the maintainers first. 
You want to ensure that your ideal changes are aligned with the future goals of the project. In addition, maintainers will want to better understand your use case.\nRemember that an enhancement almost always requires additional work from maintainers. Every new use case means more work to ensure the project still functions. That\u0026rsquo;s why it\u0026rsquo;s critical to share your use case and have a good plan for testing and documentation.\nGetting involved #Whenever I find an open source project that I\u0026rsquo;d like to get involved with, I start looking around for several things:\nWhat do they use for informal asynchronous chat? IRC? Matrix? Slack? Something else? I join the chat, introduce myself, and get an idea for how they interact. Some groups are very chatty and informal while others are much more formal and regimented. Where do they have detailed discussions? Many projects have detailed discussions in their issues/bugs or in places like GitHub\u0026rsquo;s discussions. Others use old school mailing lists. Some groups have regular meetings where anyone can add agenda items for discussions. If I need to talk about something a bit more long form and I expect some back and forth on it, I look for this avenue. What requirements exist for contributors? Some projects require that contributors sign a CLA or some other sort of agreement. Make sure that any CLAs you sign are approved by your employer (if applicable). You might need an account on a system that you don\u0026rsquo;t have, so check for that as well. From there, I take the just do it mentality and go for it. The worst thing you\u0026rsquo;ll be told is \u0026ldquo;No\u0026rdquo;. If that happens, take a step back, see if there\u0026rsquo;s another way to approach it, and try again.\nRemember one thing most of all: avoid taking anything personally. All of us have our bad days and some people have personalities that might be totally incompatible with yours (and most people in general). 🤭\n","date":"16 August 2023","permalink":"/p/open-source-like-nike/","section":"Posts","summary":"Want to make a change in an open source project? Take the Nike approach and Just Do It. 👟","title":"Open source contributions: Just do it"},{"content":"After I launched my new stock market blog on a self-hosted Ghost, I wrote up the deployment process in containers last week. Then I had a shower thought: How do I put a CDN in front of that?\nThis blog is back on an S3 + CloudFront deployment at AWS and I figured CloudFront could work well for a self-hosted Ghost blog, too.\nThere are tons of blog posts out there that have outdated processes or only show you how to do one piece of the CDN deployment for Ghost. I read most of them and cobbled together a working deployment. Read on to learn how to do this yourself!\nWhy add a CDN? #Content Delivery Networks (CDN) enhance websites by doing a combination of different things:\nHigh throughput content delivery. CDNs have extremely well connected systems with plenty of bandwidth available. When your web traffic goes overboard or a popular person links to your site, CDNs allow you to continue serving content at very high rates. Cached content. CDNs will pull content from your origin server (the one running your application) and cache that content for you. This means fewer requests to your origin server and less bandwidth consumed there. Content closer to consumers. You might host your site in the eastern USA, but a CDN can cache your content around the world for faster access. 
Your website might normally be slow for someone in Tokyo, but a local CDN endpoint in Japan could serve that content immediately there. Improved security. Many CDNs offer a web application firewall (WAF) that allows you to limit access to certain functions on your site. This could prevent or slow down certain types of attacks that could take your site offline. CDNs have trade-offs, though. They\u0026rsquo;re complicated.\nThey often require lots of DNS changes. TLS certificates remain a challenge. Caching solves lots of problems but can create headaches in a flash. A misconfiguration at the CDN level can take down your site or prevent it from operating properly for longer periods of time.\nCareful planning helps a lot! Measure twice, cut once.\nAWS terminology #The names of various AWS services often confuse me, but here\u0026rsquo;s what we need for this project:\nAWS Certificate Manager: handles TLS certificate issuance and renewal for the CDN distribution AWS CloudFront: the actual CDN itself CloudFront has a concept of distributions, which is a single configuration of the CDN for a particular site. We will get to that in the CloudFront section. 😉\nCertificates #First off, we need a certificate for TLS connections. Run over to the AWS Certificate Manager (ACM) console for your preferred region and follow these steps:\nClick the orange Request button at the top right. Request a public certificate on the next page and click Next. Type in the domain for your certificate that your users will type to access your site. For example, example.com or blog.example.com. Click Request You should be back to your certificate list. Refresh the page by clicking on the circle with the arrow at the top right. Click on the certificate for the domain name you just added.\nIn the second detail block labeled Details, look for the CNAME name and value at the far right. You need to set both of these wherever you host your DNS records. If you use AWS Route 53 for DNS, there\u0026rsquo;s a button you can click there to do it immediately. If you use another DNS provider, create a CNAME record with the exact text shown there.\nOnce you create those DNS records, go back to the page with your certificate and wait for it to change from Pending validation to Issued. This normally takes 2-3 minutes for most DNS providers I use.\nWait for this to turn green and say Issued before proceeding to the next step! Now that you have a certificate, it\u0026rsquo;s time to configure our CDN distribution.\nCloudFront #Now comes the fun, but complicated part. You have two DNS records to think about here:\nThe CDN DNS record that users will type to access your site, such as example.com. The origin DNS record that the CDN will use to access your backend Ghost blog, such as origin.example.com. The origin record will be hidden away behind the CDN when we\u0026rsquo;re done.\nCreate the distribution #Go to the CloudFront console in your preferred region and follow these steps:\nClick Create Distribution at the top right. Put your origin (hidden) domain in Origin domain, such as origin.example.com. Skip down to Name for the distribution such as \u0026ldquo;My Ghost Blog\u0026rdquo;. (This is for your internal use only.) 
Compress objects automatically: Yes Viewer protocol policy: Redirect HTTP to HTTPS Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE Cache policy: CachingOptimized Origin request policy: AllViewerExceptHostHeader WAF: Do not enable security protections (This costs extra and you can tweak this configuration later if needed.) Alternate domain name (CNAME): Use the DNS name that your users will access, such as example.com Custom SSL certificate: Choose the certificate we created in the previous section Click Create distribution This can take up to 10 minutes to deploy once you\u0026rsquo;re finished. At this point, we have an aggressive caching policy that will cause problems when members attempt to sign in or manage their membership. It will also break the Ghost administrative area.\nLet\u0026rsquo;s fix that next.\nAdjust caching #Find the CloudFront distribution we just created and click the Behaviors tab. We are going to make three different sets of behavior configurations to handle the dynamic pages.\nClick Create Behavior and do the following:\nEnter /ghost* as the path pattern. Choose the origin from the drop down that you specified when creating the distribution. Compress objects automatically: Yes Viewer protocol policy: Redirect HTTP to HTTPS Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE Cache policy: CachingDisabled Origin request policy: AllViewer Click Save changes That takes care of the administrative interface. Now let\u0026rsquo;s fix the caching on the members page:\nEnter /members* as the path pattern. Choose the origin from the drop down that you specified when creating the distribution. Compress objects automatically: Yes Viewer protocol policy: Redirect HTTP to HTTPS Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE Cache policy: CachingDisabled Origin request policy: AllViewer Click Save changes With this configuration, we have caching for all content except for the administrative and member interfaces.\nTesting #There are a few different ways to test at this point, but I prefer to go with an old tried and true method: the /etc/hosts file. 😜\nCloudFront offers a domain name on *.cloudfront.net that you can use, but it\u0026rsquo;s not quite the same. Cookies for the admin/member interface don\u0026rsquo;t always work since they cross domains and sometimes you\u0026rsquo;re redirected back to the original domain name which bypasses the CDN altogether.\nGo back to the list of distributions in your CloudFront console in your preferred region. Click on the distribution you created earlier. At the top left, you\u0026rsquo;ll see Distribution domain name with a domain underneath that contains random_text.cloudfront.net.\nTake that domain name and get an IPv4 address:\n$ dig +short A d2xznlk9a1h8zn.cloudfront.net 18.161.156.2 18.161.156.18 18.161.156.61 18.161.156.9 Open /etc/hosts in your favorite editor (root access required) and use one of the IP addresses that correspond to your CDN endpoint. Add a line like this one (using your CDN domain and IPv4 address from the last step):\n18.161.156.2 example.com Access your site in a browser and verify that everything works. Be sure that you can access the administrative console under example.com/ghost and any member settings.\n️Remove the line in /etc/hosts now that we\u0026rsquo;re finished with testing.\nProduction #Our first step is to set up the origin.\nOrigin configuration #Ensure your origin server has a proper DNS record so that CloudFront can access it on the backend. 
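Before anything else, prove that the origin record resolves and serves HTTPS on its own. A minimal sketch, using the same origin.example.com placeholder as the rest of this section:
# The origin must resolve and present a valid certificate before CloudFront can use it
dig +short A origin.example.com
curl -sI https://origin.example.com/ | head -n 1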
For example, origin.example.com must have a DNS record that points to your backend server running Ghost.\nVerify that the DNS record for your origin works before proceeding. 💣 If you followed my guide for deploying Ghost, then you need to adjust your caddy configuration to answer requests to your origin URL. I updated my Caddyfile to contain both the origin and CDN hostnames:\n{ email major@mhtx.net } thetanerd.com, origin.thetanerd.com { reverse_proxy ghost:2368 log { output stderr format console } } www.thetanerd.com { redir https://thetanerd.com{uri} } Restart caddy with sudo docker-compose restart caddy.\nVerify that caddy responds to requests to the origin hostname before going any further. It must respond properly with a valid SSL/TLS certificate! 💣 Big switch #Now that our origin server is happy and responding, it\u0026rsquo;s time to make the big switch. We\u0026rsquo;re going to remove the record for the main CDN domain, such as example.com and replace it with a CNAME or ALIAS record to the CDN name in CloudFront. This is the name that ends in cloudfront.net that we used for testing earlier.\nThe use of a CNAME or ALIAS record depends on your DNS host and the type of domain name you\u0026rsquo;re using for the CDN.\nIf you\u0026rsquo;re using apex domain name (no subdomain) such as example.com, you will likely need to use an ALIAS record For domain names with a subdomain, such as blog.example.com, you will likely need to use a CNAME record Read your DNS host\u0026rsquo;s documentation if you are unsure about ALIAS vs CNAME records! 💣 Go your DNS registrar and follow these steps:\nScreenshot your existing DNS records or export them if possible (in case you need to revert). Remove the existing A/AAAA/CNAME/ALIAS record(s) for your main domain name, such as example.com. Immediately add a CNAME/ALIAS record from example.com to random_text.cloudfront.net that corresponds to your CloudFront distribution. Once that\u0026rsquo;s done, I usually run curl in a terminal to watch for the changeover with watch curl -si https://example.com. When CloudFront is handling your traffic you\u0026rsquo;ll see headers like these:\nHTTP/2 200 content-type: text/html; charset=utf-8 cache-control: public, max-age=0 date: Mon, 03 Jul 2023 19:44:55 GMT server: Caddy x-powered-by: Express etag: W/\u0026#34;19e7b-q5fZSjf8acC7o9lhdO5R+jOASfM\u0026#34; vary: Accept-Encoding x-cache: Miss from cloudfront via: 1.1 b2ba542a917451d9d85e07dba0cfd9a4.cloudfront.net (CloudFront) x-amz-cf-pop: DFW57-P2 x-amz-cf-id: Tpcjk886L0xAZzOjuUP-js_7-twE7ZGDZKlkmGHNTjW8hEs7oOWaLg== If it seems like it\u0026rsquo;s taking a very long time to change over, use a tool like DNS Checker to see how various DNS servers see your recent DNS change.\nRevert (if needed) #If something went horribly wrong, DON\u0026rsquo;T PANIC. 😱\nDNS is like IT quicksand. Once you get stuck in a problem with DNS, any level of fighting just makes you more stuck. Take a deep breath first. 🫁 Go back to your DNS provide and remove the ALIAS/CNAME record for your CDN domain name, such as example.com. Add back in the original A/AAAA/ALIAS/CNAME records that were there previously. Be patient for traffic to shift back to your origin server.\nReview the changes you made and look for any errors.\nConfiguring Ghost #Ghost is fairly easy to put behind a CDN, but it does have some additional caching configuration that you can change if needed. It provides hints to the CDN about what should and should not be cached and for how long. 
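One easy way to see those hints is to ask the origin directly and read the cache-control header it returns. A sketch using the origin placeholder from earlier:
# Ghost's cache-control header tells the CDN how long a response may be cached
curl -sI https://origin.example.com/ | grep -i cache-control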
Refer to the Ghost docs for details.\nI decided to cache requests to the Content API and to the frontend for 60 seconds as a test. My docker-compose.yml now looks like this:\nghost: image: docker.io/library/ghost:5 container_name: ghost restart: always depends_on: - ghostdb environment: url: https://thetanerd.com caching__contentAPI__maxAge: 60 caching__frontend__maxAge: 60 database__client: mysql database__connection__host: ghostdb database__connection__user: ghost database__connection__password: ... database__connection__database: ghostdb volumes: - ghost:/var/lib/ghost/content Now if I access the main page of the site, I see cache hits in the headers:\nHTTP/2 200 content-type: text/html; charset=utf-8 cache-control: public, max-age=600 date: Mon, 03 Jul 2023 19:54:39 GMT etag: W/\u0026#34;19e7b-5MKnFrme/sGk5DT2yvMkbgDsl+4\u0026#34; server: Caddy x-powered-by: Express vary: Accept-Encoding x-cache: Hit from cloudfront via: 1.1 308bae6dc9384ec8e0a82ba2d96014bc.cloudfront.net (CloudFront) x-amz-cf-pop: DFW57-P2 x-amz-cf-id: 0Dvoc_ST8-FK_TD4lEMQg6-uiDqhaUbYAqbylkiUP61eGcQsZSFEGg== age: 7 The x-cache header shows a hit and the age header says it\u0026rsquo;s been cached for 7 seconds.\nEnjoy your new CDN-accelerated Ghost blog! 🐇\n","date":"3 July 2023","permalink":"/p/ghost-cloudfront-cdn/","section":"Posts","summary":"Adding an AWS CloudFront CDN distribution to a Ghost blog improves response times\non an already fast blogging platform and increases security along the way. ⚡","title":"Add CloudFront CDN to a Ghost blog"},{"content":"","date":null,"permalink":"/tags/cdn/","section":"Tags","summary":"","title":"Cdn"},{"content":"","date":null,"permalink":"/tags/cloudfront/","section":"Tags","summary":"","title":"Cloudfront"},{"content":"There\u0026rsquo;s no shortage of options for starting a self-hosted blog. Wordpress might be chosen most often, but I stumbled upon Ghost recently and their performance numbers really got my attention.\nI prefer deploying most things in containers these days with Fedora CoreOS. Luckily, the Ghost stack doesn\u0026rsquo;t demand a lot of infrastructure:\nGhost itself MySQL 8+ (I went with MariaDB 11.x) A web server out front TLS certificate Although I chose MariaDB for the database here, Ghost recommends MySQL and will throw a warning in the admin panel if you\u0026rsquo;re using something else. I haven\u0026rsquo;t had any issues so far, but you\u0026rsquo;ve been warned. 💣 I picked Caddy for the webserver since it\u0026rsquo;s so small and the configuration is tremendously simple.\nLaunch CoreOS #Fedora CoreOS offers lots of cloud options for launching it immediately. Many public clouds already have CoreOS images available, but I love Hetzner\u0026rsquo;s US locations and I already had a CoreOS image loaded up in my account.\n🇩🇪 Want CoreOS at Hetzner? There\u0026rsquo;s a blog post for that!\nOnce your CoreOS instance is running, connect to the instance over ssh and ensure the docker.service starts on each boot:\nsudo systemctl enable --now docker.service This ensures that containers come up on each reboot. CoreOS has a podman socket that listens for docker-compatible connections, but that doesn\u0026rsquo;t help with reboots.\nPerhaps I\u0026rsquo;m old fashioned, but I still enjoy using docker-compose for container management. I like how I can declare what I want and let docker-compose sort out the rest.\nLet\u0026rsquo;s install docker-compose on the CoreOS instance now:\n# Check the latest version in the GitHub repo before starting! 
# https://github.com/docker/compose curl -LO https://github.com/docker/compose/releases/download/v2.19.0/docker-compose-linux-x86_64 # Install docker-compose and make it executable. sudo mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose sudo chmod +x /usr/local/bin/docker-compose Verify that docker-compose is ready to go:\n$ docker-compose --version Docker Compose version v2.19.0 Preparing Caddy #Caddy uses a configuration file called a Caddyfile and we need that in place before we deploy the other containers. Within my home directory, I created a directory called caddy:\nmkdir caddy Then I added the Caddyfile inside the directory:\n{ # Your email for LetsEncrypt warnings/notices. email youremail@domain.com # Staging LetsEncrypt server to use while testing. # Uncomment this before going to production! acme_ca https://acme-staging-v02.api.letsencrypt.org/directory } # Basic virtual host definition to feed traffic into the # Ghost container when it arrives. example.com { reverse_proxy ghost:2368 } # OPTIONAL: Redirect traffic to \u0026#39;www\u0026#39; to the bare domain. www.example.com { redir https://example.com{uri} } This configuration sets up LetsEncrypt certificates automatically from the staging server for now. Once we know our configuration is working well, we can comment out the acme_ca line above and get production TLS certificates.\nAt this point, you need a DNS record pointed to your server so you can get a certificate. You have some options:\nIf the site is entirely new, just point the root domain name to your CoreOS instance. Use that domain in the configuration above and later in the deployment.\nIf you\u0026rsquo;re migrating from an existing site, choose a subdomain off your main domain to use. If your website is example.com, use something like test.example.com or new.example.com to get Ghost up and running. It\u0026rsquo;s really easy to change this later.\nNow we\u0026rsquo;re ready for the rest of the deployment.\nDeploying containers #Here\u0026rsquo;s the docker-compose.yml file I\u0026rsquo;m using:\n--- version: \u0026#39;3.8\u0026#39; services: # OPTIONAL # Watchtower monitors all running containers and updates # them when the upstream container repo is updated. watchtower: image: docker.io/containrrr/watchtower:latest container_name: watchtower restart: unless-stopped hostname: coreos-ghost-deployment environment: - WATCHTOWER_CLEANUP=true - WATCHTOWER_POLL_INTERVAL=3600 command: - --cleanup volumes: - /var/run/docker.sock:/var/run/docker.sock privileged: true # Caddy acts as our external-facing webserver and handles # getting TLS certs from LetsEncrypt.
caddy: image: caddy:latest container_name: caddy depends_on: - ghost ports: - 80:80 - 443:443 restart: unless-stopped volumes: - ./caddy/Caddyfile:/etc/caddy/Caddyfile:Z - ghost:/var/www/html - caddy_data:/data - caddy_config:/config # The Ghost blog software itself ghost: image: docker.io/library/ghost:5 container_name: ghost restart: always depends_on: - ghostdb environment: url: https://example.com database__client: mysql database__connection__host: ghostdb database__connection__user: ghost database__connection__password: GHOST_PASSWORD_FOR_MARIADB database__connection__database: ghostdb volumes: - ghost:/var/lib/ghost/content # Our MariaDB database ghostdb: image: docker.io/library/mariadb:11 container_name: ghostdb restart: always environment: MYSQL_ROOT_PASSWORD: A_SECURE_ROOT_PASSWORD MYSQL_USER: ghost MYSQL_PASSWORD: GHOST_PASSWORD_FOR_MARIADB MYSQL_DATABASE: ghostdb volumes: - ghostdb:/var/lib/mysql volumes: caddy_config: caddy_data: ghost: ghostdb: I love watchtower but that step is completely optional. It does require some elevated privileges to talk to the podman socket, so keep that in mind if you choose to use it.\nOur ghostdb container starts first, followed by ghost, and then caddy. That follows the depends_on configuration keys shown above.\nThere are two steps to take now:\nReplace GHOST_PASSWORD_FOR_MARIADB and A_SECURE_ROOT_PASSWORD above with better passwords. 😉 Also, set the url parameter for the ghost container to your blog\u0026rsquo;s domain name. Once all of that is done, let\u0026rsquo;s let docker-compose do the heavy lifting:\nsudo docker-compose up -d Let\u0026rsquo;s verify that our containers are running:\n$ sudo docker-compose ps NAME IMAGE COMMAND SERVICE caddy caddy:latest \u0026#34;caddy run --config …\u0026#34; caddy ghost docker.io/library/ghost:5 \u0026#34;docker-entrypoint.s…\u0026#34; ghost ghostdb docker.io/library/mariadb:11 \u0026#34;docker-entrypoint.s…\u0026#34; ghostdb watchtower docker.io/containrrr/watchtower:latest \u0026#34;/watchtower --clean…\u0026#34; watchtower Awesome! 👏\nGhost initial setup #With all of your containers running, browse to https://example.com/ghost/ Just add /ghost/ to the end of your domain name to reach the admin panel. Create your admin account there with a good password.\nIf everything looks good, run back to your Caddyfile and comment out the acme_ca line:\n{ # Your email for LetsEncrypt warnings/notices. email youremail@domain.com # Staging LetsEncrypt server to use while testing. # Uncomment this before going to production! # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory } Restart the caddy container to get a production LetsEncrypt certificate on the site:\nsudo docker-compose restart caddy Customizing Ghost #Ghost looks for lots of environment variables to determine its configuration and you can set these in your docker-compose.yml file. Although some configuration items are easy, like url, some are nested and get more complicated. For these, you can use double underscores __ to handle the nesting.\nAs an example, we already used database__connection__host in the docker-compose.yaml, and that\u0026rsquo;s the equivalent to this nested configuration:\n\u0026#34;database\u0026#34;: { \u0026#34;connection\u0026#34;: { \u0026#34;host\u0026#34;: \u0026#34;...\u0026#34; } } If you\u0026rsquo;re deploying in containers, it\u0026rsquo;s a good idea to configure Ghost via environment variables. This ensures that your docker-compose.yml is authoritative for the Ghost deployment. 
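A quick way to confirm that the nested settings actually reached Ghost is to dump the environment inside the running container. A sketch using the ghost service name from this compose file:
# Every database__* variable maps back to one nested key in Ghost's config
sudo docker-compose exec ghost env | grep -E '^(url|database__)'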
You can exec into the container, adjust the config file on disk, and restart Ghost, but then you have to remember where you configured each item. 🥵\nSwitching to production domain #If you used a temporary domain to get everything configured and you\u0026rsquo;re ready to use your production domain, follow these steps:\nOpen your Caddyfile and replace all instances of the testing domain with the production domain Restart caddy: sudo docker-compose restart caddy Edit the docker-compose.yml and change the url key in the ghost container to the production domain Apply the configuration with sudo docker-compose up -d Enjoy your new automatically-updating Ghost blog deployment! 👻\n","date":"27 June 2023","permalink":"/p/deploy-ghost/","section":"Posts","summary":"Ghost delivers a great self-hosted blogging platform that deploys well in containers.\nLet\u0026rsquo;s deploy it on CoreOS along with Caddy. ️📝","title":"Deploy a containerized Ghost blog 👻\n"},{"content":" All comments and thoughts in this post are my own and certainly do not reflect the positions of any of my employers, past and present. The goal of this post is to help with healing after a layoff event and organizing your thoughts around your decisions afterwards. Whatever you want to call them \u0026ndash; layoffs, reductions in force, or downsizing \u0026ndash; they\u0026rsquo;re terrible.\nFor those who leave, uncertainty can become overwhelming. Loss of work means a loss of salary on the simplest level, but it can also mean a loss of purpose. It can mean a loss of critical medical insurance benefits.\nFor those who stay, layoffs shake the foundations of trust with the employer. I have a whole blog post on red flags that goes into detail on this topic, so I won\u0026rsquo;t repeat it all here.\nMy argument in this post is that for those who stay and recommit to the mission of the business, you have much more control over customer outcomes than you ever did before.\nHow to think about layoffs #I went through several rounds of layoffs at my last employer and my current employer just did a round. Engineers around me often say things like:\n\u0026ldquo;How could they let him go? He was so helpful!\u0026rdquo; \u0026ldquo;She is critical to this project and it\u0026rsquo;s our top priority. How could she leave now?\u0026rdquo; \u0026ldquo;He was there for 24 years and he helped everyone, even our CEO. Why would they make him go?\u0026rdquo; \u0026ldquo;Our quarterly results looked great. Why does anyone need to be laid off?\u0026rdquo; The first step is to avoid reading deeply into the decision and avoid making it personal.\nIn my experience, some of the decisions are made based on data that you can\u0026rsquo;t see at a publicly traded company. Sometimes a company sheds employees simply because everyone in their market sector is doing it and they\u0026rsquo;re looking for a temporary bump in the stock price. And then there are those situations where a company chooses to end a product line or project.\nThese decisions are often made at a high level within the company and done in such a way to avoid any type of employment-related lawsuits. That brings me to my next topic.\nWhy do they let top performers go? #This frustrated me many times over the years. As an example, there was a talented network engineer on a team who was a rising star in the company. 
He could work through complex network topologies to provide a balance of performance and security based on the customer\u0026rsquo;s demands.\nEven better, he could explain it all to customers. Better still, he could explain it to the customer\u0026rsquo;s technical and non-technical staff.\nThe customer was getting closer to making the deal and this engineer was central to the deal being made. The deal was large. Everyone was preparing with implementation calls, documents, and everything else needed for the final meeting.\nThe final meeting came, but the network engineer wasn\u0026rsquo;t on the call.\nSalespeople, solutions architects, and other engineers were frantic. Did he go home sick? Did we send him the wrong time on the calendar invitation?\nNo, as it turns out, he was laid off at lunch and the call was scheduled for 2PM.\nA choice was made to reduce engineering staff by some percentage, so the business essentially did this:\nSELECT * FROM employees WHERE job_family = \u0026#34;engineering\u0026#34; ORDER BY RAND() LIMIT 100 And they went through the list methodically until their percentage was met.\nThis is why taking layoffs personally will only cause you pain. Avoid looking for a deeper meaning and explanation where one does not exist.\nAs Yoda once said:\nFear is the path to the dark side. Fear leads to anger. Anger leads to hate. Hate leads to suffering.\nHow about the people who aren\u0026rsquo;t top performers?\nWhy don\u0026rsquo;t companies just lay off low performers? #(Again, try to to avoid looking for deeper meaning here, but I\u0026rsquo;ll go through this question anyway.)\nI\u0026rsquo;ve heard this many times and I\u0026rsquo;ve asked it myself before:\nWhy don\u0026rsquo;t companies just lay off the low performers? After all, some of these people might be toxic to teams and it\u0026rsquo;s clear they\u0026rsquo;re not committed to the company mission.\nIt\u0026rsquo;s a good argument! If a company wants to save money by spending less on salaries and benefits, why not target the people who aren\u0026rsquo;t doing the work first? You\u0026rsquo;d reduce expenses while improving the quality of the workforce!\nLong story short: It\u0026rsquo;s not that simple.\nA wise manager once told me that:\nIf you\u0026rsquo;re the last person to find out that your performance is inadequate, that\u0026rsquo;s not your fault. It\u0026rsquo;s your manager\u0026rsquo;s fault.\nManagers make mistakes. Whether they\u0026rsquo;re mistakes made on purpose or not, it often opens the company up to litigation.\nFor example, a manager might label an employee as a low performer due to factors outside their job performance. Perhaps they don\u0026rsquo;t look like the rest of the team, they have a different religious affiliation, or a different sexual orientation. They might not participate in after-work functions with the team where alcohol is involved. In some extreme situations, an employee might be labeled a low performer due to rejected romantic advances from the manager. (This last one seems crazy, but I\u0026rsquo;ve seen it happen once.)\nThe problem shows up when the company tries to do a layoff like this:\nSELECT * FROM employees WHERE performance_level = \u0026#34;unacceptable\u0026#34; ORDER BY RAND() LIMIT 100 Suddenly there are people on the low performing list who don\u0026rsquo;t belong there. 
However, at the executive level, they have no idea about the dubious performance reviews.\nThis is a fast path to wrongful termination lawsuits.\nFirst off, be sure that you\u0026rsquo;re ready to recommit to the company mission. If something happened that shook your commitment to the core, take some time to truly understand how you feel about your company. My post on red flags might help. Let\u0026rsquo;s get back on something positive. How do we avoid taking these events personally and push through to something better?\nUse your newfound power #As an engineer, you have more control over customer outcomes after a layoff than ever before. Confused? I\u0026rsquo;ll explain.\nI\u0026rsquo;ve worked in engineering, management, and leadership roles in technology since 2004. In many situations, engineers struggle to change business processes and persuade business-minded people to change their outlook on a topic. There\u0026rsquo;s another blog post on here about persuasion engineering that might be worth reading.\nLayoffs shake the foundations of any company, including the processes that brought the company to that point. It\u0026rsquo;s a great time to question any of these processes. Does the process save time? Does it benefit customers? Does it need to be modified? Should we throw it away completely?\nI\u0026rsquo;m not suggesting that you approach all processes and business justifications with immediate contempt, but have the courage to ask questions about them. Even long-held beliefs should be questioned.\nFor example, I recently had an exchange like this one:\nMe: \u0026ldquo;What if we offered customers the capability to do X?\u0026rdquo; Them: \u0026ldquo;Well, we don\u0026rsquo;t have any data to support that.\u0026rdquo; Me: \u0026ldquo;This could be an opportunity to guide customers to doing X on our Y product.\u0026rdquo; Them: \u0026ldquo;But we need something well defined that customers have asked for before going down that path.\u0026rdquo; Me: \u0026ldquo;We\u0026rsquo;ve followed that data for quite some time and the uptake from customers is low. We just went through a round of layoffs \u0026ndash; perhaps we should take a leap here and try something new?\u0026rdquo; The number one fear I have as someone who stays when a layoff happens is this: What if we\u0026rsquo;re too afraid to speak up? What if we\u0026rsquo;re too afraid to take a leap? What if fear of being next on the layoff list prevents us from doing something amazing?\nYou can be an advocate for change. It\u0026rsquo;s the best environment to make a change and think differently about where the company can best serve its customers.\nIt could end in one of two ways:\nYou change the future of the company for the better and delight your customers You\u0026rsquo;re on the termination list for the next layoff On the first one, you\u0026rsquo;ve done something truly incredible and you will likely receive recognition for it. You\u0026rsquo;ll also feel more engaged in your work.\nOn the second one, if the company decides you rocked the boat too much and decides to let you go, it\u0026rsquo;s for the best. You\u0026rsquo;re likely dealing with some levels of middle management who lead with fear rather than a drive to improve. Don\u0026rsquo;t take it personally and look for the next opportunity.\nPersonally, I\u0026rsquo;d rather go out in a blaze of glory trying to make the company a better place. 
😉\n","date":"25 June 2023","permalink":"/p/engineering-through-layoffs/","section":"Posts","summary":"Layoffs create traumatic times for many. Find ways to break through the frustration\nand pain. For those that stay, your ability to influence the business can grow. 🪴","title":"Engineering through layoffs"},{"content":"Most of my container workloads run on independent CoreOS cloud instances that I treat like pets. Keeping containers update remains a constant battle, but it\u0026rsquo;s still easier than running kubernetes.\nI wrote about using watchtower in the past to keep containers updated. It\u0026rsquo;s a simple container that does a few important things:\nIt monitors (via docker/podman socket) the running containers on the host It tracks the versions/tags of each container image It looks for updated versions of the container image in their upstream repositories Based on a configurable schedule, it pulls a new container image and restarts the container for updates I encourage you to read more about watchtower on GitHub. There\u0026rsquo;s plenty you can configure, including update intervals, how updates are handled, and how you can get notifications when an update happens.\nMy new deployments always need watchtower running. Luckily, we can combine Fedora CoreOS\u0026rsquo; initial provisioning system, called ignition, with podman\u0026rsquo;s new quadlet feature and launch watchtower automatically on the first boot.\nQuadlets #So what\u0026rsquo;s a quadlet?\nThe blog post explains it well by making containers more declarative via a familiar systemd syntax. Here\u0026rsquo;s an example .container file from the post:\n[Unit] Description=The sleep container After=local-fs.target [Container] Image=registry.access.redhat.com/ubi9-minimal:latest Exec=sleep 1000 [Install] # Start by default on boot WantedBy=multi-user.target default.target You can toss this into $HOME/.config/containers/systemd/mysleep.container for rootless user containers or in /etc/containers/systemd/mysleep.container for a container running as root.\nConfigure a quadlet on boot #As I mentioned earlier, I want a watchtower container running on my CoreOS nodes at first boot. 
Let\u0026rsquo;s start with a fairly basic butane file:\nvariant: fcos version: 1.4.0 passwd: users: - name: major groups: - wheel - sudo ssh_authorized_keys: - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDyoH6gU4lgEiSiwihyD0Rxk/o5xYIfA3stVDgOGM9N0 storage: files: - path: /etc/containers/systemd/watchtower.container contents: inline: | [Unit] Description=Watchtower container updater Wants=network-online.target After=network-online.target [Container] ContainerName=watchtower Image=ghcr.io/containrrr/watchtower:1.5.3@sha256:a924a9aaef50016b7e69c7f618c7eb81ba02f06711558af57da0f494a76e7aca Environment=WATCHTOWER_CLEANUP=true Environment=WATCHTOWER_POLL_INTERVAL=3600 Volume=/var/run/docker.sock:/var/run/docker.sock SecurityLabelDisable=true [Install] WantedBy=multi-user.target default.target Let\u0026rsquo;s break this file down:\nI start by adding a user named major that has administrative privileges an an ssh key (this is optional, but I like using my own username rather than core) The quadlet unit file lands in /etc/containers/systemd/watchtower.container and starts at boot time The quadlet file has some important configurations:\nI added environment variables to clean up outdated container images and check for updates once an hour The podman socket is mounted inside the watchtower container Security labels are disabled to allow for communication with the podman socket Mounting the podman socket and disabling security labels is not an ideal security approach. However, I\u0026rsquo;ve found that watchtower\u0026rsquo;s configuration and automation fits my needs really well and I retreive the image from a trusted source. If this won\u0026rsquo;t work for you, you can use podman\u0026rsquo;s built-in auto-update feature instead. From here, we convert the butane configuration into an ignition configuration. I\u0026rsquo;m launching this CoreOS node on VULTR, so I\u0026rsquo;ve named my files accordingly:\n$ butane vultr-coreos.butane \u0026gt; vultr-coreos.ign Let\u0026rsquo;s go 🚀 #I\u0026rsquo;m using VULTR\u0026rsquo;s CLI here in Fedora, but you can do the same steps via VULTR\u0026rsquo;s portal if needed. 
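Either way, it is worth validating the rendered file before burning a boot on it. The butane --strict flag and the upstream ignition-validate container image are assumptions on my part, so double check the names against the Butane and Ignition docs:
# Treat butane warnings as errors instead of carrying them into the .ign
butane --strict vultr-coreos.butane > vultr-coreos.ign
# Validate the rendered Ignition config with the upstream validator image
podman run --rm -i quay.io/coreos/ignition-validate:release - < vultr-coreos.ign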
Just paste in the ignition configuration into the large text box before launch.\n# Install vultr-cli in Fedora sudo dnf install vultr-cli # Launch the instance vultr-cli instance create --region dfw --plan vhp-2c-2gb-amd \\ --os 391 --label coreos-dfw-1 --host coreos-dfw-1 \\ --userdata \u0026#34;$(cat vultr-coreos.ign)\u0026#34; Let\u0026rsquo;s see how the container is doing:\n$ ssh major@COREOS_HOST Fedora CoreOS 38.20230430.3.1 Tracker: https://github.com/coreos/fedora-coreos-tracker Discuss: https://discussion.fedoraproject.org/tag/coreos [major@coreos-dfw-1 ~]$ sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a0024712c95d ghcr.io/containrrr/watchtower@sha256:a924a9aaef50016b7e69c7f618c7eb81ba02f06711558af57da0f494a76e7aca About a minute ago Up About a minute watchtower [major@coreos-dfw-1 ~]$ sudo podman logs watchtower time=\u0026#34;2023-05-31T14:01:12Z\u0026#34; level=info msg=\u0026#34;Watchtower 1.5.3\u0026#34; time=\u0026#34;2023-05-31T14:01:12Z\u0026#34; level=info msg=\u0026#34;Using no notifications\u0026#34; time=\u0026#34;2023-05-31T14:01:12Z\u0026#34; level=info msg=\u0026#34;Checking all containers (except explicitly disabled with label)\u0026#34; time=\u0026#34;2023-05-31T14:01:12Z\u0026#34; level=info msg=\u0026#34;Scheduling first run: 2023-05-31 15:01:12 +0000 UTC\u0026#34; time=\u0026#34;2023-05-31T14:01:12Z\u0026#34; level=info msg=\u0026#34;Note that the first check will be performed in 59 minutes, 59 seconds\u0026#34; Awesome! 🥳\nMy system rebooted for an ostree update shortly after provisioning and the container came up automatically both times.\n","date":"31 May 2023","permalink":"/p/podman-quadlet-watchtower/","section":"Posts","summary":"Podman\u0026rsquo;s new quadlet feature lets you specify container launch configuration via\nsimple systemd-like unit files. 📦","title":"Launch a watchtower container via podman quadlets"},{"content":"","date":null,"permalink":"/tags/quadlet/","section":"Tags","summary":"","title":"Quadlet"},{"content":"","date":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security"},{"content":"","date":null,"permalink":"/tags/watchtower/","section":"Tags","summary":"","title":"Watchtower"},{"content":"Anyone working with containers has likely heard of CoreOS by this point Haven\u0026rsquo;t heard about it? Don\u0026rsquo;t despair. I\u0026rsquo;ll catch you up on what you missed.\nFedora CoreOS offers a really fast pathway to running containers on hardware, in virtual machines, or in clouds. It delivers a lightweight operating system with all of the container technology that you need for running simple containers or launching a kubernetes deployment.\nBut that\u0026rsquo;s not the best part.\nCoreOS really shines due to its immutable OS layer1. The OS underneath your containers ships as a single unit and it automatically updates itself much like your mobile phone. An update rolls down, CoreOS sets it up as a secondary OS, reboots into that new update, and rolls back to the original one if there were any issues.\nMany people use CoreOS as the workhorse underneath kubernetes. Red Hat uses it underneath OpenShift as well. It\u0026rsquo;s even supported by the super light weight kubernetes distribution k3s.\nBut can you use CoreOS as a pet type instance that you use and maintain for long periods of time just like any other server? Absolutely!\nWhat\u0026rsquo;s this pet stuff about? 
#Whether you like it or not, there\u0026rsquo;s a cattle versus pets paradigm that took hold in the world of IT at some point. The basic ideas are these:\nWhen you take care of cattle, you take care of them as a group. Losing one or more of them would make you sad, but you know you have many others. As for pets, you spend a lot of time taking care of them and playing with them. If you lost one, it would be devastating. A fleet of web servers could be treated like cattle. Keep lots of them online and replace any instances that have issues.\nOn the other hand, databases or tier zero systems (everyone feels if it they went down) are like pets. You carefully build, maintain, and monitor these.\nHow does CoreOS fit in? #Many people do use CoreOS as a container hosting platform as part of a bigger system. It works really well for that. But it\u0026rsquo;s great as a regular cloud server, too.\nYou can run a single node CoreOS deployment and manage containers via the tools that you know and love. For example, docker-compose works great on CoreOS. I even used it to host my own Mastodon deployment.\nYou can also load up more user-friendly tools such as portainer to manage containers in a browser.\nMy development tools are missing! #😱 No vim? This is too minimal! What are we going to do?\nLuckily CoreOS comes with toolbox. 🧰\nToolbox gives you the ability to run a utility container on the system with some handy benefits:\nToolbox environments have seamless access to the user\u0026rsquo;s home directory, the Wayland and X11 sockets, networking (including Avahi), removable devices (like USB sticks), systemd journal, SSH agent, D-Bus, ulimits, /dev and the udev database, etc..\nThis means that the toolbox feels like a second OS on the system and it has all of the elevated privileges that you need to do your work. Simply run toolbox enter, follow the prompts, and you\u0026rsquo;ll end up with a Fedora toolbox that matches your CoreOS version. Need a different version, such as Fedora Rawhide? Just specify the Fedora release you want on the prompt:\n$ toolbox enter --release 39 No toolbox containers found. Create now? [y/N] y Image required to create toolbox container. Download registry.fedoraproject.org/fedora-toolbox:39 (500MB)? [y/N]: y Welcome to the Toolbox; a container where you can install and run all your tools. - Use DNF in the usual manner to install command line tools. - To create a new tools container, run \u0026#39;toolbox create\u0026#39;. For more information, see the documentation. ⬢[major@toolbox ~]$ Look at the toolbox create --help output to see how to create lots of different toolbox containers with different names and releases. If you go overboard and need to delete some toolboxes, just list your toolboxes with toolbox list and follow it up with toolbox rm.\nMy tool won\u0026rsquo;t work in the toolbox. #Some applications have issues running inside a container, even one that has elevated privileges on the system. CoreOS offers an option for layering packages on top of the underlying immutable OS.\nSimply run rpm-ostree install PACKAGE to layer a package on top of the OS. When rpm-ostree runs, it creates a new layer and sets that layer to be active on the next boot. That means that you need to reboot before you can use the package.\nDon\u0026rsquo;t want to reboot? There\u0026rsquo;s another option, but I recommend against it if you can avoid it2.\nYou can apply a package layer live on the system without a reboot with the --apply-live flag. 
Installing a package like mtr would look like this:\n$ sudo rpm-ostree install --apply-live mtr As soon as rpm-ostree finishes its work, mtr should be available on the system for you to use.\nHow do updates work? #There are two main technologies at work here.\nFirst, zincati checks for updates to your immutable OS tree. It runs on a configurable schedule that you can adjust based on your preferences.\nSecond, rpm-ostree handles the OS layers and switches between them at boot time. If you\u0026rsquo;re running off layer A and an update comes down (layer B), that layer is written to the disk and activated on the next boot. Should there be any issues booting up layer B later, rpm-ostree switches the system back to layer A. In these situations, your downtime might be extended a bit due to two reboots. Your system will come back up with the original OS layer activated.\nYou also get a choice of update streams. Want to live a bit more on the edge? Go for next or testing. You\u0026rsquo;re on the stable stream by default.\nAlthough I haven\u0026rsquo;t landed in this situation, it\u0026rsquo;s possible that the system boots into a new update where you notice a problem that doesn\u0026rsquo;t affect the boot. You can manually roll back to fix it.\nI have more questions. #Your first stop should be the Fedora CoreOS docs. There are also lots of ways to contact the development team and talk with the community.\nLove the idea of an immutable OS but you wish you had it for your desktop or laptop? Go check out Fedora Silverblue. 💻\nOkay, so it\u0026rsquo;s mostly immutable. You can edit configuration in /etc and you can layer more packages on top of the base OS layer if you need them. However, CoreOS maintainers discourage adding layered packages if you can avoid it.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nWhen you apply some packages and make them available immediately, you may lose track of which ones were applied live and which ones are available on the next reboot. Things can get a bit confusing if you suddenly change your mind about applying a package live or not.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"25 May 2023","permalink":"/p/coreos-as-pet/","section":"Posts","summary":"CoreOS provides a fast track to running containers with a light weight immutable OS\nunderneath. This doesn\u0026rsquo;t mean that you can\u0026rsquo;t keep it around as a pet instance. 🐕","title":"CoreOS as a pet"},{"content":"","date":null,"permalink":"/tags/toolbox/","section":"Tags","summary":"","title":"Toolbox"},{"content":"","date":null,"permalink":"/tags/wireguard/","section":"Tags","summary":"","title":"Wireguard"},{"content":"","date":null,"permalink":"/tags/email/","section":"Tags","summary":"","title":"Email"},{"content":"🥵 This post is long. If you need a TL;DR, just hop down to the end.\nOne of my toots fell into a Fedora Development mailing list discussion recently that was titled \u0026ldquo;It\u0026rsquo;s time to transform the Fedora devel list into something new.\u0026rdquo; As you might imagine, that post blew up.\nHere\u0026rsquo;s the toot:\nAfter 20 days and 218 emails from 63 participants, the discussion continues. 
A few people reached out with questions and comments to better understand where I was coming from in that tweet.\nLong story short: My beef1 isn\u0026rsquo;t about the venue, the technology, or the people.\nMy issues are centered around the discourse itself and the time required to parse it.\nTime #One of my favorite leaders of all time once caught me off guard during a development conversation:\nThem: Major, what\u0026rsquo;s the most valuable thing you have that you can give to someone else?\nMe: What I know?\nThem: No.\nMe: My ability to understand what they need?\nThem: No.\nMe: Okay, just tell me.\nThem: Your time.\nHe made a point that time is something you can\u0026rsquo;t get back and it\u0026rsquo;s why companies pay people. Companies pay people for their time.\nTime spent doing hard things.\nTime spent away from family.\nTime spent building something new or repairing something that\u0026rsquo;s broken.\nTime spent doing tasks that nobody else wants to do.\nHumans jump at anything that saves them a little time. We stream Netflix instead of going to the video store or getting DVDs in the mail. We sign up for Amazon Prime to save time on shopping. We rely on appliances to do work for us so we have time for other things.\nMailing lists #Anyone working on open source projects of any scale have likely used mailing lists from time to time. If you\u0026rsquo;re not familiar, here\u0026rsquo;s an example workflow:\nYou write an email to a special email address with a question, comment, or a request for help That email goes into a system which distributes the email to people who are interested in that topic Mailing list subscribers submit their replies asynchronously until the issue is resolved (or everyone is exhausted) Hopefully you got what you needed But this post isn\u0026rsquo;t about technology. You can have asynchronous discussions of varying quality levels just like these in other systems, such as Discourse, forums, Reddit, or bug trackers. There\u0026rsquo;s nothing inherently bad about mailing lists in general. Mailing lists just happen to be very common in open source projects.\nSo what is this post about?\nUnorganized discourse #When someone asks for comments on a topic, especially a controversial one, they get a wide array of replies:\nPeople who don\u0026rsquo;t understand and have questions People who dislike change in all its forms People who dislike your change and provide use cases describing how it\u0026rsquo;s bad People who dislike your change People who heard there\u0026rsquo;s a fight and they can\u0026rsquo;t stay away People who like parts of your idea but want to make changes People who like everything you said2 People who meant to reply to another thread but replied on yours instead People who are upset about someone else who top posted 🤭 No reply at all Filtering through all of these to get to the heart of the argument is incredibly tedious and time consuming. 🥵\nAs Matthew Miller mentioned in his Fedora thread, long threads often cause people to lose the meaning of the discussion. They reply on third, fourth, and fifth level comments in the thread that often veer into other topics. Nobody can rein in the discussion at that point, but people do try. Again, these issues crop up in all kinds of discussion technology systems. They are not unique to mailing lists.\nImprovements #The problem is not the mailing list. 
It\u0026rsquo;s the discourse.\nKeeping the discourse more organized is a good option, but its effectiveness is largely governed by the project\u0026rsquo;s communication rules and the civility of those who communicate in the thread. How do we make this better?\nProvide the why #I\u0026rsquo;m often amazed at some of the change proposals I see where the technical solution looks so elegant. It\u0026rsquo;s so easy to maintain. It won\u0026rsquo;t be difficult to test. We could implement it so quickly. This person is a genius!\nThen I stop. Wait a minute. Why do we need to do this?\nAlways include some of the backstory and use cases behind the change. Ask yourself a few questions:\nWhy do we need to make the change? Who benefits? Who is harmed? What happens if we don\u0026rsquo;t make the change? Do we have alternatives to the proposed change? Can the change be broken up into smaller pieces? Include these in your original post to ensure everyone is on a level playing field at the start. This reduces replies requesting more information or questioning the value of the change based on a lack of understanding.\nIntermission summary #When I\u0026rsquo;ve written posts that blew up, I took time to read through the replies and summarize the comments thus far. I do this as a reply to my original post3 with a bulleted list.\nAs an example:\nThanks for all the replies! So far, this is what I\u0026#39;ve heard in the thread: 1. Most people think the first change makes sense and should be done soon 2. The second change requires some more thought based on the use cases provided by lumpynuggets25 3. We need better documentation to explain why we are making this change This saves time for newcomers to the thread since they can get a brief summary of the comments made thus far without needing to read all of them first. In addition, it gives some of the people who replied a chance to say \u0026ldquo;Yes, that\u0026rsquo;s what I meant. Thank you.\u0026rdquo; It also redirects the conversation back to the original topic and reduces the veering off into other topics.\nBe patient #I\u0026rsquo;ve seen many threads where the original author obviously felt the need to reply to every comment that came through. This lengthens the thread unnecessarily and causes you to think more about replying rather than understanding the replies. (Other participants in the thread may start doing this themselves.)\nWhen you send something and the replies start rolling in, just be patient. Sometimes others will engage each other in the thread and answer your questions for you. This gives you time to read and understand what people are saying. You also get the opportunity to build that intermission summary from the previous section. 😉\nLet it go #Just like Elsa sang in Frozen, sometimes you have to let it go:\nLet it go, let it go\nCan\u0026rsquo;t hold it back anymore\nLet it go, let it go\nTurn away and slam the door\nI don\u0026rsquo;t care what they\u0026rsquo;re going to say\nLet the storm rage on\nThe cold never bothered me anyway\nThere\u0026rsquo;s always going to be that time where you need to walk away. Take the feedback you received, build on it, and deliver something valuable for people. Let the people who want to be angry for the sake of being angry just be angry on their own.\nEvery thread reaches that point where the people with real comments have provided all of their feedback. Other people just get exhausted with the conversation. That leaves the people who have nothing better to do, and of course, the trolls. 
(Don\u0026rsquo;t feed the trolls.)\nTL;DR #Just to summarize:\nMailing lists aren\u0026rsquo;t evil, but unorganized discourse in any medium becomes a time sink and that\u0026rsquo;s evil Always provide the \u0026ldquo;why\u0026rdquo; along with the \u0026ldquo;what\u0026rdquo; Take time to repeat back what was said in a summary Let the comments roll in before considering a limited amount of replies Know when to let it go and build something great The word beef is used informally in English as a replacement for complaint.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nAlways compliment these people. When people fully agree, they don\u0026rsquo;t reply that often because they feel like they have nothing to add. Sticking your neck out to say \u0026ldquo;This is good and here\u0026rsquo;s why\u0026hellip;\u0026rdquo; is just as treacherous as disagreeing.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nSome people frown on replying to yourself in a mailing list thread. Are you adding value to the thread? If so, ignore them.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"10 May 2023","permalink":"/p/mailing-list-beef/","section":"Posts","summary":"My issues with open source mailing lists aren\u0026rsquo;t with the technology,\nbut with unorganized pattern of the discourse itself. 🖇️","title":"My beef with mailing lists"},{"content":"I enjoy taking Fedora with me to various clouds and ensuring that it works well on all of them. I\u0026rsquo;ve written posts on taking Fedora to Hetzner cloud and deploying custom Fedora images to AWS with image builder.\nAlthough Oracle Cloud isn\u0026rsquo;t a cloud I use frequently, a question came up earlier this week in the Fedora community about how to take an image there. I love a good challenge, so buckle up and follow along as we launch a Fedora 38 instance on Oracle Cloud.\nBe sure to create an Oracle Cloud account first! The rest of the blog post requires CLI interactions with the Oracle Cloud API. Let\u0026rsquo;s go! 🎒\nOracle\u0026rsquo;s cloud tools #Although you can do the image upload and import via the web interface, I enjoy getting to learn a cloud provider\u0026rsquo;s CLI tools in case I need them later. Oracle offers a CLI called oci that you can install via pipx:\n$ pipx install oci-cli installed package oci-cli 3.26.0, installed using Python 3.11.3 These apps are now globally available - create_backup_from_onprem - oci done! ✨ 🌟 ✨ $ oci --version 3.26.0 The CLI tool has a helpful authentication wizard that configures your credentials on your local machine. Run oci -i and follow the prompts.\n$ oci session authenticate Enter a region by index or name(e.g. 
1: af-johannesburg-1, 2: ap-chiyoda-1, 3: ap-chuncheon-1, 4: ap-dcc-canberra-1, 5: ap-hyderabad-1, 6: ap-ibaraki-1, 7: ap-melbourne-1, 8: ap-mumbai-1, 9: ap-osaka-1, 10: ap-seoul-1, 11: ap-singapore-1, 12: ap-sydney-1, 13: ap-tokyo-1, 14: ca-montreal-1, 15: ca-toronto-1, 16: eu-amsterdam-1, 17: eu-dcc-dublin-1, 18: eu-dcc-dublin-2, 19: eu-dcc-milan-1, 20: eu-dcc-milan-2, 21: eu-dcc-rating-1, 22: eu-dcc-rating-2, 23: eu-frankfurt-1, 24: eu-jovanovac-1, 25: eu-madrid-1, 26: eu-marseille-1, 27: eu-milan-1, 28: eu-paris-1, 29: eu-stockholm-1, 30: eu-zurich-1, 31: il-jerusalem-1, 32: me-abudhabi-1, 33: me-dcc-muscat-1, 34: me-dubai-1, 35: me-jeddah-1, 36: mx-queretaro-1, 37: sa-santiago-1, 38: sa-saopaulo-1, 39: sa-vinhedo-1, 40: uk-cardiff-1, 41: uk-gov-cardiff-1, 42: uk-gov-london-1, 43: uk-london-1, 44: us-ashburn-1, 45: us-chicago-1, 46: us-gov-ashburn-1, 47: us-gov-chicago-1, 48: us-gov-phoenix-1, 49: us-langley-1, 50: us-luke-1, 51: us-phoenix-1, 52: us-sanjose-1): 51 Please switch to newly opened browser window to log in! You can also open the following URL in a web browser window to continue: https://login.us-phoenix-1.oraclecloud.com/v1/oauth2/authorize?... A long URL will appear on the console. Open that URL in a browser and finish the login process in your browser. Once you finish, you should see a message like this:\nEnter the name of the profile you would like to create: DEFAULT Config written to: /home/major/.oci/config Try out your newly created session credentials with the following example command: oci iam region list --config-file /home/major/.oci/config --profile DEFAULT --auth security_token Replace major with your username and try out your authentication:\n$ oci iam region list --config-file /home/major/.oci/config --profile DEFAULT --auth security_token { \u0026#34;data\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;AMS\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;eu-amsterdam-1\u0026#34; }, ... Success! Let\u0026rsquo;s move on.\nUploading the Fedora image #Most cloud providers have a custom image process that involves uploading the image to some object storage and then telling the compute service where the image is located. Oracle Cloud follows the same pattern.\nFirst up, we need our compartment ID. This is a way to logically separate infrastructure at Oracle Cloud. We will store it as an environment variable called COMPARTMENT_ID\n$ COMPARTMENT_ID=$(oci iam compartment list --auth security_token | jq -r \u0026#39;.data[].\u0026#34;compartment-id\u0026#34;\u0026#39;) We need an object storage bucket to hold our image file. Naming things isn\u0026rsquo;t my strong suit, so I\u0026rsquo;ll call my bucket majors-fedora-upload-bucket:\n$ oci os bucket create --name majors-fedora-upload-bucket \\ --compartment-id $COMPARTMENT_ID --auth security_token { \u0026#34;data\u0026#34;: { \u0026#34;approximate-count\u0026#34;: null, \u0026#34;approximate-size\u0026#34;: null, \u0026#34;auto-tiering\u0026#34;: null, ... Within the data that is returned, look for the namespace key. You will need the value from that key when you do the image import step. Now we need a Fedora image. The latest Fedora 38 QCOW image should work fine.\n$ wget https://mirrors.kernel.org/fedora/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 (Oracle Cloud has aarch64 instances and you can use a Fedora aarch64 image for those instead. 
This example focuses on x86_64.)\nUpload the image to our bucket:\n$ oci os object put \\ --bucket-name majors-fedora-upload-bucket \\ --file ~/Downloads/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 \\ --auth security_token Upload ID: 0f9b7008-b2d5-185f-08c7-8aeae904f136 Split file into 4 parts for upload. Uploading object [####################################] 100% { \u0026#34;etag\u0026#34;: \u0026#34;c0f1bd77-46c6-4e65-a2bb-0d0b0dafe586\u0026#34;, \u0026#34;last-modified\u0026#34;: \u0026#34;Fri, 05 May 2023 21:33:41 GMT\u0026#34;, \u0026#34;opc-multipart-md5\u0026#34;: \u0026#34;iK+1qeXizzd1r+5lEZX6cQ==-4\u0026#34; } This may take a while depending on your upload speed.\nImport the Fedora image #If you forgot to save the namespace from your bucket when you created it, just look it up again with the bucket get command:\n$ oci os bucket get --name majors-fedora-upload-bucket \\ --auth security_token | jq -r \u0026#39;.data.namespace\u0026#39; axr6swqvwoeb At this point, we must tell Oracle\u0026rsquo;s compute service to import the image we just uploaded to the object storage. Let\u0026rsquo;s run another command to do the import:\n$ oci compute image import from-object --auth security_token \\ --bucket-name majors-fedora-upload-bucket \\ --compartment-id $COMPARTMENT_ID \\ --name \u0026#34;Fedora-Cloud-Base-38-1.6.x86_64.qcow2\u0026#34; \\ --namespace axr6swqvwoeb \\ --display-name Fedora-Cloud-Base-38-1.6 \\ --operating-system Fedora \\ --operating-system-version 38 \\ --source-image-type QCOW2 \\ --launch-mode PARAVIRTUALIZED { \u0026#34;data\u0026#34;: { \u0026#34;agent-features\u0026#34;: null, \u0026#34;base-image-id\u0026#34;: null, \u0026#34;billable-size-in-gbs\u0026#34;: null, Look for the id in the output that was returned. Use that identifier to check the status of the import.\n$ export IMAGE_ID=ocid1.image.oc1.phx.aaaaaaaayiu26gv67exe7mvpxkq76zwh44otstzktzaf2f6vqe5izqzrciqq $ oci compute image get --auth security_token \\ --image-id $IMAGE_ID | jq -r \u0026#39;.data.\u0026#34;lifecycle-state\u0026#34;\u0026#39; IMPORTING This step takes about ten minutes for most images I tested.\nOracle Cloud uses work requests for most long running actions in the cloud. You can get their status via the CLI tools, but I found that to be extremely tedious. For a percentage completed and a progress bar, go to the Custom Images panel in the web interface, click your image name, click the Create image link under Work requests and monitor the percentage there. 😉\nCreate an instance #The oci CLI tool is good at many things, but it\u0026rsquo;s tedious with many others. Launching a VM instance via the CLI was incredibly frustrating for me, so I usually go to the web interface to get this done. (You could also use tools like Terraform for this step.)\nLet\u0026rsquo;s run through the steps:\nGo to the Instances panel in the web UI Click the Create instance button at the top Click Change Image in the Image and Shape section Click the My images box and then the checkbox next to the Fedora image you imported Click Select Image at the bottom Choose your preferred shape (instance type) Choose your SSH key Click Create at the bottom Now you should have an instance beginning to launch! 🚀\nAfter it\u0026rsquo;s online, you should be able to ssh1 to the instance using the fedora user:\n$ ssh fedora@129.146.75.xxx [fedora@fedora-38-oracle-whee ~]$ cat /etc/fedora-release Fedora release 38 (Thirty Eight) Enjoy running Fedora on Oracle Cloud! 
🎉\nIf you aren\u0026rsquo;t able to access the instance, you might be missing an internet gateway or a security group to allow traffic through to your instance. Here are direct links to the console instructions for those items:\nAdding security groups Working with internet gateways \u0026#160;\u0026#x21a9;\u0026#xfe0e; ","date":"5 May 2023","permalink":"/p/fedora-oracle-cloud/","section":"Posts","summary":"Add a Fedora x86_64 or aarch64 image to Oracle Cloud and launch an instance. 🚀","title":"Fedora on Oracle Cloud"},{"content":"","date":null,"permalink":"/tags/oracle/","section":"Tags","summary":"","title":"Oracle"},{"content":"If your house is like mine, you have devices that you really trust and then there are those other devices.\nMy trusted device group includes my work computers, a Synology NAS, and a few other computers. The bucket of untrusted devices includes Chromecasts, TVs, tablets, phones, and whatever random devices that my kids\u0026rsquo; friends bring over.\nA VLAN helps with traffic segmentation by isolating certain traffic over the same network cable. A router can manage tons of different networks via the same downlink cable(s) to a switch or other equipment. You can tell a switch to only allow certain VLANs through a port or you can have the port only offer one network that happens to be one of your VLANs.\nThe best analogy for a VLAN is a cable within a cable. It\u0026rsquo;s almost like being able to add thousands of individual segmented networks in the same ethernet cable.\nVLANs are possible via a networking standard called 802.1Q. Network devices add a small 802.1Q header, often called a VLAN tag, to each ethernet packet. These tags offer a way for network devices to filter traffic on a network.\nIt works well for devices that don\u0026rsquo;t understand VLANs, too. For example, if you have a device that isn\u0026rsquo;t VLAN-aware, you can plug it into a switch port that is configured to offer a VLAN network as the native VLAN. That device happily uses the network it is offered via the switch port without knowing that the switch is adding VLAN tags to all traffic that the device creates.\nLet\u0026rsquo;s get a VLAN working on a Mikrotik router.\nAdding a VLAN #Mikrotik devices have a great command line interface and I\u0026rsquo;ll use that for this post.\nIn this example, my networks are set up like this:\nI have a default LAN network: 192.168.10.0/24 My VLAN network is tagged with tag 15: 192.168.15.0/24 The basic building block of any network on a Mikrotik device is an interface. We start by creating a VLAN interface:\n/interface vlan \\ add interface=bridge name=vlan15 vlan-id=15 My router uses a bridge called bridge (gotta keep things simple), but you may need to use something like ether2 or ether3 if you\u0026rsquo;re using a physical network interface instead of a bridge.\nNow I can add an IP address to my new network interface:\n/ip address \\ add address=192.168.15.1/24 interface=vlan15 network=192.168.15.0 DHCP sure does make IP address configuration easier, so let\u0026rsquo;s create an address pool and a DHCP server instance for our VLAN network. 
Choose whatever range makes sense for you but my default is usually 10-254:\n/ip pool \\ add name=vlan15 ranges=192.168.15.10-192.168.15.254 Add a DHCP server and a DHCP network configuration:\n/ip dhcp-server \\ add address-pool=vlan15 interface=vlan15 name=vlan15 /ip dhcp-server network add address=192.168.15.0/24 dns-server=192.168.15.1 gateway=192.168.15.1 The DHCP server uses our vlan15 address pool for handing out addresses to devices on the VLAN.\nTesting the VLAN #I like to give the VLAN a quick test with my desktop PC before I start messing around with the switch configuration. We just need to add a VLAN device via nmcli and verify that DHCP and routing are working.\nLet\u0026rsquo;s add a new interface called VLAN15 to handle traffic tagged with VLAN 15:\n# Replace enp7s0 with your ethernet interface name! $ nmcli con add type vlan ifname VLAN15 con-name VLAN15 dev enp7s0 id 15 Connection \u0026#39;VLAN15\u0026#39; (f7cd4cdf-d2ce-4dc7-9ed8-f40102ff3e42) successfully added. Did we get an IP address and a route?\n$ ip addr show dev VLAN15 7: VLAN15@enp7s0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff inet 192.168.15.52/24 brd 192.168.15.255 scope global dynamic noprefixroute VLAN15 valid_lft 409sec preferred_lft 409sec inet6 fe80::7f62:9d9a:dc8c:3fd7/64 scope link noprefixroute valid_lft forever preferred_lft forever $ ip route show dev VLAN15 192.168.15.0/24 proto kernel scope link src 192.168.15.52 metric 400 Awesome!\nUsing it with a switch #I have a Mikrotik CRS that I use for my main home switch and it handles VLAN tagging. My goal here is to trunk VLAN 15 from the router down to the switch so that a particular port ONLY exposes the VLAN 15 network to a device. I don\u0026rsquo;t want that device to have any idea that VLAN 15 even exists. It should think that there\u0026rsquo;s a regular old LAN network coming through the switch port.\nThis is called an access port. Devices on the port have no idea that VLAN tagging is happening on the switch, but the switch tags all traffic coming from the port.\nIn this example, I set up switch port ether18 to take VLAN 15 and make it available as the native VLAN to anything connected to that switch port:\n# Translate VLAN 15 to the native VLAN on ether18 # This is creating the access port /interface ethernet switch ingress-vlan-translation \\ add ports=ether18 customer-vid=0 new-customer-vid=15 # Ensure that traffic tagged with VLAN 15 can exit the switch # through the uplink to the router /interface ethernet switch egress-vlan-tag \\ add tagged-ports=ether1 vlan-id=15 # Add VLAN table entries to show which ports are members of the VLAN /interface ethernet switch vlan \\ add ports=ether1,ether18 vlan-id=15 # Don\u0026#39;t allow anyone on port ether18 to tag their traffic with a # different VLAN ID and circumvent our access port settings /interface ethernet switch \\ set drop-if-invalid-or-src-port-not-member-of-vlan-on-ports=ether1,ether18 At this point, I can connect a device to port ether18 and it gets an IP address via DHCP on the 192.168.15.1 network automatically!\nFor further reading on these settings, check out Mikrotik\u0026rsquo;s wiki page of switch configuration examples.\nExtra credit #Once you begin segmenting your network, review your router configuration to see how these networks are allowed to communicate with one another. 
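As a rough sketch of where that review often ends up, a single forward-chain rule on the router is enough to keep the new VLAN away from a trusted network. This example reuses the two networks from earlier in this post (the default LAN at 192.168.10.0/24 and the VLAN at 192.168.15.0/24), and the comment text is just a label I made up, so adjust everything to match your own setup:
/ip firewall filter add chain=forward src-address=192.168.15.0/24 dst-address=192.168.10.0/24 action=drop comment=drop-vlan15-to-trusted-lan
A rule like this lives in the forward chain because the router only sees traffic between the two subnets when it routes across them.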
The default on Mikrotik devices is to allow internal networks to freely communicate with each other since that makes everything easier to get started. However, I don\u0026rsquo;t want my Chromecast to talk to my NAS.\nMikrotik\u0026rsquo;s IP firewalling capabilities give you lots of methods for limiting access between networks. Be sure to read up on the IP/Firewall/Filter documentation. If you use IPv6 on your network, be sure to review the IPv6/Firewall/Filter docs, too.\nEven if you think that you aren\u0026rsquo;t using IPv6 internally, you might actually be using it. 😉\n","date":"20 April 2023","permalink":"/p/mikrotik-vlan/","section":"Posts","summary":"Segment your home network easily with a VLAN on a Mikrotik router. 🖥️","title":"Add a VLAN on a Mikrotik router"},{"content":"","date":null,"permalink":"/tags/mikrotik/","section":"Tags","summary":"","title":"Mikrotik"},{"content":"","date":null,"permalink":"/tags/networking/","section":"Tags","summary":"","title":"Networking"},{"content":"","date":null,"permalink":"/tags/vlan/","section":"Tags","summary":"","title":"Vlan"},{"content":"","date":null,"permalink":"/tags/1password/","section":"Tags","summary":"","title":"1password"},{"content":"1Password remains a core part of my authentication workflow for password, two factor authentication, and even ssh keys. It uses Linux system authentication to authenticate access from various command line tools and that\u0026rsquo;s quite helpful for automation.\nSometimes I need a quick copy of a particular password or two factor code while I\u0026rsquo;m in another application. That\u0026rsquo;s where 1Password\u0026rsquo;s quick access menu comes in very handy.\nIt creates a pop-up in the middle of the screen that you can use immediately to search your vault. Once you find what you need, you can press various hot keys to get the right data into your clipboard.\nPress the right arrow key once the right credential is highlighted and you\u0026rsquo;ll get instructions on which key combinations to press:\nctrl-c copies the username ctrl-shift-c copies the password ctrl-alt-c copies the two factor authentication code In most window managers, 1Password handles the keybinding for launching the quick access menu, which is ctrl-shift-space by default. You can probably guess by the title of this post that it doesn\u0026rsquo;t work out of the box with Sway. 😉\nIntegrating with Sway #Start by adding a keybinding for the quick access menu. I added mine in ~/.config/sway/config.d/launcher.conf:\n# Start 1Password Quick Access bindsym Control+Shift+Space exec /usr/bin/1password --quick-access Save the file and reload Sway\u0026rsquo;s configuration with mod+shift+c (mod is usually your Windows key unless you changed it). Now press ctrl-shift-space and the quick access menu should appear!\n","date":"19 April 2023","permalink":"/p/1password-quick-access-sway/","section":"Posts","summary":"1Password has a handy quick access launcher and you can bring it on screen for fast\naccess to passwords and two factor codes in Sway. 🔐","title":"1Password quick access in Sway"},{"content":"","date":null,"permalink":"/tags/android/","section":"Tags","summary":"","title":"Android"},{"content":"I aired my grievances about Ooma\u0026rsquo;s phone service recently on my Mastodon account. They require you to call them to cancel and then their convoluted cancellation process spins you in circles. Luckily I had a prepaid credit card with a dollar or two left on it and I used that as my primary billing card. Problem solved. 
👏\nLater in the Mastodon thread, I mentioned how my replacement solution costs 85 cents a month and someone asked me how I do it. It\u0026rsquo;s not the easiest process. However, once you get it working, it doesn\u0026rsquo;t require much upkeep.\nBut before we start\u0026hellip;\nWho the heck needs a home phone in 2023? #Yes, this is a common question I get. Mobile phones, tablets, and laptops all have so much communication connectivity now that home phones aren\u0026rsquo;t really relevant.\nI like having one around for my kids to use and it\u0026rsquo;s nice to have a backup in case there are issues with the mobile phone networks. One of my kids has a mobile phone and the other does not. It also allows my neighbors\u0026rsquo; kids (who may or may not have phones of their own) to call their parents at any time.\n85 cents a month? #We\u0026rsquo;re talking $0.85 USD per month. For real. That price covers a direct inbound dialing (DID) number in various area codes throughout the USA. Some countries might have different pricing, but this works in the USA.\nYou might be wondering if there are additional costs. Well, yes, there are.\nOutbound calls are $0.01 per minute and inbound calls are $0.009 per minute. That would leave you with a bill of about $10.85 for 1,000 minutes (just under 17 hours of calls).\nWhat\u0026rsquo;s involved? #When you get a phone call on your mobile phone, this is what happens:\nSomeone dials your number from their phone ✨ Magic ✨ Your phone rings and you can pick it up For this home phone solution, it goes something like this:\nYou buy a DID number from a VOIP provider You connect a SIP phone, an ATA device, or your phone/computer to a SIP endpoint Someone calls your DID number Something on your end rings and you can answer the call I say something here because it can be anything. An old telephone with a cradle, wireless DECT phones, fancy SIP phones, or a computer.\nSIP phones are phones that connect directly to a network via ethernet or wi-fi. You configure a SIP account on the device itself and it connects to your VOIP provider, registers itself, and waits for inbound or outbound calls.\nATA devices are analog telephone adapters that translate the modern world of VOIP for a regular old phone. This allows you to take existing cordless phones (or corded phones) and connect them to a VOIP account. For these devices, you normally access the ATA via a web interface and configure them.\nSIP phone vs. ATA #I chose ATA devices because they\u0026rsquo;re cheaper to buy, easy to maintain, and you can use anything with them that has a phone jack. Got Grandma\u0026rsquo;s old red phone with a cradle? It works. Got a cordless DECT multi-phone system with an answering machine? It works.\nAs long as it does tone dialing, you\u0026rsquo;re set. (Let\u0026rsquo;s not bring pulse dialing into this, please.)\nThe challenge with these is that they\u0026rsquo;re not being manufactured that often lately. Mobile phones have really pushed these devices to the corners of the market. Most VOIP equipment is aimed at big businesses and these small devices can be hard to find.\nMy current favorite is the Grandstream HT801. It supports only one phone line, so get the HT802 if you need two lines at home.\nThe device has a port for power, an ethernet port, and a phone line port. That\u0026rsquo;s all you need. Plug the phone into the phone line port, connect the ethernet port to your router or switch, and you\u0026rsquo;re set!\nGetting a phone number #My favorite VOIP service is voip.ms.
Their prices are reasonable, their control panel is easy to use, and they have lots of business functions that you can use for free (such as an IVR, which is a \u0026ldquo;press 1 for sales\u0026rdquo; menu system).\nAfter you set up an account on voip.ms and deposit some money, go to DID Numbers and click Order DID. You get lots of options of numbers to buy including international numbers. (If you have family overseas who would like to call you as a local call from their phone, this could be a great option!)\nFor US numbers, you\u0026rsquo;ll then get a menu asking you to pick a state and then pick numbers. You can also look for numbers that contain a set of numbers (including the area code). These are helpful if you want to spell something with your number (young kids have no idea what fun this used to be) or get something that\u0026rsquo;s easy to remember.\nYou\u0026rsquo;ll get two options for paying for the number:\nOne option is a flat rate all-you-can-eat plan, usually for $3-$10 monthly. This might be a good option if you plan to use your home phone a lot. Another option will be the $0.85/month option where you pay extra for all calls. This is my favorite option. Complete the remaining steps you\u0026rsquo;ll have your number!\nCreate a sub account #The voip.ms system has a concept of \u0026ldquo;sub-accounts\u0026rdquo; where you have individual SIP logins for each device. This is highly recommended for security reasons.\nClick the Sub Accounts menu and then Create Sub Account. Set up your password and choose from the options around allowing international calls and how you want your calls routed. Most of the defaults are fine here.\nThe username/password combination is the one you\u0026rsquo;ll use for your ATA later, so be sure to remember those.\nConnecting the DID and sub account #Go back to the DID Numbers menu and click Manage DIDs. Find the number you bought and look for the orange pencil underneath the Actions heading. When the page loads, make a few changes:\nFirst, look for the SIP/IAX option in the Main section. Choose the sub account you created earlier from the drop down list. It should be something like SIP/123456_yourusername. This routes the DID number to the devices connected to that sub account.\nUnder the DID Point of Presence section, choose a server that is close to you. The voip.ms team puts green check marks next to the ones it recommends for your location but you can choose any location you wish.\nUnder CallerID Name Lookup, you can enable it for $0.008 per lookup. That means that 100 inbound calls will cost you about $0.80 for lookups total.\nYou have an option for enabling SMS/MMS for an additional fee, but it\u0026rsquo;s not terribly easy to use.\nApply the changes at the bottom of the page. Now you\u0026rsquo;re all set to connect your phones or ATA device!\nAdding an ATA #voip.ms has a massive page full of information about nearly every ATA they support. There are configuration instructions in a link under each device that give you tips on how to best configure your device.\nIf you picked up an HT801/HT802 like I recommended, you can go straight to the HT802 configuration guide. The instructions there work just fine for the HT801, too.\nMake sure your ATA device is powered on and plugged into your home ethernet network (or wi-fi if it is so equipped). You can get the IP address for your ATA device by checking DHCP leases on your router or you can pick up your phone connected to the ATA and press asterisk/star (*) three times. 
A friendly computer voice will tell you the IP address of the ATA device.\nFollow the configuration instructions to the letter! Some of these settings, especially for audio codecs, are critical for getting high quality, reliable phone calls.\nIt\u0026rsquo;s time to make phone calls once you\u0026rsquo;ve configured your device! You should be able to dial out from your home phone and receive calls on the same number. If you can make calls but can\u0026rsquo;t receive calls, double check that your sub account shows Registered on the voip.ms portal home page. If it doesn\u0026rsquo;t appear as Registered in green, then voip.ms has no way to tell your device there\u0026rsquo;s a phone call coming.\nGo back and double check your account username and password. Also verify that your ATA configuration matches exactly to the recommended configuration provided by voip.ms.\nExtra credit #Although I get a very small number of spam calls and robocalls on my voip.ms DID, there\u0026rsquo;s a chance your experience might be different.\nvoip.ms offers quite a few services to help here, especially CallerID Filtering. You can block anonymous calls or callers who have their CallerID marked as unavailable.\nIt\u0026rsquo;s also pretty easy to set up a Digital Receiptionist (IVR) where you can make callers press a number or jump through a hoop or two before your phone rings. Once you create your IVR, run back to DID Numbers and then Manage DIDs to change your routing settings to use the new IVR. (Look for IVR just under SIP/IAX on the DID settings page.)\nBefore you celebrate, be sure to turn on automatic billing! You can tell voip.ms to fill your account with some money when it crosses a certain threshold. I have mine set to add $25 each time I drop under $10. They will send you nag emails as soon as your balance gets low but you don\u0026rsquo;t want to forget about it.\n","date":"18 April 2023","permalink":"/p/85-cents-home-phone/","section":"Posts","summary":"After trying several services for home phones, I found a solution that costs me about $0.85 per month. ️️☎️","title":"My home phone costs 85 cents a month"},{"content":"","date":null,"permalink":"/tags/phone/","section":"Tags","summary":"","title":"Phone"},{"content":"","date":null,"permalink":"/tags/voip/","section":"Tags","summary":"","title":"Voip"},{"content":"","date":null,"permalink":"/tags/cost/","section":"Tags","summary":"","title":"Cost"},{"content":"Public clouds are the all-you-can-eat buffet of infrastructure. Nearly any IT problem can be solved in minutes with a few clicks or API requests.\nThis is not your average buffet.\nEvery item at the buffet comes with a cost. Many of these costs are difficult to understand. Even if you do understand them, estimating your potential usage of these services is challenging.\nMost services are pay-per-use where you pay based on the time used or the amount used. Other charges are one-time costs that get billed immediately.\nSome pricing, like object storage, looks totally straightforward at first glance. Then you find millions upon millions of half penny charges that add up to real money.\nThen there\u0026rsquo;s the situation that angers me the most: charges for infrastructure you deployed and forgot to clean up!\nHow do we tackle this problem? Let\u0026rsquo;s get right to it.\nSet a budget #AWS allows you to set a budget and get alarms when you\u0026rsquo;ve consumed part of your budget or exceeded it entirely. This is a great way to catch unexpected charges. 
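The console steps below are the easiest path, but the same budget can be scripted. Here is a rough sketch with the AWS CLI, where the account ID and the two JSON files are placeholders you would fill in yourself (the files describe the budget amount and the notification subscribers):
$ aws budgets create-budget --account-id 111111111111 --budget file://budget.json --notifications-with-subscribers file://notifications.json
Either way, the end result is the same set of alerts.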
A budget also helps you find forgotten deployments.\nHead on over to the AWS Billing Dashboard first. Look for Budgets on the left navigation menu and click it.\nOnce you click Create a Budget, you get a few handy options:\nChoose the Zero spend budget if you plan to stay on the free tier. You get alerts when your bill crosses $0.01. Choose the Monthly cost budget to set your own budget. I use a little more than what the free tier offers on some services, so I go for the monthly cost budget. Choose a budget amount and add some email recipients to be notified.\nAs of this post\u0026rsquo;s writing, AWS will send you notifications under three different conditions:\nYou used \u0026gt; 85% of your budget. You used 100% of your budget. Your forecasted spend is expected to cross your budget limit. Keep a close watch for the forecasted spend emails. These are usually the earliest warning that you\u0026rsquo;re potentially in danger of exceeding your budget.\nMonitor your inbox closely for these emails.\nAnalyzing your spend #AWS offers some tools for analyzing your cloud spend and these can give you clues to investigate. Start by going to AWS Cost Management and clicking on Cost Explorer on the left side.\nLook for Group by on the right and choose Service from the drop down. The chart and table provided should help you see which service is running up charges.\nIn my situation, I had a lot of charges from S3 that I didn\u0026rsquo;t expect. The bar chart showed a big jump in S3 expenses. I broke down the expenses by choosing two new options on the right side:\nClick Usage type in the Group by drop down. Choose S3 (Simple Storage Service) from the Service drop down. My actual storage consumption costs were quite low, but my tier 1 and 2 requests increased massively. I began sifting through my scripts that do things in S3. One poorly written script was downloading larger and larger amounts of data from S3 frequently. These increased request counts caused my bill to increase by 4x!\nRemember the forgotten #As I mentioned earlier, one of the most painful situations involves big charges for infrastructure that you forgot about. Perhaps it\u0026rsquo;s an EC2 instance you spun up for testing. Maybe it\u0026rsquo;s a Lambda you tried and forgot about. Perhaps you provisioned a NAT gateway (ultimate pain).\nMonitoring your budgets will help a lot with forgotten infrastructure, but you only find out after your bill increased.\nMost of my mistakes happen with EC2-related infrastructure such as instances, volumes, or snapshots. EC2 is a region-specific service, though. Who wants to go through their EC2 infrastructure region by region?\nLuckily, AWS provides the EC2 Global View. You get a look across all of your EC2 infrastructure in all regions to get counts of instances, networks, volumes, and auto scaling groups. This page helped me find some forgotten snapshots that kept dinging me with small bills each month.\nAnother option is to provision infrastructure with terraform. Terraform allows you to specify your cloud infrastructure in code. From there, you can build it (terraform apply) and tear it down (terraform destroy) easily.\nAnything that you build with terraform is easily destroyed. Destroyed completely. If terraform can\u0026rsquo;t clean up your infrastructure for some reason, it notifies you about the resources it could not delete.\nPlan ahead for costs #Storing your terraform code in a GitHub repository allows you to make pull requests for changes and see what will change.
You can run terraform via GitHub Actions.\nOnce you have that running, consider adding Infracost to your repository. Infracost analyzes each pull request and explains the billing changes based on what you\u0026rsquo;re deploying (or destroying). It replies in the PR with a comment detailing the potential charges that your change might incur.\nThis is a great way to avoid really painful charges (like the $600 dedicated IP charge for CloudFront) and track your cloud infrastructure costs over time.\n","date":"2 March 2023","permalink":"/p/monitor-aws-bill/","section":"Posts","summary":"Nobody likes a surprise bill. Learn some ways to keep your AWS bill under control and\navoid that end of the month panic. 😱","title":"Monitor your AWS bill"},{"content":"","date":null,"permalink":"/tags/blog/","section":"Tags","summary":"","title":"Blog"},{"content":"","date":null,"permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo"},{"content":"","date":null,"permalink":"/tags/iam/","section":"Tags","summary":"","title":"Iam"},{"content":"This blog moved from Wordpress to Hugo back in 2020 and that was a great decision. Static blog generators free you from the vast majority of blog hosting security risks (but some still exist) and they give you the freedom to host your blog almost anywhere.\nI\u0026rsquo;ve tried a few hosting methods for the blog so far:\nOn a VPS or on cloud instances With a third party static hosting specialist, such as Netlify In GitHub or GitLab Pages Using CloudFlare Pages Object storage with a CDN (Backblaze/CloudFlare) All of them have their advantages and disadvantages.\nFor example, running on a VPS with a web server like nginx gives you a ton of control over every aspect of the site, but then there\u0026rsquo;s a server to manage. GitHub pages provides a fast and free option for hosting a static blog from a git repository, but you give up lots of control and access to metrics.\nI\u0026rsquo;ve been a bit leery of free offerings in the past because they can easily be taken away or they may suddenly begin charging for the service. Often times the free services turn me into the product.\nAfter hacking with AWS CloudFront this week at work, I set off on an expedition to see if I could host this blog with it.1\nArchitecture #As with most cloud-related deployments, we stitch together a few different cloud services to make the magic happen. Here\u0026rsquo;s the goal at a high level:\nGitHub Actions should build the blog content using Hugo and ship that content to a bucket in AWS S3. AWS CloudFront serves the content from the S3 bucket to visitors around the world. Logs from website visitors are placed in a different S3 bucket. How much does all of this cost? The first step for any AWS deployment involves a quick look at the AWS Free Tier list:\nIAM roles and policies are already free. 🎉 AWS S3 gives you 5GB free for the first 12 months and then it\u0026rsquo;s $0.023/GB/month after that.2 AWS CloudFront allows 1TB of data transfer and 10M requests per month. We\u0026rsquo;re building our static content in GitHub Actions and that\u0026rsquo;s free already. My blog uses about 218MB of storage and transfers less than 1TB per month. My bill should easily come in under $1.\nLet\u0026rsquo;s get started.\nConfigure the storage #Our first stop on the blog hosting train is AWS S3. The S3 bucket holds the static files that make up the site. 
🪣\nWe need two buckets:\nOne bucket for our static blog content Another bucket for our CloudFront access logs Let\u0026rsquo;s start with the bucket for our static blog content:\nCreate a bucket to hold your website content and choose your preferred region. Scroll down to the Block Public Access settings for this bucket section and uncheck the Block all public access box. Acknowledge that you want this content to be public by checking the box. Add any tags (optional) and click Create bucket. Go back to the bucket listing and click the bucket you just made. Click the Properties tab and scroll to the bottom. Find the Static website hosting section and click Edit on the right. Click Save changes. (The defaults fit almost everybody.) Let\u0026rsquo;s go back and make the logs bucket:\nCreate a second bucket to hold the CloudFront logs. Use the defaults this time and click Create bucket. Our storage is ready to go!\nGetting certificates #AWS provides a domain validated certificates for free via AWS Certificate Manager (ACM). Once you make a certificate request, ACM provides you with a DNS record that must appear when ACM queries your domain name.\nLet\u0026rsquo;s request a certificate:\nGo to the Request certificate page. Ensure Request a public certificate is active and click Next. Provide the fully qualified domain name for your blog. That\u0026rsquo;s major.io for this blog. Do not include any http:// or https:// there. Click Request certificate. Now you should see your requested certificate in the list along with the Pending validation status. Click the certificate ID and take a look at the Domains section on the next page. You should see a CNAME name and CNAME value on the far right.\nGo to your DNS provider and create a DNS record that matches. ACM will query your domain using the CNAME name and it expects to see the CNAME value returned. Once the DNS record is in place, wait a minute or two for ACM to check the DNS record and flip the status to a green Issued status.\nGo on to the next step once you see that green status on the certificate.\nProvision the CDN #This is where we begin connecting some dots. CloudFront will serve content from the S3 bucket via a worldwide CDN and it uses the certificate we created in the last step.\nStart by clicking Create Distrbution at the top right of the main CloudFront page:\nFor Origin domain, choose the S3 bucket you created for your static blog content. CloudFront will immediately suggest using the website endpoint instead, so click Use website endpoint. Choose a memorable name in case you host multiple sites on CloudFront. Scroll down to Viewer and change Viewer Protocol Policy to Redirect HTTP to HTTPS. Scroll down to Alternate domain name (CNAME) and use the same domain name that you used for your certificate. Just below that line, choose your certificate from the list under Custom SSL certificate. Enable HTTP/3 if you want to be fancy. 😉 For Default root object, type index.html so that it will be served when a user requests a bare directory, like https://example.com/tags/. Enable Standard logging and choose your logs bucket (not the blog static content bucket). The page might ask you enable ACLs on your bucket so CloudFront can drop off logs. Click to accept that option. Click Create distribution. CloudFront distributions take some time to deploy the first time and after modifications. Be patient!\nAt this point, we have a storage bucket ready to hold our content and a TLS-enabled CDN ready to serve the content. 
Now we need to build the content and ship it to S3.\nGitHub Actions + OpenID #Most people will generate static authentication credentials, add them as GitHub secrets, and call it a day. That\u0026rsquo;s not for me. I prefer to use OpenID authentication and I avoid putting any credentials into GitHub.\nHow does this process work?\nGitHub asks AWS if it can assume a specific role that has permissions to do things at AWS. AWS will verify that it\u0026rsquo;s really GitHub making the request and that the request came from a valid source at GitHub. AWS then provides temporary credentials to GitHub to assume the AWS role and make changes in AWS services. GitHub has some great documentation on this process, but I\u0026rsquo;ll cover it briefly here as well.\nWe start by making an identity provider at AWS that allows us to trust GitHub as an identity source:\nGo to the IAM Identity Providers page and click Add Provider. Click OpenID Connect. Use https://token.actions.githubusercontent.com for the provider URL. Click Get thumbprint to hash GitHub\u0026rsquo;s OpenID certificate. Enter sts.amazonaws.com in the Audience box. Click Add provider. Now we need a policy that tells AWS what our GitHub Actions workflow is allowed to do. We use the principle of least privileges to limit access as much as possible:\nGo to the IAM Policies page. Click Create policy. Click the JSON tab and delete everything in the big text box Paste in my template: { \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;VisualEditor0\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:PutBucketPolicy\u0026#34;, \u0026#34;s3:ListBucket\u0026#34;, \u0026#34;cloudfront:CreateInvalidation\u0026#34;, \u0026#34;s3:GetBucketPolicy\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:cloudfront::AWS_ACCOUNT_ID:distribution/CLOUDFRONT_DISTRIBUTION\u0026#34;, \u0026#34;arn:aws:s3:::STATIC_CONTENT_BUCKET/*\u0026#34;, \u0026#34;arn:aws:s3:::STATIC_CONTENT_BUCKET\u0026#34; ] } ] } Replace a few things in this template:\nSTATIC_CONTENT_BUCKET is the name of your static content S3 bucket that you created first. AWS_ACCOUNT_ID is your numeric account ID for your AWS account (click your name at the top right of the AWS console to get the ID) Go back to your CloudFront distribution and use the ID for CLOUDFRONT_DISTRIBUTION (should be all capital letters and numbers) Click Next, give the policy a friendly name, and finish creating the policy.\nFinally, we need a role that glues these two things together. We tie the role to the identity provider (to allow GitHub to authenticate) and then tie the policy to the role (to allow GitHub Actions to do things in AWS).\nOn the IAM Roles page, click Create role Choose Web identity at the top. Find token.actions.githubusercontent.com in the Identity provider drop down and click it. Choose sts.amazonaws.com as the Audience. Click Next. Find the policy you just created in the previous step and check the box next to it. Give your role a friendly name and click Create role. 🚨 WE ARE NOT DONE YET! You must restrict this role to your repository to prevent other repos from assuming your role. 🚨\nGo back to the role you just created and click the Trust relationships tab. You must add a StringLike condition that limits access to only your GitHub repository! 
Click Edit trust policy and add a StringLike condition like my example below:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Federated\u0026#34;: \u0026#34;ARN_FOR_GITHUB\u0026#34; }, \u0026#34;Action\u0026#34;: \u0026#34;sts:AssumeRoleWithWebIdentity\u0026#34;, \u0026#34;Condition\u0026#34;: { \u0026#34;StringEquals\u0026#34;: { \u0026#34;token.actions.githubusercontent.com:aud\u0026#34;: \u0026#34;sts.amazonaws.com\u0026#34; }, \u0026#34;StringLike\u0026#34;: { \u0026#34;token.actions.githubusercontent.com:sub\u0026#34;: \u0026#34;repo:major/major.io:*\u0026#34; } } } ] } The StringLike condition shown above will limit access to the role to only my blog repository, major/major.io, and deny access to any other repositories. Be sure to change the username and repository name to match your GitHub user/organization and repository name. Save the policy when you\u0026rsquo;re finished.\nNow we can create a workflow to build our blog and ship it to S3!\nGitHub workflow #My blog has a workflow that might work for you as a starting point. Just in case it disappears, here\u0026rsquo;s an excerpt:\nname: Deploy to AWS S3/CloudFront on: push: branches: - \u0026#34;main\u0026#34; workflow_dispatch: permissions: id-token: write contents: read concurrency: group: \u0026#34;cloudfront\u0026#34; cancel-in-progress: true defaults: run: shell: bash jobs: build: runs-on: ubuntu-latest steps: - name: Setup Hugo uses: peaceiris/actions-hugo@v2 with: hugo-version: \u0026#39;latest\u0026#39; extended: true - name: Checkout uses: actions/checkout@v3 with: submodules: recursive - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v1.7.0 with: role-to-assume: arn:aws:iam::911986281031:role/github-actions-major.io-blog role-duration-seconds: 900 aws-region: us-east-1 - name: Build with Hugo env: HUGO_ENVIRONMENT: production HUGO_ENV: production run: hugo --minify - name: Deploy to S3 run: hugo deploy --force --maxDeletes -1 --invalidateCDN Reading from top to bottom:\nWe give permissions to write the id token that we get back from AWS and read-only contents to the repo itself. Concurrent runs are not allowed (we don\u0026rsquo;t want two updates shipping at the same time). The hugo setup and repo checkout are standard for nearly any hugo blog. Next we assume the role at AWS using the ARN of our role that we created in IAM. (Go back to your role in IAM and look for ARN at the top right to get your ARN.) Hugo builds the static content as it normally would. Finally, we deploy new content to the S3 bucket, delete anything that doesn\u0026rsquo;t belong, and we invalidate the CDN cache3. Now we need to tell hugo how to deploy our blog. 
Open up your blog\u0026rsquo;s configuration file (usually config.toml) and add some configuration:\n# Deployment configuration for S3/CloudFront [deployment] [[deployment.targets]] name = \u0026#34;BLOG_DOMAIN_NAME\u0026#34; URL = \u0026#34;s3://STATIC_CONTENT_S3_BUCKET?region=AWS_REGION\u0026#34; cloudFrontDistributionID =\t\u0026#34;CLOUDFRONT_DISTRIBUTION\u0026#34; [[deployment.matchers]] pattern = \u0026#34;^.+\\\\.(js|css|png|jpg|gif|svg|ttf)$\u0026#34; cacheControl = \u0026#34;max-age=2592000, no-transform, public\u0026#34; gzip = true [[deployment.matchers]] pattern = \u0026#34;^.+\\\\.(html|xml|json)$\u0026#34; gzip = true Replace YOUR_BLOG_DOMAIN_NAME with your blog\u0026rsquo;s domain name, such as major.io. On the URL line, provide your static content S3 bucket (the first one you created) and the region where you created it. Paste your CloudFront distribution ID to replace CLOUDFRONT_DISTRIBUTION.\nCommit all of the changes and push them! Make sure that the GitHub action runs well and can authenticate to AWS properly.\nOnly one step remains\u0026hellip;\nDNS #Sending visitors to your new site in CloudFront is one DNS record away!\nGo back to the list of CloudFront distributions in your AWS console and click on the one you created earlier. Look for the Distribution domain name at the top left and you should see a domain that looks like ********.cloudfront.net. You will need an ALIAS or CNAME record that points to this domain name in your DNS records.\nI tend to use ALIAS records if I am using an apex domain with no subdomain, such as major.io. If your blog is on a subdomain, such as blog.example.com, you may want to use a CNAME instead.\nEither way, point your ALIAS or CNAME records to the distribution domain name shown on your CloudFront distribution page. DNS records take a while to propagate through various caches scattered over the globe, so it may take some time before everyone see your updated DNS records.\nSummary #In this lengthy post, we did plenty of things:\n🪣 Configured S3 buckets to hold our static blog content and CDN logs 🚀 Deployed a CloudFront distribution to serve our content quickly to visitors around the world 🔑 Built IAM roles and policies to avoid placing any sensitive credentials in our GitHub repository 🔧 Re-configured hugo to deploy content directly to S3 and flush the CDN cache 🚚 Assembled a GitHub workflow to build the static content and ship it to S3 I love learning new things and this is one of many that I\u0026rsquo;ve enjoyed. Hopefully you enjoyed it, too! 💕\nSpoiler alert! This blog is already on AWS S3 and CloudFront as of today. 😉\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nYes, there are some charges for requests, but these charges are so small, you\u0026rsquo;re likely not to notice. Putting a CDN out front greatly reduces those requests even further since origin requests to S3 will only happen for cache misses. 1M requests to S3 comes out to about $5 per month and that\u0026rsquo;s orders of magnitude more than I can use.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nInvalidating the cache is not required, but it does help with getting new content served by the CDN as soon after a deployment as possible.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"17 February 2023","permalink":"/p/cloudfront-migration/","section":"Posts","summary":"New experiences bring joy! After working with fun AWS CloudFront hacks at work this week,\nI decided to migrate this blog to AWS S3 and CloudFront. 
⛅","title":"Migrating to AWS CloudFront"},{"content":"","date":null,"permalink":"/tags/s3/","section":"Tags","summary":"","title":"S3"},{"content":"As I advance in my career, there\u0026rsquo;s one activity I consistently enjoy: mentoring.\nI love helping other people discover their hidden potential and work through obstacles. Mentoring involves lots of listening and asking questions. My mentees ask some great questions as well, and some of them are difficult to answer.\nOne that I\u0026rsquo;ve had a lot recently is:\nWhen do I know it\u0026rsquo;s time to move on from my current project or even my company?\nTake a quick look at any news source and you\u0026rsquo;ll find news about big layoffs and a hot labor market. Moving on from one project to another is one thing, but moving to another company is something different entirely.\nThis becomes even more complicated in the United States since, for most of us, our health insurance and disability benefits are tied to our current employer. 😰\nI\u0026rsquo;ve answered this question many times for many different people but I never covered it on the blog. This post will explain how I built my system of red flags and I\u0026rsquo;ll explain some of the red flags that have been useful to me along the way.\nThink gradients, not binary values #I worked for a great leader in a previous job who had a talent for giving very direct feedback that I could put to good use quickly. He introduced me to impostor syndrome, the theory of constraints1, and empathy in a business setting.\nOur company was going through quite a few challenges at the time. I asked him about how to know when to stay on a project, dig in, and improve it or when it\u0026rsquo;s time to go. He sat down with me and explained his system of red flags. 🚩\nHe suggested making a list of two to three changes within the company that would reduce my faith in the company. These things would sow doubt about company leadership, products, or the approach to the customer experience. He explained that every change should be looked at with full context of the situation and that no single item should cause someone to make a rash decision.\nThere was one rule he was clear about: You must come up with your list of red flags when you aren\u0026rsquo;t under duress.\nThese red flags must be in place before the ship starts to sink. Why? All kinds of cognitive bias set in when you\u0026rsquo;re frustrated or frightened. Once the body\u0026rsquo;s fight or flight system kicks in, all bets are off. It\u0026rsquo;s difficult to make sound judgments at that point.\nI asked him if he could give me one of his red flags as an example. He said he keeps a list of people in the company who he respects and watches for them to change roles or leave the company. Earning his respect at the company meant:\nYou are committed to quality work You are committed to the customer experience You bring your best each day His list changed over time and usually had five to ten people on the list at any one time. I asked him if he could give me another example and he immediately gave me that look and said:\nOnly you know what your red flags really are.\nMy red flags #Once I explain this concept to someone, they immediately want to know what I use. Here\u0026rsquo;s a brief overview of mine.\nGreat people leave #I\u0026rsquo;ve watched incredibly talented people come and go during my career. However, much like my previous leader, I keep a list of five to ten people that I genuinely respect.
The people on my list meet my criteria:\nYou bring your best effort to work consistently You admit when you don\u0026rsquo;t know something You\u0026rsquo;ve been in a tough spot and doubled down on making an improvement You take time to help others through tough situations You realize that success at work means more than writing code or managing infrastructure When people on my list leave the company, it starts raising red flags.\nAm I going to leave because one talented person left? Likely not.\nAm I going to leave because half of my list has left the company? Likely not. However, this will make me stop to think about my current situation and evaluate some other red flags.\nReduced belief #Survival in almost any company requires you to believe that your contributions create value for someone somewhere. You also need a belief that your chance for further opportunities in the company should improve as the product improves.\nA previous CEO once talked to us about \u0026ldquo;wobbles\u0026rdquo;. A wobble happens when your belief in your leadership and overall company direction is shaken. He argued that these wobbles are totally helpful and we must work to understand them. Simply shutting down a wobble only makes it worse.\nThis is one of those red flags where you must ask yourself how you could potentially change the outcome.\nYou might be the catalyst for a movement that changes the way your company thinks about your product and its customers! 🎉\nYou also might become a pariah for an effort that was doomed before it even started. 😥\nThis is where a great relationship with your manager comes in handy. If you have a relationship where you can talk about these big ideas and your manager supports you, then you have a good chance of success.\nIf not, then your best choice might be to walk away.\nDegraded quality #Quality means something different to everyone. When I talk about quality here, I am talking about delivering a high quality product and talking to customers about it honestly. Every product in the world has bugs and shortcomings that everyone wishes were better.\nIf a product needs to ship with ten features, but only eight are ready, then it might make sense to ship what you have. The key is that you\u0026rsquo;re honest with customers and say \u0026ldquo;Okay, we have 80% of what you wanted and we wanted to get it to you now so you can get started sooner.\u0026rdquo;\nThat\u0026rsquo;s not a quality issue. You did your best to hit the deadline, gave customers the largest amount of features you could, and then explained honestly that there\u0026rsquo;s work left to be done. That\u0026rsquo;s an honest job and it maintains customer trust.\nI look for situations where teams choose to release products without certain pieces and then find ways to hide it from the industry or from customers. An even bigger red flag goes up for me when those teams are confronted about it and they don\u0026rsquo;t see a problem.\nA lack of honesty with customers is a step down a very dark path for any company.\nWhat now? #Start small! 🤏\nMake a list of items, that if they changed, would frustrate you or make you nervous about your job. If it\u0026rsquo;s something that increases your stress, write it down.\nWhen red flags start to appear, talk to your manager about them as soon as possible!\nDon\u0026rsquo;t let them fester and get worse while you\u0026rsquo;re silently becoming more and more upset and stressed. 
Try to bring them up with your manager in the context of your experience with them.\nWhy focus on experiences? They are unique to you. They cannot be taken away. If someone on your list leaves the company and you are nervous about the future of the company after that, then that\u0026rsquo;s your experience. Start with something like:\nWhen he/she left the company, I became very nervous about the future of our product and I\u0026rsquo;m stressed about meeting our deadlines to customers.\nGreat managers will usually want to know more about your feelings and the impact that person had on the project. It\u0026rsquo;s entirely possible that your leaders had no idea how integral that person was to the project over time.\nI hope you can use this red flag framework as a method for reducing your stress and taking some of the emotion out of your decision to stay or leave. In the end, only you can make that decision. 🫂\nThis concept seriously changed my career. If you\u0026rsquo;d like to learn more, your first stop should be to read The Phoenix Project. Want to dig deeper? Read The Goal for a much more detailed and manufacturing-centric approach.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"5 January 2023","permalink":"/p/red-flags/","section":"Posts","summary":"Every job has its ups and downs, but when is it the right time to double down\nor the right time to leave? Make a list of red flags that help you decide. 🚩","title":"Red flags"},{"content":"Keeping things updated quickly becomes a monotonous task. I\u0026rsquo;m surrounded by devices that demand updates on different frequencies. Phones, computers, tables, cloud instances, containers, and even my car need constant attention for updates that improve security or fix bugs. (Sometimes the updates cause bugs, but let\u0026rsquo;s forget about those for now) 😉\nMy container infrastructure runs on Fedora CoreOS and it updates itself. It has an immutable layer underneath my containers that updates using ostree.\nHowever, keeping containers updated is a constant battle. Updating the containers themselves is fairly easy with a podman pull or docker pull followed by a stop and start. It\u0026rsquo;s a bit easier with docker-compose, but it\u0026rsquo;s still a nuisance to remember to update.\nEnter watchtower #Watchtower is incredibly handy and surprisingly simple to operate. At a high level, it reads the container tags from each running container and watches the upstream repositories for updates. When an updated container appears, watchtower springs into action, pulls the new container, and replaces the old container with the new one.\nIt accepts arguments to configure all kinds of aspects of updating containers. You can exclude certain containers from updates, choose your update interval, and send notifications when updates occur.\nAlso, if manually updating containers is something you find fascinating, watchtower can notify you about updates. Then you get that great feeling of running lots of commands on your own. 
(Wait, surely nobody likes that!)\nDeploy #There are plenty of configuration snippets in watchtower\u0026rsquo;s documentation, but you can start off with something as simple with this in your docker-compose.yaml:\nservices: watchtower: image: ghcr.io/containrrr/watchtower volumes: - /var/run/docker.sock:/var/run/docker.sock restart: always privileged: true My configuration on my mastodon deployment looks something like this:\nservices: watchtower: container_name: watchtower image: ghcr.io/containrrr/watchtower hostname: watchtower.tootchute.com volumes: - /var/run/docker.sock:/var/run/docker.sock environment: - WATCHTOWER_CLEANUP=true - WATCHTOWER_POLL_INTERVAL=86400 - WATCHTOWER_NOTIFICATIONS=shoutrrr - WATCHTOWER_NOTIFICATION_URL=discord://DISCORD_WEBHOOK_KEY@DISCORD_CHANNEL_ID restart: always privileged: true This configuration sends me notifications via my Discord server whenever watchtower starts or when a container is updated.\nThe docker.sock volume mount allows watchtower to interact with the container daemon underneath watchtower. This could easily be done with docker, moby-engine, or podman.\nI\u0026rsquo;d like to remove the privileged setting at some point soon but I haven\u0026rsquo;t figured out a way to allow watchtower to talk to the docker.sock without it. 🤔\nFurther considerations #If you don\u0026rsquo;t trust the upstream where you download your containers, be careful using watchtower. A malicious container could be uploaded to a particular container image repository and your system might update itself to the malicious container before the malicious container is found. Then again, if you don\u0026rsquo;t trust the upstream where you download your containers, you should be building these containers yourself. 😉\nFor more complex services that might need some extra care around updates, such as database services, you may want to exclude them from automatic updates. You can run multiple instances of watchtower with customized configurations for different sets of containers.\n","date":"4 January 2023","permalink":"/p/watchtower/","section":"Posts","summary":"Watchtower keeps an eye on your running containers and updates them when new containers appear upstream. 📦","title":"Automatic container updates with watchtower"},{"content":"","date":null,"permalink":"/tags/mastodon/","section":"Tags","summary":"","title":"Mastodon"},{"content":"Mastodon caught my attention at the end of 2022 in the wake of all the Twitter shenanigans. At a high level, Mastodon is an implementation of ActivityPub and you can use it for \u0026ldquo;micro-blogging\u0026rdquo; much like you would use Twitter. (This is a really quick, high-level explanation and I skipped over plenty of detail.) 😉\nThis post covers my journey on Mastodon that led me to self-host my own Mastodon instance in a fairly reliable way.\nEarly start #My early Mastodon adventure started out much like the story of Goldilocks:\nI started out on mastodon.social, but it was too big. There were so many people on the server that the federated timeline was flying by. Rules seemed to be enforced well, but it was a bit like Twitter all over again. I deployed my own, but it was too small (the federated timeline was empty). Finding new people to talk to or following new topics was difficult. Finally, I discovered Fosstodon after several friends in the open source community joined. It felt just right. The admins of the Fosstodon instance are fantastic. 
Sure, there was downtime as the usage levels increased, but the admin team was quick to communicate the issues at hand along with future plans. My interactions with the community were almost all positive and it was fun to reconnect with some open source contributors that I had not spoken to in ages.\nAs time went on, I read various toots1 about Mastodon servers changing owners, suddenly going offline, or altering rules abruptly. Someone talked about taking control of your online identity and that Mastodon should be included in that.\nThis aligned with my existing approach to hosting blogs on my own domains. Also, after the Twitter fiasco, I\u0026rsquo;d like people to find me via the systems where I have full control, such as my blog.\nSelf-hosted adventure #So far, there are three main deployment methods for Mastodon that I\u0026rsquo;ve found:\nThe official guide uses a custom Ruby, lots of steps, and systemd Using docker-compose Deploying in kubernetes using Mastodon\u0026rsquo;s charts or the ones from Bitnami Official guide #Although the official guide looks fairly straightforward, it has a lot of steps. I struggled to get the right Ruby version compiled on Fedora 37 and I found spots where I needed to tweak the guide to make things work. Also, I wasn\u0026rsquo;t sure if I could get the steps done the same way again if I needed to migrate the instance or recover from a failure.\ndocker-compose #Next up was docker-compose. I use docker-compose quite often and I know my way around many of the rough edges. However, I couldn\u0026rsquo;t get the upstream compose file to work properly. Sometimes the database migrations would not run. Sometimes certain pieces of the Mastodon infrastructure couldn\u0026rsquo;t find each other. As soon as I tried to set passwords for postgres and redis, I couldn\u0026rsquo;t get Mastodon\u0026rsquo;s rails app to work again.\nIn addition, the docker-compose file from upstream builds containers on your local machine rather than pulling the official containers that were built and tested upstream. That\u0026rsquo;s a quick fix in the compose file, but I still had issues during the deployment.\nkubernetes #Finally, I looked at kubernetes. Surely you can just add kubernetes to something and make it better, right? 😆\nThe Bitnami charts made it much further along than the charts from upstream, but I still had errors flowing about database migrations cut off during their run and occasionally unreachable postgres servers.\nThere must be a better way.\nDeployment #For this Mastodon deployment to work well, I needed a few things:\nThe deployment should be mostly hands off. Said another way, moving it to another server or re-deploying should be a docker-compose up -d plus one or two commands maximum. It should be relatively easy to back up and restore. The big file of secret environment variables should be generated ahead of time and not at deploy time. After plenty of trial and error, I came up with this plan:\nStart with an empty secrets environment file. Deploy all of the containers and run the rake db:setup to generate the environment file. Copy the environment file to .env.production so that it can be used along with upstream\u0026rsquo;s docker-compose file. Delete the entire deployment. Remove all existing volumes and containers. Add Caddy to the deployment to handle TLS and serving cached content. Deploy again with docker-compose up -d and run rake db:setup to prepare the database with the environment file. 
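One small habit worth adding to that plan: the generated environment file ends up holding every secret for the deployment, so keep its permissions tight. A quick sketch, assuming the file sits next to the compose file:
$ chmod 600 .env.production
That keeps other local users from reading the secrets while docker-compose can still hand the file to the containers.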
Without further ado, let\u0026rsquo;s get to the guide!\nGenerate the environment file #Here\u0026rsquo;s my initial docker-compose file:\nversion: \u0026#39;3\u0026#39; services: postgres: restart: always container_name: postgres image: docker.io/library/postgres:14 networks: - internal_network healthcheck: test: [\u0026#39;CMD\u0026#39;, \u0026#39;pg_isready\u0026#39;, \u0026#39;-U\u0026#39;, \u0026#39;postgres\u0026#39;] volumes: - postgres:/var/lib/postgresql/data environment: - POSTGRES_HOST_AUTH_METHOD=trust - POSTGRES_PASSWORD=my-super-secret-postgres-password - POSTGRES_USER=postgres redis: restart: always container_name: redis image: redis:7 networks: - internal_network healthcheck: test: [\u0026#39;CMD\u0026#39;, \u0026#39;redis-cli\u0026#39;, \u0026#39;ping\u0026#39;] volumes: - redis:/data web: image: tootsuite/mastodon container_name: web restart: always env_file: .env.production command: bash -c \u0026#34;rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000\u0026#34; networks: - external_network - internal_network healthcheck: test: [\u0026#39;CMD-SHELL\u0026#39;, \u0026#39;wget -q --spider --proxy=off localhost:3000/health || exit 1\u0026#39;] ports: - \u0026#39;127.0.0.1:3000:3000\u0026#39; depends_on: - postgres - redis # - es volumes: - mastodon-public:/mastodon/public/system streaming: image: tootsuite/mastodon container_name: streaming restart: always env_file: .env.production command: node ./streaming networks: - external_network - internal_network healthcheck: test: [\u0026#39;CMD-SHELL\u0026#39;, \u0026#39;wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1\u0026#39;] ports: - \u0026#39;127.0.0.1:4000:4000\u0026#39; depends_on: - postgres - redis sidekiq: image: tootsuite/mastodon container_name: sidekiq restart: always env_file: .env.production command: bundle exec sidekiq -c 1 depends_on: - postgres - redis networks: - external_network - internal_network volumes: - mastodon-public:/mastodon/public/system healthcheck: test: [\u0026#39;CMD-SHELL\u0026#39;, \u0026#34;ps aux | grep \u0026#39;[s]idekiq\\ 6\u0026#39; || false\u0026#34;] networks: external_network: internal_network: internal: true volumes: mastodon-public: {} postgres: {} redis: {} I\u0026rsquo;ve made a few alterations to the upstream compose file:\nI\u0026rsquo;m using the upstream containers from docker hub rather than building them on startup My containers use docker volumes instead of mounting local directories The sidekiq container only uses one worker (keeping resource usage low) At this point, I can run docker-compose up -d and all of the containers are running. Now we can use Mastodon\u0026rsquo;s interactive configuration tool to generate our environments file:\ndocker-compose run --rm web bundle exec rake db:setup Go through the interactive configuration and answer all of the questions there.\nFor SMTP, I used Mailgun since it\u0026rsquo;s very inexpensive for my Mastodon use case. Once you set up your account there, look for the SMTP credentials under your domain in Mailgun\u0026rsquo;s control panel. The Mastodon setup process will ask for those credentials.\nAlso, I keep all of my assets in Backblaze B2 to avoid clogging up all of the storage on my VM that runs Mastodon. Create a public bucket in Backblaze and create some access keys. When Mastodon asks for your S3 endpoint, use https://s3.us-west-001.backblazeb2.com. 
If it asks for a hostname, you can use s3.us-west-001.backblazeb2.com.\nOnce the setup completes, take the environments file that prints to the screen and store that as .env.production.\nDelete the deployment (for real) #This is going to sound weird, but we need to throw everything away at this point. I like this step because it allows me to start fresh with a fully generated environments file. It\u0026rsquo;s a good simulation of how things might look in a brand new deployment or during a migration from one server to another.\n💣 WARNING! This assumes that Mastodon\u0026rsquo;s containers are the only ones running on your system. If you are running other containers for other services, don\u0026rsquo;t run these commands. You must go through each container, remove it, and remove the associated volume carefully. # Stop all of the current containers and delete them (see warning above!) $ docker-compose rm -sfv # Destroy all of the container volumes (see warning above!) $ docker system prune --volumes Add Caddy #For most container deployments, I\u0026rsquo;d use traefik here. Its configuration discovery abilities, especially when paired with docker-compose, are top-notch. There\u0026rsquo;s almost no little one-off configuration issues when you use traefik.\nHowever, Mastodon has tons of static assets, such as images, stylesheets, and other media. Serving those through Mastodon\u0026rsquo;s rails web server is possible, but it\u0026rsquo;s horribly inefficient. It chews up much more CPU time and it\u0026rsquo;s slower to respond.\nThat\u0026rsquo;s where Caddy comes in. Caddy has automatic TLS capabilities with LetsEncrypt and it can also serve static content. This takes the load off of Mastodon\u0026rsquo;s rails web server.\nStart by adding a new service to your compose file:\ncaddy: image: caddy:2-alpine restart: unless-stopped container_name: caddy ports: - \u0026#34;80:80\u0026#34; - \u0026#34;443:443\u0026#34; volumes: - ./caddy/etc-caddy:/etc/caddy:Z - ./caddy/logs:/logs:Z - mastodon-public:/srv/mastodon/public:ro hostname: \u0026#34;tootchute.com\u0026#34; networks: - internal_network - external_network Change the hostname to fit your server. The mastodon-public volume is the one that Mastodon uses for its public content and mounting it inside the Caddy container allows Caddy to serve those assets.\nIn my case, I created a caddy directory in my home directory to hold the configuration and log files:\n$ mkdir caddy/{etc-caddy,logs} 🤓 NERD ALERT. The :Z on the volumes for configuration and logs ensures that these directories have the right SELinux contexts so that the container can access the files in these directories. If your system does not use SELinux, you can omit the :Z.\nI wrote a caddy configuration in ./caddy/etc-caddy/Caddyfile that is a slight tweak of Robert Riemann\u0026rsquo;s version:\n{ # Global options block. Entirely optional, https is on by default # Optional email key for lets encrypt email major@mhtx.net # Optional staging lets encrypt for testing. Comment out for production. 
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory # admin off } tootchute.com { log { # format single_field common_log output file /logs/access.log } root * /srv/mastodon/public encode gzip @static file handle @static { file_server } handle /api/v1/streaming* { reverse_proxy streaming:4000 } handle { reverse_proxy web:3000 } #header { # Strict-Transport-Security \u0026#34;max-age=31536000;\u0026#34; #} header /sw.js Cache-Control \u0026#34;public, max-age=0\u0026#34;; header /emoji* Cache-Control \u0026#34;public, max-age=31536000, immutable\u0026#34; header /packs* Cache-Control \u0026#34;public, max-age=31536000, immutable\u0026#34; header /system/accounts/avatars* Cache-Control \u0026#34;public, max-age=31536000, immutable\u0026#34; header /system/media_attachments/files* Cache-Control \u0026#34;public, max-age=31536000, immutable\u0026#34; handle_errors { @5xx expression `{http.error.status_code} \u0026gt;= 500 \u0026amp;\u0026amp; {http.error.status_code} \u0026lt; 600` rewrite @5xx /500.html file_server } } Be sure to change tootchute.com to your Mastodon server\u0026rsquo;s domain as well as email to your email. In addition, you may want to uncomment the acme_ca option shown there to avoid hitting LetsEncrypt\u0026rsquo;s production API limits while you are testing your deployment. (Comment out the staging server later to ensure you get a valid, trusted certificate.)\nLet\u0026rsquo;s bring up our new Caddy container!\n$ docker-compose up -d Initialize Mastodon #At this point, we have Caddy serving content and all of our Mastodon containers are running. However, the Mastodon database isn\u0026rsquo;t populated at all. Let\u0026rsquo;s do that now:\ndocker-compose run --rm web bundle exec rake db:setup This step uses your environments file to run all of Mastodon\u0026rsquo;s database migrations and perform some initial setup steps. It might take about 30 seconds to run.\nCreate our first user once the setup process finishes:\n$ docker-compose run --rm web bin/tootctl accounts create USERNAME --email YOUR_EMAIL --confirmed --role Owner This command creates a new administrative user, sets the email address for that user, and confirms the account. The confirmation part allows you to skip the email confirmation process for that first account. Your initial password prints out as soon as the command finishes.\nYou should be able to access your Mastodon deployment on the domain you chose (mine is tootchute.com and log in as the user you just created. If something doesn\u0026rsquo;t look right, examine the container logs to see if it\u0026rsquo;s something obvious:\n$ docker-compose logs -f --since 5m If a container is in a restart loop, you should catch it fairly quickly in the logs.\nNext steps #First, turn off new registrations if you plan to run a single user instance like I do. Click the preferences gear/cog on the main page, click Administration, *Server Settings, and Registrations.\nNext, enable two-factor authentication for your account. Click the preferences gear/cog on the main page, click Account, and then Two-factor Auth.\nFinally, back up your environments file (.env.production) and your docker-compose.yaml. This will make it much easier to recover from a failure or migrate to a new server.\nIf you\u0026rsquo;re using remote assets in S3 or Backblaze, you don\u0026rsquo;t need to back up that content. 
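The caddy directory from earlier is tiny and worth keeping as well, since it holds the Caddyfile and the access logs. A simple sketch, assuming the ./caddy and backups/ layout used in this post:
$ tar czf backups/caddy-config-$(date +%F).tar.gz caddy/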
Focus on backing up postgres and redis on a regular basis:\n# Dump postgres data $ docker-compose exec postgres pg_dump -d mastodon -U postgres --no-owner \u0026gt; backups/pgdump-$(date +%F_%H-%M-%S).sql # Copy redis data $ docker-compose cp redis:/data/dump.rdb backups/ Let me know if you run into problems with the steps described in this post. I assembled them from my shell history and some notes I took along the way. There\u0026rsquo;s always a chance I missed something.\nPosts on Mastodon were called \u0026ldquo;toots\u0026rdquo; for ages since that\u0026rsquo;s the supposed sound that an elephant trunk makes. Many people want to switch that to \u0026ldquo;posts\u0026rdquo; and the latest version of Mastodon changed the \u0026ldquo;toot\u0026rdquo; button to \u0026ldquo;publish.\u0026rdquo; I\u0026rsquo;ll call them toots forever. Heck, I\u0026rsquo;m the owner of tootchute.com. 😉\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"2 January 2023","permalink":"/p/self-hosted-mastodon-second-try/","section":"Posts","summary":"Although my first attempt at self-hosting Mastodon was a failure, I went back for a second attempt with docker-compose. 🧗‍♂️","title":"Second try at self-hosting Mastodon"},{"content":"","date":null,"permalink":"/tags/selfhosted/","section":"Tags","summary":"","title":"Selfhosted"},{"content":"Bitwarden became my go-to password manager a few years ago after I finally abandoned LastPass. Once I read the recent news about stolen password vaults, I was even happier that I made the switch.\nMy original password manager from way back in my Apple days was 1Password. It had a great user interface on the Mac and on iPhones, but I found it frustrating to use when I switched to Linux laptops and Android phones.\nLots of people in my Mastodon timeline were singing the praises of 1Password\u0026rsquo;s security and user interface after the recent LastPass news, so I decided to give it another look. It has a great CLI now and the GUI application runs well in Linux. The CLI also connects to the application via PolicyKit and it has some helpful plugins for various other CLI tools, like the AWS cli.\nI decided to give 1Password another try, but then I ran into a problem with the CLI. 🤔\n🏃 In a hurry? Go straight to the fix.\nAuthentication problems #The 1Password application was up and running and I followed the documented steps for enabling Linux authentication for the CLI. However, I still couldn\u0026rsquo;t authenticate:\n$ op item list --vault Private [ERROR] 2022/12/31 11:26:54 authorization prompt dismissed, please try again The 1Password documentation says that the application will automatically write a polkit action file at /usr/share/polkit-1/actions/com.1password.1Password.policy to handle the authentication and that file was present:\n$ ls -al /usr/share/polkit-1/actions/com.1password.1Password.policy -rw-r--r--. 1 root root 1508 Dec 29 09:48 /usr/share/polkit-1/actions/com.1password.1Password.policy In addition, the PolicyKit daemon is running:\n$ ps aufx | grep polkit polkitd 1293 0.0 0.0 2692700 26736 ? 
Ssl 10:47 0:00 /usr/lib/polkit-1/polkitd --no-debug $ rpm -qf /usr/lib/polkit-1/polkitd polkit-121-4.fc37.x86_64 Putting it together #Then I stopped to think about what the system was telling me:\n1Password\u0026rsquo;s CLI says that the authentication prompt is being dismissed I never saw an authentication prompt The PolicyKit daemon is running properly without errors Running strace on polkitd and op showed everything looking good But wait, the policykit daemon is only half of what I needed. There needs to be some type of window manager integration to pop up an authentication prompt and I wasn\u0026rsquo;t seeing that prompt.\nOn i3, I use lxpolkit for policykit integration and it should have popped up some kind of prompt for me. lxpolkit is installed:\n$ rpm -q lxpolkit lxpolkit-0.5.5-8.D20210419git82580e45.fc37.x86_64 Then I noticed something strange. The actual lxpolkit daemon that handles the authentication prompts was not running even though it was configured to automatically start as soon as I logged in:\n$ rpm -ql lxpolkit | grep autostart /etc/xdg/autostart/lxpolkit.desktop $ pgrep lxpolkit # Nothing here My default i3 configuration starts all of these automatically with dex-autostart:\n$ grep dex ~/.config/i3/config exec --no-startup-id dex-autostart --autostart --environment i3 Then I saw the issue at the very end of the lxpolkit desktop file:\n$ tail /etc/xdg/autostart/lxpolkit.desktop Comment[tr]=Policykit Kimlik Doğrulama Aracı Comment[uk]=Агент авторизації Policykit Comment[zh_CN]=Policykit 认证代理 Comment[zh_TW]=Policykit 身分核對代理程式 Exec=lxpolkit TryExec=lxpolkit Icon=gtk-dialog-authentication Hidden=true X-Desktop-File-Install-Version=0.26 OnlyShowIn=LXDE; The OnlyShowIn=LXDE means that dex-autostart will skip it when the environment is set to i3!🤦‍♂️\nThe fix #I copied the desktop file into my local autostart directory and removed the last line:\n$ cp /etc/xdg/autostart/lxpolkit.desktop ~/.config/autostart/ $ tail -n 5 ~/.config/autostart/lxpolkit.desktop Exec=lxpolkit TryExec=lxpolkit Icon=gtk-dialog-authentication Hidden=true X-Desktop-File-Install-Version=0.26 Then I ran dex-autostart manually to ensure it worked:\n# dex-autostart --autostart --environment i3 --verbose Autostart file: /home/major/.config/autostart/lxpolkit.desktop Executing command: lxpolkit Success!\n$ ps aufx |grep lxpolkit major 20432 0.0 0.0 393124 12304 pts/3 Sl 11:54 0:00 lxpolkit I tried the 1Password command line application one more time\u0026hellip;\nIt worked! As long as the 1Password application is running and unlocked, I can use the op CLI tool with my normal Linux system authentication.\n","date":"30 December 2022","permalink":"/p/1password-cli-lxpolkit/","section":"Posts","summary":"1Password\u0026rsquo;s CLI tool connects via PolicyKit to the 1Password application for authentication, but this isn\u0026rsquo;t the easiest in i3. 
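A simpler route, if you do not mind bypassing dex-autostart for this one program, is to launch lxpolkit straight from the i3 configuration. A one-line sketch:
exec --no-startup-id lxpolkit
Either approach works; the copied desktop file just keeps everything flowing through the usual autostart path.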
🔑","title":"Connect 1Password's CLI and app in i3 with lxpolkit"},{"content":"","date":null,"permalink":"/tags/i3/","section":"Tags","summary":"","title":"I3"},{"content":"","date":null,"permalink":"/tags/policykit/","section":"Tags","summary":"","title":"Policykit"},{"content":"","date":null,"permalink":"/tags/health/","section":"Tags","summary":"","title":"Health"},{"content":"","date":null,"permalink":"/tags/keto/","section":"Tags","summary":"","title":"Keto"},{"content":"","date":null,"permalink":"/tags/running/","section":"Tags","summary":"","title":"Running"},{"content":"I\u0026rsquo;ve written about keto before on the blog, but as we approach the end of 2022 and many people are making resolutions for the new year, I get a lot of questions about it. Hopefully I can answer most of those here!\nWhat is keto? #Ketogenic diets often sound complicated. The TL;DR is:\nReduce your intake of carbohydrates to a very low level (less than 30g/day for most people) Increase your fat and protein intake to make up the difference Seriously. That\u0026rsquo;s it.\nBut before we go any further:\n🩺 Don\u0026rsquo;t start (or stop) a ketogenic diet without talking to a trusted medical professional. There are certain health conditions that require modifications to the common ketogenic diet.\nIn my experience, adopting a keto lifestyle helped me to:\nMaintain a steady blood glucose all day long (about 100 mg/dL) Reduce my weight (about 210 down to 170-175) Sleep better Exercise longer1 Balance my cholesterol levels Lower my triglycerides and A1C Avoid diabetes Top three things you should know #I\u0026rsquo;ve boiled down all of the questions I\u0026rsquo;ve answered via email, social media, and in person at various dinner parties when people question my unusual eating habits. 🤭\nIf you have questions that aren\u0026rsquo;t answered here, click one of the links on my profile page and send me your question!\nWhat do I need to buy? #This is an easy one. Nothing.\nYou\u0026rsquo;ll find tons of products online and at your local store with \u0026ldquo;keto\u0026rdquo; on the label. If you flip them over and read the ingredients, they\u0026rsquo;re chock full of artificial sweeteners, flour substitutes, and chemicals. Although many of these won\u0026rsquo;t spike your blood sugar, many of them will! Some artificial sweeteners, like malitol, have the same effect as raw sugar, just on a smaller scale.\nAll of these artificial foods push your keto progress backwards.\nYou do not need keto shakes, keto fat bombs, or keto vitamins. You do not need gluten-free foods as they\u0026rsquo;re packed with carbohydrates.\nYou need whole foods. You need high quality fats, proteins, and lower-carb vegetables and fruits.\nThe only exception I\u0026rsquo;ll give here is that you need some high quality electrolytes, especially as you start on keto. As your body transitions from burning easily accessible glucose in your blood to burning fat, you\u0026rsquo;ll find that your salt intake will be woefully insufficient. You can make your own keto electrolytes really easily (tons of recipes available online) or find electrolyte tables/mixes without sugar.\nHow do I handle going out to eat? #Here\u0026rsquo;s a big challenge. Your friends invite you to dinner or the holidays have arrived. How do you handle it?\nMy strategy involves the following:\nGet a look at the menu ahead of time. Look through the menu and find items that fit your diet goals. If you find a menu item that fits all of your requirements, that\u0026rsquo;s awesome! 
In those situations where the main item is perfect but the sides aren\u0026rsquo;t right (fries, potatoes, rice), look for alternative side dishes that you can ask about when you get there. Worst case scenario, just ask them to skip the side dish so that you\u0026rsquo;re not tempted to eat it. Have an answer ready if people ask about your menu choices. It\u0026rsquo;s inevitable that someone will say \u0026ldquo;oh, you don\u0026rsquo;t like potatoes\u0026rdquo; or \u0026ldquo;why are you just getting a salad?\u0026rdquo; Have your answer ready and be honest. Never say that you \u0026ldquo;can\u0026rsquo;t have\u0026rdquo; something. This makes it sound like you\u0026rsquo;re denying yourself something and it makes it difficult to stick to the diet. I remind myself that some foods look and taste delicious, but they don\u0026rsquo;t line up with my health goals, so I skip them. I tell others the same thing. Often times, I\u0026rsquo;ll say \u0026ldquo;Oh, that looks delicious, but it doesn\u0026rsquo;t fit in my diet.\u0026rdquo; You don\u0026rsquo;t want to offend anyone but you also need to be honest with yourself. Save up your carbs. Going out to eat for dinner? Try to eat only fat and protein all day long and save all of your carbs for dinner. You won\u0026rsquo;t be able to go wild with your carb intake, but you can enjoy a little more then you would otherwise. Key takeaway: Your diet is your choice. It\u0026rsquo;s the same situation with someone who doesn\u0026rsquo;t drink but their friends keep offering them alcohol. Don\u0026rsquo;t let anyone pressure you into changing your goals.\nWhat happens if I eat something I shouldn\u0026rsquo;t? #The holidays are coming! 😱\nFirst off, admit that mistakes happen. That\u0026rsquo;s okay!\nWhen I\u0026rsquo;ve eaten tons of carbs, I make a mental plan for the next day to get back on track. Some argue that starting the next day with a fast makes the most sense. I prefer to get back to my high fat and moderate protein foods the next day as I normally would.\nKeep in mind, though, that once you\u0026rsquo;ve done keto for a good while (usually 1-2 months for me), your digestive system has adapted to the new diet. Lots of the bacteria you used to have on a high carbohydrate diet are gone. Flooding the digestive system with lots of high carb food will make your intestines confused. Confused intestines are not comfortable. 🚽\nRemind yourself of why you started the keto diet, get back to eating right, and take it as a learning lesson.\nSometimes I see a delicious high carb food, like pizza, and I think:\n\u0026ldquo;That looks so good. But if I eat that, I\u0026rsquo;m going to have belly problems for two days. Is it worth it?\nI make myself think through the consequences before I eat it. Do I still eat it sometimes? I do. 🤷‍♂️\nWrapping up #Going keto delivered tons of benefits for me, but it certainly is not easy. However, the diet is fully open source and nobody needs to buy anything special to get started. You can still go out to eat with friends and recover from dieting errors easily.\nAt first, exercise will feel more difficult. Be patient and take it slow as you start.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"18 December 2022","permalink":"/p/keto-3-years/","section":"Posts","summary":"Adopting the keto lifestyle in 2023? Here are some pointers from me after three years. 
🍽","title":"Three years of keto"},{"content":"Deploying applications in containers provides lots of flexibility and compatibility benefits.\nOnce you package your application and its dependencies in a container, that container runs almost anywhere without issues. Very few of the old \u0026ldquo;it worked on my machine!\u0026rdquo; problems remain. However, the challenge of running a container and linking it up with other helpful pieces of software still remains.\nWeb applications need something to serve HTTP requests and handle TLS. They also need databases, and those databases must be online and available first. All of these need reliable storage that is easily managed.\nIn my personal infrastructure, I keep coming back to docker-compose.\n🐇 In a hurry? Skip to the last section of this post if you want to skip my reasons for using docker-compose and you just want to see the steps.\nWhy docker-compose? #The tried-and-true docker-compose is one of the original \u0026ldquo;set your desired state\u0026rdquo; systems for managing containers. You specify what your container deployment should look like and docker-compose finds a way to get your containers in order. Sometimes that\u0026rsquo;s a fresh start without any existing containers. Sometimes it involves managing an existing fleet of containers and adjusting their configuration.\nAs an example, deploying your first container with docker-compose is easy. Assemble a basic YAML file, point docker-compose at it, and your container is running!\nNeed to change the configuration? Just make your changes in the YAML file, re-run docker-compose, and it knows enough to make the right changes. If containers need to be restarted, it takes care of that.\nWhy not kubernetes? #I\u0026rsquo;ve run my own kubernetes deployment and I\u0026rsquo;ve also been a consumer of large kubernetes and OpenShift deployments. All of this experience taught me two things:\nKubernetes and OpenShift are great. Like really great. Once you learn them, they are incredibly powerful tools that make a developer\u0026rsquo;s life much easier. Running my own kubernetes or OpenShift deployment on my own time (and own dime) is not enjoyable. Deploying, maintaining, and troubleshooting kubernetes infrastructure on your own is time consuming. Shared storage and networking caused me the most headaches in the past.\nWhat about k3s? #I love k3s. However, that still means I have to figure out networking for inter-container communication and load balancing. Shared storage is also needed.\nI could argue that running my own k3s deployment is easier than full kubernetes, but in the end, there\u0026rsquo;s more extra stuff around it that I don\u0026rsquo;t want to maintain.\nWhy don\u0026rsquo;t you use managed kubernetes? #Great question! Several providers have some excellent kubernetes offerings out there. Smaller providers, such as Digital Ocean and VULTR, have affordable offerings that are packed with features.\nThe challenge is that kubernetes deployments have overhead for the control plane, so you can\u0026rsquo;t use all of the virtual machines that you rent. For example, you may get three virtual machines in your cluster, each with 2GB RAM, but you can really only use about 1GB of RAM from each instance for your containers.\nIn addition, you will eventually need shared storage and some type of load balancer. 
The costs add up quickly.\nIt\u0026rsquo;s easy to start with a $50-$70 kubernetes offering and later find yourself cracking $100 per month after adding on storage and load balancers.\nWhat about podman? #Podman is a delight to use. You can toss some kubernetes YAML at it for deployments or pods and it will start them up for you. It also has a handy feature for exporting a container to a systemd unit file so you can manage it like any other systemd unit.\nHowever, when I want to make quick adjustments to a container configuration, it can be frustrating to get that done. I also like to make adjustments to all of my containers in one place since some applications depend on multiple containers running in tandem.\nThe podman-compose project helps quite a bit, but it still lacks some of docker-compose\u0026rsquo;s features.\nI use podman constantly on my laptop and desktop for development, testing, and toolbox containers.\nWhat\u0026rsquo;s the big deal about CoreOS? #Fedora CoreOS provides the foundation for my container infrastructure. I love Fedora already, but here\u0026rsquo;s what makes CoreOS special to me:\nAutomatic updates arrive via ostree as a fully tested minimal unit My system reboots automatically to apply the updates and it reverts back to the previous working update if the update fails It comes with all of the container tools and configurations that I need Whenever I need development or troubleshooting tools, toolbox containers are one step away It\u0026rsquo;s a constantly-updated OS that is designed for containers. What could be better than that?\nAdding docker-compose to CoreOS #Deploy CoreOS in your favorite cloud or on your favorite piece of hardware (I have a post about deploying it on Hetzner Cloud). Login as the core user via ssh.\nGo to the releases page for docker-compose and get the latest release for your architecture. Move it into place once you download it:\n$ curl -sLO https://github.com/docker/compose/releases/download/v2.14.1/docker-compose-linux-x86_64 $ sudo mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose $ sudo chmod +x /usr/local/bin/docker-compose Let\u0026rsquo;s add a really basic deployment of traefik\u0026rsquo;s whoami container. Start a new file called docker-compose.yaml:\nservices: whoami: container_name: whoami image: docker.io/traefik/whoami ports: - 8080:80 restart: unless-stopped If we try to bring up our containers now, we get an error:\n$ docker-compose up -d permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get \u0026#34;http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?all=1\u0026amp;filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Dcore%22%3Atrue%7D%7D\u0026#34;: dial unix /var/run/docker.sock: connect: permission denied The core user is not in the docker group and does not have permissions to talk to the docker socket.\n$ id uid=1000(core) gid=1000(core) groups=1000(core),4(adm),10(wheel),16(sudo),190(systemd-journal) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 Add the docker group to the core user as a supplementary group:\n$ sudo usermod -a -G docker core Log out of your ssh session and log in again to pick up the new group. 
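If logging out feels like too much trouble, newgrp can pick up the new group in the current shell instead. A quick sketch (it drops you into a subshell with docker in the group list):
$ newgrp docker
Either way, the id output below should now show the docker group.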
Start the containers once more:\n$ id uid=1000(core) gid=1000(core) groups=1000(core),4(adm),10(wheel),16(sudo),190(systemd-journal),982(docker) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 $ docker-compose up -d [+] Running 4/4 ⠿ whoami Pulled ⠿ 029cd1bf7e7c Pull complete ⠿ e73b694ead4f Pull complete ⠿ 99df6e9e9886 Pull complete [+] Running 2/2 ⠿ Network core_default Created ⠿ Container whoami Started Success!\n🤔 BUT WAIT! If we reboot our system, the containers won\u0026rsquo;t start up at boot time!\nOn CoreOS, there\u0026rsquo;s a docker.socket file that starts the actual docker service (which is actually moby-engine on Fedora and not full-fledged docker). As soon as something touches the socket, systemd starts the docker service. That keeps it out of the way until something actually asks to use it.\n$ systemctl status docker.socket ● docker.socket - Docker Socket for the API Loaded: loaded (/usr/lib/systemd/system/docker.socket; enabled; preset: enabled) Active: active (running) since Sat 2022-12-17 22:48:42 UTC; 4min 47s ago Until: Sat 2022-12-17 22:48:42 UTC; 4min 47s ago Triggers: ● docker.service Listen: /run/docker.sock (Stream) Tasks: 0 (limit: 2207) Memory: 0B CPU: 596us CGroup: /system.slice/docker.socket After a reboot, nothing pokes the socket and the associated service never starts. You can try it yourself! Reboot and the whoami container will be down. Run docker-compose ps one time and suddenly your containers are running!\nLet\u0026rsquo;s fix this so containers come up on boot without any extra work (or socket poking):\n$ sudo systemctl enable --now docker.service The socket will still be handled by systemd, but now the docker service itself will always start at boot whether someone touched it or not.\nEnjoy your new lightweight container infrastructure. ✨\n","date":"17 December 2022","permalink":"/p/docker-compose-on-coreos/","section":"Posts","summary":"My go-to method for managing containers easily is still docker-compose. It works really well on Fedora CoreOS. 📦","title":"docker-compose on Fedora CoreOS"},{"content":"","date":null,"permalink":"/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes"},{"content":"","date":null,"permalink":"/tags/learning/","section":"Tags","summary":"","title":"Learning"},{"content":"The podcast bug bit me during the pandemic in 2020. As I started the keto diet in late 2019, I converted my lunch time into lunch walk time. That became a great time for listening to podcasts.\nThis post talks about how I consume podcasts and my favorite creators. I plan to keep this post updated over time as I add and remove favorites from my list.\nPodcast workflow #I went through plenty of podcast applications on my phone and via web browser, but my all-time favorite is Pocket Casts. It delivers an excellent experience on a phone that doesn\u0026rsquo;t get in my way. I can set certain podcasts to download automatically over wifi and filter my podcasts in different ways. It also lets me choose which podcasts to automatically add to my \u0026ldquo;Up Next\u0026rdquo; queue and which ones must be manually queued.\nIf you pay a little extra for Pocket Casts Plus (about $10 USD/year), you can manage your podcasts on your mobile devices and via a web browser. I love this feature because sometimes I have a few minutes left of a certain podcast when I finish my walk. 
I can switch to my computer and Pocket Casts picks up in Firefox right where it left off on my phone.\nPocket Casts has a great discovery feature to help find new podcasts in different genres or trending podcasts among Pocket Casts users.\nMy favorite podcasts #There are some podcasts that deliver episodes on a regular cadence (daily, weekly, etc) and here are my favorites:\nBeg to Differ: A weekly round table from The Bulwark with a great regular group. Topics are often around American politics, but they also include thoughtful analysis on social issues, relationships, and geopolitics. There\u0026rsquo;s usually a guest each week that gives lots of interesting insight in their area of expertise. If you wish you could find an elevated debate about politics that isn\u0026rsquo;t polarized and includes lots of centrist ideas, this is your podcast. (I support this podcast via a Substack subscription.)\nThe Daily: Reporters pull one story from the paper and dig into it during this awesome podcast. These are often feature stories that cover real people going through challenging situations. Each report usually contains real audio from interviews and events that weaves its way through the discussion. (I\u0026rsquo;m a New York Times subscriber, but this podcast is free.)\nLet\u0026rsquo;s Appreciate: Kyla Scanlon\u0026rsquo;s amazing podcast features commentary on economics, the stock market, and cryptocurrency. My favorite part about her podcast is how she includes the human element in everything. She has lots of writing about vibes and how human interactions change our economy and how we feel about money. (Her episodes are brief but dense. I can\u0026rsquo;t listen to them at a higher speed than 1x. 😉)\nSerious Trouble: I\u0026rsquo;ve always been curious about law and how attorneys think through challenging situations (and deal with challenging clients). Josh Barro and Ken White pull topics from the news and analyze them. They talk through the law involved and how they might advise clients to handle certain cases. There\u0026rsquo;s plenty of humor interspersed into the dialogue and Ken has a way with words. 😉 (I support this podcast via a Substack subscription.)\nThe Red Line Podcast: Imagine a podcast with a quality level approaching PBS\u0026rsquo; Frontline series, but more scrappy. Episodes normally appear once every two weeks. They pick apart all kinds of issues within geopolitics, especially in areas that you rarely hear on the news. Michael Hilliard has traveled the world and has a special interest in the politics of central Asia. (I support this podcast via their Patreon and it\u0026rsquo;s worth every penny.)\nTheta Gang: This may be a niche podcast, but it\u0026rsquo;s a great way to get involved in options trading \u0026ndash; especially the short volatility side. The host, Joonie, takes you through his thoughts on the market, answers questions from listeners, and shares personal updates. New episodes usually come out on the weekends.\n","date":"15 December 2022","permalink":"/p/favorite-podcasts/","section":"Posts","summary":"Podcasts provide a great way to keep up with current events or learn more\nabout the world around us, especially while we\u0026rsquo;re doing other activities. 
🎧","title":"My favorite podcasts"},{"content":"","date":null,"permalink":"/tags/podcasts/","section":"Tags","summary":"","title":"Podcasts"},{"content":"","date":null,"permalink":"/tags/keyboard/","section":"Tags","summary":"","title":"Keyboard"},{"content":"Much of my work is driven by my keyboard, and I love finding new ways to do complicated actions in a hurry. That\u0026rsquo;s why I\u0026rsquo;m drawn towards tiling window managers like i3 and sway.\nMy team at work spans the globe and speaks many different languages. Many of these languages have diacritics (such as accents, tildes, or other marks) that completely change the pronunciation (or even the meaning!) of the word. Sure, I can type Tomas (TOM-oss) quickly, but it\u0026rsquo;s not the same as Tomaš (TOM-osh).\nUsing diacritics shows respect for someone\u0026rsquo;s name, their language and their culture. In some situations, it can be the difference between something totally normal and potentially offensive1.\nThe old way #I wrote about this way back in 2020 but my main method for getting this done was using the AltGr key.\nIf you want to type niños (children) in Spanish, you do this:\nHold down the right Alt key Hold down the Shift key Press the ~ key Press n Let go of everything and admire your ñ 😍 This felt so slick and I was off to the races typing all kinds of accents and other marks. I wrote the blog post, tweeted about it, and waited for others to reply with the same excitement.\nThe first reply from a European was:\nNice work, but what\u0026rsquo;s wrong with the compose key?\nWhat\u0026rsquo;s a compose key? 🤔\nCompose key #I stared at my keyboard. Where is this mysterious key? I\u0026rsquo;ve never seen it before!\nWikipedia held the answer:\nBecause Microsoft Windows and macOS do not support a compose key by default, the key does not exist on most keyboards designed for modern PC hardware. When software supports compose key behaviour, some other key is used. Common examples are the right-hand Windows key, the AltGr key, or one of the Ctrl keys. There is no LED or other indicator that a compose sequence is ongoing.\nI get to choose the key that becomes the compose key! After asking a few Europeans about what they use on a US keyboard layout, they suggested the right CTRL key.\nMy previous blog post had an i3 configuration line like this:\nexec_always --no-startup-id \u0026#34;setxkbmap us -variant altgr-intl\u0026#34; I figured there would need to be some kind of argument to pass to set the compose key. 
The man page for setxkbmap says that all of the keyboard sources are in /usr/share/X11/xkb/rules:\n\u0026gt; $ grep compose /usr/share/X11/xkb/rules/base.xml \u0026lt;name\u0026gt;mod_led:compose\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:ralt\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:lwin\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:lwin-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:rwin\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:rwin-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:menu\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:menu-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:lctrl\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:lctrl-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:rctrl\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:rctrl-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:caps\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:caps-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:102\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:102-altgr\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:paus\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:prsc\u0026lt;/name\u0026gt; \u0026lt;name\u0026gt;compose:sclk\u0026lt;/name\u0026gt; Opening that file shows the specific rctrl option:\n\u0026lt;configItem\u0026gt; \u0026lt;name\u0026gt;Compose key\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;Position of Compose key\u0026lt;/description\u0026gt; \u0026lt;/configItem\u0026gt; \u0026lt;!--- SNIP ---\u0026gt; \u0026lt;option\u0026gt; \u0026lt;configItem\u0026gt; \u0026lt;name\u0026gt;compose:rctrl\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;Right Ctrl\u0026lt;/description\u0026gt; \u0026lt;/configItem\u0026gt; \u0026lt;/option\u0026gt; My new i3 line looks like this:\nexec_always --no-startup-id \u0026#34;setxkbmap us -variant altgr-intl -option compose:rctrl\u0026#34; Now I get the best of both worlds:\nI can still use the Alt+Gr key for muscle memory. The compose key gives me access to more characters with less keypresses! There\u0026rsquo;s a long list of symbols you can type with your compose key!\nJust check out años versus anos in Español. 😉 🥔\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"12 December 2022","permalink":"/p/compose-key/","section":"Posts","summary":"Keep your composure with diacritics, symbols, and other characters with the compose key! ⌨","title":"Make your mark with the compose key"},{"content":"","date":null,"permalink":"/tags/audio/","section":"Tags","summary":"","title":"Audio"},{"content":"","date":null,"permalink":"/tags/bluetooth/","section":"Tags","summary":"","title":"Bluetooth"},{"content":"Working remotely involves lots of meetings in challenging home conditions and I love noise-canceling bluetooth headphones for meetings and music. It really helps me focus. However, as anyone knows who works with multiple audio devices on the same machine, getting all of the inputs and outputs working properly for each application is tedious. 😅\nThe last thing I want to wrestle with (as a scramble to a meeting) is redirecting audio from my speakers to my headphones in PulseAudio. 
😱\nLuckily, there\u0026rsquo;s a pulseaudio module for that!\nGoal #Here\u0026rsquo;s what I want:\nHeadphones turn on, connect via Bluetooth, and the default audio sink (output) should switch to the headphones The input (source) should not change (I have a separate microphone on the desk) When the headphones turn off and disconnect, the audio should shift back to the original default Enabling the module #This was working really well on my desktop, but I recently switched to Fedora Silverblue. I forgot how I dealt with this before.\nA quick search led to an Arch Linux forum post. The pulseaudio module module-switch-on-connect handles this automatically without extra configuration.\nStart some music, disconnect the bluetooth headphones, and load the module:\n$ pactl load-module module-switch-on-connect Power on your headphones and wait for the connection. Audio should switch to your bluetooth headset automatically. Power off the bluetooth headphones and audio should shift back to the original source (perhaps your computer speakers).\nIf that didn\u0026rsquo;t work, you might need to tinker with your default audio sinks or add some udev rules to ensure your bluetooth headphones come up with the right sink. The Arch Linux pipewire docs provide a few options.\nMake it persistent #Loading the module with pactl only works until pulseaudio is restarted or the machine reboots. You can make it persistent by adding it to your window manager\u0026rsquo;s startup scripts.\nI use i3/sway, so my configuration line looks like this:\nexec_always --no-startup-id pactl load-module module-switch-on-connect ","date":"9 December 2022","permalink":"/p/bluetooth-automatic-switch/","section":"Posts","summary":"Automatically switch your system audio to your bluetooth headset as soon as they connect. 🎧","title":"Switch audio to bluetooth headphones automatically"},{"content":"Hetzner Cloud provides high performance cloud instances with excellent network connectivity at a reasonable price. They have two US regions (Virginia and Oregon) that give me good latency numbers here in Texas.\nHowever, they modify some of the Linux images they offer. This ensures every image looks similar when it boots, but it means that Fedora in one cloud doesn\u0026rsquo;t behave like Fedora in another cloud. (They\u0026rsquo;re not the only cloud that makes these changes.)\nThe modifications frustrate me for two reasons:\nI automate nearly everything and I expect images to match in different clouds. I want to use the unaltered image that went through Fedora\u0026rsquo;s QA process. Fortunately for us, Hetzner offers all the tools we need to deploy our own genuine image. Let\u0026rsquo;s get started! 🔧\nPreparing for a snapshot #We need a small instance to make our initial snapshot. Our first step is to make a cloud-init configuration. Here\u0026rsquo;s mine (be sure to change usernames and keys for yours):\n#cloud-config users: - name: major primary_group: major groups: - sudo - wheel sudo: ALL=(ALL) NOPASSWD:ALL shell: /bin/bash ssh_authorized_keys: - ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIcfW3YMH2Z6NpRnmy+hPnYVkOcxNWLdn9VmrIEq3H0Ei0qWA8RL6Bw6kBfuxW+UGYn1rrDBjz2BoOunWPP0VRM= major@amdbox We could create the instance using the web console, but I prefer to use hcloud. 
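One prerequisite no matter how the CLI gets installed: hcloud authenticates with an API token generated in the Hetzner Cloud console and stored in a named context. A short sketch (the context name is just an example):
$ hcloud context create fedora-imaging
The command prompts for the token and activates the new context, so every hcloud call after that uses it.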
Fedora users can simply run dnf install hcloud to get the CLI installed quickly.\nLet\u0026rsquo;s use the CLI to start our instance:\n$ hcloud image list | grep fedora 69726282 system fedora-36 Fedora 36 - 5 GB Wed May 11 00:50:00 CDT 2022 - $ cat cloud-init.cfg | hcloud server create \\ --location ash \\ --image 69726282 \\ --name snapshotter \\ --type cpx11 \\ --user-data-from-file - Once the instance finishes building, make a note of the server number in the output.\nLet\u0026rsquo;s put the server into rescue mode so we can have full access to the disk:\n$ hcloud server enable-rescue 26341155 1.2s [===================================] 100.00% Rescue enabled for server 26341155 with root password: ***** Connect to the server via ssh using the root password from the rescue output. Once we\u0026rsquo;re connected, look for the root disk. Then we download the image and extract it to the root disk directly:\n# fdisk -l Disk /dev/sda: 38.15 GiB, 40961572864 bytes, 80003072 sectors Disk model: QEMU HARDDISK Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 170D9ADF-A82F-4F14-975A-CA9603329ABA Device Start End Sectors Size Type /dev/sda1 135168 80003038 79867871 38.1G Linux filesystem /dev/sda14 2048 133119 131072 64M EFI System /dev/sda15 133120 135167 2048 1M BIOS boot # export IMAGE_URL=https://mirrors.kernel.org/fedora/releases/37/Cloud/x86_64/images/Fedora-Cloud-Base-37-1.7.x86_64.raw.xz # curl -s $IMAGE_URL | xz -d - | dd of=/dev/sda status=progress 5343543808 bytes (5.3 GB, 5.0 GiB) copied, 172 s, 31.1 MB/s 10485760+0 records in 10485760+0 records out 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 172.625 s, 31.1 MB/s At this point, the root disk of the instance now holds the genuine Fedora 37 cloud image from the GA release. We don\u0026rsquo;t want to boot it before we capture it, so let\u0026rsquo;s disconnect the ssh session and then power off the instance:\n$ hcloud server poweroff 26341155 4.9s [===================================] 100.00% Server 26341155 stopped Make the snapshot #Take a snapshot of the root disk that contains the Fedora cloud image:\n$ hcloud server create-image \\ --description \u0026#34;Fedora 37 GA image\u0026#34; \\ --type snapshot 26341155 51s [====================================] 100.00% Image 91884132 created from server 26341155 Let\u0026rsquo;s get all of the available data about our snapshot:\n$ hcloud image describe 91884132 ID:\t91884132 Type:\tsnapshot Status:\tavailable Name:\t- Created:\tThu Dec 8 14:20:10 CST 2022 (1 minute ago) Description:\tFedora 37 GA image Image size:\t0.41 GB Disk size:\t40 GB OS flavor:\tfedora OS version:\t- Rapid deploy:\tno Protection: Delete:\tno Labels: No labels Now you can build instances from the snapshot! In my case, I would adjust my original server create command to use image 91884132:\n$ cat cloud-init.cfg | hcloud server create \\ --location ash \\ --image 91884132 \\ --name snapshotter \\ --type cpx11 \\ --user-data-from-file - 🧹 Clean up before you go! #Nothing is worse than getting a surprise bill at the end of the month for cloud infrastructure that you forgot you had! In this case, a CPX11 instance should cost you less than $5 per month. It could be worse. 
🤭\nClean it up easily with one command:\n$ hcloud server delete 26341155 Server 26341155 deleted ","date":"8 December 2022","permalink":"/p/fedora-37-hetzner/","section":"Posts","summary":"Avoid cloud provider modifications and deploy a genuine release version of Fedora 37 on Hetzner Cloud. ⛅","title":"Deploy Fedora 37 on Hetzner Cloud 🇩🇪"},{"content":"","date":null,"permalink":"/tags/hetzner/","section":"Tags","summary":"","title":"Hetzner"},{"content":"My Ducky One 2 keyboard arrived around two years ago and I love it. I type more accurately and that clackety sound gives me that old computer feeling. (I went with Cherry MX Blue switches.)\nAlthough it proviees some basic controls for media, such as muting and adjusting volume, there are no buttons for pausing music or switching to different tracks. That function exists, but it takes some configuration to work.\nHandling media keys #For those of you running a large desktop environment like GNOME or KDE, you likely have built-in multimedia key handling in the environment. However, I use sway and i3 and there\u0026rsquo;s no native handling there.\nThere\u0026rsquo;s a great utility called playerctl that makes this really easy. If you\u0026rsquo;re on Fedora, run dnf install playerctl to get started.\nNext, you\u0026rsquo;ll need some hotkeys in i3/sway:\nbindsym XF86AudioPlay exec playerctl play-pause bindsym XF86AudioNext exec playerctl next bindsym XF86AudioPrev exec playerctl previous Press Mod+Shift+c to reload your Sway/i3 configuration.\nThese are the standard multimedia keys that many keyboards have, but my Ducky keyboard doesn\u0026rsquo;t have them. The keyboard does have the ability to send these keystrokes through, but we need to set up a macro for them! Skip to the next section for that.\nBut before we go, playerctl handles all kinds of different multimedia players on your system:\n$ playerctl -l firefox.instance4056 spotify I tend to use my media keys for Spotify most often, so I updated my Sway/i3 configuration to this:\nbindsym XF86AudioPlay exec playerctl -p spotify play-pause bindsym XF86AudioNext exec playerctl -p spotify next bindsym XF86AudioPrev exec playerctl -p spotify previous This ensures that my media keys won\u0026rsquo;t interfere with something in Firefox and will always control my Spotify media. 😉\n🚨 Before going any further, check that playerctl works! 🚨\nRun playerctl -p spotify play-pause one time and your music should play if it wasn\u0026rsquo;t playing before, or it should pause if it was already playing. Run it one more time to ensure it does the opposite the second time. Use a different player or remove the -p argument entirely to test it with other players.\nRTEM (Read the excellent manual) #The Ducky One 2 has a helpful manual in English and Chinese. We need a macro to get the multimedia keys working and that process isn\u0026rsquo;t easy to follow as it spans multiple pages.\n📚 If you need a manual for a different Ducky keyboard, their support page has a wizard that helps you find the exact manual for your keyboard.\nThe good stuff starts at page 41 in my manual:\nFirst, determine which profile you want to use. In my case, I chose profile 2.\nNext, we need to know which multimedia function keys are hidden away in the keyboard\u0026rsquo;s firmware:\nFor each key combination, we need to know which keys we want to press to trigger the multimedia key (letters above). 
I\u0026rsquo;m most interested in play/pause, previous track, and next track, so I\u0026rsquo;m building out my configuration like this:\nPlay/Pause: Fn+End (key D) Previous track: Fn+PgUp (key G) Next track: Fn+PgDn (key F) Making macros #Now we\u0026rsquo;re ready to record a macro.\nSwitch to profile 2 with Fn+2. The LED underneath the 2 should blink briefly.\nEnter macro mode by holding down Fn and Ctrl for three seconds (press Fn first, though). The keyboard indicators at the top right of the keyboard should be blinking slowly.\nCarefully set the macro:\nHold down Fn and Ctrl for three seconds. Indicator lights should blink slowly. Hold Fn and press End. Hold Fn, then hold the Windows key, then press D (for play/pause). Release all keys after pressing D. Exit macro recording mode by holding Fn and pressing Ctrl. Release all keys. Repeat for previous track:\nHold down Fn and Ctrl for three seconds. Indicator lights should blink slowly. Hold Fn and press PgUp. Hold Fn, then hold the Windows key, then press G (for previous track). Release all keys after pressing G. Exit macro recording mode by holding Fn and pressing Ctrl. Release all keys. Finally for next track:\nHold down Fn and Ctrl for three seconds. Indicator lights should blink slowly. Hold Fn and press PgDn. Hold Fn, then hold the Windows key, then press F (for next track). Release all keys after pressing F. Exit macro recording mode by holding Fn and pressing Ctrl. Release all keys. Testing #You did test playerctl by itself earlier, right? 😜 If you didn\u0026rsquo;t, go back to the first section and save yourself some frustration.\nPress Fn+End and your music should toggle between playing and paused. Press Fn+End once more and it should toggle again.\nIf playerctl works on the command line, but doesn\u0026rsquo;t work via the keyboard macro, you can go back through the macro recording steps above. You can also clear a macro from a key by holding Fn+Ctrl for three seconds, tapping the misconfigured key, tapping the same key again, and pressing Fn+Ctrl.\nEnjoy your quick access to multimedia keys! ⌨ 🎶\n","date":"5 December 2022","permalink":"/p/ducky-keyboard-multimedia-keys/","section":"Posts","summary":"Setting up the multimedia keys on Ducky One keyboards lets you manage your music quickly. ⌨","title":"Configure multimedia keys on a Ducky One keyboard"},{"content":"","date":null,"permalink":"/tags/multimedia/","section":"Tags","summary":"","title":"Multimedia"},{"content":"Many of my coworkers are on Central European Time (CET) and they\u0026rsquo;re seven hours ahead of me (most of the time). Then there are those weird times of year where they move their clocks for Daylight Saving Time before we do in the USA.\nI have a handy clock in my i3status bar, but I\u0026rsquo;d like to track my coworkers\u0026rsquo; timezones in addition to my own.\nConfiguration #By default, i3status looks for its configuration file in ~/.config/i3status/config. Open up the configuration file in your favorite editor and add two pieces.\nI\u0026rsquo;m most interested in CET, so I\u0026rsquo;ll use Berlin for my extra clock. 
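i3status expects the same zone names you see under /usr/share/zoneinfo, so if you\u0026rsquo;re ever unsure what a zone is officially called, timedatectl can look it up for you (purely a lookup step, and it changes nothing on your system):\n$ timedatectl list-timezones | grep -i berlin Europe/Berlin 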
First, add two tztime lines near the top:\norder += \u0026#34;tztime berlin\u0026#34; order += \u0026#34;tztime local\u0026#34; Then add the corresponding sections at the end of the configuration file:\ntztime local { format = \u0026#34;%Y-%m-%d %H:%M:%S 🇺🇸\u0026#34; } tztime berlin { format = \u0026#34;%H:%M 🇩🇪\u0026#34; timezone = \u0026#34;Europe/Berlin\u0026#34; hide_if_equals_localtime = true } The hide_if_equals_localtime configuration ensures that I only see one clock if my local timezone switches to CET. Emoji flags add a little bit of flair to the clocks in the status bar. 😉\nHere\u0026rsquo;s how it looks (with the Hack font):\nApply the change #Reload the i3 configuration with Mod+Shift+c and restart i3 with Mod+Shift+r.\n","date":"4 December 2022","permalink":"/p/i3status-timezones/","section":"Posts","summary":"Have family or coworkers in multiple time zones? Get multiple clocks with i3status. ⌚","title":"Clocks in multiple time zones with i3status"},{"content":"","date":null,"permalink":"/tags/i3status/","section":"Tags","summary":"","title":"I3status"},{"content":"","date":null,"permalink":"/tags/timezones/","section":"Tags","summary":"","title":"Timezones"},{"content":"","date":null,"permalink":"/tags/clipboard/","section":"Tags","summary":"","title":"Clipboard"},{"content":"","date":null,"permalink":"/tags/maim/","section":"Tags","summary":"","title":"Maim"},{"content":"My daily workflow includes taking tons of screenshots. I\u0026rsquo;m constantly relaying views of different data or results of various work between different chat systems and emails. As with all things that I do often, I look for ways to optimize them as much as possible.\n(I\u0026rsquo;m that guy who wrote a post on an efficient emoji workflow in Wayland.) 😂\nGoals #Screenshots must be easy to take and share, period. Modern versions of Firefox make this extremely easy with the built-in screenshot mechanism.\nRight click an element on a web page, choose Take Screenshot from the pop-up and screenshot a whole page or just a single DOM element. From there, you can copy it directly to the clipboard and paste it elsewhere or save it to a file.\nI want something as close to that for the i3 desktop environment.\nThe maim game #Fortunately, there\u0026rsquo;s a piece of software called maim!1 The README in the repository contains lots of helpful examples for basic screenshots all the way up to fancy conversions with drop shadows. ✨\nMy use case is quite simple. I want to press a key, get a selection crosshair, make my selection, and get my screenshot copied to the clipboard.\nHere\u0026rsquo;s how I do it in i3:\nbindsym Print exec maim -s -u | xclip -selection clipboard -t image/png -i Let\u0026rsquo;s break this down:\nFirst, we use bindsym to bind the Print Screen (Print) key Hitting Print Screen runs maim and pops a selection crosshair (-s) and hides the cursor (u) Once the selection is made, the image pipes straight into xclip xclip stores the image in the clipboard with the image/png mime type I\u0026rsquo;m able to go to my favorite application or browser window with Slack, Discord, or Mastodon open and simply paste the image into my message. 🏁\nBefore you get upset with the name, keep in mind that it\u0026rsquo;s the shortened version of make image. 😉\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"29 November 2022","permalink":"/p/i3-screenshot/","section":"Posts","summary":"Take quick screenshots and send them to the clipboard in i3 with maim. 
📸","title":"Make screenshots quickly in i3 with maim and xclip"},{"content":"","date":null,"permalink":"/tags/screenshot/","section":"Tags","summary":"","title":"Screenshot"},{"content":"Now that Fedora 37 launched, I decided to wipe my main laptop and do a clean installation. I made some poor configuration choices while in a hurry over the past year and the mess finally caught up with me.\nThe latest version of the i3 spin caught my eye and I used it for the installation. Once my laptop booted up, I noticed a volume icon in my system tray that appeared automatically. I\u0026rsquo;ve normally used pasystray for this task, but volumeicon came with the i3 spin.\nThe volumeicon tray icon has a few handy features:\nNotifications via libnotify (or GTK) when something sound-related changes Quick access to muting and unmuting a sound device via clicking the icon Access to sound mixers and preferences via a right click However, it also catches key presses from my volume keys on my laptop. Catching the volume keys is disabled by default but you can change that via its configuration file.\nNormally, I would have an i3 configuration snippet like this one:\n# Use pactl to adjust volume in PulseAudio. set $refresh_i3status killall -SIGUSR1 i3status bindsym XF86AudioRaiseVolume exec --no-startup-id pactl set-sink-volume @DEFAULT_SINK@ +10% \u0026amp;\u0026amp; $refresh_i3status bindsym XF86AudioLowerVolume exec --no-startup-id pactl set-sink-volume @DEFAULT_SINK@ -10% \u0026amp;\u0026amp; $refresh_i3status bindsym XF86AudioMute exec --no-startup-id pactl set-sink-mute @DEFAULT_SINK@ toggle \u0026amp;\u0026amp; $refresh_i3status bindsym XF86AudioMicMute exec --no-startup-id pactl set-source-mute @DEFAULT_SOURCE@ toggle \u0026amp;\u0026amp; $refresh_i3status That works fine, but volumeicon can handle this for us. Here\u0026rsquo;s my current volumeicon configuration in ~/.config/volumeicon/volumeicon:\n[Alsa] card=default [Notification] show_notification=true notification_type=0 [StatusIcon] stepsize=5 onclick=pavucontrol theme=Default use_panel_specific_icons=false lmb_slider=false mmb_mute=false use_horizontal_slider=false show_sound_level=true use_transparent_background=false [Hotkeys] up_enabled=true down_enabled=true mute_enabled=true up=XF86AudioRaiseVolume down=XF86AudioLowerVolume mute=XF86AudioMute I\u0026rsquo;ve changed a few things from the defaults:\nEnabled notifications via GTK (libnotify notifications didn\u0026rsquo;t look great) Enabled the volume keys in the [Hotkeys] section i3 takes care of starting the icon for me with exec:\nexec --no-startup-id volumeicon Log out and log in again to test the changes.\n","date":"27 November 2022","permalink":"/p/i3-volumeicon/","section":"Posts","summary":"Simplify your i3 configuration and monitor sound levels with volumeicon in your tray with the i3 window manager. 🔈","title":"Manage sound volume with volumeicon in i3"},{"content":"","date":null,"permalink":"/tags/sound/","section":"Tags","summary":"","title":"Sound"},{"content":"All of the recent changes at Twitter inspired me to take a second look at mastodon. In short, mastodon is a federated social network that feels a bit like someone took Twitter and split it up into a vast network of independent servers.\nWhy mastodon? #It feels a lot like Twitter, but better.\nYou can search for people, follow them, and publish messages (called toots). They can also follow you and see the messages you publish.\nThe big difference is that you don\u0026rsquo;t join a central server with mastodon. 
There\u0026rsquo;s a massive network of servers to choose from and you can create accounts on one or more of those servers to get started. You can even run your own!\nMastodon reminds me of email for many reasons:\nThere\u0026rsquo;s no central server. You join a server (from the massive, growing list) and start publishing messages. Everything is on an eventual consistency model. If a mastodon server goes offline for a bit or has network issues, messages and other data will synchronize when it\u0026rsquo;s back online. You can follow people on your server or on other servers. You choose who to mute or block and you can create lists that help you group certain contacts. After joining the fosstodon.org server, I noticed that it was really easy to begin following people and get messages. I reconnected with people that I had not heard from in a very long time!\nMigrating from Twitter #One of my first questions after joining mastodon was: \u0026ldquo;How do I find the people I follow on Twitter?\u0026rdquo;\nMany Twitter users are adding their mastodon accounts to their bio to make them easier to find on mastodon. For example, I added my mastodon account, @major@fosstodon.org, to my twitter bio:\nTwitter bio showing off my mastodon link Adding this to your bio makes it easier for people to find you via some helpful tools. I used debirdify to look through my Twitter account for mastodon handles of the people I follow. Within seconds, it provided links to about 15 mastodon accounts and offered me a CSV that I could directly import into my mastodon server. 🎉 (Mastodon servers have some awesome import and export capabilities.)\nI\u0026rsquo;ve also heard good things about Fedifinder and Twitodon for helping you find Twitter friends on mastodon. There\u0026rsquo;s a helpful article on Wired with more suggestions.\nApps #I\u0026rsquo;m neck-deep in the Android ecosystem, so most of my suggestions here are for Android devices. I tried the main mastodon app first. It looks great, updates quickly, and is very easy to use. However, inserting GIFs into toots became really frustrating (although I hear that\u0026rsquo;s being fixed).\nI moved to Tusky and it\u0026rsquo;s my go-to mastodon app. You can add multiple accounts, posting media is incredibly easy, and it has tons of configuration knobs everywhere.\nThere are various desktop applications for mastodon, but the web interface is good enough for me! The default web interface looks a lot like Twitter with a big timeline running down the middle and section links on the right.\nHowever, I use Tweetdeck with Twitter and I wanted something similar on mastodon. Go into the settings for the web application, and choose Appearance. Click Enable advanced web interface, save the changes, and click Back to Mastodon. Enjoy your Tweetdeck-like multi-column interface! ✨\nRun your own instance #The federated nature of mastodon means you can run your own single user instance if you want! Buy a domain you like (or use a subdomain off an existing domain) and deploy!\nThe upstream repository has a helm chart which works well with kubernetes. 
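If you go the kubernetes route, the deployment is roughly a helm install pointed at the chart directory with your own values file layered on top. Here\u0026rsquo;s a rough sketch, although the chart location and the value names come from the chart\u0026rsquo;s own README, so double-check them against the version you clone:\n$ git clone https://github.com/mastodon/mastodon.git $ helm install mastodon ./mastodon/chart -f my-values.yaml 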
Also, there\u0026rsquo;s a docker-compose file which works well for smaller deployments.\nI went the docker-compose route on a small cloud instance at Hetzner, but I modified the upstream docker-compose.yml:\nversion: \u0026#39;3\u0026#39; volumes: certs: postgres: redis: mastodon: services: traefik: image: docker.io/library/traefik:latest container_name: traefik restart: unless-stopped command: # Tell Traefik to discover containers using the Docker API - --providers.docker=true - --providers.docker.exposedbydefault=false # Enable the Trafik dashboard - --api.dashboard=true # Set up LetsEncrypt #- --certificatesresolvers.letsencrypt.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory - --certificatesresolvers.letsencrypt.acme.dnschallenge=true - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=porkbun - --certificatesresolvers.letsencrypt.acme.email=major@mhtx.net - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json # Set up an insecure listener that redirects all traffic to TLS - --entrypoints.web.address=:80 - --entrypoints.web.http.redirections.entrypoint.to=websecure - --entrypoints.web.http.redirections.entrypoint.scheme=https - --entrypoints.websecure.address=:443 # Set up the TLS configuration for our websecure listener - --entrypoints.websecure.http.tls=true - --entrypoints.websecure.http.tls.certResolver=letsencrypt - --entrypoints.websecure.http.tls.domains[0].main=toots.cloud - --entrypoints.websecure.http.tls.domains[0].sans=*.toots.cloud environment: - PORKBUN_SECRET_API_KEY=***** - PORKBUN_API_KEY=***** ports: - \u0026#39;80:80\u0026#39; - \u0026#39;443:443\u0026#39; volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - certs:/letsencrypt labels: - \u0026#34;traefik.enable=true\u0026#34; - \u0026#39;traefik.http.routers.traefik.rule=Host(`traefik.toots.cloud`)\u0026#39; - \u0026#34;traefik.http.routers.traefik.entrypoints=websecure\u0026#34; - \u0026#34;traefik.http.routers.traefik.tls.certresolver=letsencrypt\u0026#34; - \u0026#34;traefik.http.routers.traefik.service=api@internal\u0026#34; - \u0026#39;traefik.http.routers.traefik.middlewares=strip\u0026#39; - \u0026#39;traefik.http.middlewares.strip.stripprefix.prefixes=/traefik\u0026#39; postgres: container_name: postgres restart: always image: docker.io/library/postgres:14-alpine shm_size: 256mb env_file: .env.production healthcheck: test: [\u0026#39;CMD\u0026#39;, \u0026#39;pg_isready\u0026#39;, \u0026#39;-U\u0026#39;, \u0026#39;postgres\u0026#39;] volumes: - postgres:/var/lib/postgresql/data - ./postgres-setup.sh:/docker-entrypoint-initdb.d/init-user-db.sh:Z environment: - \u0026#39;POSTGRES_HOST_AUTH_METHOD=trust\u0026#39; redis: container_name: redis restart: always image: docker.io/library/redis:7-alpine healthcheck: test: [\u0026#39;CMD\u0026#39;, \u0026#39;redis-cli\u0026#39;, \u0026#39;ping\u0026#39;] volumes: - redis:/data web: container_name: web image: docker.io/tootsuite/mastodon:v3.5.3 restart: always env_file: .env.production command: bash -c \u0026#34;rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000\u0026#34; healthcheck: # prettier-ignore test: [\u0026#39;CMD-SHELL\u0026#39;, \u0026#39;wget -q --spider --proxy=off localhost:3000/health || exit 1\u0026#39;] ports: - \u0026#39;3000\u0026#39; depends_on: - postgres - redis volumes: - mastodon:/mastodon/public/system labels: - \u0026#34;traefik.enable=true\u0026#34; - \u0026#34;traefik.http.routers.web.rule=Host(`toots.cloud`)\u0026#34; - 
\u0026#34;traefik.http.routers.web.entrypoints=websecure\u0026#34; - \u0026#34;traefik.http.routers.web.tls.certresolver=letsencrypt\u0026#34; streaming: container_name: streaming image: docker.io/tootsuite/mastodon:v3.5.3 restart: always env_file: .env.production command: node ./streaming healthcheck: # prettier-ignore test: [\u0026#39;CMD-SHELL\u0026#39;, \u0026#39;wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1\u0026#39;] ports: - \u0026#39;4000\u0026#39; depends_on: - postgres - redis sidekiq: container_name: sidekiq image: docker.io/tootsuite/mastodon:v3.5.3 restart: always env_file: .env.production command: bundle exec sidekiq depends_on: - postgres - redis volumes: - mastodon:/mastodon/public/system healthcheck: test: [\u0026#39;CMD-SHELL\u0026#39;, \u0026#34;ps aux | grep \u0026#39;[s]idekiq\\ 6\u0026#39; || false\u0026#34;] Here are the main changes I made:\nSpecified the exact URL/tag for each container Added traefik to handle TLS Used named volumes instead of filesystem directories (made SELinux much happier) Added a provisioning script for postgres The provisioning script for postgres allows me to bring up postgres without needing to run any extra commands:\n#!/bin/bash set -e psql -v ON_ERROR_STOP=1 --username postgres \u0026lt;\u0026lt;-EOSQL CREATE USER mastodon WITH PASSWORD \u0026#39;super-secret-password\u0026#39; CREATEDB; EOSQL Once you have all of this in place, run the usual docker-compose up -d and wait for everything to start. Then you can run through the initial mastodon setup:\n$ docker-compose run --rm web bundle exec rake mastodon:setup You will need to answer lots of questions, including your domain name, postgres/redis details, email configuration, and object storage configuration. I use Mailgun for mastodon\u0026rsquo;s email since it makes the setup much easier and has a very low cost. For object storage, I went with a public Backblaze B2 bucket since it\u0026rsquo;s Amazon S3 compatible but very inexpensive1.\nWhen the setup finishes, it will dump an environments file to the screen. Be sure to save that file. This will allow you to start up all of the containers again with the same configuration later. A copy of the environments file will be kept inside the container storage as well.\nSelf-hosted instance takeaways #I\u0026rsquo;ve been running my own mastodon instance for a few days and I\u0026rsquo;m not sure if I will keep it. Sure, I love having an instance on a hilarious domain like toots.cloud and having full control over my mastodon experience.\nBut it\u0026rsquo;s one more thing to manage, patch, and back up.\nThe fosstodon.org community has been excellent so far and I\u0026rsquo;m contributing to their costs each month via their Patreon page. Every mastodon instance is going through growing pains recently due to really high demand.\nThe last count from the Mastodon Users bot shows massive interest:\n🤖 Mastodon user count bot If you\u0026rsquo;re on a server that isn\u0026rsquo;t performing well: be patient.\nAsk for ways that you can help technically or financially. One of the biggest reminders that I get from mastodon is that every server is a community. The community must come together to make each server successful as a part of the big fediverse.\nBe sure to note the endpoint for your Backblaze bucket when you create it. You will need to specify that endpoint when you set up mastodon. 
As an example, my endpoint is https://s3.us-west-001.backblazeb2.com and my region is us-west-001.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"11 November 2022","permalink":"/p/adventures-with-the-mastodon-herd/","section":"Posts","summary":"Ongoing changes at Twitter led me to take a second look at mastodon, including running my own mastodon instance. 🐘","title":"Adventures with the mastodon herd"},{"content":"","date":null,"permalink":"/tags/postgres/","section":"Tags","summary":"","title":"Postgres"},{"content":"","date":null,"permalink":"/tags/redis/","section":"Tags","summary":"","title":"Redis"},{"content":"","date":null,"permalink":"/tags/twitter/","section":"Tags","summary":"","title":"Twitter"},{"content":"I love to run. It gives me an opportunity go outside and create challenges for myself. It also provides time to think.\nGetting started was one of the toughest parts. I went through all kinds of running programs and equipment but never felt like I was improving as much as I wanted.\nThis post covers all of the great advice I received along my running journey and some warnings about mistakes I made.\nKnow your limits #First and foremost, know what your body can and should do. Talk to a medical professional you trust about any previous injuries or medical conditions that might impact you while you\u0026rsquo;re running.\nBad news doesn\u0026rsquo;t signal an end to your running ambitions. It just meant you might need to adapt your workout to fit within your body\u0026rsquo;s needs.\nMake a plan and follow the plan #As I said earlier, I had tons of difficulty getting started with a running routine. I have asthma and it was once a big limiter on what I could do when I exercised.\nTwo big things changed the game for me.\nFirst, a friend told me about the Couch to 5K (C25K) program. It\u0026rsquo;s designed for running novices and eases you into running longer and longer distances. Everything is free to use and the instructions are translated into plenty of languages.\nC25K offers a running schedule where you gradually increase the ratio of running intervals to walking intervals over time. You might start with a two minute walk followed by a one minute jog. Over time, that turns into a 90 second walk followed by a one minute jog. By the end, you\u0026rsquo;re spending much less time walking and more time running.\nSecond, I changed my diet. I\u0026rsquo;ve written a lot about the ketogenic diet, or keto, on this blog before. Long story short, you drastically reduce your carbohydrate intake to a very small amount (usually 20-30g or less) and increase your fat/protein intake. This has tons of effects inside the body that would take too long to list here, but the one that benefited me the most was a reduction in my overall inflammation markers in my body.\nHigher levels of inflammation lead to all kinds of problems, including asthma. After changing my diet, I worked with my doctor to stop all of the medications I was taking and my asthma is barely noticeable today.\nThese two parts of the process, following a plan and changing my diet, greatly improved my confidence while running.\nShoes #My tennis coach in high school had a wonderful phrase to remind us that our beloved rackets and shoes only did so much:\nIt\u0026rsquo;s not the tool, it\u0026rsquo;s the fool.\nHis point was that you can\u0026rsquo;t blame poor performance on your equipment. 
However, running shoes with poor cushioning and reduced support will certainly impede your progress.\nDoes that mean you must spend a fortune on running shoes? No.\nIt does mean that you need to fight the right equipment for your body.\nFind a running shop near you and talk with someone there about your goals and challenges. In the San Antonio area, we have a store called Fleet Feet that\u0026rsquo;s staffed by people who run all the time. They had a look at how I ran and they asked me where I felt the impact while running. I told them where I had pain and where the impact didn\u0026rsquo;t feel quite right.\nIt turns out I suffer from overpronation. My foot rolls inward as I run and it means the outer part of my foot hits the ground first. That caused a lot of pain.\nThey recommended shoes specifically meant to help with that condition. I picked up some Mizuno Wave Inspire shoes and they helped a ton! They usually go for $140-$170 and they lasted me 300-400 miles easily. My only complaints about them was that they are quite heavy. They feel bulky on my feet.\nA friend suggested the Brooks Adrenaline GTS and they\u0026rsquo;re my current go-to running shoe. I\u0026rsquo;m on my second pair now and my first pair easily went past 450 miles before the tread was gone. They are also on the pricier end around $140, but they\u0026rsquo;re incredibly durable, light, and supportive.\nTechnology #The sky\u0026rsquo;s the limit when it comes to running technology.\nI\u0026rsquo;ve had a Garmin Fenix 5 since 2017 and it\u0026rsquo;s one of the best-performing, most reliable pieces of technology I\u0026rsquo;ve ever bought. Why do I love it so much?\nIt\u0026rsquo;s comfortable to wear The GPS and health tracking is top notch and extremely accurate It works really well with the Garmin Connect Android app across multiple phones I can use it for almost any sport imaginable (running, walking, biking, swimming, etc) You can\u0026rsquo;t break it (it\u0026rsquo;s been beaten up many times) It\u0026rsquo;s reliable Over time, there were more things I wanted to track, like ground contact time and balance between my left and right feet. I picked up a Running Dynamics Pod for that. It\u0026rsquo;s a tiny device that attaches to your waistband and delivers tons of additional metrics automatically. It sends all of its data to your watch while you run.\nMusic #Music keeps me inspired while I run and I have a two part strategy for that.\nFirst come the earbuds. I\u0026rsquo;ve tried far too many earbuds and I\u0026rsquo;ve been disappointed with so many of them. I love the Jaybird Vista 2 earbuds for a few reasons:\nThey sound great The charging case is easy to use and has a built-in battery to charge the buds They don\u0026rsquo;t fall out of my ears Bluetooth sync is easy to set up and maintain Second, I needed a way to keep my phone attached to me where it wouldn\u0026rsquo;t be flopping around all over the place. Putting a big phone in my shorts pocket ensured that it would bang against my leg for a few miles. The folks at Fleet Feet suggested the FlipBelt.\nIt\u0026rsquo;s a stretchy belt that has lots of slash pockets to hold keys, credit cards, and your phone. You fold it over once and it stays tightly attached to your waist or hips. Now I can run for many miles and I forget my phone is on my waist.\nKnow your limits #Wait, didn\u0026rsquo;t I mention this before? I did, but it\u0026rsquo;s worth mentioning once more.\nMy usual distance for running is 5 km, or about 3.11 miles. 
A 5K is a very common race length in the USA and it usually takes me about a half hour to complete. However, when I first started, I was targeting a mile to a mile and a half.\nWork your way up to longer distances slowly.\nI traveled to Vancouver for the first time for a conference and I was mesmerized by the waterfront. After walking it for a while, I decided to run some of it. I went back to the hotel, changed, stretched, got dressed, and headed out.\nThe view was incredible. I loved it. Once I reached about 2.5km, I thought \u0026ldquo;Well, I could run back now and just complete a 5K. Perfect.\u0026rdquo;\nHowever, the weather was so nice and the views so beautiful that I kept going. Once I reached 5km, I thought \u0026ldquo;Okay, time to head back,\u0026rdquo; but then I didn\u0026rsquo;t.\nAfter running 10km down the waterfront, I was still feeling awesome but I knew it was time to turn back. I realized something was wrong a few minutes later. My right knee felt like I\u0026rsquo;d been shot.\nI slowly hobbled several kilometers to the hotel and put ice on my knee. The rest of the week was spent hobbling through the conference and suffering through the plane flights home in a cramped seat.\nWhat did I learn? Don\u0026rsquo;t overdo it. Let your body tell you when to stop.\nEven if the view is amazing.\n","date":"6 November 2022","permalink":"/p/amateur-guide-to-running/","section":"Posts","summary":"Running gets me outside and gives me a challenge where I can compete against myself. Here are my tips for becoming an amateur runner. 🎽","title":"Amateur Guide to Running"},{"content":"","date":null,"permalink":"/tags/exercise/","section":"Tags","summary":"","title":"Exercise"},{"content":"Cyberpower UPS units saved me from plenty of issues in the past with power outages. However, although I love the units themselves, I found that the quality of replacement batteries varies widely. This leads me to keep a close watch on my UPS units and test them regularly.\nEnergy conservation ranks high on my list of priorities, too. I monitor the power draw on my UPS units to know about usage spikes or to review electricity consumption after I make changes.\nMy Raspberry Pi did a great job of monitoring my UPS for my network devices but it failed after a recent reboot. My network woes from September left me with a Mikrotik hEXs running my home network and I noticed it had a USB port.\nCan you monitor a UPS with a Mikrotik device and query its status remotely? You can!\nInitial setup #My Cyberpower CP1500AVRLCD has a USB port on the back for monitoring and control. The hEXs router has a USB-A port on the side that can be used for mass storage, LTE modems, and yes \u0026ndash; UPS units.\nHowever, UPS monitoring does not come standard with RouterOS 7.x and it must be installed via a separate package. Follow these steps to get started:\nIdentify the CPU architecture of your Mikrotik. It should be shown on the product page. The Mikrotik hEXs is a MMIPS (microMIPS) device. Go to the RouterOS download page and download the Extra packages file for your architecture. Unpack the zip file you downloaded and locate the ups-7.x-mmips.npk package. Upload the ups-7.x-mmips.npk file via your preferred method. FTP, ssh, and the web interface work well for this. Reboot your Mikrotik device. Enable monitoring #After the reboot, your Mikrotik should now have a /system/ups entry on the command line. 
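If that entry is missing, check that the extra package actually landed on the device. RouterOS lists everything it has installed along with its version, and the ups package version must match your RouterOS version:\n[major@hexs] \u0026gt; /system/package/print 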
Let\u0026rsquo;s add monitoring for our UPS:\n[major@hexs] \u0026gt; /system/ups [major@hexs] /system/ups\u0026gt; add name=ups min-runtime=never port=usbhid1 If you don\u0026rsquo;t know what your port is called, type in add port= and press TAB to see the available ports. Refer to the Mikrotik System/UPS manual for more help here.\nI set the min-runtime to never which means that the Mikrotik will never hibernate even if the UPS power runs low. It uses so little power and it\u0026rsquo;s so critical for my home network that it should be the last system to go offline during an outage.\nAll that\u0026rsquo;s left is to enable read-only SNMP so that we can monitor the UPS remotely. Back to the Mikrotik command line:\n[major@hexs] \u0026gt; /snmp/set enabled=yes This enables unrestricted read-only SNMP access for your entire network without authentication under the community name public. I restrict SNMP access with firewall rules but you may want to consider further restrictions on your SNMP community.\nGetting data #From another machine on the network, I dumped all of the SNMP data from the Mikrotik into a file:\n$ snmpwalk -v2c -c public 192.168.10.1 | tee -a /tmp/snmpwalk.txt Then I looked for my UPS\u0026rsquo; model name:\n$ grep LCD /tmp/snmpwalk.txt SNMPv2-SMI::mib-2.33.1.1.2.0 = STRING: \u0026#34;CP1500AVRLCDa\u0026#34; SNMPv2-SMI::mib-2.47.1.1.1.1.2.262146 = STRING: \u0026#34;CPS CP1500AVRLCDa\u0026#34; Let\u0026rsquo;s see if the first entry gives us the data we need:\n\u0026gt; $ grep \u0026#34;^SNMPv2-SMI::mib-2.33\u0026#34; /tmp/snmpwalk.txt SNMPv2-SMI::mib-2.33.1.1.2.0 = STRING: \u0026#34;CP1500AVRLCDa\u0026#34; SNMPv2-SMI::mib-2.33.1.1.3.0 = \u0026#34;\u0026#34; SNMPv2-SMI::mib-2.33.1.2.1.0 = INTEGER: 2 SNMPv2-SMI::mib-2.33.1.2.3.0 = INTEGER: 103 SNMPv2-SMI::mib-2.33.1.2.4.0 = INTEGER: 100 SNMPv2-SMI::mib-2.33.1.2.5.0 = INTEGER: 0 SNMPv2-SMI::mib-2.33.1.2.7.0 = INTEGER: 0 SNMPv2-SMI::mib-2.33.1.3.2.0 = INTEGER: 1 SNMPv2-SMI::mib-2.33.1.3.3.1.2.3 = INTEGER: 0 SNMPv2-SMI::mib-2.33.1.3.3.1.3.3 = INTEGER: 122 SNMPv2-SMI::mib-2.33.1.4.3.0 = INTEGER: 1 SNMPv2-SMI::mib-2.33.1.4.4.1.2.3 = INTEGER: 122 SNMPv2-SMI::mib-2.33.1.4.4.1.5.3 = INTEGER: 8 SNMPv2-SMI::mib-2.33.1.6.1.0 = Gauge32: 0 What the heck do all these numbers mean? A quick trip to a MIB browser shows us that there are a few important items here:\nupsOutputPercentLoad is 1.4.4.1.5 (8%) upsOutputVoltage is 1.4.4.1.2 (122V) upsEstimatedChargeRemaining is 1.2.4.0 (100%) These are the three numbers I care most about. However, the percent load of 8% isn\u0026rsquo;t terribly useful. I\u0026rsquo;d rather have watts.\nLet\u0026rsquo;s write a script to get the value, and convert the percentage to watts:\n#!/bin/bash set -euo pipefail # From the CP1500AVRLCDa spec sheet MAX_LOAD_WATTS=815 # SNMP MIB for load percentage SNMP_MIB=\u0026#34;SNMPv2-SMI::mib-2.33.1.4.4.1.5.3\u0026#34; # Get the load integer only. CURRENT_LOAD=$(snmpget -Oqv -v2c -c public 192.168.10.1 $SNMP_MIB) # Convert the percentage into wattage consumed right now. CURRENT_WATTS=$(($MAX_LOAD_WATTS * $CURRENT_LOAD / 100)) echo \u0026#34;${CURRENT_WATTS}\u0026#34; Let\u0026rsquo;s test the script!\n$ ./get_wattage.sh 65 Awesome! 🎉\n","date":"28 October 2022","permalink":"/p/monitor-ups-with-mikrotik-snmp/","section":"Posts","summary":"Mikrotik routers and switches serve as efficient network devices, but they know other tricks, too. Monitor your UPS with a Mikrotik device and query it via SNMP. 
🔌","title":"Monitor a UPS with a Mikrotik router via SNMP"},{"content":"","date":null,"permalink":"/tags/snmp/","section":"Tags","summary":"","title":"Snmp"},{"content":"","date":null,"permalink":"/tags/ups/","section":"Tags","summary":"","title":"Ups"},{"content":"Once upon a time, I spent hours and hours fumbling through openvpn configurations, certificates, and firewalls to get VPNs working between servers. One small configuration error led to lots of debugging. Adding new servers meant wallowing through this process all over again.\nA friend told me about Tailscale and it makes private networking incredibly simple.\nTailscale makes it easy to add nodes to a private network called a tailnet where they can communicate. In short, it\u0026rsquo;s a dead simple mesh network (with advanced capabilities if you\u0026rsquo;re interested).\nThis post covers how to create an exit node for your Tailscale network using firewalld Fedora, CentOS Stream, and Red Hat Enterprise Linux (RHEL).\nWhat\u0026rsquo;s an exit node? #Every node on a Tailscale network, or tailnet, can communicate with each other1. However, it can use be useful to use one of those nodes on the network as an exit node.\nExit nodes allow traffic to leave the tailnet and go out to other networks or the public internet. This allows you to join an untrusted network, such as a coffee shop\u0026rsquo;s wifi network, and send your traffic out through one of your tailnet nodes. It works more like a traditional VPN.\nTailscale does this by changing the routes on your device to use the exit node on your tailnet for all traffic. Creating an exit node involves some configuration on the node itself and within Tailscale\u0026rsquo;s administrative interface.\nDeploy Tailscale #In this example, I\u0026rsquo;ll use Fedora 36, but these instructions work for CentOS Stream and RHEL, too.\nStart by installing Tailscale on your system:\nFedora CentOS Stream RHEL Let\u0026rsquo;s get this going on Fedora:\n$ sudo dnf config-manager --add-repo https://pkgs.tailscale.com/stable/fedora/tailscale.repo Adding repo from: https://pkgs.tailscale.com/stable/fedora/tailscale.repo $ sudo dnf install tailscale $ sudo systemctl enable --now tailscaled $ sudo tailscale up To authenticate, visit: https://login.tailscale.com/a/xxxxxx Click the link to authorize the node to join your tailnet and it should appear in your list of nodes!\nTrusting the tailnet #In my case, I treat my tailnet interfaces as trusted interfaces on each node. (This might not fit your use case, so please read the docs on Network access controls (ACLs) if you need extra security layers.)\nStart by adding the new tailscale0 interface as a trusted interface:\n# firewall-cmd --add-interface=tailscale0 --zone=trusted success # firewall-cmd --list-all --zone=trusted trusted (active) target: ACCEPT icmp-block-inversion: no interfaces: tailscale0 sources: services: ports: protocols: forward: yes masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: This permits all traffic in and out of the tailscale interface.\nCreating an exit node #Let\u0026rsquo;s reconfigure tailscale to allow this node to serve as an exit node:\n# tailscale up --advertise-exit-node Warning: IP forwarding is disabled, subnet routing/exit nodes will not work. See https://tailscale.com/kb/1104/enable-ip-forwarding/ Uh oh. Tailscale is telling us that it\u0026rsquo;s happy to reconfigure itself, but we\u0026rsquo;re going to run into IP forwarding issues. 
Let\u0026rsquo;s see what our primary firewall zone has:\n# firewall-cmd --list-all public (active) target: default icmp-block-inversion: no interfaces: eth0 sources: services: dhcpv6-client mdns ssh ports: protocols: forward: yes masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: We can enable masquerading in firewalld and it will take care of everything for us, including NAT rules and sysctl settings for IP forwarding.\n# firewall-cmd --add-masquerade --zone=public success # firewall-cmd --list-all | grep masq masquerade: yes Let\u0026rsquo;s try the Tailscale reconfiguration once more:\n# tailscale up --advertise-exit-node Warning: IPv6 forwarding is disabled. Subnet routes and exit nodes may not work correctly. See https://tailscale.com/kb/1104/enable-ip-forwarding/ We fixed the IPv4 forwarding, but IPv6 is still not configured properly. This was done automatically in the past, but firewalld does not automatically enable IPv6 forwarding in recent versions. Rich rules come to the rescue:\n# firewall-cmd --add-rich-rule=\u0026#39;rule family=ipv6 masquerade\u0026#39; success # sysctl -a | grep net.ipv6.conf.all.forwarding net.ipv6.conf.all.forwarding = 1 One more try:\n# tailscale up --advertise-exit-node Success! But wait, we still can\u0026rsquo;t send traffic through the exit node until we authorize it in the Tailscale admin interface:\nGo back to your machines list at Tailscale and find your exit node. Right underneath the name of the node, you should see Exit Node followed by a circle with an exclamation point. Click the three dots on the far right of that row and click Edit Route Settings\u0026hellip;. When the modal appears, click the slider to the left of Use as exit node. Now you can test your exit node from your mobile device by choosing an exit node via the settings menu. Another Linux machine can use it as an exit node as well just by running another tailscale up command:\n# Start using an exit node. sudo tailscale up --exit-node my-exit-node-name # Stop using an exit node. sudo tailscale up --exit-node \u0026#39;\u0026#39; Tailscale offers complex access control lists (ACLs) that allow you to limit connectivity based on tons of factors.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"27 October 2022","permalink":"/p/build-tailscale-exit-node-firewalld/","section":"Posts","summary":"Tailscale exit nodes allow you to route your traffic through nearly any system in your tailnet. Learn how to build an exit node using firewalld. 🕳️","title":"Build a Tailscale exit node with firewalld"},{"content":"","date":null,"permalink":"/tags/firewalld/","section":"Tags","summary":"","title":"Firewalld"},{"content":"","date":null,"permalink":"/tags/rhel/","section":"Tags","summary":"","title":"Rhel"},{"content":"","date":null,"permalink":"/tags/tailscale/","section":"Tags","summary":"","title":"Tailscale"},{"content":"","date":null,"permalink":"/tags/mentorship/","section":"Tags","summary":"","title":"Mentorship"},{"content":"My first full-time job was purely technical. Most of my interview centered around my abilities to manage and maintain Linux servers since the company had very limited Linux knowledge. Once I was hired, my main responsibility was maintaining a single Red Hat Linux 9 server.\nYes. A single server. And no, not Red Hat Enterprise Linux 9. Red Hat Linux 9. (It was 2005.)\nI spent the first three months of that job focused on learning as much as I could about everything that was running on that Linux server. 
My success was directly tied to my ability to keep it running.\nHowever, as I build trust and joined more conversations at the very small company, I realized that I had another set of skills that were definitely lacking in the company: soft skills. Although I didn\u0026rsquo;t know it at the time, those skills would carry me further in my career than my technical skills.\nWhat\u0026rsquo;s a soft skill? #Soft skills are all about interacting with other people. Much of it is centered around communication, but that doesn\u0026rsquo;t include just talking and listening.\nFor me, soft skills are centered around several areas:\nMeaningful communication: Communicate with purpose, whether you are talking or listening.\nEmpathy: Put yourself in someone else\u0026rsquo;s shoes and see things from their perspective. Understand their positions and emotions well enough to be able to summarize their position back to them.\nExplain the why: Explain your position from a customer or user\u0026rsquo;s perspective. Ensure the situation feels real to everyone so they can see the reason why something needs to be done.\nFoster growth: Make everyone around you better. Help people level up, learn new skills, and become more effective on the team.\nEveryone can develop at least one interpersonal skill and many people can develop a few of them at a time.\nHow can soft skills help me? #I\u0026rsquo;ve mentored several technical people over the years and these common themes show up often:\nMy ideas never seem to get any attention in meetings.\nNobody ever reviews my pull requests.\nI want a promotion but it seems so far away.\nIt\u0026rsquo;s entirely possible that you\u0026rsquo;re performing terribly in your role and writing awful code, but I doubt it. Much of this comes down to how you communicate in your daily work.\nIn the book Switch: How to change things when change is hard, the authors talk about the concept of a rider and an elephant. (Please read the book because I can\u0026rsquo;t do it justice here. It\u0026rsquo;s a great read.)\nElephants are very large, very stubborn, and very strong. The rider could certainly make small adjustments in the elephant\u0026rsquo;s path, but if the elephant want to make a hard left turn, the rider is along for the ride.\nThe elephant is a metaphor for our emotions. They have a huge impact on how we act and they\u0026rsquo;re very difficult to change. Emotions usually don\u0026rsquo;t match up with reason at all.\nThe rider is a metaphor for reason and analytical thinking. Reason and data can only do so much with our emotions are in control.\nThe secret is to make the problem real for your coworkers and appeal to their emotions. (For a lot more detail here, see my post on Persuasion Engineering.) This means you must tell a story with a beginning, middle, and end from a very important perspective. For most companies, the customer perspective is paramount.\n(If you would like a handy framework for thinking through situations and perspectives, I wrote a post about writing SBAR documents.)\nHow do I work on my soft skills? #An easy first step is to look back at recent interactions that didn\u0026rsquo;t go as you expected. Try to put yourself in the other person\u0026rsquo;s shoes and come up with reasons why they rejected your change.\nWere they confused about the goal? Was it difficult to see how it affected the customer? Did you explain the why?\nA mentor (formal or informal) can help here, too. 
Find someone you trust and get candid feedback from them about your performance. Trust is key, because without it, you won\u0026rsquo;t get the feedback you\u0026rsquo;re looking for. Make sure you ask about specific aspects of your performance, such as a particular meeting or pull request.\nYou can ask the mentor to give you feedback in the SBI format:\nSituation: What situation happened that caused you to seek feedback to improve your skills? Behavior: What behavior occurred that should or should not happen again? Impact: In that situation, what impact (positive or negative), did the behavior have? Take the feedback to heart and plot a plan to make small, gradual improvements over time in your communication skills. Try to set a small goal, such as presenting a new change with a full explanation of the \u0026ldquo;why\u0026rdquo;. Check in with someone you trust for feedback as you improve.\n","date":"8 September 2022","permalink":"/p/strong-impacts-require-soft-skills/","section":"Posts","summary":"Success at work depends on more than your technical ability. Improve your soft skills to increase your impact. 💪","title":"Strong impacts require soft skills"},{"content":"","date":null,"permalink":"/tags/pxe/","section":"Tags","summary":"","title":"Pxe"},{"content":"The first RFCs for PXE, or preboot execution environment, showed up in June 1981 and it\u0026rsquo;s still a popular tool today. It enables computers to boot up and download some software that runs early in the boot process.\nAlthough PXE has been with us for ages, it\u0026rsquo;s still extremely relevant today:\nProvisioning: Deploying new operating systems to machines is easily automated with PXE. Rescue: Fix a broken system by booting into a live OS and then make repairs. Validation: PXE boot a machine into a validation suite that checks hardware or puts it through a burn-in process. Ephemeral OS: Boot into a live operating system that runs completely in RAM and disappears on reboot. A good friend of mine started a project to take PXE booting to the next level.\nEnter netboot.xyz #Most of the PXE deployments I\u0026rsquo;ve used in the past were restricted to a company\u0026rsquo;s internal network for specfic uses. These deployments often served as a provisioning method.\nAlthough the actual mechanisms for booting machines via PXE are not difficult, writing the backend scripts and creating keyboard-friendly menus is challenging. One of my favorite people on the planet, Ant Messerli, started the netboot.xyz project several years ago.\nWhat does netboot.xyz do for you? For one, you don\u0026rsquo;t need to write your own menu scripts. All you do is PXE boot and use the menus already available on the site. You can even add your own in netboot.xyz\u0026rsquo;s GitHub repository\nThe site relies on ipxe, an open source boot firmware. There\u0026rsquo;s no need to compile your own ipxe binary. netboot.xyz offers pre-built ipxe binaries that already connect you to netboot.xyz on the first boot!\nHere\u0026rsquo;s what you\u0026rsquo;ll see from netboot.xyz on your first boot:\nAnimation from netboot.xyz\u0026rsquo;s website\nPXE ingredients #Now that netboot.xyz did the hard part, what\u0026rsquo;s left? A PXE environment requires a few items:\nDHCP server TFTP server The DHCP server normally tells the machine about its IP address, gateway, DNS servers and more. 
However, we need it to provide two extra pieces of information:\nThe server running a TFTP daemon The filename to request The boot process for a machine on your network will go something like this:\nMachine makes a DHCP request Your DHCP server replies with the usual IP information plus a server IP and filename for the PXE software Machine sets its IP, gateway, netmask, and DNS Machine downloads the PXE software from the server provided by the DHCP server PXE software runs on the machine Mikrotik PXE configuration #Let\u0026rsquo;s update the DHCP server configuration first. Log into your Mikrotik router via ssh and add configuration to your DHCP server\u0026rsquo;s network configuration:\n[major@hexs] \u0026gt; /ip/dhcp-server/ [major@hexs] /ip/dhcp-server\u0026gt; [major@hexs] /ip/dhcp-server/network\u0026gt; print Columns: ADDRESS, GATEWAY, DNS-SERVER # ADDRESS GATEWAY DNS-SERVER 0 192.168.10.0/24 192.168.10.1 192.168.10.1 [major@hexs] /ip/dhcp-server/network\u0026gt; set next-server=192.168.10.1 boot-file-name=pxeboot numbers=0 Great. Now our DHCP server will tell new machines where to find their PXE image. Now we need to get our PXE boot image. Most of my machines support UEFI, so I use the UEFI DHCP image. Upload the image to the Mikrotik however you prefer. I normally use FTP or the web interface.\nMy PXE image is stored on the Mikrotik as /netboot.xyz/netboot.xyz.efi. Now we can configure the Mikrotik\u0026rsquo;s built-in TFTP server:\n[major@hexs] \u0026gt; /ip tftp add ip-addresses=192.168.10.0/24 real-filename=\\ /netboot.xyz/netboot.xyz.efi req-filename=.* [major@hexs] \u0026gt; /ip tftp settings set max-block-size=8192 These settings enable TFTP access for anything on my LAN. Also, my PXE image from netboot.xyz is returned no matter what is in the request.\nNow it\u0026rsquo;s time for a quick test! On Fedora, you can install a tftp client by running dnf install tftp.\n❯ tftp 192.168.10.1 -v -m binary -c get pxeboot mode set to octet Connected to 192.168.10.1 (192.168.10.1), port 69 getting from 192.168.10.1:pxeboot to pxeboot [octet] Received 1074688 bytes in 0.6 seconds [14810621 bit/s] Awesome! Let\u0026rsquo;s make sure we downloaded everything correctly:\n# Check the software downloaded from TFTP ❯ sha256sum pxeboot ef4b7d62d360bd8b58a3e83dfa87f8c645d459340554ce4ad66c0ef341fc3653 pxeboot # Check our original file ❯ sha256sum ~/Downloads/netboot.xyz.efi ef4b7d62d360bd8b58a3e83dfa87f8c645d459340554ce4ad66c0ef341fc3653 /home/major/Downloads/netboot.xyz.efi Now your systems on your local network can PXE boot using netboot.xyz! During the boot routine, you may need to press a key (usually F12, F11, F2, or maybe DEL) to bring up a boot selection menu. Pick the PXE or network boot option (choose IPv4 if asked) and boot!\nYour machine loads the locally downloaded PXE image and then automatically calls out to netboot.xyz for menu selections. Scroll through the menus, choose your image, and enjoy! 🤓\n","date":"2 September 2022","permalink":"/p/pxeboot-netboot.xyz-on-mikrotik-router/","section":"Posts","summary":"Get systems online quickly or rescue a broken system by PXE booting from netboot.xyz using a Mikrotik router. 🛠","title":"PXE boot netboot.xyz on a Mikrotik router"},{"content":"During a mentoring meeting today at work, my mentee asked me how I make time to write blog posts. I hadn\u0026rsquo;t really thought about it before, so I joked that I needed to write a blog post on that. 
That\u0026rsquo;s so meta.\nAfter thinking about it more, a blog post felt like a good idea. Let\u0026rsquo;s get right to it.\nWhy write blog posts anyway? #I\u0026rsquo;ve written about writing1 before, especially about why technical people should write more often. Writing about things you know, things you love, and things you want other people to know has plenty of benefits:\nIt helps you structure your own thoughts about a topic Talking about a topic makes the knowledge more solid in your brain You can link posts to people who want to know more about the topic Eventually people come along, read your posts, and they let you know about it If there\u0026rsquo;s one thing that you should take away from this post, it\u0026rsquo;s this: Write for you \u0026ndash; not for anyone else.\nHere\u0026rsquo;s what I mean by that. Write because it benefits you. Write because it makes you happy. Write because it helps you organize your thoughts. Write because you want to leave a small mark on the world when you\u0026rsquo;re gone.2\nSometimes I write a post and think \u0026ldquo;Nobody will ever read this.\u0026rdquo; I write it anyway.\nHowever, there are those times where you write something and a reader gains something from it. A tiny portion of those readers will send you something about the post. I use this as my fuel to keep going and it serves as a reminder that I\u0026rsquo;m doing something for myself that other people enjoy.\nStructure is everything #I try to follow a formula for most of my posts and it helps me organize my thoughts efficiently. Here\u0026rsquo;s what I do:\nIntroduction: Start with something brief that helps people determine whether they want to read the post or not. The last thing I want to do is waste someone\u0026rsquo;s time. If the topic applies to them or seems interesting, great! If not, they get some time back and they can skip the post.\nWhy: A reader that was hooked on the introduction might be interested in the topic but unsure why it\u0026rsquo;s needed. Explain how you arrived at your decision point. This could uncover a use case the reader never considered, or it might highlight a blind spot in their thought process.\nWhat: Explain how to do something! Walk through each step and take a pause to explain why each step is needed. Highlight optional steps or areas where extra attention and consideration might be required. Diagrams, command line output, and screenshots deliver tons of value here, so don\u0026rsquo;t hold back.\nExtras: You might carry a topic to a certain point, but a reader might want to go farther. Provide links to documentation or point to areas where a reader might want to keep exploring to do more on the topic. Avoid leaving the reader with a dead end.\nEye of the beholder #You may pass up a blog post opportunity because you think everyone knows enough on the topic already. You might do the same if a topic seems to complex.\nMy advice: write it anyway.\nI\u0026rsquo;ve written incredibly detailed posts about kerberos setups and silly simple posts about deleting a single iptables rule. They both get decent web traffic. They were both fun to write.\nLearn from other writers #I read a lot of blogs regularly. Here are some I bloggers I really enjoy reading:\nCaleb Schoepp Florian Haas Jacob Kaplan-Moss Paul Kehrer When I read posts, I think about the topic in the post itself, but I also think about how the author writes. 
I pick out one or two things I really enjoy from their writing and I begin adding those things to my writing habits.\nSome of these habits are complex. Others aren\u0026rsquo;t.\nFor example, Derek Sivers\u0026rsquo; Writing One Sentence Per Line post was incredibly helpful and simple. I\u0026rsquo;ve implemented it in my writing and it makes a huge difference.\n(Try writing one sentence per line. Trust me. It works.)\nOver time, just like everything else, you will find ways to make your writing better while writing more efficiently. Avoid getting discouraged and always keep it fun. Write for you.\nSorry for going meta again.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nI turned 40 today, so please forgive me for thinking about death. 🎂 😂\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"17 August 2022","permalink":"/p/how-i-write-blog-posts/","section":"Posts","summary":"This feels very meta, but I thought it would be a good idea to share my blog post writing process anyway. 📝","title":"How I write blog posts"},{"content":"","date":null,"permalink":"/tags/writing/","section":"Tags","summary":"","title":"Writing"},{"content":"I finished Dr. Jason Fung\u0026rsquo;s The Obesity Code earlier this week and it really made me think about how the nature of food and eating have changed over time. At a high level, the book focuses on not only what we eat, but when we eat.\nThis post includes some of the biggest takeaways that stayed with me after I finished reading it. However, the book has tons of excellent analogies to help you understand complicated systems in the body and their related medical conditions. It also includes results from many studies done over the past 200 years as well as deep dives into where some of our modern nutritional guidance started.\nI highly recommend reading the entire book. Dr. Fung\u0026rsquo;s writing style makes it easy to understand complicated tasks, and if you don\u0026rsquo;t understand something, he reiterates his points in subsequent chapters to drive important messages home. (Love books? Get a copy of Dr. Fung\u0026rsquo;s other excellent book, The Cancer Code.)\nBut before we get into this topic, we need a disclaimer.\nDisclaimer #I\u0026rsquo;m not a medical doctor and I couldn\u0026rsquo;t play one on TV even if I tried. I\u0026rsquo;m just a person trying to improve my own health without landing in a fad diet that doesn\u0026rsquo;t work. I also have nothing to sell to you.\nBefore you embark on any big changes with eating or exercise, talk with a medical professional to ensure you don\u0026rsquo;t have any underlying conditions that would worsen with the changes. But, if your doctor tries to get you onto a low-fat, high-carb diet, I\u0026rsquo;d strongly recommend getting a second opinion. 😉\nWhy this topic? #I started on the keto diet back at the end of 2019 in the hope of improving my own health. My sleep patterns were terrible, I was on blood pressure medication, and my seasonal allergies really slowed me down. My weight wasn\u0026rsquo;t ideal, either.\nIt\u0026rsquo;s August 2022 and I\u0026rsquo;m still going with keto. My weight is down from 200-210 lbs (90.7-95.25kg) to 170-175 lbs (77.1-79.4kg) and that puts me in a healthy zone at 6'1\u0026quot; (1.85m) tall. My blood pressure medicine went away a long time ago, as did most of my allergy medications (prescription and over the counter). 
I\u0026rsquo;m an active runner and I either run or walk about 3.2 miles (5km) daily.\nI feel great.\nHowever, I see constant updates on statistics around obesity in the news and in medical studies. I\u0026rsquo;m eager to find any details out there that reinforce my knowledge on the diet, question my assumptions, or offer me ways to improve it. This book is just one of many I\u0026rsquo;ve read to better understand how my own body works.\nMy takeaways #Just as a reminder, these are only the big things that stuck with me after finishing the book. You should read it to get the full picture since you may be more interested in different topics.\nCaloric restrictions don\u0026rsquo;t work #Many diets focus on reducing calories and they\u0026rsquo;ve been proven to fail over and over again. Dr. Fung provides several studies in the book that show a few different things:\nInitially, you lose weight, but it comes back Your body reduces the caloric output when you reduce the input Keeping a low calorie diet going for long periods of time is extremely difficult Lowering calories doesn\u0026rsquo;t lower insulin levels over the long term He provides a great analogy to help understand how the body handles weight. Everyone has a weight point set in their body, much like a thermostat for an air conditioner. If it\u0026rsquo;s too cold in your room, you could start up a space heater or start the fireplace. The thermostat will notice the temperature increase and turn on the air conditioner. At some point, your space heater burns out while fighting the air conditioner or you run out of firewood.\nThis is how your body works.\nYour body has a set point for weight, like a thermostat, and you must find a way to lower that setting. Dr. Fung notes that insulin levels are a huge factor. If insulin levels remain high for extended periods, the body thinks it needs to take on more fat and store the energy that is coming in. The weight thermostat moves up.\nTurning that thermostat down requires lowering insulin levels over the long term. These levels spike every time we eat, no matter what we eat.\nHow do we lower the thermostat? Keep reading.\nAvoid refined carbohydrates #You probably knew this was coming. Dr. Fung notes that nothing raises insulin levels higher in the body than sugars. However, there are differences based on what you\u0026rsquo;re consuming, whether it\u0026rsquo;s plain table sugar, high fructose corn syrup, or artificial sugars.\nSo much of the wheat we consume today isn\u0026rsquo;t the same as it used to be. Lots of packaged foods contain hidden sugars or so much processing that barely any of the original nutrients remain. Any keto dieter can tell you that removing these foods will kick the keto diet into high gear.\nAs stated earlier, caloric reduction doesn\u0026rsquo;t work. So if you\u0026rsquo;re removing calories from refined carbohydrates, something must fill the gap left behind.\nThat\u0026rsquo;s where an increase in fat and protein helps. Fat and protein do raise insulin levels, like any food, but the levels increase by a much smaller amount. In addition, he cites several studies that show saturated fat levels in the blood remaining relatively steady when people consume high-fat meals.\nI can vouch for this myself. My diet consists of fat as the number one source of calories and my triglycerides look fantastic. 
My A1C is down from where it was before starting keto and my cholesterol numbers are in a normal range.\nThat leads to another good point from the book: 75% of the cholesterol in your blood comes from what your body makes. Only about 25% comes from your diet. But not all cholesterol is the same. You can break the HDL/LDL into many different types and there are types in both categories that are helpful and harmful.\nThink more about when and less about what #Dr. Fung argues that you should focus on when you eat a lot more than about what you eat. (But avoid the refined carbohydrates anyway.)\nBut there\u0026rsquo;s one thing I am sure of: intermittent fasting is my nemesis.\nDr. Fung provides lots of study results that show fasting as a critical part of reducing obesity and keeping insulin levels in check. He even cites a study where someone went on a therapeutic (and medically supervised) fast for 382 days and was totally healthy afterwards. Reading things like that makes it seem less insurmountable.\nThe body knows how to handle a fast and takes many actions to ensure we are performing our best during a fast. Think about early humans. Sometimes they would have a large meal after a hunt but then winter comes and food becomes scarce. When the body goes without food, it begins to think: \u0026ldquo;Hey, we better be smart and be on the lookout for something to eat.\u0026rdquo;\nKeto dieters who intermittently fast will tell you that during their fast, their focus improves. The body makes adjustments to blood glucose levels and overall blood flow to ensure we\u0026rsquo;re operating at our best. If we\u0026rsquo;re not, then we might miss our next meal.\nDr. Fung provides 24-hour or 36-hour fasts as examples. These may be tough for beginners. I\u0026rsquo;ve found that any adjustments I can make, even if it\u0026rsquo;s just skipping a meal, seems to improve how I feel.\nMy current pattern is called the 16/8 because you spend 16 hours fasting with an 8 hour eating window. My window runs from 11AM to 7PM so I can eat lunch and dinner without being tempted to eat late in the evening (which is rough on sleep patterns). If you want to try something simple, try skipping breakfast (which is named as such because it\u0026rsquo;s a break in the fast you have while sleeping). 😉\nGoing forward #My ability to resist refined carbohydrates and high sugar meals is superb right now, but it\u0026rsquo;s taken a long time to get here. I thought this would be the most difficult thing to overcome, but surprisingly, a cake, a dish of crème brûlée, or an ice cream seem unappetizing to me now. (Boy, I never thought I could escape the clutches of a good crème brûlée.)\nHowever, there are two things I don\u0026rsquo;t do well right now:\nI turn to artificial sugars far too often but they raise insulin levels, too (sometimes as much as regular sugar) I still don\u0026rsquo;t drink enough water My plan is to drink more water regularly, especially while fasting, and find a way to reduce my artificial sugar intake a bit more.\nLuckily, a friend told me about r/Hydrohomies on Reddit so I can get my water reminder along with hilarious memes. 🤣 💦\nAs always, if you want any of my perspectives on keto and intermittent fasting so far, ask me anytime.\n","date":"12 August 2022","permalink":"/p/takeaways-from-the-obesity-code/","section":"Posts","summary":"This book teaches you more than dieting \u0026ndash; it changes how you think about food entirely. 
🍽","title":"Takeaways from The Obesity Code"},{"content":"Every Linux user experienced at least one \u0026ldquo;battle of the text editors\u0026rdquo; once in their lifetime. I even participated in a few! Text editors form the foundation of nearly every Linux user\u0026rsquo;s workflow. You need to use one eventually, whether for quick configuration file edits, developing software, or writing blog posts in markdown (like this one)!\nAn older and much wiser Linux engineer told me this early in my career:\nEveryone spends time arguing about the best text editor. Nobody spends time being grateful that we have so many great choices!\nHe was totally right. Sometimes we quickly forget about the benefits of choice in open source software.\nBut before I could say anything else, he said:\nAnd, naturally, emacs is the best editor out there, anyway.\n🤦‍♂️\nWhy migrate to vim? #Visual Studio Code, or vscode, comes from Microsoft and delivers a full-featured editor and IDE with tons of plugins available. It also offers plenty of extensions that enable extra functions for certain file types and merges testing output into the same interface.\nHowever, newer releases performed poorly on my machine. I spent too much time going through extensions to find out which one was causing the performance drops. Then I noticed that my extension usage had gone way overboard and I wasn\u0026rsquo;t vetting new extensions as I should have.\nPrivacy questions came up from time to time, too. Switching to something like vscodium helped with the privacy issues but the slowdowns from extensions came right back.\nWhat really pushed me over the edge was inconsistency.\nWhenever I made edits of system configuration files, wrote comments in git commits, or made a quick change in a text document, I was in vim. I began packaging more software in Fedora and it was much quicker to open a spec file in vim, make commits, and test my changes. My vim configuration file grew as I changed settings to make edits easier and I loaded a few plugins (with vim-plug).\nI found myself opening vscode less and less often. I was gradually improving my skills in vim with visual selections, copy/paste, and moving quickly through files (oh how I love using curly braces to jump between ansible tasks). Did I make mistakes? Oh, yes. And if anyone was watching over my shoulder, it would have been hilarious (for them).\nNew adventures outside my comfort zone are always fun for me, so I embarked on a migration to vim as my full time editor.\nInitial challenges #Old habits die hard.\nMy muscle memory of running code -n . kept kicking in when I went to edit something, so I removed vscode from my system entirely. Just like throwing out the cookies as you start a diet, there\u0026rsquo;s no going back now.\nFile manager #I struggled with replicating the file manager component of vscode that runs down the left side. My dependency on that file list was deep. I stumbled upon a blog post called Oil and vinegar - split windows and the project drawer that argued against some of the file manager drawer designs.\nA friend showed me fzf and the corrsponding vim plugin, fzf.vim. At first, I was totally lost. Then I found a video from samoshkin that explained how to use fzf with the zsh shell as well as vim.\nIt\u0026rsquo;s now a critical part of my workflow in large projects. If I know the filename, I type :Files, press enter, and search for the file. 
If I know what\u0026rsquo;s in the file, I type :Ag, press enter, and type search strings to match files.\nSpell checking #Vim makes it easy to write markdown, but I missed the spell checking in vscode. Luckily vim has built-in spell checking and a friend showed me how to set up a toggle that turns it on and off:\nnoremap \u0026lt;silent\u0026gt;\u0026lt;leader\u0026gt;S :set spell!\u0026lt;CR\u0026gt; The default leader key is the backslash, so I can hit backslash followed by a capital S to enable spell checking. Hitting the same keys (backslash, then capital S) turns the spell checking off. I leave spell checking off by default and enable it just before publishing when I do my proofreading. It reduces distractions while I\u0026rsquo;m writing.\nColors #The solarized-dark theme has been my workhorse for years and it\u0026rsquo;s easy on my eyes. I found some vim examples online and the author was using the nord theme. It\u0026rsquo;s a blend of blues and light greys that gives me a little more contrast while still being easy on my eyes during the workday.\nI started with a few plugins:\ncall plug#begin() Plug \u0026#39;arcticicestudio/nord-vim\u0026#39; Plug \u0026#39;vim-airline/vim-airline\u0026#39; Plug \u0026#39;vim-airline/vim-airline-themes\u0026#39; call plug#end() This adds support for the nord theme for all of vim as well as for vim-airline (an awesome vim status bar plugin). I enabled nord by default as well:\n\u0026#34; Colors colorscheme nord highlight Comment ctermfg=darkgray cterm=italic let g:airline_theme=\u0026#39;nord\u0026#39; The ctermfg-darkgray helps improve contrast for some of the darkest colors and cterm=italic makes comments italicized.\nMy vim configuration #I manage all of my dotfiles with chezmoi and you can get my current .vimrc file in my dotfiles repository.\nTo use my config as-is, follow these steps:\nDownload the file and store it as ~/.vimrc in your home directory. Ensure you have ag, fzf, and ripgrep installed for fuzzy finding. Install vim-plug Open vim, type :PlugInstall, and press enter. Close vim and re-open it. Enjoy! Look for more vim-related posts here as I get more comfortable with vim and find more time-saving ideas. 🤓\n","date":"11 August 2022","permalink":"/p/migrating-from-vscode-to-vim/","section":"Posts","summary":"Some people say I just enjoy the sound of my mechanical keyboard too much. 🤭 I see it as a simpler, more consistent workflow.","title":"Migrating from vscode to vim"},{"content":"","date":null,"permalink":"/tags/terminal/","section":"Tags","summary":"","title":"Terminal"},{"content":"","date":null,"permalink":"/tags/vim/","section":"Tags","summary":"","title":"Vim"},{"content":"","date":null,"permalink":"/tags/vscode/","section":"Tags","summary":"","title":"Vscode"},{"content":"","date":null,"permalink":"/tags/ssh/","section":"Tags","summary":"","title":"Ssh"},{"content":"SSH key authentication makes it easier to secure SSH servers and it opens the door to automation with projects such as Ansible. However, working with encrypted SSH keys becomes tedious when you have several of them for different services. This is where an SSH agent can help!\nBut before we talk about SSH agents:\nYou do, don\u0026rsquo;t you? 🤔\nEncrypting an existing ssh key #I won\u0026rsquo;t tell anyone, but if you happen to have unencrypted SSH keys laying around (which I\u0026rsquo;m sure you don\u0026rsquo;t), I\u0026rsquo;ll give you a quick primer on encrypting them. But you surely won\u0026rsquo;t need these instructions. 
I\u0026rsquo;ll do it anyway.\nHop into your ~/.ssh directory and encrypt the key with ssh-keygen:\n$ cd ~/.ssh $ ssh-keygen -o -p -f my_private_key_filename If the first response from ssh-keygen says Enter new passphrase, then you have an unencrypted key. If it says Enter old passphrase: instead, your private key is already encrypted. You can proceed through the prompts to set the password on the key.\nAgents can help #One thing you quickly notice when you have a fleet of encrypted SSH keys is that you are constantly entering passwords. That\u0026rsquo;s tedious. Instead, an agent helps you enter a password for a key one time per session. Every time to use the key after that first time, the agent steps in to help. 🕵🏻\nIt goes something like this:\nYou ssh to another server using ssh alice@server Your ssh client hands over your public key to the remote server The remote server says \u0026ldquo;Okay, I\u0026rsquo;ve been told this key is from a user I can trust. How about you sign this with your private key so I know it\u0026rsquo;s you?\u0026rdquo; Your ssh client takes the request from the remote server and hands it to the agent The agent signs the message for the client Your client sends the signed message back to the server The server verifies the signature The ssh connection is connected 🎊 There are some really important things to note here:\nThe agent holds the key or certificate in an unencrypted state in memory The agent doesn\u0026rsquo;t write anything to the disk Your password is not stored in memory once the initial decryption with ssh-add is done Communication happens over a Unix socket that is owned by only your user This setup is far better than entering passwords over and over again, but if you forget to use ssh-add before connecting to another server, you can get stuck in a loop like I do:\nOpen ssh connection to a server \u0026ldquo;Darn, I have to put in my password. I should have used ssh-add\u0026rdquo; 🤦🏻‍♂️ Time passes Open ssh connection to another server \u0026ldquo;Oh my gosh, I forgot ssh-add again!\u0026rdquo; 😱 Luckily, there\u0026rsquo;s a better way.\nAdding GNOME Keyring #GNOME Keyring has been around for many years and it provides tons of helpful features. You can store secrets, certificates, and SSH keys in the keyring. The keyring prompts you for a password when you log in to unlock the keyring and it locks again on reboot or shutdown.\nIt also provides ssh-agent functionality with a key difference: when it asks you for your SSH key password one time, it stores it for the next time. That means no ssh-add doom loops like I talked about earlier. You run ssh to connect to a server, get prompted for the key\u0026rsquo;s password, and that\u0026rsquo;s it. That key won\u0026rsquo;t need to be decrypted again as long as your session is active.\nLet\u0026rsquo;s look at the options for gnome-keyring-daemon:\n❯ gnome-keyring-daemon --help Usage: gnome-keyring-daemon [OPTION…] - The Gnome Keyring Daemon Help Options: -h, --help Show help options Application Options: -s, --start Start a dameon or initialize an already running daemon. -r, --replace Replace the daemon for this desktop login environment. -f, --foreground Run in the foreground -d, --daemonize Run as a daemon -l, --login Run by PAM for a user login. 
Read login password from stdin --unlock Prompt for login keyring password, or read from stdin -c, --components=pkcs11,secrets,ssh The optional components to run -C, --control-directory The directory for sockets and control data -V, --version Show the version number and exit. You\u0026rsquo;ll notice that the default set of components includes ssh for the ssh-agent functionality. However, Fedora handles things a little differently by default:\n❯ rpm -ql gnome-keyring | grep user /usr/lib/systemd/user/gnome-keyring-daemon.service /usr/lib/systemd/user/gnome-keyring-daemon.socket ❯ cat /usr/lib/systemd/user/gnome-keyring-daemon.service [Unit] Description=GNOME Keyring daemon Requires=gnome-keyring-daemon.socket [Service] Type=simple StandardError=journal ExecStart=/usr/bin/gnome-keyring-daemon --foreground --components=\u0026#34;pkcs11,secrets\u0026#34; --control-directory=%t/keyring Restart=on-failure [Install] Also=gnome-keyring-daemon.socket WantedBy=default.target The ssh component is missing from the user systemd unit! 😱\nLet\u0026rsquo;s start by copying this unit to our systemd user unit directory so we can modify it:\n❯ mkdir -vp ~/.config/systemd/user/ ❯ cp /usr/lib/systemd/user/gnome-keyring-daemon.service ~/.config/systemd/user/ Open ~/.config/systemd/user/gnome-keyring-daemon.service in your favorite text editor and add ssh to the --components argument so it looks like this:\n❯ grep components ~/.config/systemd/user/gnome-keyring-daemon.service ExecStart=/usr/bin/gnome-keyring-daemon --foreground --components=\u0026#34;pkcs11,secrets,ssh\u0026#34; --control-directory=%t/keyring Let\u0026rsquo;s reload the systemd user units and start the service:\n❯ systemctl daemon-reload --user ❯ systemctl enable --now --user gnome-keyring-daemon ❯ systemctl status --user gnome-keyring-daemon ● gnome-keyring-daemon.service - GNOME Keyring daemon Loaded: loaded (/home/major/.config/systemd/user/gnome-keyring-daemon.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2022-08-05 10:34:55 CDT; 4h 2min ago TriggeredBy: ● gnome-keyring-daemon.socket Main PID: 63261 (gnome-keyring-d) Tasks: 5 (limit: 38353) Memory: 2.9M CPU: 533ms CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/gnome-keyring-daemon.service ├─ 63261 /usr/bin/gnome-keyring-daemon --foreground --components=pkcs11,secrets,ssh --control-directory=/run/user/1000/keyring └─ 63789 /usr/bin/ssh-agent -D -a /run/user/1000/keyring/.ssh Aug 05 10:34:55 amdbox systemd[4324]: Started gnome-keyring-daemon.service - GNOME Keyring daemon. Aug 05 10:34:55 amdbox gnome-keyring-daemon[63261]: GNOME_KEYRING_CONTROL=/run/user/1000/keyring Aug 05 10:34:55 amdbox gnome-keyring-daemon[63261]: SSH_AUTH_SOCK=/run/user/1000/keyring/ssh Awesome! There\u0026rsquo;s only one last step. The SSH client needs to know where to look for the agent socket. The last line of the status output line shows the answer: SSH_AUTH_SOCK=/run/user/1000/keyring/ssh\nOpen your ~/.bashrc or ~/.zshrc (or whatever you use for your shell) and add this line:\nexport SSH_AUTH_SOCK=/run/user/1000/keyring/ssh Open a new terminal or reload your shell with source ~/.bashrc or source ~/.zshrc. Run ssh to connect to a server with an encrypted key and you should get a password prompt like this one:\nExtra credit #GNOME Keyring \u0026ldquo;just works\u0026rdquo; for 99% of my tasks, but sometimes I want to adjust a key or read a secret quickly. For that, give Seahorse a try. 
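One more bit of extra credit first: it is easy to confirm that the keyring's agent is really the one answering your ssh client. This is just a hedged sanity check that reuses the socket path from the status output above (your UID may differ from 1000):
$ export SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
$ ssh-add -l    # lists the keys the agent currently holds, or reports that it has no identities
Seahorse itself is a quick sudo dnf install seahorse away.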
It\u0026rsquo;s a graphical application that gives you access to everything GNOME Keyring stores and you can quickly lock your keyring at any time. The Arch documentation on GNOME Keyring also has plenty of tips for more automation and how to handle corner cases.\n","date":"5 August 2022","permalink":"/p/use-gnome-keyring-with-sway/","section":"Posts","summary":"Add encrypted ssh keys to your workflow more efficiently with gnome-keyring in the sway window manager.","title":"Use GNOME Keyring with Sway"},{"content":"","date":null,"permalink":"/tags/communication/","section":"Tags","summary":"","title":"Communication"},{"content":"","date":null,"permalink":"/tags/devops/","section":"Tags","summary":"","title":"Devops"},{"content":"","date":null,"permalink":"/tags/engineering/","section":"Tags","summary":"","title":"Engineering"},{"content":"You discovered a problem at work. If left unchecked, the problem could affect customers and impact revenue. You cannot ignore it.\nWhat now? Who do you tell? Will they listen? Better yet, will they understand?\nI find myself in situations like these constantly. My roles over the years involved problems that demanded discussion, thought, and solutions. Some problems were simple but others required complicated fixes that took months or years.\nCommunicating with other people about complex problems remains a challenge for me, but I learned a new tool that helps me kick off these discussions and share my recommended solutions as efficiently as possible. The SBAR technique shows up frequently in medical settings but it also works well in IT.\nThis post covers the nuts and bolts of the SBAR format, how to use it to your advantage, and how to communicate clearly with it.\nComponents #Every SBAR contains four components. Let\u0026rsquo;s go through each one.\nSituation #Hook the reader with an explanation of the events happening right now. Save the backstory for later and focus on the shortest possible list of current events.\nLet\u0026rsquo;s use a favorite example that every system administrator can relate to: a server is down. Your situation section might look something like this:\nSituation\nWeb01 stopped responding at 3PM today and disrupted web traffic to our website at example.com. We cannot process orders online and customers cannot browse our list of products. The marketing team must update the website by the end of the day with information about next week\u0026rsquo;s trade show.\nThe situation section includes several critical pieces of information:\nWhat is happening right now? Who is affected by the problem right now? How does the problem affect people right now? Do you see a pattern? Do you see it right now? This section demands short sentences, active language (more on that later), and a focus on right now. Your efforts here charge up the reader to continue into the next section which is often the longest.\nBackground #Now that you won your reader\u0026rsquo;s attention, start giving some backstory. Let\u0026rsquo;s continue our example above about our unresponsive server:\nBackground\nOur datacenter technicians replaced the power supply on web01 last week following a power surge. They found a broken case fan after replacing the power supply and they replaced that as well. Fred called the server vendor about the failures and they warned us that we may have a damaged motherboard following the power surge. 
Jennifer ordered a replacement motherboard which arrives this Friday.\nThe server crashed abruptly and rebooted two more times since the maintenance. Normally the server rebooted without any problems, but it did not come back online after today\u0026rsquo;s crash.\nThis section adds color and detail about the events that led up to the current situation. It answers that question that executives enjoy asking: \u0026ldquo;How did we get here?\u0026rdquo;\nAvoid two main pitfalls here:\nStick to facts, not opinions or your intepretation. (That\u0026rsquo;s the next section!) Keep the facts pertinent and relevant. If the fact doesn\u0026rsquo;t help directly explain the current situation at hand, leave it out. Toss extra details into an appendix if needed. The reader now understands the events happening right now and they know how we got here. Now your experience and expertise comes into play as you bring all of the data together.\nAssessment #This remains the toughest section for me with any SBAR. You must thread the needle of tying all the facts together without making a recommendation to the reader.\nImagine the last action movie you saw where the villain has the ability to do something terrible. After the main character explains what the villian has, how dangerous it is, and how vulnerable the good guys are, everyone looks at them and asks: \u0026ldquo;Okay, give it to me straight: How bad is this?\u0026rdquo;\nGoing back to our server down example:\nAssessment\nWe have very low confidence that web01 can host our website reliably at this critical time for our company. The power surge likely damaged the motherboard, but the replacement does not arrive for three more days. While we hoped to have web02 ready for production this week, it still needs more work.\nTake all the facts you have and share your thoughts on where they stand. Readers without your experience or level of familiarity with the situation appreciate this section since it ties the first two sections together. Clearly state the root cause of the current situation, the severity, and the impact if left unresolved. Your goal here is to get the reader to ask the right question for the next section: \u0026ldquo;What do we do now?\u0026rdquo;\nRecommendation #Now it\u0026rsquo;s time to plot a course forward out of the situation and back to happier times. Make clear recommendations that include a what, why, and how. Assume that every recommendation must survive cross-examination from other engineers and executives.\nOur recommendations for fixing our server could include:\nRecommendation\nDeploy a temporary web server at our public cloud provider to bridge the gap while web01 is awaiting a hardware replacement. Our deployment scripts already work in cloud instances and the our cost for the temporary instance is within our IT budget. Finish provisioning web02 and configure high availability services so that it can run in active-active mode with web01 as soon as the replacement motherboard arrives. Repair web01 on Friday and run burn-in tests during the week of the trade show to ensure it operates reliably under load. Schedule a maintenance after the trade show to tie web01 and web02 together as an H/A pair and move the website traffic back from the cloud provider. Order a spare parts package from the server vendor to have on hand for future issues with either server. Be specific and detailed here. Keep the following things in mind:\nThink about potential objections (time, cost, etc) and address those in the recommendation itself. 
Consider work items beyond the current situation to prevent it from happening again later. If you have a team working on it, consider multiple simultaneous workstreams and what you could accomplish with each. Your reader wants to know what to do, so don\u0026rsquo;t be bashful here. You are an expert and you know the situation well. Tell the reader what must be done, why it must be done, and how to do it (at a high level).\nExtra credit #Now that you know all the sections of an SBAR, you\u0026rsquo;re ready to write one! I learned plenty of tips along the way1 and these should help you improve your SBAR skills.\nUse active language #The SBAR format provides an efficient method for communicating complex problems and recommendations with other people. Using active language keeps the readers attention and allows you to speak more directly and forcefully.\nTake the following example:\nThe server is down because a cart was rolling down the hallway and it hit the server after a datacenter technician forgot to set the brake.\nUgh. That\u0026rsquo;s a mouthful. How about this instead:\nA datacenter technician left a cart unattended on the ramp without setting the brake. The cart rolled down and knocked the server off the rack, taking it offline.\nActive language keeps the reader\u0026rsquo;s attention by keeping the subject followed by a verb. There\u0026rsquo;s no question of who did what and how that caused a problem. Remove any passive voice that you find and look for sentences without a subject followed by a verb.\nUse an appendix #Sometimes an assessment leads to multiple options for a solution. Add these potential solutions to an appendix and allow readers to review those if they need more detail. This reduces clutter in the main part of the SBAR but allows detail-oriented readers to pull out more context without bothering you.\nAppendixes are great places for charts, diagrams, and command line output that reinforce your recommendations.\nKeep it collaborative #As you write the SBAR, invite others to collaborate with you. My company uses Google Docs heavily and it allows me to bring in more contributors to improve various parts of the document.\nThis also helps when you\u0026rsquo;re ready to present your work. Readers can comment and ask questions right in the document. You can add extra details or answer the questions right in the SBAR document.\nWrapping up #The SBAR process offers you a great opportunity to improve your communication skills as an engineer, especially when you communicate with less technical people. You can learn what motivates different people within your organization and tailor your communication so that it matches the things they care about.\nThese soft skills can also take your career to the next level. Highly technical people who communicate well about complex topics are always in demand.\nI learned tips from making mistakes. 🤭\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"2 August 2022","permalink":"/p/raise-the-bar-with-an-sbar/","section":"Posts","summary":"Efficiently communicate a problem and your recommendation in record time with an SBAR. 📝","title":"Raise the bar with an SBAR"},{"content":"Every great thing has its end, and the extra services I launched along with icanhazip.com are no exception. 
I started icanhazip.com way back in 2009 and detailed much of the history when I transferred ownership to Cloudflare.\nThe extra services, such as icanhazptr.com, icanhaztrace.com, and icanhaztraceroute.com, came online in 2013 and they weren\u0026rsquo;t part of the Cloudflare transfer. These services add extra challenges since they need IPv6 connectivity and they don\u0026rsquo;t play well with containers. Relative to icanhazip.com, these services receive very little traffic.\nAs much as I\u0026rsquo;d like to keep running these sites, the extra services will go offline on August 17, 2022.\nAnd if you can\u0026rsquo;t live without it #Still need PTR record lookups and traceroutes on your network? All of the code is on GitHub in major/icanhaz. To run it, simply execute the icanhaz.py script on your machine.\nYou can also use gunicorn with a command like this one:\ngunicorn icanhaz:app You can also get very fancy with a systemd unit that exposes a UNIX socket:\n[Unit] Description=Gunicorn instance to serve icanhaz After=network.target [Service] User=nginx Group=nginx WorkingDirectory=/opt/icanhaz ExecStart=/usr/bin/gunicorn --workers 4 --bind unix:icanhaz.sock -m 007 icanhaz:app [Install] WantedBy=multi-user.target And then configure nginx to serve traffic from the socket:\nserver { listen 80; listen [::]:80; server_name _; root /usr/share/nginx/html; location / { proxy_set_header Host $http_host; proxy_pass http://unix:/opt/icanhaz/icanhaz.sock; } } Thanks for all the support over the last 13 years! 🫂\n","date":"28 July 2022","permalink":"/p/extra-icanhaz-services-going-offline/","section":"Posts","summary":"The original icanhazip.com lives on, but the other services are going offline. 😢","title":"Extra icanhazip services going offline"},{"content":"","date":null,"permalink":"/tags/icanhazip/","section":"Tags","summary":"","title":"Icanhazip"},{"content":"I recently moved over to the Sway window manager (as I mentioned in my last post) and it runs on Wayland. That means bidding farewell to X. Although this is a step forward, it caused some of my workflows to break.\nMy original post about my efficient emoji workflow inspired many people to give it a try. Everything was great until I moved to Wayland and suddenly rofimoji stopped pasting emojis on demand. 😱\nSo is this a big deal or something? #Well, yes.\nSome people would shrug this off and go on about their day. But wait \u0026ndash; emojis are core to my workflow. Sure, I use them liberally in my communications via chat or email, but I also sprinkle them in various places to ensure applications handle unicode characters properly.\nWhen I worked on the Continuous Kernel Integration (CKI) team at Red Hat, we wanted to send concise and informative emails in plain text. Our team added colorful emoji symbols, reduced the length of our emails, and won praise (and some consternation) from kernel developers. We even named our releases using emojis for a while (the first one was 🐣). 🤭\nEmojis in wayland #In Fedora, we need some packages:\n$ sudo dnf install rofimoji wl-clipboard wtype Why do we need these?\nrofimoji pops up an emoji picker in rofi where you can quickly search for emojis wl-clipboard gives you the wl-copy and wl-paste tools, similar to xclip from X wtype replaces xdotool from X Sway is heavily keyboard driven and that\u0026rsquo;s why I love it. More typing and less mouse. We need a keyboard shortcut for rofimoji. I use MOD-D for regular rofi, so I chose to use MOD-E for rofimoji. 
(MOD is likely the key on your keyboard with the Windows logo on it.)\nOpen your sway configuration file (usually ~/.config/sway/config) and add a line:\nbindsym $mod+e exec ~/bin/launch_rofimoji Save the shell script to ~/bin/launch_rofimoji:\n#!/bin/bash # Determine which output is currently active (where the mouse pointer is). 🤔 MONITOR_ID=XWAYLAND$(swaymsg -t get_outputs | jq \u0026#39;[.[].focused] | index(true)\u0026#39;) # Let\u0026#39;s pick our emojis! 🎉 rofimoji --action type --skin-tone light \\ --selector-args=\u0026#34;-theme solarized -font \u0026#39;Hack 12\u0026#39; -monitor ${MONITOR_ID}\u0026#34; Ensure the script is executable:\n$ chmod +x ~/bin/launch_rofimoji At this point, you can reload sway\u0026rsquo;s configuration with MOD+SHIFT-C. After a brief screen flicker, try the MOD+E keyboard combination and rofimoji should appear! Search for your favorite emoji, press enter, and enjoy! 🍰\nCaveats #This script works well for me terminals, Visual Studio Code, Firefox, and most GTK-based applications. However, I still have issues using it with Electron based applications (such as Slack). If I bind a sway key combination to something super simple, like wtype banana, I end up with numbers pasted into Electron applications. That\u0026rsquo;s something I am still working to solve1.\nIf you run into issues where emojis don\u0026rsquo;t appear, or you have unusual carriage returns after your emojis, you may want to remove --action type and try --action copy or --action clipboard. These different actions use different methods for copying and pasting emojis into your applications. You can stack actions separated by spaces, such as --action clipboard type. You may need to experiment with these to figure out what works on your machine.\nStill having trouble? Consider using clipman. It\u0026rsquo;s a more robust clipboard for wayland and you can start it automatically in sway:\nexec wl-paste -t text --watch clipman store --no-persist You can query clipman history as well. This could help you determine what\u0026rsquo;s not being copied across correctly from rofimoji.\nMy workaround is to use Slack in Firefox. I\u0026rsquo;d love to just stop using Slack altogether, but I don\u0026rsquo;t have a choice. 😭\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"27 May 2022","permalink":"/p/efficient-emoji-experience-wayland/","section":"Posts","summary":"Because nobody wants an inefficient emoji workflow. 🙈","title":"Efficient emoji experience in Wayland"},{"content":"","date":null,"permalink":"/tags/emojis/","section":"Tags","summary":"","title":"Emojis"},{"content":"","date":null,"permalink":"/tags/swaywm/","section":"Tags","summary":"","title":"Swaywm"},{"content":"My workday takes me from email to terminals to browsers to documents. I love tiling window managers because they keep me organized and less distracted. Many are less resource-intensive as well.\nAlthough i3 has graced my displays for years now (and I\u0026rsquo;ve written many posts about it), I recently picked up an AMD graphics card and made my way to sway.\nThe biggest difference between the two is that sway runs on Wayland rather than X. Sway has control over how nearly everything works, such as input devices, displays, backgrounds, and more. 
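To give a feel for what that looks like, here is a small illustrative fragment of a sway config; the output name DP-1, the resolution, and the wallpaper path are assumptions for the example, not lines from my actual config:
# Displays, wallpapers, and input devices all live in one config file
output DP-1 resolution 2560x1440 position 0 0
output * bg ~/Pictures/wallpaper.png fill
input type:touchpad natural_scroll enabled
No xrandr, no feh, no xinput.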
If you were frustrated with lots of hacks in i3 for various X-related things, you might enjoy sway.\nBut then stuff breaks #Much of my early experience with sway involves this process:\nChange a configuration in sway 1.7.1 Reload the configuration live Scrunch my face and say \u0026ldquo;Well, that\u0026rsquo;s not quite right.\u0026rdquo; Read the docs Go to step #1 😉 Fortunately, sway puts up with my constant reloads of the configuration until one day when I changed lots of configuration items and suddenly every reload caused Firefox to crash. My primary version of Firefox is the Developer Edition (currently 101.0b9), but I also keep a stable version of Firefox (version 100) around that Fedora provides.\nI started up the default Fedora version of Firefox, reloaded the configuration in sway, and the stable Firefox crashed, too. 🔥\nNothing appeared on the screen when Firefox crashed. The window just disappeared and my terminal filled up the screen where Firefox was. I could reproduce the crash multiple times with both versions of Firefox. The crash kept occurring even after a reboot.\nWhen I ran either version of Firefox from the command line, I finally got some error output:\nLost connection to Wayland compositor. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Hmm, it looks like Firefox can\u0026rsquo;t talk to Sway for a moment and that might cause the crash. But why did this suddenly start happening?\nWorking backwards #I gradually backed out the recent sway configuration changes and finally found the configuration involved in the crash:\ninput * { # Enable numlock when sway starts. xkb_numlock enable # Set up compose keys. xkb_options compose:rctrl } If I removed that section, Firefox stopped crashing on sway config reloads. If I put it back in, Firefox crashed.\nI tried removing only the xkb_numlock line. Firefox crashed. The same thing happened when I removed only the xkb_options line.\nSway\u0026rsquo;s documentation notes that you can apply input configuration to all devices with input *, so the configuration was valid.\nSearch time #Armed with an error message and a method for reproducing my crash, I set off to use the most powerful system administrator tool on the market: Google. 😜\nI landed on Mozilla Bug 1652820 where other Firefox users noted that sway config reloads caused crashes for them, too. A user noted further down in the bug that if they removed the * from their input configuration and specified the actual device identifier, the problem went away.\nFixing the crash #I ran back and looked at my problematic configuration:\ninput * { xkb_numlock enable xkb_options compose:rctrl } Now the big question: how do I identify my keyboard? 🤔\nAs I mentioned before, sway controls everything, including input devices. You can query about things that sway knows by using swaymsg and it automatically dumps data in JSON format if you use a pipe:\n$ swaymsg -t get_inputs | jq -r \u0026#39;.[].identifier\u0026#39; | grep -i keyboard 1241:662:USB-HID_Keyboard 1241:662:USB-HID_Keyboard_Mouse 1241:662:USB-HID_Keyboard_Consumer_Control 1241:662:USB-HID_Keyboard_System_Control 1241:662:USB-HID_Keyboard My keyboard is identified as 1241:662:USB-HID_Keyboard according to sway. 
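(If that grep turns up several similar identifiers, asking jq for a little more context makes it easier to pick the right device. A small variation on the same query, using the identifier, name, and type fields from get_inputs:)
$ swaymsg -t get_inputs | jq -r '.[] | select(.type == "keyboard") | "\(.identifier) \(.name)"'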
I updated my input configuration to specify the exact device:\ninput \u0026#34;1241:662:USB-HID_Keyboard\u0026#34; { xkb_numlock enable xkb_options compose:rctrl } I reloaded the sway configuration, started Firefox, reloaded the configuration once more, and Firefox was still running. 🎉\n🐙 Have multiple keyboards or input devices? Other users noted in the bug report that the crash will happen again (even with specific identifiers) if you have two keyboards and both keyboards are connected. If you run into this problem, another user shared their workaround using swaymsg called via exec.\n","date":"24 May 2022","permalink":"/p/sway-reload-causes-firefox-crash/","section":"Posts","summary":"Reload your sway config without disrupting Firefox. 🔥","title":"Sway reload causes a Firefox crash"},{"content":"This is my third post about Image Builder, so I guess you could say that I enjoy using it1. It\u0026rsquo;s a great way to define a custom cloud image, build it, and (optionally) ship it to a supported cloud provider.\nThis post covers how to build a customized CentOS Stream 9 image along with a custom repository for additional packages. In this case, that\u0026rsquo;s Extra Packages for Enterprise Linux (EPEL).\nWhy do I need a custom image anyway? #Building your own image empowers you to choose which packages you want, which services run at boot time, and where you deploy your image. Some cloud providers may not have an image from the Linux distribution you enjoy most, or they might have an image with the wrong package set.\nSome cloud providers build images with too many packages or too few. Sometimes they add configuration that doesn\u0026rsquo;t exist in the OS itself. I\u0026rsquo;ve even found some that alter cloud-init and force you to log in directly as the root user. 😱\nI enjoy building my own images so I know exactly what it contains and I know that the configuration came from the OS itself.\nFirst steps #This post uses CentOS Stream 9 as an example. You will need a physical host, virtual machine, or cloud instance running CentOS Stream 9 first. We start by installing some packages:\n$ sudo dnf install osbuild-composer weldr-client What do these packages contain?\nosbuild-composer ensures you have osbuild, the low-level image build component, along with configuration and an osbuild-composer2 worker that builds the image. weldr-client contains the composer-cli command line tool that makes it easy to interact with osbuild-composer One nice thing about this stack is that it starts via systemd\u0026rsquo;s socket activation and it only runs when you query it. Let\u0026rsquo;s start the socket now and ensure it comes up on a reboot:\n$ sudo systemctl enable --now osbuild-composer.socket Verify that the API is responding:\n$ composer-cli status show API server status: Database version: 0 Database supported: true Schema version: 0 API version: 1 Backend: osbuild-composer Build: NEVRA:osbuild-composer-46-1.el9.x86_64 Adding EPEL #CentOS Stream 9 has most of the packages I want, but I really love this program called htop that displays resource usage and allows you to introspect certain processes or namespaces easily. This package is only available in the EPEL repository, so we need to add that one to our list of enabled repositories for image builds.\nosbuild-composer comes with its own set of repositories in the package and does not use the system\u0026rsquo;s repositories. 
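Those definitions ship as JSON files inside the package, so you can peek at them directly if you are curious. The paths below are from my memory of the package layout, so treat this as a hedged pointer rather than gospel:
$ ls /usr/share/osbuild-composer/repositories/    # distro repository definitions shipped with the package
$ ls /etc/osbuild-composer/repositories/          # optional local overrides, if you have created any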
You can list all the enabled repositories that it knows about:\n# composer-cli sources list AppStream BaseOS RT If we want to add EPEL, we can dump the configuration from one of these to a file and edit it:\n# composer-cli sources info AppStream | tee epel.ini check_gpg = true check_ssl = true id = \u0026#34;AppStream\u0026#34; name = \u0026#34;AppStream\u0026#34; rhsm = false system = true type = \u0026#34;yum-baseurl\u0026#34; url = \u0026#34;https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/AppStream/x86_64/os/\u0026#34; After editing the EPEL repository file, it should look like this:\ncheck_gpg = true check_ssl = true id = \u0026#34;EPEL9\u0026#34; name = \u0026#34;EPEL9\u0026#34; rhsm = false system = false type = \u0026#34;yum-baseurl\u0026#34; url = \u0026#34;https://mirrors.kernel.org/fedora-epel/9/Everything/x86_64/\u0026#34; You can use any mirror you prefer, but the kernel mirrors are very fast for me from most locations. Now we need to add this repository:\n# composer-cli sources add epel.ini # composer-cli sources list AppStream BaseOS EPEL9 RT We now have EPEL9 in our list. 🎉\nDefine our image #All image definitions, or blueprints, are in TOML format. Here\u0026rsquo;s my simple one for this post:\n# Save this file as image.toml name = \u0026#34;centos9\u0026#34; description = \u0026#34;Major\u0026#39;s awesome CentOS 9 image\u0026#34; version = \u0026#34;0.0.1\u0026#34; [[packages]] name = \u0026#34;tmux\u0026#34; [[packages]] name = \u0026#34;vim\u0026#34; # This is the one that comes from EPEL. [[packages]] name = \u0026#34;htop\u0026#34; Now we push our blueprint and solve the dependencies to ensure we added our EPEL repository properly:\n# composer-cli blueprints push image.toml # composer-cli blueprints depsolve centos9 -- SNIP -- 2:vim-filesystem-8.2.2637-16.el9.noarch which-2.21-27.el9.x86_64 xz-5.2.5-7.el9.x86_64 xz-libs-5.2.5-7.el9.x86_64 zlib-1.2.11-33.el9.x86_64 htop-3.1.2-3.el9.x86_64 And there\u0026rsquo;s htop at the end of the list! 🎉\nMake the image #The fun part has arrived! Let\u0026rsquo;s build an image:\n# composer-cli compose start centos9 ami --size=4096 Compose ca57fd64-11ea-41d4-b924-9b8f5bdcaf5e added to the queue This command does a few things:\nStarts an image build with our centos9 blueprint (from the name section of my image.toml file) Outputs an image type that works well on AWS (Amazon Machine Image, or AMI) Limits the image size to 4GB (be sure this is not too large for your preferred instance size) 🤔 Note that you do not need to set the size explicitly here, but I do it as a good measure. When your instance boots, cloud-init runs growpart to expand the storage to fit the disk size in your cloud instance.\n💣 However, growpart will not shrink the disk at boot time. If you choose a size that is larger than the disk space in your cloud instance, you will likely see an error at provisioning time.\nLet\u0026rsquo;s check the status after a few minutes:\n# composer-cli compose status ca57fd64-11ea-41d4-b924-9b8f5bdcaf5e FINISHED Fri May 6 16:16:08 2022 centos9 0.0.1 ami 4096 If you want to get a copy of the image and import it yourself into your favorite cloud, you can do that now:\n# composer-cli compose image ca57fd64-11ea-41d4-b924-9b8f5bdcaf5e ca57fd64-11ea-41d4-b924-9b8f5bdcaf5e-image.raw # ls -alh ca57fd64-11ea-41d4-b924-9b8f5bdcaf5e-image.raw -rw-------. 1 root root 2.7G May 6 16:20 ca57fd64-11ea-41d4-b924-9b8f5bdcaf5e-image.raw You can also let osbuild-composer do this for you! 
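The short version, heavily hedged because the exact invocation and field names can shift between osbuild-composer releases: you pass an extra image name plus an upload profile written in TOML when you start the compose. Every value below is a placeholder:
# aws-upload.toml (placeholder values, check the docs for your version)
provider = "aws"
[settings]
accessKeyID = "REPLACE_ME"
secretAccessKey = "REPLACE_ME"
bucket = "my-image-bucket"
region = "us-east-1"
key = "centos9-custom"
Then start the compose with the image name and profile tacked on:
# composer-cli compose start centos9 ami centos9-custom aws-upload.toml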
I have one post on this blog about automatically uploading to AWS and another post on the Red Hat blog about doing the same with Azure.\nI once worked on the team that makes Image Builder happen, so I may be a little bit biased. Enjoy the post anyway. 😉\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nMost people at Red Hat just call it composer, but I\u0026rsquo;ll use the full name here to avoid confusion.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"6 May 2022","permalink":"/p/build-custom-centos-stream-cloud-image/","section":"Posts","summary":"Learn how to customize a CentOS Stream 9 cloud image with the stuff you want and nothing that you don\u0026rsquo;t. 📦","title":"Build a custom CentOS Stream 9 cloud image"},{"content":"","date":null,"permalink":"/tags/epel/","section":"Tags","summary":"","title":"Epel"},{"content":"","date":null,"permalink":"/tags/imagebuilder/","section":"Tags","summary":"","title":"Imagebuilder"},{"content":"Basic access authentication dates back to 1993 and it\u0026rsquo;s still heavily used today. The server provides a WWW-Authenticate header to the client and the client responds with an Authorization header and a base64-encoded (not encrypted) string to authenticate. When done over a secure TLS connection, this method of authentication works well.\nTraefik is an application proxy that takes requests from clients and routes them to different backends. You can use it by itself, in conjunction with Docker, or in a kubernetes deployment. I love it because it gets most of its information and configuration details from the environment around it. I don\u0026rsquo;t have to tell Traefik where my services are. It knows where they are based on the resources I add in kubernetes.\nIn this post, I\u0026rsquo;ll explain how to add kubernetes resources that allow Traefik to handle basic authentication for backend applications. This particular example covers authentication for Traefik\u0026rsquo;s dashboard. The dashboard displays lots of helpful diagnostic information about routing and services that helps you troubleshoot configuration errors.\nOn the other hand, this information is also quite useful to attackers and it\u0026rsquo;s a good idea to keep it hidden away. 🕵🏻\nImportant ingress #Kubernetes aficionados know the ingress resource type well. It\u0026rsquo;s a resource that signals to a load balancer how it should route traffic within the cluster. Here\u0026rsquo;s an example:\napiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress spec: rules: - host: \u0026#34;foo.bar.com\u0026#34; http: paths: - pathType: Prefix path: \u0026#34;/bar\u0026#34; backend: service: name: service1 port: number: 80 This ingress takes traffic to foo.bar.com underneath the URI /bar and sends it to the service service1 on port 80. Most kubernetes load balancers can take this information and begin routing requests.\nTraefik takes this up a notch with the IngressRoute resource. This CRD is Traefik-specific, but it makes configuration easier:\napiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: ingressroutetls namespace: default spec: entryPoints: - websecure routes: - match: Host(`your.example.com`) \u0026amp;\u0026amp; PathPrefix(`/tls`) kind: Rule services: - name: whoami port: 80 tls: certResolver: myresolver We now have access to specify which Traefik entry points to use (these are ports that Traefik listens to) as well as a certificate resolver of some sort. 
We can match the host header and URI prefix on the same line with complex rules and send the traffic to the whoami service on port 80.\nOur goal from here on out will be to:\nAdd basic authentication to the traefik dashboard Enable the traefik dashboard so we can reach it from the outside Enable authentication for the dashboard #I\u0026rsquo;m using flux to manage my kubernetes cluster (read more on that in yesterday\u0026rsquo;s post) and I\u0026rsquo;m using its HelmRelease resource type to deploy Traefik. You can follow along with the files in my gitops-ng repository.\nFirst, we need a namespace in namespace.yaml:\napiVersion: v1 kind: Namespace metadata: name: traefik Let\u0026rsquo;s deploy Traefik using a HelmRelease resource and store this in release.yaml:\napiVersion: helm.toolkit.fluxcd.io/v2beta1 kind: HelmRelease metadata: name: traefik namespace: traefik spec: interval: 5m timeout: 20m install: crds: CreateReplace upgrade: crds: CreateReplace chart: spec: chart: traefik version: \u0026#34;10.19.4\u0026#34; sourceRef: kind: HelmRepository name: traefik namespace: flux-system # https://github.com/traefik/traefik-helm-chart/blob/master/traefik/values.yaml values: ports: web: redirectTo: websecure I\u0026rsquo;m specifying that I want to install Traefik\u0026rsquo;s helm chart (version 10.19.4) into the traefik namespace and I want to update the CRDs (which gives me the IngressRoute resource type) on installation and during updates. At the end, I specify that all non-secure HTTP requests on port 80 should be redirected to a TLS connection.\nNow we need to set up the dashboard by creating a dashboard.yaml file:\n--- apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: dashboard-ingress-auth namespace: traefik spec: basicAuth: secret: dashboard-auth-secret removeHeader: true --- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: traefik-dashboard namespace: traefik spec: entryPoints: - websecure routes: - match: Host(`traefik.example.com`) kind: Rule middlewares: - name: dashboard-ingress-auth namespace: traefik services: - name: api@internal kind: TraefikService tls: certResolver: letsencrypt The Middleware resource specifies that I want basic authentication using the dashboard-auth-secret secret (which we will create momentarily). The IngressRoute specifies that all traffic coming to the websecure TLS-secured frontend that is destined for traefik.example.com should be redirected to the internal Traefik dashboard api@internal. Before that happens, the dashboard-ingress-auth middleware must be applied.\n👀 In my case, I have a certificate resolver called letsencrypt already configured. This is outside the scope of this post, but the ACME docs have good examples of HTTP and DNS validation for certificates from LetsEncrypt.\nNow we reach the tricky part. We need to create a username and password combination for basic authentication and store it in a secret. 
The easiest method here is to use htpasswd, create a secret from the file it creates, and then encrypt the file with SOPS:\n$ htpasswd -nB secretuser | tee auth-string New password: Re-type new password: secretuser:$2y$05$W4zCVrqGg8wKtIjOAU.gGu8MQC9k7sH4Wd1v238UfiVuGkf0xfDUu $ kubectl create secret generic -n traefik dashboard-auth-secret \\ --from-file=users=auth-string -o yaml --dry-run=client | tee dashboard-auth-secret.yaml apiVersion: v1 data: users: c2VjcmV0dXNlcjokMnkkMDUkVzR6Q1ZycUdnOHdLdElqT0FVLmdHdThNUUM5azdzSDRXZDF2MjM4VWZpVnVHa2YweGZEVXUKCg== kind: Secret metadata: creationTimestamp: null name: dashboard-auth-secret namespace: traefik $ sops -e --in-place dashboard-auth-secret.yaml Add all of the files we created into your flux repository or apply them with kubectl apply -f (and consider embracing gitops later).\nAccess your Traefik dashboard URL and you should see a basic authentication prompt. Enter the credentials you set with htpasswd and you should see your Traefik dashboard!\n","date":"20 April 2022","permalink":"/p/basic-auth-with-traefik-on-kubernetes/","section":"Posts","summary":"Keep prying eyes away from your sites behind Traefik with basic authentication. 🛃","title":"Basic authentication with Traefik on kubernetes"},{"content":"","date":null,"permalink":"/tags/flux/","section":"Tags","summary":"","title":"Flux"},{"content":"","date":null,"permalink":"/tags/gitops/","section":"Tags","summary":"","title":"Gitops"},{"content":"","date":null,"permalink":"/tags/radio/","section":"Tags","summary":"","title":"Radio"},{"content":"","date":null,"permalink":"/tags/traefik/","section":"Tags","summary":"","title":"Traefik"},{"content":"I\u0026rsquo;m an amateur radio operator licensed in the United States under the callsign W5WUT (formerly KG5VYL). The radio bug bit me back in 2017 after attending the Overland Expo in North Carolina.\nOur society depends on communication to survive. Amateur radio fills critical gaps during emergencies and it\u0026rsquo;s a great hobby to learn more about radios, electronics, computers, and just communication in general.\nThere\u0026rsquo;s a ham radio FAQ on the blog already that I need to update. 😉\nEquipment #Almost every conversation with a radio operator turns to \u0026ldquo;what equipment do you use?\u0026rdquo;:\nMobile Kenwood TM-D710GA Comet SBB-1 and SBB-5 Home Icom IC-746 (2m + HF) Icom IC-7300 (HF) Comet GP-3 2m/70cm RigExpert AA-55 ZOOM Portable Buddipole 6m dipole 20m EFHW Handheld Kenwood KH-20A Yaesu VX-8 Find me on the air #I enjoy SSB and FT8 on 20m on the weekends, but during the week, you can find me on 146.520 in the San Antonio area. I hop on the KE5HBB repeater in Live Oak, TX as well.\n","date":"20 April 2022","permalink":"/w5wut/","section":"Major Hayden","summary":"Ham radio is the best (and most frustrating) hobby on the planet! 📻","title":"W5WUT: My amateur radio station"},{"content":"Kubernetes has always felt like an enigma to me. On one hand, I love containers and I use them daily for personal and work projects. On the other hand, kubernetes feels like a heavy, burdensome set of tools that can be difficult to maintain over time. Keeping things organized in kubernetes deployments always felt challenging and unwieldy.\nWhat about this gitops thing? #A friend suggested looking into the gitops realm as a way to tame container deployments.
Quick Google searches revealed that gitops is a mindset shift (like DevOps) and not a product that a vendor can sell you.\nAt its core, gitops involves tracking the state of a deployment through version control. Nothing updates the deployment unless it comes through version control first, and the deployment should deploy itself based on the state specified in version control.\nI found a few things intriguing:\nThe gitops mindset forces you to get organized before you deploy, not after. Gitops favors smaller change sets with better notes on each change. CI can tell you how a change will work (or not work). You can look at other people\u0026rsquo;s gitops repositories for how they accomplished certain automation tasks. (Sometimes these include best practices and sometimes they most definitely do not.) 🤭 This sounds great! My kubernetes manifests and configuration lives in one place in an organized way. But wait \u0026ndash; how do I handle secrets? 😱\nAbout secrets #Kubernetes offers a resource type called secrets. Although secrets and ConfigMaps both do similar jobs of providing configuration data for various kubernetes resources, secrets exist to hold sensitive information such as passwords, API keys, or TLS certificate data. Bear in mind that neither are encrypted within the cluster itself.\nIn the past, I loaded kubernetes secrets by hand with kubectl apply and kept them out of any shared storage, including git repositories. However, in my quest to follow the gitops way, I wanted a better option with much less manual work. My goal is to build a kubernetes deployment that could be redeployed from the git repository at a moment\u0026rsquo;s notice with the least amount of work required.\nSecrets in git #Everyone knows that one should never store secrets in git. GitHub even has a special bot that roams around repositories to find accidentally committed keys and tokens. The bot notifies you about these problems within moments of your git push and it even takes steps to disable certain API or SSH keys if they\u0026rsquo;re attached to your repository somewhere.\nWhat about a private GitHub repository? Sure, that\u0026rsquo;s one way to keep secrets away from prying eyes, but if you ever want to open up the repository later, you have some secrets in your history that must be cleaned. You also need deploy keys so that your cluster can access the code in your private repository. It\u0026rsquo;s a hassle.\nWhat about encrypting the secrets before uploading? On the plus side, you can use a public repository and share your code with someone else. No secrets appear in your git history, either. However, your kubernetes cluster must have a way to decrypt these secrets on the fly so it can reconcile any changes you make in the git repository.\nDecrypting secrets with flux #After lots of reading and poking through git repositories, I settled on flux as my gitops tool for kubernetes. It has an easy bootstrap process and it takes care of configuring git repositories for you. It supports various decryption tools, including the very popular SOPS from Mozilla.\nSOPS takes a kubernetes secret and encrypts it while maintaining the original structure of the secrets file itself. This is handy because it encrypts the secret value but leaves the keys as plain text. 
Troubleshooting gets easier when you know an environment variable is present even if you can\u0026rsquo;t see the value.\nFlux provides great documentation for using SOPS to manage secrets.\nBut wait, SOPS supports PGP, age, Google Cloud\u0026rsquo;s KMS, Azure\u0026rsquo;s Key Vault, Hashicorp Vault, and others. How do we decide?\nSecrets backend bonanza #I want to keep my kubernetes deployment as lean and simple as possible, so that eliminated the SOPS backends that require additional services, such as the Google Cloud, Azure, or Hashicorp Vault options.\nThat leaves me with PGP and age. I\u0026rsquo;ve used PGP a million times and it seemed like the obvious choice. But then I thought: what the heck is age?\nA friend told me that age (pronounced AHH-gey) saved him plenty of headaches because it\u0026rsquo;s so much simpler than dealing with gnupg keyrings and PGP keys. It has smaller keys that alleviate copy/paste issues and it\u0026rsquo;s designed for encrypting files. Sensible defaults also eliminate the need for complex configuration.\nLet\u0026rsquo;s combine SOPS with an age backend for storing our secrets in GitHub with flux decrypting those secrets on the fly!\nGenerating a key #Start by installing SOPS and age using their documentation:\nInstall SOPS Install age Enjoy the hilariously brief age-keygen help text:\n$ age-keygen --help Usage: age-keygen [-o OUTPUT] age-keygen -y [-o OUTPUT] [INPUT] Options: -o, --output OUTPUT Write the result to the file at path OUTPUT. -y Convert an identity file to a recipients file. Let\u0026rsquo;s make a key!\n$ age-keygen -o sops-key.txt Public key: age1wnvnq64tpze4zjdmq2n44eh7jzkxf5ra7mxjvjld6cjwtaddffqqc54w23 $ cat sops-key.txt # created: 2022-04-19T14:41:19-05:00 # public key: age1wnvnq64tpze4zjdmq2n44eh7jzkxf5ra7mxjvjld6cjwtaddffqqc54w23 AGE-SECRET-KEY-13T0N7N0W9NZKDXEFYYPWU7GN65W3UPV6LRERXUZ3ZGED8SUAAQ4SK6SMDL As you might expect with any other encryption scheme, the public key is the one we use to encrypt (and it\u0026rsquo;s okay to share), while the secret key decrypts data (and must be kept private).\nNext, make encryption easier by creating a small configuration file for SOPS. This allows you to encrypt quickly without telling SOPS which key you want to use. 
Create a .sops.yaml file like this one in the root directory of your flux repository:\ncreation_rules: - encrypted_regex: \u0026#39;^(data|stringData)$\u0026#39; age: age1wnvnq64tpze4zjdmq2n44eh7jzkxf5ra7mxjvjld6cjwtaddffqqc54w23 Add your public key to the age key above in the YAML file.\nLet\u0026rsquo;s test it to ensure SOPS and age are working together:\n$ kubectl create secret generic sopstest --from-literal=foo=bar -o yaml \\ --dry-run=client | tee sops-test-secret.yaml apiVersion: v1 data: foo: YmFy kind: Secret metadata: creationTimestamp: null name: sopstest $ sops -e sops-test-secret.yaml | tee sops-test-secret-encrypted.yaml apiVersion: v1 data: foo: ENC[AES256_GCM,data:UZY1VQ==,iv:54ce6xcRc28sjBQU4OjvbBUkvFhs4UKxaM8lOQtsbI4=,tag:Ms906PUkzSgNVpV2A2oG9Q==,type:str] kind: Secret metadata: creationTimestamp: null name: sopstest sops: kms: [] gcp_kms: [] azure_kv: [] hc_vault: [] age: - recipient: age1w8dts3ptgqsqac60z8v2asney6akyktad43k5reguj5suj6y83rstgyh8v enc: | -----BEGIN AGE ENCRYPTED FILE----- YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBCM2prVitpS09wY3Q4NFpZ eVlEc0xnOHRxT0poSk0wSWwrMDM2QVRJbjJRCjBMTjhiU1BwYWYwbVo5bWZlTjVF c3Z6QXdNekM4Y0wrcGVNZ052VUR3MDgKLS0tIFo5dGNyM2Nxb2NVNm5odzkwNVJs T1BXK0JhN3lKK0VaZTZTWUhyTHF0aWMKZtB5/fOeyjTy4FCkmlfn15OPabe0VKeZ rJMdx3MyF+RDQZHjs9nk9drb2bnAZ2ew1uwx31DkayhGDGF3rpk+oA== -----END AGE ENCRYPTED FILE----- lastmodified: \u0026#34;2022-04-19T19:50:20Z\u0026#34; mac: ENC[AES256_GCM,data:Hhu+4TxpI5Vpi4ZSXI79Lw+wEaZ6HxwfCTyRg6kExCBLHLJbULEfug11VTMrbMz6hpLnaRqBkq/FqLWqcxphzwTJ37p7OMeEtm7c7fN//t1sGjF96TP3MyqRypDbIFQCOPXEpnegASpis5HHLCLkvELXwyd/ucHlQs7gTUTzT4g=,iv:ssAD21AJ+wZr+XqrdZlRKmJeHbF5Sop5SGC8kAlQF+E=,tag:xZQvQltcb3wSnS5nQOjBFg==,type:str] pgp: [] encrypted_regex: ^(data|stringData)$ version: 3.7.2 So what did we just do?\nWe created a generic secret containing foo: bar and dumped it into a file without sending it to kubernetes. You might notice that bar became YmFy there. This is because kubernetes uses base64 to encode (not encrypt) secret values to avoid YAML parsing issues. Finally, we told SOPS to encrypt our secret to stdout, which we placed into a new file. SOPS knew which key to use because of our .sops.yaml configuration file. 💣 Use caution with raw, unencrypted secret files in your local repository. Ensure they cannot be committed to a repository accidentally via some sort of mechanism, such as listing them in your .gitignore or removing them as soon as you\u0026rsquo;ve finished encrypting them.\nIf we need to check or update our secret, we can always decrypt it using sops -d.\nDecryption in flux #In one of the earlier sections, I talked about that the system that reconciles the deployment with the repository (flux in this case), must be able to decrypt secrets all by itself. But wait, how do we give flux the key?\nI have not found a good automated way to get this done (yet), so this step is manual for now. Luckily, this is a once per cluster task.\nOur original key generation step created a sops-key.txt file and we need to create a secret from that file that only flux can see:\nkubectl -n flux-system create secret generic sops-age \\ --from-file=age.agekey=sops-key.txt This command creates a generic secret called sops-age with our key text stored in the age.ageKey YAML key. The secret exists only inside the flux-system namespace so that only the pods in that namespace have permission to read it.\nFinally, we must tell flux that it needs to decrypt secrets and we must provide the location of the decryption key. 
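Before wiring that up, it helps to confirm the key secret actually landed where flux can read it. A quick sanity check, using the secret name and namespace from the command above:
$ kubectl -n flux-system get secret sops-age
If that returns nothing, flux has no key to decrypt with and reconciliation will fail as soon as it hits an encrypted secret.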
Flux is built heavily on kustomize manifests and that\u0026rsquo;s where our key configuration belongs.\nHere\u0026rsquo;s an example from my kustomization file for deploying traefik:\n--- apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 kind: Kustomization metadata: name: traefik namespace: flux-system spec: interval: 10m0s path: ./apps/traefik prune: true dependsOn: - name: cert-manager-config sourceRef: kind: GitRepository name: flux-system # Decryption configuration starts here decryption: provider: sops secretRef: name: sops-age The last four lines tell flux about our sops-age secret and that we\u0026rsquo;re using the SOPS backend for decryption. Commit this change and push it to your git repository.\nSo what happens when you commit and push an encrypted secrets file like the one we made above for foo: bar?\nFlux sees the change in the git repository. When it reaches the encrypted secret, it digs up the decryption configuration. From there, it retrieves the sops-age secret, reads the key, and uses SOPS with age to decrypt the secret. Flux applies the secret resource in kubernetes. At this point, if you retrieve the secret with kubectl -n my_namespace get secret/mysecret -o yaml, you get the unencrypted secret. Flux decrypts the secret from your git repository and adds it to kubernetes, but it remains unencrypted in the kubernetes cluster. This allows pods in the namespace to read data from the secret without any further decryption.\nEpilogue #You might be asking: \u0026ldquo;How does this whole flux thing work? How do I set up flux and fully embrace the gitops lifestyle?\u0026rdquo;\nDon\u0026rsquo;t worry. You didn\u0026rsquo;t miss anything. That\u0026rsquo;s a post I have yet to write. 😉\n","date":"19 April 2022","permalink":"/p/encrypted-gitops-secrets-with-flux-and-age/","section":"Posts","summary":"Store encrypted kubernetes secrets safely in your gitops repository with easy-to-use age encryption. 🔐","title":"Encrypted gitops secrets with flux and age"},{"content":"Kubernetes offers a plethora of storage options for mounting volumes in pods, and NFS is included. I have a Synology NAS at home and some of my pods in my home kubernetes deployment need access to files via NFS.\nAlthough the Kubernetes documentation has a bunch of examples about setting up NFS mounts, I ended up being more confused than when I started. This post covers a simple example that you can copy, adapt, and paste as needed.\nVerify that NFS is working #NFS can be tricky to get right and it\u0026rsquo;s important to verify that it\u0026rsquo;s working outside of kubernetes before you try mounting it in a pod. Trust me \u0026ndash; NFS looks quite simple at first glance but you can get confused quickly. Test out the easiest stuff first.\nAs an example: Synology made some recent changes to their NFS configuration where you must specify shares with a netmask (192.168.10.0/255.255.255.0) or in CIDR notation (192.168.10.0/24). That took me a while to figure out going back and forth from server to client and back again.\nIf you\u0026rsquo;re making your shares from a regular Linux server, refer to the Arch Linux NFS documentation. It\u0026rsquo;s one of the best write-ups on NFS around!\nFirst, I verified that the mount is showing up via showmount:\n$ showmount -e 192.168.10.60 Export list for 192.168.10.60: /volume1/media 192.168.10.50/32 My NFS server is on 192.168.10.60 and my NFS client (running kubernetes) is on 192.168.10.50.\n🤔 Got an error or can\u0026rsquo;t see any exports? 
Double check the IP addresses allowed to access the share on the server side and verify your client machine\u0026rsquo;s IP address.\n☝🏻 Remember to re-export your shares on the server with exportfs -arv if you made changes! The NFS server won\u0026rsquo;t pick them up automatically. Display your currently running exports with exportfs -v.\nLet\u0026rsquo;s try mounting the share next:\n$ sudo mount -t nfs 192.168.10.60:/volume1/media /tmp/test $ df -hT /tmp/test Filesystem Type Size Used Avail Use% Mounted on 192.168.10.60:/volume1/media nfs4 16T 7.5T 8.3T 48% /tmp/test Awesome! 🎉\nUnmount the test:\n$ sudo umount /tmp/test Mount NFS in a pod #First off, we need a deployment where we can mount up an NFS share. I decided to take a Fedora 35 container and create a really basic deployment:\napiVersion: apps/v1 kind: Deployment metadata: labels: app: fedoratest name: fedoratest namespace: fedoratest spec: template: spec: containers: - image: registry.fedoraproject.org/fedora:35 name: fedora command: [\u0026#34;/bin/bash\u0026#34;, \u0026#34;-c\u0026#34;, \u0026#34;--\u0026#34;] args: [\u0026#34;while true; do sleep 30; done;\u0026#34;] This is a really silly deployment that causes Fedora to sleep forever until someone stops the pod. I mainly want something that I can shell into and ensure NFS is working.\nNow we need to add two pieces to the deployment:\nAn NFS volume: volumes: - name: nfs-vol nfs: server: 192.168.10.60 path: /volume1/media A path to mount the volume volumeMounts: - name: nfs-vol mountPath: /opt/nfs When we add in those NFS pieces, we get the following deployment:\napiVersion: apps/v1 kind: Deployment metadata: labels: app: fedoratest name: fedoratest namespace: fedoratest spec: template: spec: containers: - image: registry.fedoraproject.org/fedora:35 name: fedora command: [\u0026#34;/bin/bash\u0026#34;, \u0026#34;-c\u0026#34;, \u0026#34;--\u0026#34;] args: [\u0026#34;while true; do sleep 30; done;\u0026#34;] volumeMounts: - name: nfs-vol mountPath: /opt/nfs volumes: - name: nfs-vol nfs: server: 192.168.10.60 path: /volume1/media Save that as fedoratest.yaml and apply it:\n$ kubectl apply -f fedoratest.yaml Let\u0026rsquo;s see if the volume worked:\n$ kubectl describe -n fedoratest deployment/fedoratest ✂ Volumes: nfs-vol: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.10.60 Path: /volume1/media ReadOnly: false ✂ Now let\u0026rsquo;s have a look at it inside the container itself:\n$ kubectl -n fedoratest exec -it deployment/fedoratest -- sh sh-5.1$ cd /opt/nfs sh-5.1$ ls dir1 dir2 dir3 Let\u0026rsquo;s ensure we can write files:\nsh-5.1$ touch doot sh-5.1$ ls -al doot -rwxrwxrwx. 1 root root 0 Apr 8 21:03 doot sh-5.1$ rm doot I can write files, but writing them as root causes problems for other applications. In this case, my NFS server uses UID 1035 for my user and GID 100 for my group. 
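If you are not sure which numeric IDs your server expects, checking the ownership of files already on the share is a quick way to find out. A short detour using the same export from earlier (ls -ln prints numeric UIDs and GIDs instead of names, which is exactly what we need next):
$ sudo mount -t nfs 192.168.10.60:/volume1/media /tmp/test
$ ls -ln /tmp/test
$ sudo umount /tmp/test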
Lucky for us, we can set this up within our deployment configuration using securityContext:\napiVersion: apps/v1 kind: Deployment metadata: labels: app: fedoratest name: fedoratest namespace: fedoratest spec: template: spec: securityContext: runAsUser: 1035 # Use my UID on the NFS server runAsGroup: 100 # Use my GID on the NFS server containers: - image: registry.fedoraproject.org/fedora:35 name: fedora command: [\u0026#34;/bin/bash\u0026#34;, \u0026#34;-c\u0026#34;, \u0026#34;--\u0026#34;] args: [\u0026#34;while true; do sleep 30; done;\u0026#34;] volumeMounts: - name: nfs-vol mountPath: /opt/nfs volumes: - name: nfs-vol nfs: server: 192.168.10.60 path: /volume1/media Apply this change:\n$ kubectl apply -f fedoratest.yaml Now try to write a file again:\n$ kubectl -n fedoratest exec -it deployment/fedoratest -- sh sh-5.1$ touch doot sh-5.1$ ls -al doot -rwxrwxrwx. 1 1035 100 0 Apr 8 21:08 doot sh-5.1$ rm doot Perfect! Now files are owned by the correct UID and GID.\nExtra credit #If you plan to have plenty of pods mounting storage from the same NFS server, you might want to consider building out a persistent volume first and then making claims from it. The kubernetes examples repository has a good example of a persistent NFS volume and a persistent volume claim made from that volume.\n","date":"8 April 2022","permalink":"/p/mount-nfs-shares-in-kubernetes/","section":"Posts","summary":"Access files over NFS within kubernetes pods with a quick volume mount. 🗄","title":"Mount NFS shares in kubernetes"},{"content":"","date":null,"permalink":"/tags/nfs/","section":"Tags","summary":"","title":"Nfs"},{"content":"","date":null,"permalink":"/tags/hardware/","section":"Tags","summary":"","title":"Hardware"},{"content":"","date":null,"permalink":"/tags/supermicro/","section":"Tags","summary":"","title":"Supermicro"},{"content":"The Linux Vendor Firmware Service (LVFS) and fwupd turned the troublesome and time consuming activities of updating all kinds of firmware for laptops, desktops, and servers into something much easier. Check your list of updated firmware, update it, and submit feedback for the vendors when something doesn\u0026rsquo;t work. You can even get notifications right inside GUI applications, such as GNOME Software, that notify you about updates and allow you to install them with one click.\nHowever, not all vendors participate in LVFS and some vendors only participate in LVFS for some devices. I\u0026rsquo;ve had a small Supermicro server at home that\u0026rsquo;s been offline for quite some time. I decided to get it back online for some new projects and I discovered that the BIOS and BMC firmware were both extremely old.\nThis device isn\u0026rsquo;t included in LVFS, so we\u0026rsquo;re stuck with older methods.\nUpdate the BMC #The baseband management controller, or BMC, is an always-running component that provides out-of-band access to my server. I can control the server\u0026rsquo;s power, view a virtrual console, or control certain BIOS configurations from a web browser or IPMI client.\nLuckily, Supermicro makes it easy to update the BMC directly from the web interface. Our first step is to identify which board is inside the machine:\n$ sudo dmidecode -t 2 # dmidecode 3.3 Getting SMBIOS data from sysfs. SMBIOS 2.8 present. Handle 0x0002, DMI type 2, 15 bytes Base Board Information Manufacturer: Supermicro Product Name: X10SDV-TLN4F Version: 1.02 Serial Number: xxxxxxxxxxxx Asset Tag: To be filled by O.E.M. 
Features: Board is a hosting board Board is replaceable Location In Chassis: To be filled by O.E.M. Chassis Handle: 0x0003 Type: Motherboard Contained Object Handles: 0 Asking dmidecode for type 2 DMI data should give you the motherboard model in the Product Name field. In this case, mine is X10SDV-TLN4F. I ran over to Supermicro\u0026rsquo;s BMC List, typed in the motherboard model number, and downloaded the zip file. After unpacking the zip, I had a bunch of items:\n$ unzip -q REDFISH_X10_388_20200221_unsigned.zip $ ls *.bin REDFISH_X10_388_20200221_unsigned.bin This REDFISH_X10_388_20200221_unsigned.bin contains the firmware update for the BMC. Now access your IPMI interface via a web browser and authenticate. Follow these steps:\nClick the Maintenance menu and select Firmware Update Click Enter Update Mode Select your .bin file and update the BMC The upload process took around a minute and the BMC update took four or five minutes. The BMC will respond to pings early after the update but it will take a while for the web interface to respond. Be patient!\nUpdate the BIOS #If you have the appropriate license for your BMC, you can update the BIOS right from the BMC interface. I don\u0026rsquo;t have that license. Let\u0026rsquo;s find another way!\nStart by downloading Supermicro\u0026rsquo;s SUM utility. The software is available after the registration step (which is free). In my case, I needed the second download (not the UEFI one) since my server is a little older.\nNow we need to download the BIOS firmware itself. One of the easiest methods I\u0026rsquo;ve seen for this is to throw supermicro X10SDV-TLN4F (replace with your motherboard model) into Google and click the result. Then look for a Update your BIOS link under Links \u0026amp; Resources on the right side. That takes you directly to a zip file to download.\nCreate a directory (perhaps bios) and move the SUM zip as well as your firmware zip file in that directory. Some of these zip files have no directory prefix included and they will clobber your working directory. 🤦🏻‍♂️\nUnpack both zip files and find the sum executable:\n$ find . -name sum ./sum_2.8.0_Linux_x86_64/sum Great! Now we need to find the BIOS firmware file. In most cases, they start with a few characters from your motherboard and end in a numerical extension:\n$ ls -1 AFUDOSU.smc BIOS_X10SDV-TLNF_20210604_2.3_STDsp.zip CHOICE.SMC FDT.smc FLASH.BAT \u0026#39;Readme for AMI BIOS.txt\u0026#39; sum_2.8.0_Linux_x86_64 sum_2.8.0_Linux_x86_64_20220126.tar.gz X10SDVF1.604 The BIOS firmware is inside X10SDVF1.604 in my case. But first:\n💣 UPGRADING BIOS FIRMWARE IS SERIOUS BUSINESS. 😱 If an upgrade goes wrong, it may be challenging to get the system running properly again. I\u0026rsquo;ve recovered from some pretty awful BIOS update failures in the past on most x86 systems, but it was rarely an enjoyable process. Be sure you have stable power for the device, you are running the update inside tmux (especially if connected via ssh), and you have time to complete the operation.\nYou have been warned! 👀\n🚨 Start with a tmux or screen session, always. Seriously. Don\u0026rsquo;t skip this step.\nInside the tmux or screen session (which I\u0026rsquo;m sure you started because you were paying attention), let\u0026rsquo;s update the firmware:\n$ sudo ./sum -c UpdateBios --file /home/major/bios/X10SDVF1.604 Supermicro Update Manager (for UEFI BIOS) 2.8.0 (2022/01/26) (x86_64) Copyright(C) 2013-2022 Super Micro Computer, Inc. All rights reserved. 
WARNING: BIOS setting will be reset without option --preserve_setting Reading BIOS flash ..................... (100%) Writing BIOS flash ..................... (100%) Verifying BIOS flash ................... (100%) Checking ME Firmware ... Putting ME data to BIOS ................ (100%) Writing ME region in BIOS flash ... - FDT won\u0026#39;t be updated when ME is not in Manufacturing mode!! BIOS upgrade continues... - Updated Recovery Loader to OPRx - Updated FPT, MFSB, FTPR and MFS - ME Entire Image done WARNING:Must power cycle or restart the system for the changes to take effect! Awesome! My preference here is to power down, wait a few seconds, and power it back up. I\u0026rsquo;ve had issues in the past with soft restarts after BIOS upgrades on non-laptop systems and I\u0026rsquo;m ultra cautious.\n$ sudo poweroff Once it\u0026rsquo;s fully powered down, power it back up using the BMC/IPMI or via the button on the device. If all goes well, you should see a new firmware version after boot:\n$ sudo dmidecode -t 0 # dmidecode 3.3 Getting SMBIOS data from sysfs. SMBIOS 2.8 present. Handle 0x0000, DMI type 0, 24 bytes BIOS Information Vendor: American Megatrends Inc. Version: 2.3 Release Date: 06/04/2021 Address: 0xF0000 Runtime Size: 64 kB ROM Size: 16 MB Perfect! 🎉 I downloaded version 2.3 from Supermicro\u0026rsquo;s site and it\u0026rsquo;s now running on my server!\nExtra credit #You may want to do some additional (optional) steps depending on your configuration.\nI rebooted into the BIOS and chose to load the optimized defaults in case something important was changed in the latest BIOS firmware. This may revert a few of your settings if you had some customizations, so be sure to roll through the BIOS menu and look for any of those issues.\n","date":"7 April 2022","permalink":"/p/update-supermicro-bios-firmware-from-linux/","section":"Posts","summary":"Upgrade your Supermicro BIOS firmware from Linux using their SUM utility. 🔧","title":"Update Supermicro BIOS firmware from Linux"},{"content":"Over the past two years, I picked up stock trading and general finance knowledge as a hobby. There are plenty of things I enjoy here: complex math, understanding trends, and making educated guesses on what happens next. Getting the right tools makes this job a little bit easier.\nI use TD Ameritrade for the majority of my trading and learning. They offer a desktop application with a great name: ThinkOrSwim. Using it feels a bit like flying the Space Shuttle at first, but it delivers tons of information and analysis in a small package.\nThis post isn\u0026rsquo;t about stock trading \u0026ndash; it\u0026rsquo;s about how to wrestle ThinkOrSwim onto a Fedora Linux machine and get everything working as it should. 🐧\nGetting (the right) Java #ThinkOrSwim\u0026rsquo;s download page mentions installing Zulu OpenJDK 11 first. This is a special certified release of the JDK that is well-tested with ThinkOrSwim.\nVisit the RPM-based Linux page for Azul\u0026rsquo;s OpenJDK and follow the steps they provide:\n$ sudo dnf install -y https://cdn.azul.com/zulu/bin/zulu-repo-1.0.0-1.noarch.rpm $ sudo dnf install zulu11-jdk If you already have a JDK installed, you\u0026rsquo;ll need to switch to the Azul JDK as the primary one. I have some other Java applications on my desktop and they seem to work just fine with this JDK. We will use the alternatives script to manage the symlinks for our default JDK:\n$ sudo alternatives --config java There are 2 programs which provide \u0026#39;java\u0026#39;. 
Selection Command ----------------------------------------------- *+ 1 java-17-openjdk.x86_64 (/usr/lib/jvm/java-17-openjdk-17.0.2.0.8-7.fc36.x86_64/bin/java) 2 /usr/lib/jvm/zulu11/bin/java Enter to keep the current selection[+], or type selection number: Press 2 here and then press ENTER. Double check that the Azul JDK is the primary:\n$ java --version openjdk 11.0.14.1 2022-02-08 LTS OpenJDK Runtime Environment Zulu11.54+25-CA (build 11.0.14.1+1-LTS) OpenJDK 64-Bit Server VM Zulu11.54+25-CA (build 11.0.14.1+1-LTS, mixed mode) Adding ALSA support #Yes, we are going back in time and getting ThinkOrSwim talking with ALSA. We will need libasound_module_pcm_pulse.so on the system:\n$ sudo dnf whatprovides \u0026#39;*/libasound_module_pcm_pulse.so\u0026#39; Last metadata expiration check: 3:09:00 ago on Thu 31 Mar 2022 11:14:57 AM CDT. alsa-plugins-pulseaudio-1.2.6-2.fc36.i686 : Alsa to PulseAudio backend Repo : fedora Matched from: Filename : /usr/lib/alsa-lib/libasound_module_pcm_pulse.so alsa-plugins-pulseaudio-1.2.6-2.fc36.x86_64 : Alsa to PulseAudio backend Repo : @System Matched from: Filename : /usr/lib64/alsa-lib/libasound_module_pcm_pulse.so alsa-plugins-pulseaudio-1.2.6-2.fc36.x86_64 : Alsa to PulseAudio backend Repo : fedora Matched from: Filename : /usr/lib64/alsa-lib/libasound_module_pcm_pulse.so $ sudo dnf install alsa-plugins-pulseaudio Now that we have ALSA support, let\u0026rsquo;s move on to configuring the JDK to use it properly. Keith Packard ran into sound problems in Ubuntu and fixed it with a small change to his sound.properties file:\n#javax.sound.sampled.Clip=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider #javax.sound.sampled.Port=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider #javax.sound.sampled.SourceDataLine=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider #javax.sound.sampled.TargetDataLine=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider Let\u0026rsquo;s double check that this will work for the Azul JDK. We need to see if these com.sun.media.sound import paths exists for us:\n$ find /usr/lib/j* |grep -i sound /usr/lib/jvm/java-17-openjdk-17.0.2.0.8-7.fc36.x86_64/lib/libjsound.so /usr/lib/jvm/zulu11-ca/conf/sound.properties /usr/lib/jvm/zulu11-ca/lib/libjsound.so $ strings /usr/lib/jvm/zulu11-ca/lib/libjsound.so | grep DirectAudioDeviceProvider Java_com_sun_media_sound_DirectAudioDeviceProvider_nNewDirectAudioDeviceInfo Java_com_sun_media_sound_DirectAudioDeviceProvider_nGetNumDevices ?com/sun/media/sound/DirectAudioDeviceProvider$DirectAudioDeviceInfo Java_com_sun_media_sound_DirectAudioDeviceProvider_nNewDirectAudioDeviceInfo Java_com_sun_media_sound_DirectAudioDeviceProvider_nGetNumDevices Awesome! Those paths match up exactly! 
🎉\nLet\u0026rsquo;s find our sound.properties file and make the same modifications:\n$ find /usr/lib/jvm -name sound.properties /usr/lib/jvm/zulu11-ca/conf/sound.properties Open /usr/lib/jvm/zulu11-ca/conf/sound.properties in your favorite editor and add on Keith\u0026rsquo;s four lines at the end:\n# /usr/lib/jvm/zulu11-ca/conf/sound.properties javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider Installing ThinkOrSwim #Head over to ThinkOrSwim\u0026rsquo;s download page to download the installer. It\u0026rsquo;s a big installer bundled up inside a shell script (ugly, I know). Run the script with /bin/bash thinkorswim_installer.sh and follow the prompts. I choose to install it only for my user so that it installs in my home directory.\nI use i3wm and the installer doesn\u0026rsquo;t put a desktop file in the right place for me. Here\u0026rsquo;s what I drop into ~/.local/share/applications/thinkorswim.desktop:\n# ~/.local/share/applications/thinkorswim.desktop [Desktop Entry] Name=ThinkOrSwim Comment=ThinkOrSwim Desktop Exec=/home/major/thinkorswim/thinkorswim Type=Application Categories=Finance Of course, if you aren\u0026rsquo;t running as the user major or if you installed ThinkOrSwim in a different location, be sure to change the Exec= line above. 😉\nStart up ThinkOrSwim using your desktop launcher and enjoy trading on Linux! 🎉\n","date":"31 March 2022","permalink":"/p/install-thinkorswim-on-fedora-linux/","section":"Posts","summary":"Learn how to install TD Ameritrade\u0026rsquo;s ThinkOrSwim desktop application on Linux and get everything working. 💸","title":"Install ThinkOrSwim on Fedora Linux"},{"content":"","date":null,"permalink":"/tags/options/","section":"Tags","summary":"","title":"Options"},{"content":"","date":null,"permalink":"/tags/stocks/","section":"Tags","summary":"","title":"Stocks"},{"content":"","date":null,"permalink":"/tags/alacritty/","section":"Tags","summary":"","title":"Alacritty"},{"content":"The alacritty terminal remains my favorite terminal because of its simple configuration, regular expression hints, and incredible performance. It\u0026rsquo;s written in Rust and it uses OpenGL to accelerate the terminal output.\nI also like high DPI displays. My desktop has two 4K monitors (3840x2160) and my X1 Nano (2160x1350) crams plenty of pixels into a small display. With Linux, you get two options:\nUse HiDPI with larger fonts for a clear, crisp display. It really does look pretty. Disable HiDPI and get a lot more screen real estate with smaller fonts. Prepare to squint. 🥸 I prefer more screen real estate (and I wear glasses), so I usually disable HiDPI on most of my machines. After wiping my laptop and starting over recently, I realized my alacritty terminals had massive fonts again. If only I had written down how I fixed it. 🤔\nLucky for you, I\u0026rsquo;m writing about those options here:\nDisable HiDPI just for alacritty #You might be saying \u0026ldquo;just lower the font size in the alacritty configuration\u0026rdquo; and call it a day. Well, that method leaves you with smaller fonts, sure, but then there are gaps in spacing between character and lines. It just looks clunky.\nThere\u0026rsquo;s an environment variable we can use: WINIT_X11_SCALE_FACTOR. 
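You can test it on a single terminal before making anything permanent. Launch alacritty from an existing terminal with the variable set for just that one run:
$ WINIT_X11_SCALE_FACTOR=1 alacritty
If the new window comes up with the smaller fonts you were hoping for, the permanent fix below is the one you want.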
To disable HiDPI and use pixels at a 1:1 ratio, set the following option in your alacritty configuration file (usually ~/.config/alacritty/alacritty.yml):\nenv: WINIT_X11_SCALE_FACTOR: \u0026#34;1\u0026#34; Close all of your alacritty terminals and open a new one. You should now see smaller fonts and a terminal with HiDPI disabled.\nDisable HiDPI across the board #In many of the full-featured window manager, such as GNOME or KDE, you can disable HiDPI within the system settings. The i3 window manager requires some manual work (as you might expect).\nFrom the old days of X comes .Xresources! These X resources set all kinds of configuration options for all applications running under X. Here\u0026rsquo;s my current ~/.Xresources file:\nXft.dpi: 96 Xft.autohint: 0 Xft.lcdfilter: lcddefault Xft.hintstyle: hintmedium Xft.hinting: 1 Xft.antialias: 1 Xft.rgba: rgb The first line sets the DPI to 96 (which is my preference). Increasing that number will take you closer to a HiDPI setting and potentially make text crisper, but larger. Lowering it will make text smaller and sometimes a bit ugly if you go below 85.\nTo use X resources, you first need xrdb. On Fedora, install it, load your current configuration, and query it:\n$ dnf install /usr/bin/xrdb $ xrdb -merge ~/.Xresources $ xrdb -query -all Xft.antialias:\t1 Xft.autohint:\t0 Xft.dpi:\t96 Xft.hinting:\t1 Xft.hintstyle:\thintmedium Xft.lcdfilter:\tlcddefault Xft.rgba:\trgb You need to quit most applications and start them again before you can see the DPI changes. In some situations, I needed to reboot to get the changes in place for all applications.\n","date":"25 March 2022","permalink":"/p/disable-hidpi-alacritty/","section":"Posts","summary":"The alacritty terminal on Fedora enables HiDPI mode by default. Break out your magnifying glasses as we disable HiDPI. 👓","title":"Disable HiDPI in alacritty"},{"content":"","date":null,"permalink":"/tags/hidpi/","section":"Tags","summary":"","title":"Hidpi"},{"content":"Shortened URLs make it easier to quickly reference complicated URLs and share them with other people. For example, https://url.major.io/reviews is definitely an easier method for sharing my Fedora package review list with other people instead of the full Bugzilla URL:\nhttps://bugzilla.redhat.com/buglist.cgi?bug_status=__open__\u0026amp;component=Package%20Review\u0026amp;email1=mhayden%40redhat.com\u0026amp;emailreporter1=1\u0026amp;emailtype1=substring\u0026amp;list_id=12512813\u0026amp;product=Fedora\u0026amp;query_format=advanced It also avoids those situations where you share a URL only to find that a chat system gobbled up special characters and your URL arrives broken on the other end.\nHowever, most URL shorteners depend on a web server and possibly a database server to serve up shortened URLs. This is a really quick setup that nearly any system administrator has done a hundred times or more, but what about the ongoing maintenance and updates? What about redundancy? How much will it cost?\nThat\u0026rsquo;s simply too much work. I went on the hunt for an alternative.\nShortening only with GitHub #My first stop was nelsontky\u0026rsquo;s gh-pages-url-shortener. It uses GitHub issues to manage URLs, but the author mentions that the solution is a bit hacky. I need something more reliable.\nShortening with Cloudflare Pages\u0026rsquo; redirects #Cloudflare offers redirects via a simple text file on its Pages service. 
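The file is named _redirects and sits in the directory you publish. Each line holds a source path, a destination URL, and an optional status code; these entries are placeholders to show the shape, not my real configuration:
/reviews https://bugzilla.example.com/some/very/long/query/url 302
/blog https://example.com/myblog 301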
(This blog is hosted via Cloudflare pages and it\u0026rsquo;s been extremely reliable and fast.)\nThe only downside is the limits applied to the redirects:\nA project is limited to 100 total redirects. Each redirect declaration has a 1000-character limit. Malformed definitions are ignored. If there are multiple redirects for the same source path, the topmost redirect is applied.\nIf you think you\u0026rsquo;ll need less than 100 redirects and your destination URLs are under 1,000 characters, this might work for you. Head on over to Deploying your site and get going!\nWhat if you want (nearly) limitless redirects?\nNearly limitless redirects with Cloudflare Workers #More Googling led to a blog post titled World\u0026rsquo;s Simplest URL Shortener using Cloudflare Workers. In the post, Patrick Reader lays out a simple javascript handler that takes the URI provided, compares it to a list of JSON keys, and then returns the destination URL.\nHis instructions in the post get you up and running quickly. He also offers up a link to his own GitHub repo so you can fork it and get done quickly. The urls.json file expects the keys to be short URIs and the values to be the long URLs, like this:\n{ \u0026#34;\u0026#34;: \u0026#34;https://example.com/myblog\u0026#34;, \u0026#34;recipes\u0026#34;: \u0026#34;https://example.com/long/url/to/my/favorite/recipe\u0026#34;, \u0026#34;twitter\u0026#34;: \u0026#34;https://twitter.com/myusername\u0026#34; } I decided to build out my repository with wrangler\u0026rsquo;s generate command and then brought over Patrick\u0026rsquo;s script. Your wrangler.toml will need a few adjustments to get going:\nSet type to webpack. account_id: Find your account ID by going to the Cloudflare dashboard and clicking on Workers on the left side. (Look for Account ID on the right side.) Set workers_dev to false if you want to use your own domain. Set route to the URL matcher you want to apply to the Worker. I use url.major.io as my domain for my URL shortener, so my route is url.major.io/*. Your zone_id is also in the Cloudflare dashboard. Click Websites on the left side, click your domain name, and look for Zone ID on the right side. Manual work is not my thing, so I wanted to get GitHub Actions to do the work for me when I changed my list of short URLs. For that, I first needed an API key from Cloudflare. For that, go to your API Tokens page at Cloudflare and do these steps:\nClick Create Token. Click Use template to the right of Edit Cloudflare Workers. Under Zone Resources, choose the domain name where you want to use the URL shortener. Select your account under Account Resources. Click Continue to summary and copy your API key. Run over to your repo in GitHub, click Settings, Secrets, then Actions. Click New repository secret. Use CF_API_TOKEN as the Name and fill in your API key as the Value. Click Add secret. All that\u0026rsquo;s left is to drop in a GitHub Actions workflow to make it all automated. You can copy the workflow file from my repository:\nname: Deploy on: push: branches: - main jobs: deploy: runs-on: ubuntu-latest name: Deploy steps: - uses: actions/checkout@v3 - name: Publish uses: cloudflare/wrangler-action@1.3.0 with: apiToken: ${{ secrets.CF_API_TOKEN }} Each time you change a URL in urls.json, GitHub Actions assembles your application for Cloudflare Workers and ships it to Cloudflare. You can edit your URLs right in the GitHub web editor, save them, and they\u0026rsquo;ll be active in a minute or two!\nEnjoy! 
🎉\n","date":"24 March 2022","permalink":"/p/build-url-shortener-with-cloudflare-workers/","section":"Posts","summary":"Host your own personal URL shortener with GitHub Actions and Cloudflare Workers. No web or database servers required! 🥰","title":"Build a URL shortener with Cloudflare Workers"},{"content":"","date":null,"permalink":"/tags/cloudflare/","section":"Tags","summary":"","title":"Cloudflare"},{"content":"","date":null,"permalink":"/tags/github/","section":"Tags","summary":"","title":"Github"},{"content":"","date":null,"permalink":"/tags/javascript/","section":"Tags","summary":"","title":"Javascript"},{"content":"","date":null,"permalink":"/tags/serverless/","section":"Tags","summary":"","title":"Serverless"},{"content":"","date":null,"permalink":"/tags/brave/","section":"Tags","summary":"","title":"Brave"},{"content":"","date":null,"permalink":"/tags/kerberos/","section":"Tags","summary":"","title":"Kerberos"},{"content":"My primary browser flips back and forth between Brave and Firefox depending on my current tasks, but kerberos logins are integral to my workflow at work and also as a Fedora contributor. Kerberos provides a single sign on (SSO) capability so you can authenticate one time and then perform lots of actions against various targets without authenticating again.\nKerberos is a protocol that runs something like this:\nYou authenticate to an authentication server with username/password/2FA That server forwards the authentication result to a key server That key server gives you a ticket From then on, you present your ticket to complete the authentication steps. There\u0026rsquo;s no need to provide your username, password, or two-factor authentication once you have your ticket (for most implementations). When your ticket expires, you authenticate once more, get a new ticket, and go on about your day.\nThe real time-saver here is that your browser can handle kerberos tickets when you authenticate to various services in your browser. However, you must tell your browser about the sites you trust before you start handing over your ticket. That\u0026rsquo;s where something went wrong with Brave for me last week.\nThe problem #I went through my usual kinit steps to get my kerberos tickets when I started work in the morning, but I was prompted to authenticate to various sites when I accessed them in my browser. Normally there\u0026rsquo;s a short delay with a couple of redirects through an SSO portal, but I was stuck staring at login screens even though I had valid tickets.\nYou can double check your ticket validity with klist -A and sure enough, my tickets were valid for several hours more. Firefox didn\u0026rsquo;t have the issue and I sailed through SSO logins on my usual sites.\nGenerally, Brave looks for managed policies that describe kerberos authentication delegations in the usual spot where Chomium stores them: /etc/chromium/policies/managed. My policies were there. For example, here\u0026rsquo;s the one I use for Fedora:\n$ cat /etc/chromium/policies/managed/fedora_kerberos.json { \u0026#34;AuthServerAllowlist\u0026#34;: \u0026#34;*.fedoraproject.org\u0026#34;, } This configuration tells the browser that it can use kerberos authentication with any system that matches *.fedoraproject.org. My configuration hasn\u0026rsquo;t changed in ages.\nCould it be Brave\u0026rsquo;s fault? 
🤔\nSome digging #I also noticed that Brave didn\u0026rsquo;t have it\u0026rsquo;s usual warning about my organization having managed policies on the system, so Brave wasn\u0026rsquo;t reading the configurations at all. The first thing I needed to do was to see what Brave was looking for during startup.\nAfter closing all of my Brave windows, I used strace to dump what was happening during startup:\n$ strace -f -o brave-strace.txt brave-browser As soon as Brave fully appeared on screen, I closed it and stopped strace with CTRL-c. It was time to see where Brave was looking for the configuration:\n$ grep policies brave-strace.txt 9917 stat(\u0026#34;/etc/brave/policies/managed\u0026#34;, {st_mode=S_IFDIR|0755, st_size=144, ...}) = 0 9917 openat(AT_FDCWD, \u0026#34;/etc/brave/policies/managed\u0026#34;, O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 14 What is this /etc/brave? It always looked for configuration in /etc/chromium! 🤦🏻‍♂️\nFixing it #I brought over the config from /etc/chromium into /etc/brave to see if that would help:\n$ mkdir -p /etc/brave/policies/managed $ sudo cp /etc/chromium/policies/managed/* /etc/brave/policies/managed/ After starting Brave one more time, I noticed the Managed by your organization warning in the options menu again. I was then able to wander around to various sites at work and within Fedora\u0026rsquo;s infrastructure and my kerberos SSO worked once again! 🎉\n","date":"18 December 2021","permalink":"/p/kerberos-logins-brave-linux/","section":"Posts","summary":"Brave recently changed how their browser reads managed policy configuration, but luckily the fix is an easy one. 🔧","title":"Kerberos logins with Brave on Linux"},{"content":"Photo credit: Marek Piwnicki\nLet\u0026rsquo;s start this post with a quote:\n\u0026ldquo;We are dying from overthinking. We are slowly killing ourselves by thinking about everything.\u0026rdquo; \u0026ndash; Anthony Hopkins\nThat\u0026rsquo;s an accurate summary of my life on Twitter over the past few years. I lost the enjoyment from connecting with other people and suddenly found myself doom scrolling \u0026ndash; looking over my Twitter feed with glossy eyes, seeing only negativity and feeling less connected to other people.\nThen I thought: \u0026ldquo;Why am I using social media if it makes me feel less social and less connected?\u0026rdquo;\nIs it Twitter? Is there something wrong with the platform? Is it the same problem that people see with Facebook?\nIs it me?\nThe problem #Social media companies thrive on eyeballs. More eyeballs looking at the platform for longer periods means higher ad revenue. Sure, cute and fuzzy cats or funny family videos certainly capture attention, but there\u0026rsquo;s nothing like the attention you gain through spreading fear, anger, and contempt.\nMany argue that platforms like Twitter, Facebook, Instagram, and Tiktok have algorithms that show you content that cause you to spend more time on the site. Much of the content out there is negative and that certainly drives traffic.\nHowever, the humans using the platform an interacting with each other are also responsible. Lies and misleading information spread like wildfire, especially if they disparage a group or a belief that is divisive. By the time the truth comes out, the lie has done irreparable damage.\nIn Tyler Merritt\u0026rsquo;s book, I Take My Coffee Black, he talks about divisiveness that appeared in his church and community. He writes that distance breeds contempt and distrust, but proximity brings understanding. 
You see this constantly on social media where people wage war over politics but then they do their best to keep quiet about it when they meet in person.\nMy problem with Twitter #I started with Twitter in 2008 and my account is just over 10,000 followers. Sure, that\u0026rsquo;s a tiny fraction compared to many other accounts, but to me, that\u0026rsquo;s a tremendous amount. I\u0026rsquo;m truly humbled at the amount of people who are interested in what I say and share. Thank you.\nOn the other hand, when it came time to follow people, I felt like it was important to follow anyone who followed me. If you\u0026rsquo;re going to take time to listen to what I have to say, then I should do the same. Right?\nAt first, this worked out well. Over time, I followed some people who shared things that I was upset about. That lead me to follow other people who reinforced my beliefs, biases, and choices. I\u0026rsquo;d run into opinions that differed from mine and I wouldn\u0026rsquo;t follow those people or extend the reach of what they shared. My set of beliefs were continually reinforced.\nThrough all this time, I had really close friends who would ask: \u0026ldquo;Hey, did you see that blog post I wrote? I put it on Twitter.\u0026rdquo; My response would usually be \u0026ldquo;Oh, well, I follow a lot of people and it\u0026rsquo;s hard to keep up with all of that.\u0026rdquo;\nThat was a cop-out. The problem was staring me straight in the face.\nI stopped using social media to connect with people. I was in a downward spiral of negativity that I had brought upon myself.\nNew goals #Last weekend, I put up a final tweet saying I was done with Twitter for a while and provided some other contact information for anyone who wanted to reach out. I deleted Twitter from my devices, signed out on my computers, and enjoyed my weekend.\nThat was the key. I enjoyed my weekend.\nThen I thought, \u0026ldquo;How can I enjoy next week and next weekend, too?\u0026rdquo;\nSo I set up some goals for myself with social media:\nReduce my list of people I follow to people who I truly care about. Someone should earn that spot not from simply following me, but by sharing things that add value to the world. If anyone I follow violates the first rule, consider removing them or temporarily moving them to a list. Find a way to filter out retweets so I can see what people are saying, not what they\u0026rsquo;re passing along. Keep everything else (news, financial information, etc) compartmentalized into lists that I can review when I need that information. The reset #After my weekend off Twitter, I realized I had over 57,000 tweets (many of which violated my own first rule above) and I was following thousands of people on Twitter. Many of those people I followed were people I cared nothing about (celebrities, angry people, divisive political figures).\nThis required some drastic action.\nI downloaded all of my Twitter data in a big zip file. I used the tweepy Python module to delete every tweet, every favorite, every direct message, and every retweet. Every person I followed was removed. Yes. All of it. Deleting 57,000 tweets took several hours but the remaining bits were removed in less than an hour. The tweepy package made the process painless and it was easy to just leave running in a terminal while I worked on other things.\nGoing forward #My social media goals (see above) drive my current decisions on Twitter. If I followed you before, but I don\u0026rsquo;t follow you now: please don\u0026rsquo;t take it personally. 
I\u0026rsquo;m gradually bringing people back into my feed and I\u0026rsquo;m looking at Twitter less often.\nIt might take me a little while to find you again. That\u0026rsquo;s okay. 🤗\nIn addition, I plan to share more real, interesting, original content and avoid the favorite and retweet buttons unless I find something incredibly valuable. I plan to avoid negativity but I will share things that make us question the norms that surround us.\nAfter all, questioning the status quo so much of what makes us human. We know there\u0026rsquo;s something better out there and we want to achieve it. We can\u0026rsquo;t do it by tearing each other down, but we can surely do it by building meaningful connections between all of us.\n","date":"17 December 2021","permalink":"/p/my-twitter-reset/","section":"Posts","summary":"At first I thought Twitter was the problem, but then I realized I was making poor choices. 🤔","title":"My Twitter reset"},{"content":"","date":null,"permalink":"/tags/social-media/","section":"Tags","summary":"","title":"Social Media"},{"content":"","date":null,"permalink":"/tags/amazon/","section":"Tags","summary":"","title":"Amazon"},{"content":"Fedora reigns supreme as my Linux distribution of choice when I deploy new workloads to public clouds. It gives me a well-tested, modern Linux system with tons of helpful tools.\nFedora\u0026rsquo;s cloud images provide a great base to begin building a cloud deployment, but sometimes I find myself wanting a highly customized image with some features I care about. For example, I may want some packages pre-installed that aren\u0026rsquo;t included with the default cloud image, or I may want certain services stopped or started at boot time.\nFortunately, Fedora includes Image Builder. Image Builder does the hard work of building your image, uploading it to a cloud provider, and then registering that image. It has built-in support for handing AMIs (Amazon Machine Images) at AWS and you can use it as a one-stop-shop for customizing an AMI for your use case.\nConfigure the AWS cli #Start by installing the awscli package:\n$ sudo dnf install awscli The cli will need some way to access your AWS account. I usually create an administrative user and attach an administrative policy to the user. This allows me to control my entire account via the cli tool.\n💣 (Note: Although this works well for my simple uses here, be careful using administrative users on large accounts. You may want to be more restrictive with your IAM policies, but that\u0026rsquo;s a topic for a different post.)\nWe start by accessing the IAM dashboard in your AWS account. Follow these steps:\nClick Users on the left and then Add users on the right Enter a name for your user (I used desktop-cli) Click the Access key - Programmatic access checkbox Click Next: Permissions Click Attach existing policies directly and then tick the box next to Administrator Access Click Next: Tags and add any tags if your organization requires them Click Next: Review and then Create User The next screen shows your Access key ID and your Secret access key. Click Show to display your secret access key. 
Go back to your cli tool and run the configuration command:\n$ aws configure --profile personal AWS Access Key ID [None]: \u0026lt;\u0026lt; enter your access key id here \u0026gt;\u0026gt; AWS Secret Access Key [None]:\u0026lt;\u0026lt; enter your secret access key here \u0026gt;\u0026gt; Default region name [None]: us-east-1 Default output format [None]: Let\u0026rsquo;s verify that the credentials work:\n$ aws --profile personal sts get-caller-identity { \u0026#34;UserId\u0026#34;: \u0026#34;xxx\u0026#34;, \u0026#34;Account\u0026#34;: \u0026#34;xxx\u0026#34;, \u0026#34;Arn\u0026#34;: \u0026#34;arn:aws:iam::xxx:user/desktop-cli\u0026#34; } To avoid typing your profile over and over, just export an environment variable:\n$ export AWS_PROFILE=personal Preparing permissions for image imports #When you import images into AWS, some services must take action on your behalf to copy your image from S3 into an EBS snapshot, and then register that snapshot as an AMI. This requires an S3 bucket and some extra permissions for some AWS services.\nFirst off, let\u0026rsquo;s create an S3 bucket to hold our image:\n$ aws s3 mb s3://image-upload-bucket-blog-post make_bucket: image-upload-bucket-blog-post From here, we follow the AWS documentation for importing images with some slight modifications for Image Builder. Save this file as trust-policy.json:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Service\u0026#34;: \u0026#34;vmie.amazonaws.com\u0026#34; }, \u0026#34;Action\u0026#34;: \u0026#34;sts:AssumeRole\u0026#34;, \u0026#34;Condition\u0026#34;: { \u0026#34;StringEquals\u0026#34;:{ \u0026#34;sts:Externalid\u0026#34;: \u0026#34;vmimport\u0026#34; } } } ] } Now we create the vmimport role with the policy that allows AWS to assume this role and import an image:\n$ aws iam create-role --role-name vmimport \\ --assume-role-policy-document \u0026#34;file://trust-policy.json\u0026#34; We need to set some policy for our new vmimport role now to limit what AWS is allowed to do in our account. Save the following as role-policy.json:\n{ \u0026#34;Version\u0026#34;:\u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;:[ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetBucketLocation\u0026#34;, \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:ListBucket\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:s3:::image-upload-bucket-blog-post\u0026#34;, \u0026#34;arn:aws:s3:::image-upload-bucket-blog-post/*\u0026#34; ] }, { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;ec2:ModifySnapshotAttribute\u0026#34;, \u0026#34;ec2:CopySnapshot\u0026#34;, \u0026#34;ec2:RegisterImage\u0026#34;, \u0026#34;ec2:Describe*\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; } ] } 🛑 Stop here and change image-upload-bucket-blog-post to the S3 bucket name that you used in the first step of this section.\n$ aws iam put-role-policy --role-name vmimport --policy-name vmimport \\ --policy-document \u0026#34;file://role-policy.json\u0026#34; Image Builder also needs a user that can perform some functions inside AWS to upload and import the image. 
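Before creating that user, it\u0026rsquo;s worth a quick sanity check that the vmimport role and its inline policy landed the way you expect (these commands assume the role and policy names used above):\n$ aws iam get-role --role-name vmimport $ aws iam get-role-policy --role-name vmimport --policy-name vmimport Both commands should print back the JSON documents you just supplied. 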
Let\u0026rsquo;s create a policy for a new user and save it as image-builder-policy.json:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;VisualEditor0\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;ec2:CreateTags\u0026#34;, \u0026#34;ec2:RegisterImage\u0026#34;, \u0026#34;ec2:ImportSnapshot\u0026#34;, \u0026#34;ec2:DescribeImportSnapshotTasks\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; }, { \u0026#34;Sid\u0026#34;: \u0026#34;VisualEditor1\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::image-upload-bucket-blog-post/*\u0026#34; } ] } Add the policy, create a user, and attach the policy to your user:\n$ aws iam create-policy --policy-name imagebuilder \\ --policy-document \u0026#34;file://image-builder-policy.json\u0026#34; $ aws iam create-user --user-name imagebuilder $ aws iam attach-user-policy --user-name imagebuilder \\ --policy-arn arn:aws:iam::YOUR_ACCOUNT_NUMBER:policy/imagebuilder The ARN for your IAM policy includes your account number and the ARN should appear after running the create-policy command.\nInstall Image Builder #It\u0026rsquo;s no secret that I love good command line tools over graphical interfaces, so we will follow the cli steps for Image Builder in the remainder of this post. Let\u0026rsquo;s start by installing everything we need for Image Builder and starting the socket activation unit:\n$ sudo dnf install composer-cli osbuild-composer $ sudo systemctl enable --now osbuild-composer.socket Verify that osbuild-composer is listening:\n$ composer-cli status show API server status: Database version: 0 Database supported: true Schema version: 0 API version: 1 Backend: osbuild-composer Build: NEVRA:osbuild-composer-%{epoch}:37-1.fc35.x86_64 That was easy!\nBuild and deploy our AMI #Image Builder uses specifications called blueprints. These are TOML files that tell Image Builder how to configure your image. You can configure these in many different ways, but here\u0026rsquo;s the one I\u0026rsquo;m using for this post:\nname = \u0026#34;major-perfect-f35\u0026#34; description = \u0026#34;Major\u0026#39;s perfect Fedora 35 cloud image\u0026#34; version = \u0026#34;0.0.1\u0026#34; [[packages]] name = \u0026#34;firewalld\u0026#34; [[packages]] name = \u0026#34;tmux\u0026#34; [[packages]] name = \u0026#34;vim\u0026#34; [[packages]] name = \u0026#34;zsh\u0026#34; [[customizations.user]] name = \u0026#34;major\u0026#34; key = \u0026#34;ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIcfW3YMH2Z6NpRnmy+hPnYVkOcxNWLdn9VmrIEq3H0Ei0qWA8RL6Bw6kBfuxW+UGYn1rrDBjz2BoOunWPP0VRM= major@amdbox\u0026#34; shell = \u0026#34;/usr/bin/zsh\u0026#34; groups = [\u0026#34;wheel\u0026#34;] [customizations.timezone] timezone = \u0026#34;America/Chicago\u0026#34; [customizations.firewall.services] enabled = [\u0026#34;ssh\u0026#34;, \u0026#34;dhcpv6-client\u0026#34;] [customizations.services] enabled = [\u0026#34;sshd\u0026#34;, \u0026#34;firewalld\u0026#34;] I saved this file as image.toml. 
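Before pushing the blueprint, you can also confirm that osbuild-composer on your machine knows how to produce AMIs; the compose types subcommand lists every image type it can build, and ami should appear in that list:\n$ composer-cli compose types 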
The next step involves pushing the blueprint and solving the dependencies in the blueprint:\n$ composer-cli blueprints push image.toml $ composer-cli blueprints depsolve major-perfect-f35 We need some extra configuration to tell Image Builder how to authenticate with AWS. Save this file as aws.toml:\nprovider = \u0026#34;aws\u0026#34; [settings] accessKeyID = \u0026#34;YOUR_ACCESS_KEY_ID\u0026#34; secretAccessKey = \u0026#34;YOUR_SECRET_ACCESS_KEY\u0026#34; bucket = \u0026#34;image-upload-bucket-blog-post\u0026#34; region = \u0026#34;us-east-1\u0026#34; key = \u0026#34;major-perfect-f35\u0026#34; Replace the bucket name with your S3 bucket and set an S3 object name in key. To get your access key and secret access key, run this command:\n$ aws iam create-access-key --user imagebuilder Finally, we\u0026rsquo;re ready to tell Image Builder to deploy our image! Run this last command to start the compose:\ncomposer-cli compose start major-perfect-f35 ami major-perfect-f35 aws.toml Replace major-perfect-35 with the name in your blueprint. Now, follow along in the system journal as your image is deployed:\n[AWS] 🚀 Uploading image to S3: image-upload-bucket-blog-post/major-perfect-f35 [AWS] 📥 Importing snapshot from image: image-upload-bucket-blog-post/major-perfect-f35 [AWS] 🚚 Waiting for snapshot to finish importing: import-snap-06bc48cb9779f98d8 [AWS] 🧹 Deleting image from S3: image-upload-bucket-blog-post/major-perfect-f35 [AWS] 📋 Registering AMI from imported snapshot: snap-02ed2710572b7b94b [AWS] 🎉 AMI registered: ami-0964ea222b6a6711e (Don\u0026rsquo;t ask me who put all of the emojis in the logging code. 🤭)\nLet\u0026rsquo;s verify that our image is fully imported:\n$ aws ec2 describe-images --filters \u0026#34;Name=tag:Name,Values=major-perfect-f35\u0026#34; { \u0026#34;Images\u0026#34;: [ { \u0026#34;Architecture\u0026#34;: \u0026#34;x86_64\u0026#34;, \u0026#34;CreationDate\u0026#34;: \u0026#34;2021-11-16T18:12:33.000Z\u0026#34;, \u0026#34;ImageId\u0026#34;: \u0026#34;ami-0964ea222b6a6711e\u0026#34;, \u0026#34;ImageLocation\u0026#34;: \u0026#34;xxx/major-perfect-f35\u0026#34;, \u0026#34;ImageType\u0026#34;: \u0026#34;machine\u0026#34;, \u0026#34;Public\u0026#34;: false, \u0026#34;OwnerId\u0026#34;: \u0026#34;xxx\u0026#34;, \u0026#34;PlatformDetails\u0026#34;: \u0026#34;Linux/UNIX\u0026#34;, \u0026#34;UsageOperation\u0026#34;: \u0026#34;RunInstances\u0026#34;, \u0026#34;State\u0026#34;: \u0026#34;available\u0026#34;, \u0026#34;BlockDeviceMappings\u0026#34;: [ { \u0026#34;DeviceName\u0026#34;: \u0026#34;/dev/sda1\u0026#34;, \u0026#34;Ebs\u0026#34;: { \u0026#34;DeleteOnTermination\u0026#34;: true, \u0026#34;SnapshotId\u0026#34;: \u0026#34;snap-02ed2710572b7b94b\u0026#34;, \u0026#34;VolumeSize\u0026#34;: 6, \u0026#34;VolumeType\u0026#34;: \u0026#34;gp2\u0026#34;, \u0026#34;Encrypted\u0026#34;: false } } ], \u0026#34;EnaSupport\u0026#34;: true, \u0026#34;Hypervisor\u0026#34;: \u0026#34;xen\u0026#34;, \u0026#34;Name\u0026#34;: \u0026#34;major-perfect-f35\u0026#34;, \u0026#34;RootDeviceName\u0026#34;: \u0026#34;/dev/sda1\u0026#34;, \u0026#34;RootDeviceType\u0026#34;: \u0026#34;ebs\u0026#34;, \u0026#34;Tags\u0026#34;: [ { \u0026#34;Key\u0026#34;: \u0026#34;Name\u0026#34;, \u0026#34;Value\u0026#34;: \u0026#34;major-perfect-f35\u0026#34; } ], \u0026#34;VirtualizationType\u0026#34;: \u0026#34;hvm\u0026#34; } ] } Go forth and build instances with your new, customized Fedora 35 AMI! 
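If you want to take the new image for a quick spin, a minimal run-instances call is enough; the image ID comes from the describe-images output above, the blueprint already bakes in an ssh key for the major user, and the instance type here is only an example:\n$ aws ec2 run-instances --image-id ami-0964ea222b6a6711e --instance-type t3.micro Grab the public IP address from aws ec2 describe-instances afterwards and ssh in as major. 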
🎉\nPhoto credit: Svetlozar Apostolov\n","date":"16 November 2021","permalink":"/p/deploy-custom-fedora-35-aws-image-builder/","section":"Posts","summary":"Want to build your own Fedora 35 image for AWS? Use Image Builder to build and deploy an image made just for you. 🏗","title":"Deploy a custom Fedora 35 AMI to AWS with Image Builder"},{"content":"","date":null,"permalink":"/tags/azure/","section":"Tags","summary":"","title":"Azure"},{"content":"I started work on packaging the Azure CLI and all of its components in Fedora back in July 2021 and the work finally finished just as the Fedora 35 development cycle ended. This required plenty of packaging work and I was thankful for all the advice I received along the way from experienced Fedora packagers.\nInstalling Azure CLI #Make sure you\u0026rsquo;re on Fedora 35 or later first. Then install azure-cli:\n$ sudo dnf -y install azure-cli $ az --version azure-cli 2.29.0 * core 2.29.0 * telemetry 1.0.6 Extensions: aks-preview 0.5.29 Python location \u0026#39;/usr/bin/python3\u0026#39; Extensions directory \u0026#39;/home/major/.azure/cliextensions\u0026#39; Authenticate with Azure #You have two methods for authenticating with Azure:\nvia a web browser (good for desktops and workstations) via a device code (good for remote servers or virtual machines) To authenticate with a browser, type az login and complete the steps in the browser window that appears.\nOtherwise, run az login --use-device-code and complete the steps manually using the URL and the access code provided on the command line.\nIf everything works well, you should get a message saying You have logged in. followed by some information about your account in JSON format.\nTo the cloud! #Most resources in Azure live inside a resource group, so let\u0026rsquo;s try to create one to ensure the CLI is working and authenticated properly:\n$ az group create --location eastus --resource-group major-testing-eastus { \u0026#34;id\u0026#34;: \u0026#34;/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/major-testing-eastus\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;eastus\u0026#34;, \u0026#34;managedBy\u0026#34;: null, \u0026#34;name\u0026#34;: \u0026#34;major-testing-eastus\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;provisioningState\u0026#34;: \u0026#34;Succeeded\u0026#34; }, \u0026#34;tags\u0026#34;: null, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Resources/resourceGroups\u0026#34; } Perfect! 🎉\nPhoto credit: Sergi Marló\n","date":"1 November 2021","permalink":"/p/install-azure-cli-fedora-35/","section":"Posts","summary":"Provision services on Microsoft Azure with the Azure CLI on Fedora 35. 💙","title":"Install Azure CLI on Fedora 35"},{"content":"Much of my daily work involves using multiple clouds and I do the same for my personal infrastructure, too. Building mesh networks between each piece of cloud infrastructure, my home, and my mobile phone quickly became overwhelming. That\u0026rsquo;s where Tailscale came in and completely changed my workflow.\nWhat is Tailscale? #The company claims it\u0026rsquo;s \u0026ldquo;a secure network that just works\u0026rdquo; and that definition fits well. Tailscale builds on protocols used in WireGuard to dynamically maintain a mesh network between any number of devices. Forget about sharing keys, managing complex IP space, and automating configuration changes. 
It handles all of that for you.\nSetting up Tailscale is outside of the scope of this post, but once you get going, you can add a node to the network and access anything else running Tailscale within your account. The free account comes with 20 devices and a subnet router.\nA subnet router allows you to install Tailscale on one device (perhaps one device in your home) and then relay traffic to all devices on that network through that single node. I use this on my home router and this saves me the work of installing Tailscale on multiple devices at home.\nEven private networks need security #Tailscale does give you a private mesh network with automatically updating encryption keys and access control lists (ACLs), but it needs to be secured like any other private network. If an attacker gains access to any of the nodes you have running Tailscale, then they could potentially wander throughout your Tailscale network.\nYou may be tempted to trust all traffic on your Tailscale network, but a firewall gives you one extra layer of protection so you\u0026rsquo;re only exposing the right ports to the right nodes. I wrote about the need for host firewalls (even in the cloud) on the Red Hat blog.\nTailscale and firewalld #I use firewalld to manage my firewall configuration and to keep it consistent for both IPv4 and IPv6 connections. You can use it to easily secure your Tailscale network.\nFor those unfamiliar with firewalld, it uses zones to define what traffic should be allowed in or out of a system. Each zone can have lots of rules assigned for certain services, ports, sources, and destinations. You can add network interfaces to each zone to control access.\nA router has a public-facing port that goes out to the internet and an internal-facing port that goes to the local area network (LAN). I put my internet-facing network device in the public zone and I put the internal-facing network device in the internal zone. Then I can allow inbound DNS on the internal zone but block it in the public zone.\nTailscale adds a network interface called tailscale0 by default. All of the traffic that flows into and out of your mesh network goes through that device. First off, decide which firewall zone you want to use for Tailscale. You could certainly create a new zone, but I\u0026rsquo;m using dmz for mine.\n$ sudo firewall-cmd --list-all --zone=dmz dmz target: default icmp-block-inversion: no interfaces: sources: services: ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: By default, dmz allows the ssh service and nothing else. If you want to expose a web server to the Tailscale network, just add the services:\n$ sudo firewall-cmd --add-service=http --zone=dmz $ sudo firewall-cmd --add-service=https --zone=dmz The last step is to add the tailscale0 interface to the dmz zone:\n$ sudo firewall-cmd --add-interface=tailscale0 --zone=dmz Now we can check the zone to be sure everything is ready:\n$ sudo firewall-cmd --list-all --zone=dmz dmz target: default icmp-block-inversion: no interfaces: tailscale0 sources: services: http https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: The last step is to save the firewall configuration to permanent storage so that it\u0026rsquo;s applied after a reboot:\n$ sudo firewall-cmd --runtime-to-permanent Enjoy! 
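To double check the result from another machine on your tailnet, hit this node over its Tailscale address (tailscale status prints the addresses; 100.x.y.z below is just a placeholder):\n$ tailscale status $ curl -s http://100.x.y.z Only the services you added to the dmz zone should answer and everything else should stay filtered. 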
💻\nPhoto credit: zhang kaiyv\n","date":"30 October 2021","permalink":"/p/secure-tailscale-networks-with-firewalld/","section":"Posts","summary":"Tailscale provides a handy private network mesh across multiple devices but it needs security just like any other network. 🕵","title":"Secure Tailscale networks with firewalld"},{"content":"","date":null,"permalink":"/tags/lenovo/","section":"Tags","summary":"","title":"Lenovo"},{"content":"My ThinkPad T490 was showing its age and wrestling with the internal NVIDIA GPU was a constant pain. Other than some unexpected early battery wear, it has been a good laptop for me.\nHowever, the annual Lenovo sale email came through my inbox recently and I decided to look for something new. I picked up an X1 Nano Gen 1 and the experience has been good so far.\nMy search #My criteria for a new laptop included:\nUnder $2,000 Durable and well built No dedicated GPU Screen resolution better than FHD (1920x1080) Good Linux support High quality keyboard MicroSD would be nice, but not required Battery life of 8 hours or more I hoped to find a laptop with an AMD chip that met my requirements because I really like the technologies that AMD packs into their designs. Every AMD laptop I could find had a FHD screen or worse on a 13-15\u0026quot; screen. My T490 was the high resolution model and I did give the latest Dell XPS 13 a shot, but the screen resolution left me with really fuzzy text in terminals.\nAs an aside, I\u0026rsquo;ve now purchased two Dell XPS 13 laptops about three years apart and I\u0026rsquo;ve returned both. These devices look great on paper but I was really dissatisfied with them in person.\nWhy I picked the X1 Nano #The Nano met nearly all of my requirements except for the MicroSD reader. It only has three ports: two USB-C ports and a combination microphone/headphone port. Everything else was great!\nThe screen is 2160x1350, which is slightly better than FHD and it\u0026rsquo;s a great balance between higher resolution and reduced battery usage.\nIt came out to $1,581 with taxes included.\nMy thoughts #Most of the thoughts I have are fairly subjective, but NotebookCheck has an extremely detailed review that goes into more detail than I will ever need. If you\u0026rsquo;re wondering about screen view angles, nits, and deep dives into battery usage, the folks at NotebookCheck are hard to beat.\nI installed Fedora 35 Beta easily and all of the hardware was fully recognized without any extra work. That surprised me since this laptop has plenty of modern hardware inside and it seems like every ThinkPad has one or two quirks to fix after installation.\nI\u0026rsquo;m a big fan of tlp for power management. One thing I noticed with the X1 Nano is that it performs with very low latency even when I crank down tlp to its most aggressive power saving settings. Screen refresh is immediate, terminals perform well, and Firefox seems to run at normal speed even with powertop showing all power saving settings set to the maximum. On my T490, I\u0026rsquo;d notice lag in my terminals while typing and Firefox had some weird artifacting that would appear briefly between page loads.\nMy battery life experience with the Nano is stellar. I\u0026rsquo;ve been working on this post for about 30-45 minutes with plenty of browser tabs open and the battery life has gone from about 82% to 77% during that time. 
This makes a world of difference for me when I take my ham radio gear into the field and I need a laptop battery that lasts.\nOne of my biggest fears for this laptop was the keyboard and touchpad. ThinkPad input devices are typically some of the best in the industry and I wondered how well that would work in a smaller form factor. They worked hard to shrink the buttons that would impact a user the least. The buttons along the edge shrunk the most, but the letter and number keys seem the same size. Key travel is a little reduced from the T490 but this is a very thin laptop.\nThe touchpad feels great and the size works just fine for me. The mouse buttons just above the touchpad are definitely not as tall as the ones on the T490 and that takes some adjustment. As for the trackpoint (the red \u0026ldquo;keyboard nipple\u0026rdquo;), I\u0026rsquo;ve never been good with these and I\u0026rsquo;m just as bad with this one as all of the other ones. 🤣\nSound from the speakers is above average, but it\u0026rsquo;s not going to win any awards. The microphone and webcam perform just as well as any other ThinkPad I\u0026rsquo;ve owned.\nMy configuration #My X1 Nano Gen 1 came with a i5-1140G7, 16GB of RAM, and a 512GB NVMe disk (Western Digital SN530). Here\u0026rsquo;s a look at what\u0026rsquo;s on the PCI bus and USB hubs:\n$ lspci 00:00.0 Host bridge: Intel Corporation Device 9a12 (rev 01) 00:02.0 VGA compatible controller: Intel Corporation Device 9a40 (rev 01) 00:04.0 Signal processing controller: Intel Corporation TigerLake-LP Dynamic Tuning Processor Participant (rev 01) 00:06.0 PCI bridge: Intel Corporation 11th Gen Core Processor PCIe Controller (rev 01) 00:07.0 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt 4 PCI Express Root Port #1 (rev 01) 00:07.2 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt 4 PCI Express Root Port #2 (rev 01) 00:08.0 System peripheral: Intel Corporation GNA Scoring Accelerator module (rev 01) 00:0a.0 Signal processing controller: Intel Corporation Tigerlake Telemetry Aggregator Driver (rev 01) 00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01) 00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #0 (rev 01) 00:0d.3 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #1 (rev 01) 00:12.0 Serial controller: Intel Corporation Tiger Lake-LP Integrated Sensor Hub (rev 20) 00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20) 00:14.2 RAM memory: Intel Corporation Tiger Lake-LP Shared SRAM (rev 20) 00:14.3 Network controller: Intel Corporation Wi-Fi 6 AX201 (rev 20) 00:15.0 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #0 (rev 20) 00:15.3 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #3 (rev 20) 00:16.0 Communication controller: Intel Corporation Tiger Lake-LP Management Engine Interface (rev 20) 00:1f.0 ISA bridge: Intel Corporation Device a087 (rev 20) 00:1f.3 Audio device: Intel Corporation Tiger Lake-LP Smart Sound Technology Audio Controller (rev 20) 00:1f.4 SMBus: Intel Corporation Tiger Lake-LP SMBus Controller (rev 20) 00:1f.5 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP SPI Controller (rev 20) 04:00.0 Non-Volatile memory controller: Sandisk Corp Device 5008 (rev 01) $ lsusb Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 003: ID 13d3:5411 IMC Networks Integrated Camera Bus 003 Device 002: ID 
06cb:00bd Synaptics, Inc. Prometheus MIS Touch Fingerprint Reader Bus 003 Device 004: ID 8087:0026 Intel Corp. AX201 Bluetooth Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub The webcam, fingerprint reader, and the bluetooth/wifi card are on the USB bus and this is fairly common for modern ThinkPads.\nConclusion #This is an all-around great laptop and I\u0026rsquo;d highly recommend it for anyone who travels often. It\u0026rsquo;s incredibly light and I barely notice when it\u0026rsquo;s in a backpack. I wouldn\u0026rsquo;t use this as a desktop replacement, but it can drive external monitors if needed and it performs really well on AC power. Text in terminals is easy to read and you won\u0026rsquo;t spend much time wrestling with hardware support in Linux.\nPhoto credit: Lenovo\n","date":"23 October 2021","permalink":"/p/thinkpad-x1-nano-gen1-review/","section":"Posts","summary":"One of the smallest ThinkPads delivers one of the best experiences I\u0026rsquo;ve had on a laptop. 💻","title":"ThinkPad X1 Nano Gen 1 Review"},{"content":"Containers are a great way to deliver and run all kinds of applications. Although many people build containers for server applications, you can also use them for client applications on your local workstation. This helps when you want to test new applications without disrupting your existing system or when you use an immutable system such as Fedora Silverblue.\nPodman takes this further by allowing you to run a client application without root access or daemons. This post covers how to build a container with an Xorg application and run it on a Fedora system.\nBuilding the container #Let\u0026rsquo;s start with a simple container that contains xeyes. This simple application simply puts a pair of eyes on your screen that follow your mouse movements around the desktop. It has very few dependencies and it\u0026rsquo;s a great way to test several capabilities on the desktop.\nHere\u0026rsquo;s a very simple container build file:\n# xeyes-container FROM registry.fedoraproject.org/fedora:latest RUN dnf -y install xeyes CMD xeyes Let\u0026rsquo;s install podman and build the container:\n$ sudo dnf -y install podman $ podman build -t xeyes -f xeyes-container . Run the container #Now that we have our xeyes container, let\u0026rsquo;s run it.\n$ podman run --rm xeyes Error: Can\u0026#39;t open display: We\u0026rsquo;re missing the DISPLAY variable inside the container. Let\u0026rsquo;s add it:\n$ echo $DISPLAY :0 $ podman run --rm -e DISPLAY xeyes Error: Can\u0026#39;t open display: :0 Well, we have the display variable inside now, but there\u0026rsquo;s another problem. Inside the container, xeyes can\u0026rsquo;t make a connection to our X daemon. This socket normally appears in /tmp/.X11-unix, but the container doesn\u0026rsquo;t have it. Let\u0026rsquo;s try adding this inside the container:\n$ podman run --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix xeyes Error: Can\u0026#39;t open display: :0 Darn! This should be working. Let\u0026rsquo;s check the system journal:\nAVC avc: denied { write } for pid=10817 comm=\u0026#34;xeyes\u0026#34; name=\u0026#34;X0\u0026#34; dev=\u0026#34;tmpfs\u0026#34; ino=42 scontext=system_u:system_r:container_t:s0:c143,c574 tcontext=system_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0 Uh oh. 
SELinux is upset that a container is trying to mess with the X0 socket for our Xorg server that sits in /tmp/.X11-unix. You may be tempted to run setenforce 0, but wait. We can fix this with podman!\nPodman allows you to set security options for a particular container with --security-opt. We need to run this container with an SELinux context that allows it to talk to something in tmpfs. Examining the container-selinux project shows that container_runtime_t can work with tmpfs:\ntype container_runtime_tmp_t alias docker_tmp_t; files_tmp_file(container_runtime_tmp_t) Let\u0026rsquo;s try adding this to our podman command now:\n$ podman run --rm -e DISPLAY \\ -v /tmp/.X11-unix:/tmp/.X11-unix \\ --security-opt label=type:container_runtime_t xeyes I now have a set of eyeballs on my desktop! 👀\nxeyes running on my desktop Extra credit #The xeyes application is extremely simple, but you can run much more complex applications using this same method. Keep in mind that certain applications might require extra packages inside the container, such as fonts or GTK themes. Jessie Frazelle has a great repository full of containers that she uses regularly and this might give you inspiration to create some of your own! 🤓\nPhoto credit: Jonny Gios on Unsplash\n","date":"17 October 2021","permalink":"/p/run-xorg-applications-with-podman/","section":"Posts","summary":"Package up graphical applications in containers and run them with podman. 🚢","title":"Run Xorg applications with podman"},{"content":"","date":null,"permalink":"/tags/xorg/","section":"Tags","summary":"","title":"Xorg"},{"content":"Controlling the LED backlight brightness on a laptop in Linux used to be a chore, but most window managers automatically configure the brightness buttons on your laptop. However, everything is much more customizable in i3 and it requires a little more configuration.\nControlling the light #First off, we need something that allows us to control the brightness. There\u0026rsquo;s a perfectly named project called light that does exactly this task! In Fedora, install it via:\n$ sudo dnf -y install light You can query the current brightness:\n$ light -G 5.00 On my laptop, 5% is very dim. Now we can increase the brightness using a simple command:\n$ light -A 5 $ light -G 9.99 And we can bring it right back down:\n$ light -U 5 $ light -G 5.00 Setting up hotkeys #Open your i3 configuration (usually ~/.config/i3/config) and add the hotkey configuration:\n# Handle backlight. bindsym XF86MonBrightnessUp exec light -A 5 bindsym XF86MonBrightnessDown exec light -U 5 Refresh the i3 configuration with $mod + shift + r.\nEach time you press the brightness up button on your laptop, the brightness level goes up by 5%. The brightness down button lowers it by 5%. This is a good setup for me since I normally only need to adjust it by a few stops depending on ambient light.\nHowever, if your lighting changes drastically from time to time, you can set up a different keybinding for a much more aggressive change:\n# Handle backlight. bindsym XF86MonBrightnessUp exec light -A 5 bindsym XF86MonBrightnessDown exec light -U 5 bindsym shift+XF86MonBrightnessUp exec light -A 25 bindsym shift+XF86MonBrightnessDown exec light -U 25 Hold shift and press brightness up or down. Now you are moving up and down by 25% brightness each time. Enjoy! 💡\nPhoto credit: Ariel on Unsplash\n","date":"14 October 2021","permalink":"/p/backlight-control-with-i3/","section":"Posts","summary":"Adjust the LED backlight on your laptop quickly in i3 on Linux. 
💡","title":"Backlight control with i3"},{"content":"I\u0026rsquo;ve tamed many of my complex firewall rules with firewalld over the years. It allows you to divide your devices, destinations, and network interfaces into zones. From there, you apply rules to zones. In addition, it handles all of the difficult work on the backend with iptables and nftables.\nForwarding ports remains a tricky process in firewalld, but there are a few different ways to work through it.\nUsing the simple syntax #The firewall-cmd man page shows the syntax for setting a forward port rule. Here\u0026rsquo;s a simple one for port 80 going to a device on a LAN:\n--add-forward-port=port=80:proto=tcp:toport=8080:toaddr=192.168.10.50 This line says to catch packets on port 80 and forward them to port 8080 on 192.168.10.50. You can also leave the toaddr off the arguments to forward the port to the same server where the firewall is running:\n--add-forward-port=port=80:proto=tcp:toport=8080 This rule catches packets on port 80 and redirects them to port 8080 on the same host. This could be handy for running a rootless podman container on a host where the container doesn\u0026rsquo;t have enough privileges to run on port 80.\nLet\u0026rsquo;s try this example on our firewall. First, I\u0026rsquo;ll start a rootless podman container on port 8080:\n$ podman run -d -p 8080:80 docker.io/traefik/whoami Trying to pull docker.io/traefik/whoami:latest... Getting image source signatures Copying blob 69d0b1dd9140 done Copying blob f089da391c25 done Copying blob 65aa504e5268 done Copying config 10d7504ea2 done Writing manifest to image destination Storing signatures 6e32b04c8933a6864249b20fd9aa27b8fc85c7c75f1c5b6f5f1ae76457f58c1c (The whoami container is a great way to hunt down networking problems with containers or routers in front of containers.)\nNow I can try to connect with curl via IPv4:\n$ curl -4 testserver curl: (7) Failed to connect to testserver port 80: No route to host Ah, we forgot to forward the port! 🤦🏻‍♂️ Let\u0026rsquo;s do that now:\n# firewall-cmd --add-forward-port=port=80:proto=tcp:toport=8080 success # firewall-cmd --list-all FedoraServer (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: cockpit dhcpv6-client ssh ports: protocols: forward: no masquerade: no forward-ports: port=80:proto=tcp:toport=8080:toaddr= source-ports: icmp-blocks: rich rules: And now the curl:\n➜ curl -4 testserver Hostname: 6e32b04c8933 IP: 127.0.0.1 IP: ::1 IP: 10.0.2.100 IP: fe80::3c96:8dff:fec3:2d01 RemoteAddr: 10.0.2.100:35586 GET / HTTP/1.1 Host: testserver User-Agent: curl/7.76.1 Accept: */* And now we have the reply from the whoami container on port 80! Let\u0026rsquo;s try IPv6:\n➜ curl -6 testserver curl: (7) Failed to connect to testserver port 80: Permission denied Darn! 🤔\nInvestigating IPv6 #As I mentioned earlier, firewalld manages iptables and nftables on the backend for you automatically. I\u0026rsquo;m using Fedora 34, and firewalld uses nftables by default. We need to see which rules nftables has for port 80:\n# nft list tables table inet firewalld table ip firewalld table ip6 firewalld # nft list table ip firewalld | grep 80 tcp dport 80 redirect to :8080 # nft list table ip6 firewalld | grep 80 # Ah, the rules for IPv6 aren\u0026rsquo;t there! 
There\u0026rsquo;s a little note in the firewall-cmd man page for us:\nFor IPv6 forward ports, please use the rich language.\nTime to get rich #You have two options here to get port forwarding working on both IPv4 and IPv6:\nUse the simple syntax for IPv4 and the rich rules for IPv6 Use rich rules for both IPv4 and IPv6 Option 2 is my preferred one since it\u0026rsquo;s consistent between both IPv4 and IPv6. Let\u0026rsquo;s start by removing our port forward rule (just run the same command as before but replace add with remove):\n# firewall-cmd --remove-forward-port=port=80:proto=tcp:toport=8080 success Now, let\u0026rsquo;s add some rich rules:\n# firewall-cmd --add-rich-rule=\u0026#39;rule family=ipv4 forward-port to-port=8080 protocol=tcp port=80\u0026#39; success # firewall-cmd --add-rich-rule=\u0026#39;rule family=ipv6 forward-port to-port=8080 protocol=tcp port=80\u0026#39; success # firewall-cmd --list-all FedoraServer (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: cockpit dhcpv6-client ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: rule family=\u0026#34;ipv6\u0026#34; forward-port port=\u0026#34;80\u0026#34; protocol=\u0026#34;tcp\u0026#34; to-port=\u0026#34;8080\u0026#34; rule family=\u0026#34;ipv4\u0026#34; forward-port port=\u0026#34;80\u0026#34; protocol=\u0026#34;tcp\u0026#34; to-port=\u0026#34;8080\u0026#34; Now let\u0026rsquo;s try connecting on IPv6:\n$ curl -6 testserver Hostname: 6e32b04c8933 IP: 127.0.0.1 IP: ::1 IP: 10.0.2.100 IP: fe80::3c96:8dff:fec3:2d01 RemoteAddr: 10.0.2.100:35590 GET / HTTP/1.1 Host: testserver User-Agent: curl/7.76.1 Accept: */* Double check the nftables rules:\n# nft list table ip firewalld | grep 80 tcp dport 80 redirect to :8080 # nft list table ip6 firewalld | grep 80 tcp dport 80 redirect to :8080 And save the firewalld configuration so it persists through reboots:\n# firewall-cmd --runtime-to-permanent success Enjoy your freshly forwarded ports. 🎉\nPhoto credit: Eric Prouzet on Unsplash\n","date":"11 October 2021","permalink":"/p/forwarding-ports-with-firewalld/","section":"Posts","summary":"Learn how to forward ports with firewalld for IPv4 and IPv6 destinations. 🕵🏻","title":"Forwarding ports with firewalld"},{"content":"","date":null,"permalink":"/tags/iptables/","section":"Tags","summary":"","title":"Iptables"},{"content":"","date":null,"permalink":"/tags/ipv6/","section":"Tags","summary":"","title":"Ipv6"},{"content":"I set a goal this summer to read a little each day and work through my reading list on Goodreads. I managed to make it through nine books! If you\u0026rsquo;re looking for some interesting books to read, this post highlights several of the ones I enjoyed.\nTom Clancy #I saw all the big Tom Clancy movies as a kid, such as The Hunt for Red October, Clear and Present Danger, and Patriot Games. All of them felt so real and so possible. My goal was to read them in order of publication and I went through a few this summer.\nBefore the summer, I read The Hunt for Red October, Without Remorse, and Patriot Games. Red October reins supreme as my most favorite of the Jack Ryan universe so far, but I really enjoyed the origin story of John Clark in Without Remorse. It shed so much more light on his character that explains his actions in later books.\nThe Cardinal of the Kremlin #This book packed a lot of suspense into each chapter. 
It followed three groups: members of the US intelligence community, members of the Soviet Union, and tribes in Afghanistan. At many times in the book I found myself wondering how big the clash would be if all three groups ran into each other. 😱\nMy favorite parts of most Tom Clancy books is that the plot usually feels plausible and the plot devices feel extremely well researched. The story in this book is no exception. Also, the book gives you a follow-up on Red October and explains what happened to Ramius and the boat itself.\nClear and Present Danger #I was very familiar with the movie of the same name starring Harrison Ford as I watched it multiple times. The movie has always been one of my favorites, but as soon as I started the book, I realized the movie covered a tiny part of the story in the book.\nThe book had an incredible amount of action (more than the movie!) and the story went much deeper into the US drug war. The culture of cartels and infighting among them was laid bare many times. You really begin to question who had more virtue \u0026ndash; the US coming in to fight the cartels or the cartels themselves.\nThere was also a lot more to many of the higher-up government characters that appeared briefly in the movie. If you thought you disliked Ritter after seeing the movie, just go ahead and read the book. 😠\nThe Sum of All Fears #This book has it all. Nuclear weapons, submarine warfare, tensions in East Germany, battles in the Middle East, and a US presidential cabinet that is incredibly dysfunctional. During all of it, Jack Ryan finds himself in very challenging conditions where he questions many of his beliefs. Many parts of this book feel predictable for a while, but then something crazy happens and nothing you predicted comes true.\n💣 Warning: the movie version of this book is awful. The movie features the wrong bad guys, about 5% of the story from the book is included, and Jack Ryan isn\u0026rsquo;t even in the same stage of his life as he is in the book! The ending made no sense and the presidential cabinet challenges were cut down to almost nothing.\nTrust me: read the book, skip the movie.\nBusiness time #In an effort to change gears, I sat down to read Barbarians at the Gate: The Fall of RJR Nabisco by Bryan Burrough and John Helyar. It\u0026rsquo;s the very detailed story of the leveraged buyout (LBO) of RJR Nabisco in the 1980\u0026rsquo;s. Many people say that it\u0026rsquo;s one of the best business books out there and I can see why.\nBarbarians at the Gate #The book has tons of quotes and stories from everyone involved in the process and those are woven into a (sometimes meandering) story that showcases how everything gradually fell apart. Greed, incompetence, and ego reigns supreme throughout the book and I learned more about LBOs than I ever wanted to know.\nYou don\u0026rsquo;t need deep business knowledge to read this book, but knowing a little bit about how the stock market works is helpful. The book focuses mainly on the people and the author works diligently to explain some of the trickier financial events.\nSomething dark #I was looking for a new video game to play with a good story, and Spec Ops: The Line showed up in one of the lists. I played it a long time ago and the story was incredibly unique and gripping. The only thing that came close for me was Half-Life. The story was loosely based on Heart of Darkness, by Joseph Conrad. I set out to read the story to better understand the connection. 
(If you\u0026rsquo;ve seen the movie Apocalypse Now, that one is loosely inspired by the same book.)\nHeart of Darkness #First off, this story is quite dark and there are quite a few cringe-worthy and racist descriptions of people that the main character meets along the way. The book was written in the 1890\u0026rsquo;s, so take that into consideration if you read it.\nThe book is told in first person by a single man. Some of it feels really clear while some of the story is shrouded in a haze. The author leaves a lot to the imagination and I found myself going backwards a page or two to figure out what was happening. Prepare yourself for a dark, troubling story that will leave you with more questions than answers.\nWhat\u0026rsquo;s better than an apocalypse? #After reading Heart of Darkness, I searched for some apocalyptic and post-apocalyptic fiction to keep the dark theme going. I read through a bunch of recommended lists and found two books: The Passage and Earth Abides.\nThe Passage #The Passage is the first book in a trilogy by Justin Cronin. My favorite part about this book is that you get to see events before the apocalypse, the events during, and the events long after. So many of these books seem to jump in after something bad has happened and you\u0026rsquo;re left wondering what the event really was. (The Road from Cormac McCarthy fits this description but it\u0026rsquo;s an incredible book and I recommend it.)\nThe book starts in a version of the USA that feels like the present, but some things are a little off. For example, there are government checkpoints on various highways and New Orleans is essentially abandoned. A strange virus was discovered in South America that somehow cures various conditions (even cancer), but then people either die after a while or they go into an unusual vampire/zombie state. However, this isn\u0026rsquo;t your average zombie book \u0026ndash; it goes much, much deeper.\nThe first third of the book covers the time before and during the world-ending event. The second third goes forward about 100 years to people trying to survive afterwards. The last third goes forward a few more years when everything gets worse (you wonder how this is possible).\nThis book has deep commentary on life, death, love, relationships, religion, and humanity. It lays bare what it means to be human. I wish I could explain it better, but that\u0026rsquo;s the best I can do.\nI could not put this book down.\nEarth Abides #I took a break from The Passage trilogy to change gears and read one of the original post-apocalyptic (called \u0026ldquo;after the fall\u0026rdquo; back then) books called Earth Abides by George R. Stewart. It was written in the 1950\u0026rsquo;s and follows the main character as he wakes up after a snake bite to find out that some kind of disease has stricken the entire USA.\nAt first, he\u0026rsquo;s confused at where everyone has gone, but he finds newspaper clippings and clues in his small town that help him understand what happened. The thing I liked about this book is that things at first are easy. Lots of people had food left over in fridges and grocery stores were well stocked. Even the power and water were working.\nOver time, all of that degrades, as does the main character\u0026rsquo;s mood. 
Things become more desperate and a gap widens between the people who lived before the apocalypse and those born after.\nThis book has some derogatory and outdated stereotypes around people of color and women, but there is plenty of commentary on humanity and society that holds true. I found it to be quite boring in parts, but I enjoyed it overall.\nThe Twelve #Back to Justin Cronin\u0026rsquo;s trilogy I went! His second book shows that there\u0026rsquo;s a lot more to the infected people (called \u0026ldquo;virals\u0026rdquo; in the book) than meets the eye. The disease has mind-control properties where certain beings used as test subjects in the first novel had power over the viral hordes.\nMany of the character from the first book are back and different challenges await them. This book feels much more desperate than the first. Sometimes I find that the second book in some trilogies lacks in suspense compared to the first, but you\u0026rsquo;ll find plenty of suspenseful moments here.\nIf you don\u0026rsquo;t think things can get worse, they can. This book proves it in great detail.\nThe City of Mirrors #The final book of The Passage trilogy ties up all of the loose ends. As you read it, you\u0026rsquo;re bouncing between two or three different events at the same time that all seem like they will come together to destroy everyone.\nThe book begins on a fairly positive note, but later, everyone seems to be running on empty and all may be lost. One of the most critical characters in the book, Amy, has her true purpose exposed (finally!) and you finally learn why you see those diary excerpts being presented at a conference after the year 1000 A.V. (after virus).\nThis trilogy ends in such a way where you feel like the story is resolved, but you still have questions. Some may look at the ending and say \u0026ldquo;Wow, that\u0026rsquo;s terribly sad and soul-crushing.\u0026rdquo; while others look at it as the most beautiful ending imaginable. You\u0026rsquo;ll have to see for yourself. 😉\nPhoto credit: Nick Hillier on Unsplash\n","date":"6 September 2021","permalink":"/p/my-summer-2021-reading-list/","section":"Posts","summary":"I set out to read a bunch of books this summer and succeeded! Here\u0026rsquo;s my reading list. 📚","title":"My summer 2021 reading list"},{"content":"Hetzner has always been a reliable and cost-effective hosting company for me for several years. I\u0026rsquo;ve run icanhazip.com on their dedicated servers and I run several small applications in their cloud.\nWhen I run containers, I love using Fedora CoreOS for its easy updates and very small server footprint. Almost everything you need for hosting containers is provided right out of the box, but you can add extra packages via rpm-ostree layers and reboot to use them.\nThis post shows you how to deploy Fedora CoreOS in Hetzner\u0026rsquo;s cloud as an inexpensive and efficient method for hosting your container workloads.\nRoadblocks #First off, Hetzner does not offer Fedora CoreOS as one of its cloud base images. That normally wouldn\u0026rsquo;t be a problem, but they don\u0026rsquo;t allow you to upload snapshots or base images, either. Don\u0026rsquo;t worry! We can think creatively and deploy the image via the cloud rescue environment.\nAnother challenge is that ignition, the CoreOS first boot configuration tool, does not support Hetzner\u0026rsquo;s metadata service at this time. I\u0026rsquo;m working on a pull request to add this support. 
We can get creative here and embed our ignition configuration inside the instance itself. That\u0026rsquo;s a bit annoying since all of your instances will get the same configuration, but that works out fine for my needs since I mainly care about ssh keys being present.\nGenerating ignition configuration #The ignition configuration allows us to specify ssh keys (and many other possible configurations) for the first boot. First, we start with a really basic butane configuration:\n# hetzner-coreos.butane variant: fcos version: 1.4.0 passwd: users: - name: core groups: - wheel ssh_authorized_keys: - ssh-rsa AAAAB3NzaC1y... You can use any username here that you prefer and add your public ssh key. Compile the configuration into an ignition file with butane:\nbutane hetzner-coreos.butane \u0026gt; config.ign Keep this config.ign file handy because we need it before we snapshot our Fedora CoreOS image later.\nPrepare the CoreOS snapshot #Although Hetzner doesn\u0026rsquo;t allow for uploading your own snapshot, you can replace the entire root disk of an instance from the rescue system. From there, we can snapshot the root disk storage and use that as our new base image.\nDownload and install the hcloud tool first. Follow the installation instructions to generate an API key and store it on your system.\nStart by building a basic Fedora 34 instance where we can deploy the Fedora CoreOS image:\n$ hcloud server create --datacenter nbg1-dc3 --image fedora-34 \\ --type cpx11 --name coreos-deployer Activate the rescue system for the instance you just created and reboot into the rescue environment:\n$ hcloud server enable-rescue coreos-deployer 1.148s [=================================] 100.00% Rescue enabled for server 14168423 with root password: xxxxx $ hcloud server reboot coreos-deployer 656ms [==================================] 100.00% Server 14168423 rebooted Use ssh to log into the rescue environment (user: root, password provided in the enable-rescue step). The rescue environment is in a ramdisk and we don\u0026rsquo;t have enough space to build the coreos-installer or download a raw disk, but we can get creative and stream the filesystem directly from a Fedora CoreOS image download.\nI prefer to live dangerously and I run the testing release, but there are three releases available for download. Download, decompress, and write the image to the root disk all at once:\nexport COREOS_DISK=\u0026#34;https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/34.20210821.2.0/x86_64/fedora-coreos-34.20210821.2.0-metal.x86_64.raw.xz\u0026#34; curl -sL $COREOS_DISK | xz -d | dd of=/dev/sda status=progress This process should take one or two minutes to complete.\nEmbed the ignition configuration #We could hop out of rescue now and reboot right into Fedora CoreOS, but we need to provide SSH keys for our instance. 
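Before embedding the config.ign file from earlier, it\u0026rsquo;s worth making sure it parses cleanly; if you have the ignition-validate tool from the upstream ignition project installed, a one-liner catches syntax mistakes before they end up baked into a snapshot:\n$ ignition-validate config.ign It only prints something if the configuration is invalid. 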
The config.ign file we generated in the first section of this post will be deployed as /ignition/config.ign in the image.\nLet\u0026rsquo;s find the boot partition:\n# fdisk -l /dev/sda Disk /dev/sda: 38.2 GiB, 40961572864 bytes, 80003072 sectors Disk model: QEMU HARDDISK Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 00000000-0000-4000-A000-000000000001 Device Start End Sectors Size Type /dev/sda1 2048 4095 2048 1M BIOS boot /dev/sda2 4096 264191 260096 127M EFI System /dev/sda3 264192 1050623 786432 384M Linux filesystem « boot /dev/sda4 1050624 5003230 3952607 1.9G Linux filesystem « root Mount the boot partition and deploy the config.ign:\nmount /dev/sda3 /mnt mkdir /mnt/ignition vi /mnt/ignition/config.ign « copy/paste your config.ign from earlier umount /mnt Let\u0026rsquo;s power off the instance to avoid booting it. We want all subsequent boots to be as clean as possible.\nroot@rescue ~ # poweroff Make the snapshot #Create a snapshot from our powered off server:\nhcloud server create-image --description fedora-34-coreos \\ --type snapshot coreos-deployer This usually takes 1-2 minutes. Let\u0026rsquo;s get our image ID:\n$ hcloud image list | grep fedora-34-coreos 46874212 snapshot - fedora-34-coreos 0.96 GB 40 GB Wed Sep 1 10:07:04 CDT 2021 - Boot the instance #Now that we have a snapshot with our ignition configuration embedded in it, let\u0026rsquo;s make a new instance!\nhcloud server create --datacenter nbg1-dc3 --image 46874212 --type cpx11 \\ --ssh-key personal_servers --name first-coreos-instance Hetzner normally boots cloud images really quickly, but it takes a bit longer when booting from snapshots. I assume that they have the common base images cached on most hypervisors so they can provision them really quickly. The delay isn\u0026rsquo;t too bad here: the instances usually take about 90 seconds to boot.\nOnce it boots, you should see notes from ignition on bootup about your configuration:\nFedora CoreOS console Use ssh to login as the core user:\n$ ssh core@INSTANCE_IP_ADDRESS Fedora CoreOS 34.20210821.2.0 Tracker: https://github.com/coreos/fedora-coreos-tracker Discuss: https://discussion.fedoraproject.org/c/server/coreos/ [core@localhost ~]$ podman --version podman version 3.3.0 Wrapping up #Now that you have a snapshot made, you can delete your original instance (I called it coreos-deployer above) and just build off that snapshot whenever you need to. That should save you a few Euros per month. 💸\nOnce ignition has support for Hetzner\u0026rsquo;s metadata service, the extra step of embedding your configuration won\u0026rsquo;t be needed.\nAlso, if anyone from Hetzner is reading this post, I\u0026rsquo;d love to get Fedora CoreOS as one of the options for base images in your cloud! 🤗\nPhoto credit: Daniel Seßler on Unsplash\n","date":"20 August 2021","permalink":"/p/deploy-fedora-coreos-in-hetzner-cloud/","section":"Posts","summary":"Launch your containers on Fedora CoreOS instances in Hetzner cloud with a few workarounds. 🚀","title":"Deploy Fedora CoreOS in Hetzner cloud"},{"content":"Sometimes automation is your best friend and sometimes it isn\u0026rsquo;t. Typically, when two devices are connected via ethernet cables, they negotiate the best speed they can manage across a network link. They also try to agree on whether they can run full or half duplex across the network link.\nMost of the time, this works beautifully. 
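Removing the original build instance is a single hcloud call, using the server name from earlier:\n$ hcloud server delete coreos-deployer 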
It can break down with strange networking configs, damaged adapters, or finicky cables. In those situations, if you can rule out physical damage to any parts involved, you may need to disable autonegotiation to get the functionality you want.\nSlow home network #My home internet connection has a 400 megabit per second downlink, but I noticed my downloads were slower this week and if someone was actively downloading something, such as an Xbox update, the internet latency shot up past 300ms. A few speed tests later and I found my internet speeds were limited to about 88 megabits per second.\nMy home firewall is a Dell Optiplex 7060 with an Intel I350-T2 dual-port gigabit ethernet card. My internet-facing NIC connects to a Netgear CM500-NAS cable modem. A quick check of my firewall\u0026rsquo;s external adapter revealed the problem:\n# ethtool enp2s0f0 | grep Speed Speed: 100Mb/s Well that\u0026rsquo;s certainly not right. The I350 is a gigabit card and the CM500-NAS is rated for speeds well over 100 Mb/s. Rebooting the cable modem and the router itself didn\u0026rsquo;t change anything. I replaced the ethernet cable with a few others and there was no change there, either.\nAt this point, I was worried that my cable modem or adapter might have malfunctioned. At least there\u0026rsquo;s one more option.\nManually set the link speed #Before we approach this section, here\u0026rsquo;s a reminder:\n💣 Be sure that you have some other way to get into your system if manually setting the link speed fails.\nIf the system is remote to you, such as a dedicated server, virtual machine, or a faraway edge device that requires a 5-hour drive, you may want to consider other options. There\u0026rsquo;s a chance that manually setting the link speed may cause the link negotiation to fail entirely.\nsystemd-networkd gives you plenty of low-level control over network interfaces using the link files. These files have two parts:\na [Match] section that tells systemd-networkd about the network devices that need special configuration a [Link] section that has special configuration for a network interface We need two of these configurations in the [Link] section:\nBitsPerSecond= specifies the speed for the device with K/M/G suffixes Duplex= specifies half or full duplex In this example, I\u0026rsquo;ll match the interface on its MAC address and set the speed/duplex:\n# /etc/systemd/network/internet.link [Match] MACAddress=a0:36:9f:6e:52:26 [Link] BitsPerSecond=1G Duplex=full We can apply the configuration change by restarting systemd-networkd:\nsystemctl restart systemd-networkd Now let\u0026rsquo;s check the speeds:\n# ethtool enp2s0f0 | grep Speed Speed: 1000Mb/s Perfect! 🎉\nDigging for answers #Once the network speeds were working well again and my kids weren\u0026rsquo;t upset by glitches in their Netflix shows, I decided to look for issues that might be causing the negotiation to fail. Some quick checks of the network card show some potential issues:\n# ethtool -S enp2s0f0 | grep error rx_crc_errors: 22 rx_missed_errors: 0 tx_aborted_errors: 0 tx_carrier_errors: 0 tx_window_errors: 0 rx_long_length_errors: 0 rx_short_length_errors: 0 rx_align_errors: 0 rx_errors: 33 tx_errors: 0 rx_length_errors: 0 rx_over_errors: 0 rx_frame_errors: 0 rx_fifo_errors: 0 tx_fifo_errors: 0 tx_heartbeat_errors: 0 I\u0026rsquo;ve swapped network cables between the devices a few times and these errors continue to appear frequently. My I350 card is several years old and discontinued from Intel, so this could be the culprit. 
It\u0026rsquo;s also been moved between quite a few different computers over the years. A replacement with something new might be in my future.\nPhoto credit: Charles Deluvio on Unsplash\n","date":"20 August 2021","permalink":"/p/set-network-interface-speed-systemd-networkd/","section":"Posts","summary":"Sometimes network interface autonegotiation doesn\u0026rsquo;t work as well as it should. Luckily, you can fix it with systemd-networkd. 🔧","title":"Set network interface speed with systemd-networkd"},{"content":"","date":null,"permalink":"/tags/systemd-networkd/","section":"Tags","summary":"","title":"Systemd-Networkd"},{"content":"","date":null,"permalink":"/tags/certificates/","section":"Tags","summary":"","title":"Certificates"},{"content":"","date":null,"permalink":"/tags/letsencrypt/","section":"Tags","summary":"","title":"Letsencrypt"},{"content":"Wildcard certificates make it easy to secure lots of subdomains under a single domain. For example, you can secure web.example.com and mail.example.com with a single certificate for *.example.com. Fortunately, LetsEncrypt allows you to get wildcard certificates via a DNS ownership check (often called a DNS-01 challenge).\nFortunately, Traefik can request a certificate from LetsEncrypt automatically and complete the challenge for you. It can publish DNS records to multiple providers, but my favorite is Cloudflare. They will host your DNS zones and records for free. They also have a robust API for managing DNS records (also free).\nIn this post, we will cover the basics of getting TLS working with Traefik. We can add a wildcard certificate on top and then re-use that same certificate for other containers running behind Traefik.\nBasic setup #First, we need a running instance of Traefik. The Traefik documentation explains this entire process in detail and I highly recommend reading the basics on configuration discovery, routers, and TLS settings.\nWe will use docker-compose to make this easier to manage. 
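You also need a container engine with a Docker-compatible API listening on /var/run/docker.sock. If you do not have one yet, one option on Fedora is the moby-engine package (Docker's own packages work as well):

# moby-engine is Fedora's build of the Docker engine
sudo dnf install moby-engine
sudo systemctl enable --now docker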
If you\u0026rsquo;re on Fedora, install docker-compose:\ndnf install docker-compose Now we need a docker-compose.yml file:\n--- version: \u0026#34;3\u0026#34; services: traefik: image: traefik:latest container_name: traefik restart: unless-stopped command: # Tell Traefik to discover containers using the Docker API - --providers.docker=true # Enable the Trafik dashboard - --api.dashboard=true # Set up LetsEncrypt - --certificatesresolvers.letsencrypt.acme.dnschallenge=true - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare - --certificatesresolvers.letsencrypt.acme.email=EMAIL_ADDRESS - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json # Set up an insecure listener that redirects all traffic to TLS - --entrypoints.web.address=:80 - --entrypoints.web.http.redirections.entrypoint.to=websecure - --entrypoints.web.http.redirections.entrypoint.scheme=https - --entrypoints.websecure.address=:443 # Set up the TLS configuration for our websecure listener - --entrypoints.websecure.http.tls=true - --entrypoints.websecure.http.tls.certResolver=letsencrypt - --entrypoints.websecure.http.tls.domains[0].main=home.example.com - --entrypoints.websecure.http.tls.domains[0].sans=*.home.example.com environment: - CLOUDFLARE_EMAIL=CLOUDFLARE_ACCOUNT_EMAIL_ADDRESS - CLOUDFLARE_DNS_API_TOKEN=CLOUDFLARE_TOKEN_GOES_HERE ports: - 80:80 - 443:443 volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - certs:/letsencrypt labels: - \u0026#34;traefik.enable=true\u0026#34; - \u0026#39;traefik.http.routers.traefik.rule=Host(`home.example.com`)\u0026#39; - \u0026#34;traefik.http.routers.traefik.entrypoints=websecure\u0026#34; - \u0026#34;traefik.http.routers.traefik.tls.certresolver=letsencrypt\u0026#34; - \u0026#34;traefik.http.routers.traefik.service=api@internal\u0026#34; - \u0026#39;traefik.http.routers.traefik.middlewares=strip\u0026#39; - \u0026#39;traefik.http.middlewares.strip.stripprefix.prefixes=/traefik\u0026#39; In this example, we tell Traefik about our desired setup in the command section, including our listeners. Our insecure listener on port 80 redirects to secure connections on port 443 and we tell Traefik that we plan to use LetsEncrypt to get the certificates.\nWe provide the username and Cloudflare API key in the environment section. Follow Cloudflare\u0026rsquo;s guides for managing API tokens and keys carefully to generate a token.\nThe labels section sets up a rule where traffic destined for home.example.com goes to the Traefik dashboard. this is helpful in case you make mistakes or you can\u0026rsquo;t figure out why something is working. You can go to the dashboard to show all of the existing services, listeners, and other configurations.\n☝🏻 Before applying this docker-compose file, change a few things:\nSet your LetsEncrypt email address in the line with --certificatesresolvers.letsencrypt.acme.email Set your Cloudflare account email address for the CLOUDFLARE_EMAIL environment variable Set your Cloudflare DNS API token for the CLOUDFLARE_DNS_API_TOKEN environment variable Change the Host() rules from example.com to match your domain name Run docker-compose up -d and then docker-compose logs -f traefik to see if Traefik came up successfully with certificates. If you run into any problems, double check that your Cloudflare email and token are accurate. 
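A couple of commands come in handy while debugging. These assume the container name and volume path from the compose file above, and the second one assumes the image ships a basic shell:

# Watch the ACME activity as Traefik requests the certificate
docker-compose logs -f traefik | grep -i acme

# Peek at the ACME storage to see whether a certificate landed there
docker-compose exec traefik cat /letsencrypt/acme.json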
Also verify that your Cloudflare token has the correct permissions to adjust the dns zone.\nAdding a container #At this point, we can add another container and it can use the same TLS certificate we requested from LetsEncrypt already!\nThe librespeed project provides a self-hosted network speed test that you can run on any network. It also runs perfectly inside a container. The linuxserver.io librespeed container is well maintained and easy to run.\nAdd this to your docker-compose.yml right under the Traefik configuration:\nlibrespeed: image: ghcr.io/linuxserver/librespeed container_name: librespeed restart: unless-stopped ports: - 80 labels: - \u0026#34;traefik.enable=true\u0026#34; - \u0026#34;traefik.http.routers.librespeed.rule=Host(`librespeed.home.example.com`)\u0026#34; - \u0026#34;traefik.http.routers.librespeed.entrypoints=websecure\u0026#34; - \u0026#34;traefik.http.routers.librespeed.tls.certresolver=letsencrypt\u0026#34; Check the labels section. We first enable Traefik so it will route requests to the container. Then we set a host rule so that traffic for librespeed.home.example.com comes to this container. We only listen for TLS traffic (remember our redirect for insecure traffic earlier).\nFinally, we tell Traefik to use the same certresolver as before. Traefik is smart enough to know that *.home.example.com covers the librespeed.home.example.com subdomain just fine.\nRun docker-compose up -d once more and now librespeed has a secure connection using the original wildcard certificate.\nRenewals #LetsEncrypt certificates are valid for only 90 days. That\u0026rsquo;s why automation plays such an important role in handling renewals. You certainly don\u0026rsquo;t want to set calendar reminders to log into your server and run a script every 90 days. 😱\nTraefik automatically knows when the expiration date approaches. When the certificate has less than 30 days left until the expiration date, Traefik automatically renews the certificate.\n💣 Be careful with your DNS zone and with your DNS API keys! If you accidentally delete the API key or make big changes to your DNS zone, there\u0026rsquo;s a chance that Traefik may not be able to renew the certificate.\nPhoto credit: Veron Wessels on Unsplash\n","date":"16 August 2021","permalink":"/p/wildcard-letsencrypt-certificates-traefik-cloudflare/","section":"Posts","summary":"Re-use the same wildcard TLS certificate for multiple containers running behind traefik. 🚦","title":"Wildcard LetsEncrypt certificates with Traefik and Cloudflare"},{"content":"GitHub Actions provides infrastructure for all kinds of amazing automation. Anyone can test software, build packages, deploy applications, or even publish a blog (like this one!) with a few snippets of YAML. I often use it to bundle my software in a container after testing it. 🤖\nOne day, as I was working through another Packer configuration, I wondered if there was a way to build cloud images directly in GitHub Actions without building an instance in the cloud, making tons of changes, and snapshotting that image. Building a cloud image without booting it first seems like a cleaner way to work and it seems like it could be an easier workflow.\nI worked on the Image Builder team at Red Hat last year and really enjoyed the way we could build an image anywhere and then ship that image anywhere. That gave me an idea: What if I could use Image Builder in GitHub Actions to ship images to AWS with all the customizations I want, perhaps even on a schedule? 🤔\nWhat is Image Builder? 
#The idea behind Image Builder is that anyone should be able to create images for various clouds and virtualization platforms with some simple software. Nobody should worry about how a particular cloud sets up cloud-init or what kernel configuration might be required to run in a particular cloud. Someone should do that for you and you should focus on what you need in your images to be successful at your task.\nImage Builder has two parts:\nosbuild: It drives the low-level image building processes. All of the loopback setup, image packaging, and per-cloud configuration adjustments all happen here.\nosbuild-composer: It exposes different APIs which allow you to specify how the image should be built in a brief TOML configuration. You pass the TOML blueprint to osbuild-composer, tell it what type of image you want, and where you want the image delivered. It takes care of all of that.\nYou can get Image Builder on all current versions of Fedora, CentOS Stream, and Red Hat Enterprise Linux 8.3 or later. Install it via dnf:\n$ sudo dnf install osbuild-composer $ sudo systemctl enable --now osbuild-composer.socket If you love to DIY (do it yourself), read my Build AWS images with Image Builder blog post from last summer. From here on out, I\u0026rsquo;ll only talk about consuming Image Builder via GitHub Actions.\nChallenges #Building images in GitHub Actions comes with some challenges. For example, the only Linux choice is Ubuntu but Image Builder is not supported on Ubuntu currently. Luckily, we have containers!\nI started with a repository to build containers with Image Builder included. The repository builds containers for Fedora 34, Fedora rawhide (the next Fedora release), and CentOS Stream 8. You can download these containers locally and run them, too:\n# With podman $ podman pull ghcr.io/major/imagebuilder:centos-stream8 $ podman pull ghcr.io/major/imagebuilder:fedora-34 $ podman pull ghcr.io/major/imagebuilder:fedora-rawhide # With docker $ docker pull ghcr.io/major/imagebuilder:centos-stream8 $ docker pull ghcr.io/major/imagebuilder:fedora-34 $ docker pull ghcr.io/major/imagebuilder:fedora-rawhide Image Builder relies on systemd socket activation and that means systemd must be running inside the container. For nearly all systems, that requires adding the --privileged argument when you run the container and running the container as root. It\u0026rsquo;s not ideal, but it works fine in GitHub Actions since the instance is thrown away immediately after the image build is done.\nTo run these containers on your local system, you may need something like this:\n$ sudo podman run --rm --detach --privileged --name imagebuilder \\ ghcr.io/major/imagebuilder:fedora-34 Initially, I thought the single CPU on the Actions runner would make the build process too slow, but I was pleasantly surprised to see that most builds finished in 4-7 minutes. The network throughput from the runner to AWS was also quite fast. 👏🏻\nBuilding images #I often need a Fedora container or VM for doing packaging work and testing other contributors\u0026rsquo; packages, so I set out to make a proof of concept for Fedora. My proof of concept is over in GitHub at major/imagebuilder-fedora. You can fork my PoC and customize everything as you wish!\nThe workflow follows a set of steps that I\u0026rsquo;ll explain below.\nFirst, we need AWS credentials so we can drop off the image at AWS. 
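Those credentials end up in GitHub Actions secrets so the workflow can read them. If you manage your fork with the GitHub CLI, storing them is a one-liner per secret (the repository name here is just an example, and you are prompted for each value):

gh secret set AWS_ACCESS_KEY_ID --repo your-user/imagebuilder-fedora
gh secret set AWS_SECRET_ACCESS_KEY --repo your-user/imagebuilder-fedora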
There is a basic template in TOML format that contains placeholders for account credentials and other data:\nprovider = \u0026#34;aws\u0026#34; [settings] accessKeyID = \u0026#34;$AWS_ACCESS_KEY_ID\u0026#34; secretAccessKey = \u0026#34;$AWS_SECRET_ACCESS_KEY\u0026#34; bucket = \u0026#34;$AWS_S3_BUCKET\u0026#34; region = \u0026#34;$AWS_DEFAULT_REGION\u0026#34; key = \u0026#34;$IMAGE_KEY\u0026#34; The actions workflow fills in that TOML file with information from GitHub Actions secrets and environment variables:\n- name: Fill in the AWS template run: | cat shared/aws-template.toml | envsubst \u0026gt; shared/aws-config.toml env: AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} AWS_S3_BUCKET: major-aws-image-import AWS_DEFAULT_REGION: us-east-1 IMAGE_KEY: \u0026#34;${{ matrix.blueprint }}-${{ github.run_id }}\u0026#34; From there, the workflow runs the build-image.sh script and here\u0026rsquo;s where the fun starts. The container starts up and we wait for the osbuild-composer API to respond:\n# Start the container. echo \u0026#34;🚀 Launching the container\u0026#34; sudo podman run --rm --detach --privileged \\ -v $(pwd)/shared:/repo \\ --name $CONTAINER_NAME \\ $CONTAINER # Wait for composer to be fully running. echo \u0026#34;⏱ Waiting for composer to start\u0026#34; for i in `seq 1 10`; do sleep 1 composer-cli status show \u0026amp;\u0026amp; break done Once the API is up, we push the blueprint into osbuild-composer and tell it to solve the dependencies. The depsolve step is optional, but it can find problems with your package set fairly quickly so you can make adjustments.\necho \u0026#34;📥 Pushing the blueprint\u0026#34; composer-cli blueprints push /repo/${BLUEPRINT_NAME}.toml echo \u0026#34;🔎 Solving dependencies in the blueprint\u0026#34; composer-cli blueprints depsolve ${BLUEPRINT_NAME} \u0026gt; /dev/null The blueprints are in the shared directory in the repository. For example, there\u0026rsquo;s a fedora-imagebuilder blueprint that builds an image with Image Builder inside it so you can build an image with Image Builder with Image Builder. (This reminds me of a meme. 🤭)\nname = \u0026#34;fedora-imagebuilder\u0026#34; description = \u0026#34;Image Builder - rawhide\u0026#34; version = \u0026#34;0.0.2\u0026#34; modules = [] groups = [] [[packages]] name = \u0026#34;cockpit-composer\u0026#34; version = \u0026#34;*\u0026#34; [[packages]] name = \u0026#34;osbuild\u0026#34; version = \u0026#34;*\u0026#34; [[packages]] name = \u0026#34;osbuild-composer\u0026#34; version = \u0026#34;*\u0026#34; [customizations.services] enabled = [\u0026#34;cockpit.socket\u0026#34;, \u0026#34;osbuild-composer.socket\u0026#34;] Now we\u0026rsquo;re ready to build the image (or in Image Builder terms, start the compose). After starting it, we extract the ID of the compose so we can monitor it while it runs.\nif [[ $SHIP_TO_AWS == \u0026#34;yes\u0026#34; ]]; then echo \u0026#34;🛠 Build the image and ship to AWS\u0026#34; composer-cli --json \\ compose start $BLUEPRINT_NAME ami $IMAGE_KEY /repo/aws-config.toml \\ | tee compose_start.json \u0026gt; /dev/null else echo \u0026#34;🛠 Build the image\u0026#34; composer-cli --json compose start ${BLUEPRINT_NAME} ami | tee compose_start.json fi COMPOSE_ID=$(jq -r \u0026#39;.body.build_id\u0026#39; compose_start.json) I get a little nervous when I can\u0026rsquo;t see any status updates, so I follow the systemd journal while the build runs. 
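You can follow along by hand, too. Assuming the container was started with the name imagebuilder like the local example earlier, this tails everything osbuild-composer writes to the journal:

sudo podman exec -it imagebuilder journalctl -af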
The script checks on the build frequently to see if it has finished. This process takes about 4-7 minutes in GitHub Actions for most of the images I\u0026rsquo;ve built.\n# Watch the logs while the build runs. podman-exec journalctl -af \u0026amp; COUNTER=0 while true; do composer-cli --json compose info \u0026#34;${COMPOSE_ID}\u0026#34; | tee compose_info.json \u0026gt; /dev/null COMPOSE_STATUS=$(jq -r \u0026#39;.body.queue_status\u0026#39; compose_info.json) # Print a status line once per minute. if [ $((COUNTER%60)) -eq 0 ]; then echo \u0026#34;💤 Waiting for the compose to finish at $(date +%H:%M:%S)\u0026#34; fi # Is the compose finished? if [[ $COMPOSE_STATUS != RUNNING ]] \u0026amp;\u0026amp; [[ $COMPOSE_STATUS != WAITING ]]; then echo \u0026#34;🎉 Compose finished.\u0026#34; break fi sleep 1 let COUNTER=COUNTER+1 done Once the images finish building and they deploy to AWS (usally less than 15 minutes altogether), you should be able to see them inside your AWS account:\nAWS console showing AMIs registered by Image Builder in GitHub Actions AWS console showing snapshots imported by Image Builder in GitHub Actions Extra credit #Your automation doesn\u0026rsquo;t have to end here! 🤖\nYou can also add extra repositories to your compose with composer-cli sources add... if you have custom repositories with your software or if you need packages from RPMFusion.\nGitHub Actions could boot an instance from your image, run some basic tests, and apply tags to the AMI to make provisioning easier. I always like having an AMI with a latest tag so I can deploy from the most recently built image whenever I need to test something.\nPhoto credit: Cameron Venti on Unsplash\n","date":"6 August 2021","permalink":"/p/build-fedora-aws-images-in-github-actions-with-image-builder/","section":"Posts","summary":"Build images for AWS and deploy them to your AWS account all within GitHub Actions. 🤖","title":"Build Fedora AWS images in GitHub Actions with Image Builder"},{"content":"Professional Summary #I love digging into problems that involve people, processes, and technology. When someone says \u0026ldquo;it can\u0026rsquo;t be done\u0026rdquo; or \u0026ldquo;it\u0026rsquo;s too complicated\u0026rdquo;, I jump at the opportunity (especially when it means learning something new in the process).\nOnce I learn something new, I write about it and teach as many people as I can. This blog started in 2007 and it covers a broad spectrum of topics from system administration to software development to personal career growth.\nSkills # Software development: I spend most of my time writing Python and shell scripts. Systems engineering: I use Ansible, Terraform, and Packer to manage my cloud deployments and local systems at home. My monitoring setup includes Prometheus, Grafana, and various exporters that Prometheus scrapes. Networking: Most of my networking experience involves Linux-based routers and firewalls, but I frequently use Mikrotik RouterOS and EdgeRouter/VyOS devices. I maintain several Wireguard network links in addition to some Tailscale meshes. There\u0026rsquo;s still a little OpenVPN around, too. Open source contributions: I served on the Fedora Project Board from 2012-2014, participated in the OpenStack project, and I maintain lots of packages in Fedora. I launched the ansible-hardening project within OpenStack. You can find plenty of open source work over at GitHub. I built icanhazip.com and recently transferred ownership to Cloudflare. 
Leadership: My experience includes director-level roles responsible for budgets, people management, and mentoring. I love helping others find a way to level up in the technical capabilities and in their job role. Work experience #Red Hat (2018) #I work at Red Hat now as a RHEL Cloud Architect. My main focus is to make the RHEL on cloud experience as good, if not better, than the on-premises experience.\nIn the past, I worked with the Image Builder team as we build a robust platform for building and deploying RHEL cloud images to various public clouds. Before that, I worked on the Continuous Kernel Integration team to provide CI/CD for the Linux kernel inside and outside of Red Hat.\nRackspace (2006-2018) #I wore plenty of hats at Rackspace (and sometimes more than one at the same time):\nChief Security Architect, Director Principal Architect - OpenStack Cloud Architect - Cloud Servers Linux Engineer, OpenStack Engineering Manager, Cloud Servers Operations Senior Systems Engineer Linux Systems Administrator SecureTrust Corporation (2004-2006) #SecureTrust Corporation now is a wholly owned subsidiary of TrustWave Holdings, Inc.\nVP of Operations Lead Developer American Medical Response (2002-2003) # Field Medic / EMT Education #I earned a Bachelors of Science in Biology from the University of Texas at San Antonio in December 2004. 🤙🏻\nCertifications #I hold several industry certifications from GIAC and Red Hat.\nGIAC # GIAC Security Essentials (GSEC) GIAC Certified UNIX Security Administrator (GCUX)1 GIAC Gold Certification via Securing Linux Containers Gold Paper Red Hat #To verify my Red Hat certifications, visit my certification page.\nRed Hat Certified Engineer Red Hat Certified Systems Administrator Red Hat Certified Specialist in Ansible Automation I previously held more certifications from Red Hat, but these have expired:\nRed Hat Certified Architect Level II 💀 Red Hat Certified Datacenter Specialist 💀 Red Hat Certified Architect Level II 💀 Red Hat Certified Datacenter Specialist 💀 GIAC recently announced that they are retiring the GCUX certification in late 2021. 😞\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"6 August 2021","permalink":"/cv/","section":"Major Hayden","summary":"Learn more about me, my work experience, and the things I\u0026rsquo;ve created. 👨🏻‍💼","title":"Major's CV"},{"content":"My home internet comes from Spectrum (formerly Time Warner Cable) and they offer IPv6 addresses for cable modem subscribers. One of the handy features they provide is DHCPv6 prefix delegation. If you\u0026rsquo;re not familiar with that topic, here\u0026rsquo;s a primer on how you get IPv6 addresses:\nSLAAC: Your machine selects an IPv6 address based on router advertisements DHCPv6: Your machine makes a DHCPv6 request (a lot like DHCP requests) and gets an address back to use DHCPv6 with prefix delegation: Your machine makes a special DHCPv6 request where you provide a hint about the size of the IPv6 network prefix you want. Are you new to IPv6 subnets and how they\u0026rsquo;re different from IPv4? If so, you might want to read up on IPv6 subnets first.\nIn a previous post, I wrote about using wide-dhcpv6 to get IPv6 addresses and that guide still works, but using systemd-networkd makes the process much easier.\nWho needs this many IP addresses? #Yes, I know that a /64 IPv6 network contains 18,446,744,073,709,551,616 addresses and that should be enough for most home networks. 
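As a concrete example using the IPv6 documentation prefix, a delegated /56 splits into 256 /64 networks by walking the low byte of the fourth group:

2001:db8:f00d::/56 « the delegated prefix (example only)
2001:db8:f00d:0::/64 « first /64, perhaps for the LAN bridge
2001:db8:f00d:1::/64 « second /64, perhaps for a VLAN
...
2001:db8:f00d:ff::/64 « the 256th and final /64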
Using one /64 network block per interface has plenty of benefits for simplifying your network, though.\nA /56 from your provider contains 256 of these /64 networks, which makes it easy to configure up to 256 internal networks with a /64 on each. Breaking up a /64 subnet into smaller pieces becomes frustrating very quickly.\nLaying out the basic network #My Linux router is a Dell Optiplex running Fedora 34 with a dual-port Intel I350 network card. These little machines are assembled well and last a long time. My network interfaces are set up like this:\nenp2s0f0: connected to my cable modem for internet access enp2s0f1: connected to a network bridge (br0) for my LAN network My LAN (192.168.10.0/24) gateway sits on br0 and masquerades traffic out through enp2s0f0, the external network interface.\nAll of the systemd-networkd configuration lives in /etc/systemd/network and we will add some files there. First off, we need to set up the external network:\n# /etc/systemd/network/wan.network [Match] Name=enp2s0f0 [Network] DHCP=yes Now we need to define the internal bridge br0 (bridge devices are declared in a .netdev file):\n# /etc/systemd/network/lanbridge.netdev [NetDev] Name=br0 Kind=bridge Then we can configure the br0 network interface and IP address:\n# /etc/systemd/network/lanbridge.network [Match] Name=br0 [Network] Address=192.168.10.1/24 ConfigureWithoutCarrier=yes 🤔 Special note: I like to add the ConfigureWithoutCarrier option here because systemd-networkd sometimes takes a while to bring the bridge online after a reboot and that makes certain daemons, like dnsmasq, fail to start.\nNow let\u0026rsquo;s connect the physical network interface to the bridge:\n# /etc/systemd/network/lanbridge-bind.network [Match] Name=enp2s0f1 [Network] Bridge=br0 ConfigureWithoutCarrier=true Just run systemctl restart systemd-networkd and ensure all of your networks are alive:\n$ networkctl IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 enp2s0f0 ether routable configured 4 enp2s0f1 ether enslaved configured 5 br0 bridge routable configured IPv6 time #Every ISP is a bit different with how they assign IPv6 addresses and what size blocks they will allocate to you. I\u0026rsquo;ve seen some that will only give out a /64, others give a /56, and others give something in between. As for Spectrum, they provide up to a /56 with a prefix delegation request. You may need to experiment with a /56 first and slowly back down towards /64 to see what your ISP will honor.\nLet\u0026rsquo;s go back to the configuration for the external interface and add our prefix delegation hint:\n# /etc/systemd/network/wan.network [Match] Name=enp2s0f0 [Network] DHCP=yes [DHCPv6] PrefixDelegationHint=::/56 When we apply this configuration, systemd-networkd will send a DHCPv6 request with a prefix hint included.\nThat\u0026rsquo;s half the battle. We also need a way to take a /64 block from the big /56 block and assign it to various network interfaces on our router. You can do this manually by looking at the /56, choosing how to subnet your network, and then manually assigning /64 blocks to each interface.\nManually assigning subnets is not a fun task. It gets worse when your ISP suddenly changes the network blocks assigned to you on a whim. 😱\nLuckily, systemd-networkd has built-in functionality to do this for you automatically!
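Before wiring up the internal interfaces, it is worth confirming that your ISP actually honored the hint. The exact wording varies between systemd versions, but between networkctl and the journal you should be able to spot the delegated prefix:

# What systemd-networkd knows about the WAN link
networkctl status enp2s0f0

# DHCPv6 lease activity, including any delegated prefix
journalctl -u systemd-networkd | grep -iE 'dhcp|prefix'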
Let\u0026rsquo;s go back to the configuration for br0 and add a few lines to the [Network] section:\n# /etc/systemd/network/lanbridge.network [Match] Name=br0 [Network] Address=192.168.10.1/24 ConfigureWithoutCarrier=yes IPv6SendRA=yes DHCPv6PrefixDelegation=yes Run systemctl restart systemd-networkd to apply the changes. You should see an IPv6 network assigned to the interface! 🎉\nThe IPv6SendRA option tells systemd-networkd to automatically announce the network block on the interface so that computers on the network will automatically assign their own addresses via SLAAC. (You can retire radvd if you used it in the past.)\nSetting DHCPv6PrefixDelegation to yes will automatically pull a subnet from the prefix we asked for on the external network interface and add it to this interface (br0 in this case). There\u0026rsquo;s no need to calculate subnets, manage configurations, or deal with changes. It all happens automatically.\nIf you have other interfaces, such as VLANs, simply add the IPv6SendRA and DHCPv6PrefixDelegation options to their network configurations (the .network files, not the .netdev files), and apply the configuration.\nPhoto credit: tian kuan on Unsplash\n","date":"28 July 2021","permalink":"/p/dhcpv6-prefix-delegation-with-systemd-networkd/","section":"Posts","summary":"Use the new DHCPv6 prefix delegation features in systemd-networkd to make IPv6 subnetting easy! 🎉","title":"DHCPv6 prefix delegation with systemd-networkd"},{"content":"Most modern web browsers, such as Firefox, take cues from the desktop environment or from themes applied to the browser to determine whether a user wants light or dark mode from websites. This is often done through the prefers-color-scheme CSS media feature:\n.day { background: #eee; color: black; } .night { background: #333; color: white; } @media (prefers-color-scheme: dark) { .day.dark-scheme { background: #333; color: white; } .night.dark-scheme { background: black; color: #ddd; } } @media (prefers-color-scheme: light) { .day.light-scheme { background: white; color: #555; } .night.light-scheme { background: #eee; color: black; } } There are situations where you want web pages to prefer dark mode, but you don\u0026rsquo;t want to change your desktop settings or apply a darker theme to Firefox. You can follow these steps to prefer dark color schemes in Firefox:\nType about:config in the address bar and press enter. Click Accept the Risk and Continue. (if you want to accept the risk) 😉 In the search box, type ui.systemUsesDarkTheme Click the Number radio button below the search box. Press the plus (+) on the far right side. Set the value to 1 and press enter. Now, close the about:config page and load up a website that has dark color schemes available.\nIf you want to test the change quickly, just reload this page! My blog has light and dark color schemes set depending on what your browser prefers. 🎉 😎\nPhoto credit: Andre Benz on Unsplash\n","date":"19 July 2021","permalink":"/p/enable-dark-mode-in-firefox/","section":"Posts","summary":"Firefox allows you to set dark mode as the default without changing themes or changing your desktop configuration. 😎","title":"Enable dark mode in Firefox without changing themes"},{"content":"One of the first things I look for on a fresh installation of a laptop is how to enable tap-to-click automatically. Most window managers and desktop environments make this easy with a control panel that has toggles or drop-down menus.\nHowever, this requires a little more effort in i3.
Fortunately, there are two routes to get it enabled: in xorg\u0026rsquo;s configuration or via your i3 configuration.\nVia the i3 configuration #The advantage of this method is that it\u0026rsquo;s easy to configure and test out quickly. On the other hand, this configuration change will only affect i3 on your system. (Other window managers won\u0026rsquo;t be affected.)\nStart with the xinput command to determine which devices are on your system. If you\u0026rsquo;re on Fedora, just run dnf install xinput to install it.\nHere\u0026rsquo;s the output on my Lenovo ThinkPad T490:\n➜ xinput ⎡ Virtual core pointer id=2\t[master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4\t[slave pointer (2)] ⎜ ↳ SynPS/2 Synaptics TouchPad id=12\t[slave pointer (2)] ⎜ ↳ TPPS/2 Elan TrackPoint id=13\t[slave pointer (2)] ⎣ Virtual core keyboard id=3\t[master keyboard (2)] ↳ Virtual core XTEST keyboard id=5\t[slave keyboard (3)] ↳ Power Button id=6\t[slave keyboard (3)] ↳ Video Bus id=7\t[slave keyboard (3)] ↳ Sleep Button id=8\t[slave keyboard (3)] ↳ Integrated Camera: Integrated C id=9\t[slave keyboard (3)] ↳ Integrated Camera: Integrated I id=10\t[slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=11\t[slave keyboard (3)] ↳ ThinkPad Extra Buttons id=14\t[slave keyboard (3)] My touchpad is the second entry in the first group: SynPS/2 Synaptics TouchPad. Now we can list all the properties of this device using the id number (12 in my case) or the full name:\n➜ xinput list-props \u0026#34;SynPS/2 Synaptics TouchPad\u0026#34; Device \u0026#39;SynPS/2 Synaptics TouchPad\u0026#39;: Device Enabled (187):\t1 Coordinate Transformation Matrix (189):\t1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000 libinput Tapping Enabled (322):\t0 libinput Tapping Enabled Default (323):\t0 libinput Tapping Drag Enabled (324):\t1 libinput Tapping Drag Enabled Default (325):\t1 libinput Tapping Drag Lock Enabled (326):\t0 libinput Tapping Drag Lock Enabled Default (327):\t0 libinput Tapping Button Mapping Enabled (328):\t1, 0 libinput Tapping Button Mapping Default (329):\t1, 0 libinput Natural Scrolling Enabled (330):\t0 libinput Natural Scrolling Enabled Default (331):\t0 libinput Disable While Typing Enabled (332):\t1 libinput Disable While Typing Enabled Default (333):\t1 libinput Scroll Methods Available (334):\t1, 1, 0 libinput Scroll Method Enabled (335):\t1, 0, 0 libinput Scroll Method Enabled Default (336):\t1, 0, 0 libinput Click Methods Available (337):\t1, 1 libinput Click Method Enabled (338):\t1, 0 libinput Click Method Enabled Default (339):\t1, 0 libinput Middle Emulation Enabled (340):\t0 libinput Middle Emulation Enabled Default (341):\t0 libinput Accel Speed (342):\t0.000000 libinput Accel Speed Default (343):\t0.000000 libinput Accel Profiles Available (344):\t1, 1 libinput Accel Profile Enabled (345):\t1, 0 libinput Accel Profile Enabled Default (346):\t1, 0 libinput Left Handed Enabled (347):\t0 libinput Left Handed Enabled Default (348):\t0 libinput Send Events Modes Available (307):\t1, 1 libinput Send Events Mode Enabled (308):\t0, 0 libinput Send Events Mode Enabled Default (309):\t0, 0 Device Node (310):\t\u0026#34;/dev/input/event4\u0026#34; Device Product ID (311):\t2, 7 libinput Drag Lock Buttons (349):\t\u0026lt;no items\u0026gt; libinput Horizontal Scroll Enabled (350):\t1 The important line in the output is this one:\nlibinput Tapping Enabled (322):\t0 Let\u0026rsquo;s turn on tap-to-click for the touchpad:\nxinput set-prop \u0026#34;SynPS/2 Synaptics 
TouchPad\u0026#34; \u0026#34;libinput Tapping Enabled\u0026#34; 1 Your tap-to-click should now work! If it doesn\u0026rsquo;t, go back to the list of input devices and double check that there isn\u0026rsquo;t another touchpad. Some laptops show multiple touchpads even though there\u0026rsquo;s only one in the system. This is due to extra buttons being labeled as a touchpad on some laptops.\nLet\u0026rsquo;s make it permanent in the i3 configuration. Open up ~/.config/i3/config and add a line:\nexec xinput set-prop \u0026#34;SynPS/2 Synaptics TouchPad\u0026#34; \u0026#34;libinput Tapping Enabled\u0026#34; 1 You\u0026rsquo;re all set!\nVia the xorg configuration method #This method affects all window managers on your machine, so keep that in mind. Make a new file at /etc/X11/xorg.conf.d/touchpad-tap.conf and add the following:\nSection \u0026#34;InputClass\u0026#34; Identifier \u0026#34;libinput touchpad catchall\u0026#34; MatchIsTouchpad \u0026#34;on\u0026#34; MatchDevicePath \u0026#34;/dev/input/event*\u0026#34; Driver \u0026#34;libinput\u0026#34; Option \u0026#34;Tapping\u0026#34; \u0026#34;on\u0026#34; EndSection We\u0026rsquo;re telling xorg to apply this configuration to any libinput touchpad on the system (but you could use the specific name of the device here if you want), and we\u0026rsquo;re enabling the tapping option.\nYou can make this change effective immediately with:\nxinput set-prop \u0026#34;SynPS/2 Synaptics TouchPad\u0026#34; \u0026#34;libinput Tapping Enabled\u0026#34; 1 The xorg configuration change takes effect when you log out of your X session or you reboot your computer.\nPhoto credit: pine watt on Unsplash\n","date":"18 July 2021","permalink":"/p/tray-icons-in-i3/","section":"Posts","summary":"Enable tap-to-click on your laptop\u0026rsquo;s touchpad in i3 with one of two methods. 💻","title":"Enable touchpad tap to click in i3"},{"content":"Mentorship stands out as one of my favorite parts of working in technology and I\u0026rsquo;ve been fortunate to be on both sides of mentoring relationships over the years. One common aspect of career growth is the ability to come up with a solution and then persuade other people to get on board with it.\nNot every change is a winner, but if you feel strongly that your solution will improve your product, transform your customer experience, or just make everyone\u0026rsquo;s lives a little easier, how do you convince other people to join you?\nThis post covers several lessons I learned while trying to convince others to join me on a new technology trek. Although there\u0026rsquo;s no perfect answer that fits all situations, you can use bits and pieces of each of these to improve your persuasion skills at work.\nMake the problem real # \u0026ldquo;Fall in love with the problem, not the solution.\u0026rdquo; » Uri Levine, co-founder of Waze\nTechnology exists to solve problems, but engineers often lose sight of the problem they want to solve. This makes persuasion difficult. Before you can propose a solution and get others interested in contributing, you must identify and get agreement on the problem.\nHowever, not everyone sees problems the same way. Some may see your problem as a non-issue since it doesn\u0026rsquo;t affect them. Others may see your problem as an issue to fix, but other issues are more pressing. A smaller set will likely see the problem the same way you do, but they have other ideas for solving it. (Don\u0026rsquo;t worry about this for now. 
More on that later.)\nMy journalism teacher in grade school always told us that stories that focus on people and their experiences, called \u0026ldquo;feature stories\u0026rdquo;, always generate more attention. But why? Feature stories talk about people and their experiences, often by allowing the people to tell their own story in their own words. They make the story real for more readers. When you finish reading, you know what happened and you know how it affected real people.\nDiscussing problems works the same way. The problem becomes real not when it is explained or presented, but when your audience understands how it affects people. Let affected people speak for themselves by gathering comments from customers or coworkers to accelerate this process.\nLet\u0026rsquo;s say there\u0026rsquo;s a performance issue in your technology that affects large customers and they experience slow performance. You\u0026rsquo;ve identified the root of the problem and you\u0026rsquo;re eager to solve it. A basic approach might go like this:\n\u0026ldquo;Large customers are upset because our web interface is too slow when loading their data. We must fix our database server soon.\u0026rdquo;\nThis is okay, but what if we expand it a bit and make the problem more real for everyone?\n\u0026ldquo;Several of our large customers are frustrated because their account listings take too long to load in our web application. For some customers, such as Company X, the wait time is over 60 seconds and they often can\u0026rsquo;t get their data at all. Beth from Sales says she has three large customers who want to renew but want this problem fixed first. That\u0026rsquo;s $500,000 in renewal income that we could lose. Dan from Support says his team spends too much time on these issues and their ticket queues have increased 30% \u0026ndash; the customer NPS surveys are down, too. This could affect our upcoming presentation at Conference Y where we\u0026rsquo;ve paid for a prime sponsorship slot. Our database administrators explained that they are having difficulty making backups since the database servers are incredibly busy. This may affect our business continuity plans.\u0026rdquo;\nWhat\u0026rsquo;s different with the second approach?\nThe problem now has multiple real impacts on multiple teams. Revenue loss hangs in the balance since we\u0026rsquo;re unsure about renewals or getting our money\u0026rsquo;s worth for our conference sponsorship. Customers are not being well served by support. A potential emergency event looms in the future without good backups. We\u0026rsquo;ve turned a problem statement into a feature story. Your audience has heard about the problem, but they also feel the problem. They know how it is affecting their coworkers, but most importantly, they understand how it is affecting customers.\nThis feeling the problem step is critical and often overlooked. You\u0026rsquo;re making an appeal to the emotions of your audience first, followed by an appeal to their reasoning. That leads well into the next section.\nAppeal to emotions # \u0026ldquo;The weakness of the Elephant, our emotional and instinctive side, is clear: It’s lazy and skittish, often looking for the quick payoff (ice cream cone) over the long-term payoff (being thin). 
When change efforts fail, it’s usually the Elephant’s fault, since the kinds of change we want typically involve short-term sacrifices for long-term payoffs.\u0026rdquo; » Chip Heath, Switch\nIn the fantastic book Switch: How to Change Things When Change is Hard, Chip and Dan Heath talk about the elephant as a metaphor represents our emotions. Emotion drives everything we do (and don\u0026rsquo;t do), and you can\u0026rsquo;t use reasoning with someone if their emotions are not on board.\nThink about a child with an ice cream cone and the ice cream falls to the ground. The child screams and screams because they feel like a great experience was taken from them. A parent might say \u0026ldquo;It\u0026rsquo;s okay! We can get another one!\u0026rdquo;, but the child continues to scream. But why? Getting another ice cream is easy and they still get to enjoy the experience.\nAt that point, the child\u0026rsquo;s emotions are completely overwhelmed. Reasoning won\u0026rsquo;t work. That part of the brain is like a rider on an elephant. If the elephant is frightened by something or feels strongly about going a certain direction, there\u0026rsquo;s nothing that rider can do to change its mind. The only way to get back on track is to appeal to the child\u0026rsquo;s emotions (\u0026ldquo;That\u0026rsquo;s terrible. I know. Here, I\u0026rsquo;ll give you a hug\u0026rdquo;) until they are calm, and then apply some reasoning (\u0026ldquo;Can I get you another ice cream? Would you like the same flavor?\u0026rdquo;).\nEveryone\u0026rsquo;s emotions are triggered and managed differently, so it\u0026rsquo;s best to cast a wide net. In the example from the last section, we talked about losing money, dissatisfied customers, and potential mortal danger to the company. The goal is to make an appeal on multiple levels in the hopes that at least one will catch the attention of someone\u0026rsquo;s elephant (their emotions).\nAs a software developer, the best reward you can get is when someone uses what you wrote and they enjoy using it. If I learn that someone is dissatisfied with something I made, I want to know everything I can about it. Their dissatisfaction triggers my emotions. It\u0026rsquo;s one of the strongest appeals that makes me want to stop what I am doing and improve what I\u0026rsquo;ve created.\nYou have two main goals here:\nAppeal to the emotions of people who are unaware of the problem or who prioritize the problem lower than you do. Appeal to the emotions of people who are very aware of the problem and reinforce that their voice has been heard and understood. Work backwards to move forward # \u0026ldquo;When solving problems, dig at the roots instead of just hacking at the leaves.\u0026rdquo; » Anthony J. D\u0026rsquo;Angelo\nYour next step is to work backwards to ensure you\u0026rsquo;ve found the root of the problem. There are plenty of methods to work through the process, but keep in mind that it is a process. The process brings you closer to the root of the problem to ensure you\u0026rsquo;re solving the right problem.\nOne of my favorite methods is the Five Whys. It really just involves asking why until you get that feeling that there\u0026rsquo;s nothing else to ask. This is a great activity to do on your own before bringing it to the group. It helps you anticipate different conversations and prepare for them. 
Do the same process with the group, too.\nGoing back to our example from the first section, it could go something like this:\nOur web interface is too slow for our largest customers.\nWhy?\nThe web interface sits around for a long time to get data from the database.\nWhy?\nThe database takes too long to send data back.\nWhy?\nThere\u0026rsquo;s a lot of data to retrieve and the queries take a long time to run.\nWhy?\nWe store data that we don\u0026rsquo;t need and our queries retrieve more data than we need.\nWhy?\nWe\u0026rsquo;ve never optimized this part of our application before.\nAwesome! What did we learn about the problems we need to solve?\nDatabase queries need to retrieve the least amount of data required by the web application. Some information in the database might need to be cleaned up or moved elsewhere. Database queries might need some optimization in general. The connection between the web and database servers might need to be improved. Future features should include considerations or questions around how it affects data retrieval times from the database. Now we have a list of problems to solve and some things that need consideration during future development. This leads us to consider short, medium, and long term solutions, and that\u0026rsquo;s our next step.\nSolutions that last come last # \u0026ldquo;Every problem has in it the seeds of its own solution. If you don\u0026rsquo;t have any problems, you don\u0026rsquo;t get any seeds.\u0026rdquo; » Norman Vincent Peale\nEverything we\u0026rsquo;ve done has led to this moment. We need solutions to our problem from the first section. In a previous life, I was an EMT on ambulances and we faced constant problems. Some were immediate and life-threatening while others could become an emergency over time.\nWe can think through the solutions process in much the same way as I approached patients on the ambulance. I use this process:\nShort term: What must be solved right now to alleviate pain and stop the bleeding? Other things may need to be done after, but what must I do right now? Long term: What are the complicated things we need to do that will take some time, but are still very important? Medium term: What things are complicated but important that can be solved by changing how we work or adopting better processes going forward? You may wonder why the medium term tasks come last. I keep them at the end because as you argue about the short and long term items, you\u0026rsquo;ll have work that lands in gray areas between short and long term. For example, you may want to fix a critical problem, but fixing it involves lots of planning or organization with other groups. Sure, it\u0026rsquo;s critical, but it\u0026rsquo;s not something you can do quickly.\nShort term solutions should obviously be critical ones, but they should be ones that can be done by smaller group of people. Some might call these \u0026ldquo;low-hanging fruit\u0026rdquo;. These are the things where you look at a coworker and say \u0026ldquo;Hey, let\u0026rsquo;s sit down and try a few different fixes to see which one works best.\u0026rdquo; Avoid anything here that requires wide cooperation or regulatory changes. You want the changes to quickly snowball into big improvements so everyone can feel that something is getting better.\nFrom our example, short term things might be:\nIdentify the queries that are retrieving too much data and inventory what data is actually needed. Deploy a backup read-only copy of the database server to take backups. 
Get a list of customers who are willing to try a preview of the newer, faster web interface. Next, look for the long term solutions. These are things that require multiple groups to collaborate, consultations with vendors, or regulatory changes. Although these may take a while, the building momentum from the short term changes should build confidence that these can get done.\nThese might include:\nMigrate the database to a faster server. Consult with auditors to understand what data could be removed over time and adjust customer agreements accordingly. Finally, the medium term solutions are made up of those things that fit in between short and long term things. The best solutions here are process-based to reduce the chance of the problem happening again before long term solutions are implemented.\nFrom our example, this could be a new process added to quality assurance that checks web interface performance after any feature or bug fix is proposed. The CI system could do a test to see if response times improved or worsened after the change. This would allow developers to determine which changes must be held back until performance reductions are fixed. These medium term solutions ensure that the problem doesn\u0026rsquo;t worsen before the long term issues are fixed.\nMeasure, report, and repeat #The solutions snowball should continue to build over time as problems are crushed one by one. The maintenance of this momentum drives everything forward. Ensure that you measure the impacts of these changes over time and let everyone know about the progress.\nTake time to celebrate the wins, no matter how big or small. This builds comradery among the teams and reminds people less about the problem and more about how they overcame it. I have t-shirts in my closet from big solutions to big problems from my previous work.\nSure, people remember the problem that started it all, but they remember the hard work that came afterwards so much more.\nIf you\u0026rsquo;ve seen the movie Apollo 13, where a failed trip to the moon put three astronauts in mortal danger, what do you remember most?\nDo you remember what broke on the spaceship? I do. It had something to do with oxygen tanks.\nWhat do you really remember from the movie? I remember three astronauts and a ton of people back on Earth working through plenty of solutions and eventually succeeding. I remember everyone trying to figure out how to filter out carbon dioxide with the wrong parts. I remember three astronauts making it back to Earth with the world watching. I remember seeing people relieved and amazed by the work they did how they turned an awful situation into an unforgettable ending.\nWhat will your coworkers remember?\nPhoto credit: Giulia Hetherington on Unsplash\n","date":"11 July 2021","permalink":"/p/persuasion-engineering/","section":"Posts","summary":"Improve your persuasive skills to get your team on board with solutions to tough problems. 🤔","title":"Persuasion engineering"},{"content":"Everyone has an opinion for the best way to manage containers, and there are many contenders depending on how much complexity you can handle and how much automation you require. One of my favorite ways to manage containers is docker-compose.\nOverview of docker-compose #docker-compose uses a simple YAML syntax to explain what your desired end state should look like. The compose specification covers all of the relevant configurations for containers, volumes, networks, and more. 
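A minimal file takes just a few lines. This sketch (the image and port are only for illustration) shows the general shape before we get to a real example below:

# docker-compose.yml: a tiny example of describing the desired state
---
version: "3"
services:
  web:
    image: docker.io/library/nginx:alpine
    ports:
      - 8080:80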
After each change, docker-compose compares your configuration to the running containers and makes all of the required changes.\nThis provides some advantages over using docker run ... or podman run ... since you can put the YAML into version control and track your configuration changes all in one place. I was tracking the configuration in shell scripts that ran docker with lots of parameters and that became difficult to manage.\nWhat about Podman? #Podman is a tool for managing containers, much like Docker, but it has some distinct advantages:\nNo daemons are needed You can run containers as your user, or as root The commands and arguments are nearly identical to docker (no swarm support) Podman 3 added a complete Docker-compatible API This last part, the Docker-compatible API is quite interesting and this allows docker-compose to work with podman as well as it does with docker.\nLet\u0026rsquo;s try it out!\nGetting everything ready #Start with a working Fedora 34 system and install some packages:\n💣 HEADS UP: The podman-docker package brings in podman, an alias for the docker command that actually runs podman, and the docker-compatible API via a socket. If you want to run podman and docker side by side on the same machine, install podman instead of podman-docker here. If you had docker installed already, you may need to remove it with dnf remove docker-ce docker-ce-cli.\ndnf install docker-compose podman-docker We\u0026rsquo;re going to do something different here. Intead of starting the podman socket or docker daemon as root, we\u0026rsquo;re going to start the podman socket as a regular user. Switch to a regular user and start the socket:\n$ systemctl enable --now --user podman.socket Created symlink /home/major/.config/systemd/user/sockets.target.wants/podman.socket → /usr/lib/systemd/user/podman.socket. But wait, where\u0026rsquo;s the socket?\n$ ls -al $XDG_RUNTIME_DIR/podman/podman.sock srw-rw----. 1 major major 0 Jul 9 16:49 /run/user/1000/podman/podman.sock That\u0026rsquo;s a podman socket running as my user and exposing a docker-compatible API. 🎉\nTime for docker-compose #Now it\u0026rsquo;s time to use docker-compose with podman as a regular user and run a container as our regular user.\nWe can use librespeed for this example, and the LinuxServer librespeed container is a great way to deploy it. It\u0026rsquo;s a self-hosted speed test application that works well with desktops and mobile devices.\nFirst, we begin with the suggested docker-compose configuration:\n--- version: \u0026#34;2.1\u0026#34; services: librespeed: image: ghcr.io/linuxserver/librespeed container_name: librespeed environment: - PUID=1000 - PGID=1000 - TZ=Etc/UTC volumes: - librespeed:/config ports: - 8080:80 restart: unless-stopped volumes: librespeed: {} Save that as docker-compose.yml in your current directory.\nKeep in mind that docker-compose is expecting to find our docker socket in /var/run/docker.sock, but we\u0026rsquo;re running the podman socket as our regular user. Let\u0026rsquo;s export the DOCKER_HOST variable and run docker-compose to bring up our new container:\n$ export DOCKER_HOST=\u0026#34;unix:$XDG_RUNTIME_DIR/podman/podman.sock\u0026#34; $ docker-compose up -d Pulling librespeed (ghcr.io/linuxserver/librespeed:)... 
10f45b17b9ab: Download complete f23b92877416: Download complete a5bf9c523af4: Download complete 00fe9b963179: Download complete bfafa0ba1dc9: Download complete c583b34264f1: Download complete 9d26cce56b8d: Download complete 70de87880afd: Download complete 0ad6c2578069: Download complete a8792749de3b: Download complete 2d31530d2d8b: Download complete Creating librespeed ... done $ docker-compose ps Name Command State Ports -------------------------------------------- librespeed /init Up () :8080-\u0026gt;80/tcp The container is up and running as our user. Let\u0026rsquo;s check the nginx process inside the container to be sure:\n$ ps -xu |grep \u0026#34;nginx: master\u0026#34; major 3805 0.0 0.4 5860 4692 ? Ss 16:53 0:00 nginx: master process /usr/sbin/nginx -c /config/nginx/nginx.conf Sweet! 🥳\nTime for a speed test #If we\u0026rsquo;ve come this far, we might as well test our internet speed to ensure the container works!\nLibrespeed speed test interface before testing Remember that we used port 8080 as a replacement for 80 in our docker-compose file to avoid issues with regular users being denied access to create a listener on ports under 1024.\nLet\u0026rsquo;s see how fast my connection is today:\nLibrespeed speed test interface after testing Photo credit: Michael D Beckwith on Unsplash\n","date":"9 July 2021","permalink":"/p/rootless-container-management-with-docker-compose-and-podman/","section":"Posts","summary":"Run rootless Linux containers without any daemons using docker-compose and podman on Fedora! 📦","title":"Rootless container management with docker-compose and podman"},{"content":"icanhazip.com has a new owner! #Starting in June 2021, icanhazip.com is now owned and operated by Cloudflare! Read more about it in the blog post: A new future for icanhazip.\n","date":"5 July 2021","permalink":"/icanhazip-com-faq/","section":"Major Hayden","summary":"The family of icanhaz sites help you get more information about your network connection.","title":"icanhazip.com FAQ"},{"content":"In the summer of 2009, I had an idea. My workdays were spent deploying tons of cloud infrastructure as Rackspace acquired Slicehost and we rushed to keep up with the constant demands for new infrastructure from our customers. Working quickly led to challenges with hardware and networking.\nThat was a time where the I Can Has Cheeseburger meme was red hot just about everywhere. We needed a way to quickly check the public-facing IP address of lots of backend infrastructure and our customers sometimes needed that information, too.\nThat\u0026rsquo;s when icanhazip.com was born.\nIt has always been simple site that returns your external IP address and nothing else. No ads. No trackers. No goofy requirements. Sure, if you looked hard enough, you could spot my attempt at jokes in the HTTP headers. Other than that, the site had a narrow use case and started out mainly as an internal tool.\nThat\u0026rsquo;s when things got a little crazy #Lifehacker\u0026rsquo;s Australian site featured a post about icanhazip.com and traffic went through the roof. My little Slicehost instance was inundated and I quickly realized my Apache and Python setup was not going to work long term.\nI migrated to nginx and set up nginx to answer the requests by itself and removed the Python scripts. The load on my small cloud instances came down quickly and I figured the issue would be resolved for a while.\nFast forward to 2015 and icanhazip.com was serving well over 100M requests per day. 
My cloud instances were getting crushed again, so I deployed more with round robin DNS. (My budget for icanhazip is tiny.) Once that was overloaded, I moved to Hetzner in Germany since I could get physical servers there with better network cards along with unlimited traffic.\nThe Hetzner servers were not expensive, but I was paying almost $200/month to keep the site afloat and the site made no money. I met some people who worked for Packet.net (now Equinix Metal) and they offered to sponsor the site. This brought my expenses down a lot and I deployed icanhazip.com on one server at Packet.\nThe site soon crossed 500M requests per day and I deployed a second server. Traffic was still overloading the servers. I didn\u0026rsquo;t want to spin up more servers at Packet since they were already helping me out quite a bit, so I decided to look under the hood of the kernel and make some improvements.\nI learned more than I ever wanted to know about TCP backlogs, TCP/VLAN offloading, packet coalescing, IRQ balancing, and a hundred other things. Some Red Hat network experts helped me (before I joined the company) to continue tweaking. The site was running well after that and I was thankful for the support.\nEven crazier still #Soon the site exceeded 1B requests per day. I went back to the people who helped me at Red Hat and after they looked through everything I sent, their response was similar to the well-known line from Jaws: \u0026ldquo;You\u0026rsquo;re gonna need a bigger boat.\u0026rdquo;\nI languished on Twitter about how things were getting out of control and someone from Cloudflare reached out to help. We configured Cloudflare to filter traffic in front of the site and this reduced the impact from SYN floods, half-open TLS connections, and other malicious clients that I couldn\u0026rsquo;t even see when I hosted the site on my own.\nLater, Cloudflare launched workers and my contact there said I should consider it since my responses were fairly simple and the workers product would handle it well. The cost for workers looked horrifying at my traffic levels, but the folks at Cloudflare offered to run my workers for free. Their new product was getting bucket loads of traffic and I was able to scale the site even further.\nIn 2021, the traffic I once received in a month started arriving in 24 hours. The site went from 1B requests per day to 30-35B requests per day over a weekend. Almost all of that traffic came from several network blocks in China. Through all of this, Cloudflare\u0026rsquo;s workers kept chugging along and my response times barely moved. I was grateful for the help.\nCloudflare was doing a lot for me and I wanted to curb some of the malicious traffic to reduce the load on their products. I tried many times to reach out to the email addresses on the Chinese ASNs and couldn\u0026rsquo;t make contact with anyone. Some former coworkers told me that my chances of changing that traffic or getting a response to an abuse request was near zero.\nMalware almost ended everything #There was a phase for a few years where malware authors kept writing malware that would call out to icanhazip.com to find out what they had infected. If they could find out the external IP address of the systems they had compromised, they could quickly assess the value of the target. Upatre was the first, but many followed after that.\nI received emails from companies, US state governments, and even US three letter agencies (TLA). Most were very friendly and they had lots of questions. 
I explained how the site worked and rarely heard a lot more communication after that.\nNot all of the interactions were positive, however. One CISO of a US state emailed me and threatened all kinds of legal action claming that icanhazip.com was involved in a malware infection in his state\u0026rsquo;s computer systems. I tried repeatedly to explain how the site worked and that the malware authors were calling out to my site and I was powerless to stop it.\nAlong the way, many of my hosting providers received abuse emails about the site. I was using a colocation provider in Dallas for a while and the tech called me about an abuse email:\n\u0026ldquo;So we got another abuse email for you,\u0026rdquo; they said.\n\u0026ldquo;For icanhazip.com?\u0026rdquo;\n\u0026ldquo;Yes. I didn\u0026rsquo;t know that was running here, I use it all the time!\u0026rdquo;\n\u0026ldquo;Thanks! What do we do?\u0026rdquo;\n\u0026ldquo;Your site just returns IP addresses, right?\u0026rdquo;\n\u0026ldquo;Yes, that\u0026rsquo;s it.\u0026rdquo;\n\u0026ldquo;You know what, I\u0026rsquo;ll write up a generic response and just start replying to these idiots for you from now on.\u0026rdquo;\nThere were many times where I saw a big traffic jump and I realized the traffic was coming from the same ASN, and likely from the same company. I tried reaching out to these companies when I saw it but they rarely ever replied. Some even became extremely hostile to my emails.\nThe passion left in my passion project started shrinking by the day.\nThe fun totally dried up #Seeing that over 90% of my traffic load was malicious and abusive was frustrating. Dealing with the abuse emails and complaints was worse.\nI built the site originally as just a utility for my team to use, but then it grew and it was fun to find new ways to handle the load without increasing cost. Seeing 2 petabytes of data flowing out per month and knowing that almost all of it was garbage pushed me over the line. I knew I needed a change.\nI received a few small offers from various small companies ($5,000 or less), but I realized that the money wasn\u0026rsquo;t what I was after. I wanted someone to run the site and help the information security industry to stop some of these malicious actors.\nicanhazip.com lives on at Cloudflare #I\u0026rsquo;ve worked closely with my contacts at Cloudflare for a long time and they\u0026rsquo;ve always jumped in to help me when something wasn\u0026rsquo;t working well. Their sponsorship of icanhazip.com has saved me tens of thousands of dollars per month. It has also managed to keep the site alive even under horrific traffic load.\nI made this decision because Cloudflare has always done right by me and they\u0026rsquo;ve pledged not only to keep the site running, but to work through the traffic load and determine how to stop the malicious traffic. Their coordinated work with other companies to stop compromised machines from degrading the performance of so many sites was a great selling point for me.\nIf you\u0026rsquo;re curious, Cloudflare did pay me for the site. We made a deal for them to pay me $8.03; the cost of the domain registration. The goal was never to make money from the site (although I did get about $75 in total donations from 2009 to 2021). The goal was to provide a service to the internet. Cloudflare has helped me do that and they will continue to do it as the new owners and operators of icanhazip.com.\nGratitude #I\u0026rsquo;d like to thank everyone who has helped me with icanhazip.com along the way. 
Tons of people stepped up to help with hosting and server optimization. Hosting providers helped me field an onslaught of abuse requests and DDoS attacks. Most of all, thanks to the people who used the site and helped to promote it.\nPhoto credit: Sebastien Gabriel on Unsplash\n","date":"6 June 2021","permalink":"/p/a-new-future-for-icanhazip/","section":"Posts","summary":"icanhazip.com lives on with the same mission, but with a new owner 🤗","title":"A new future for icanhazip"},{"content":"Emojis brighten up any message or document. They also serve as excellent methods for testing whether your application handles strings appropriately. (This can be a lot of fun.) 🤭\nI constantly obsess with efficiency and shortening the time and effort required to get my work done. I noticed that I could type short text emoticons like :) and ;) so much faster than I could use emojis. This simply would not do. 😉\nFirst attempts #Emoji Copy was my first try at getting the emojis I needed quickly. The site also offers a native emoji mode which allows you to see if your system is handling emojis correctly. The site loads quickly and the search finds emojis in a flash, but it was annoying to open a browser tab just to find an emoji. 🤦🏻‍♂️\nThe GNOME Extension called Emoji Selector made the selection process faster, but I moved from GNOME to i3 and lost my GNOME extensions. 🤷🏻‍♂️\nOther methods, such as the Emoji input method and the ibus-typing-booster, also worked, but I knew there had to be something more efficient than those. 🤔\nEnter rofimoji #The rofi launcher quickly become part of my core workflow in i3 (replacing dmenu) and I was pleasantly surprised to find rofimoji in GitHub. 🤗\nThe rofimoji launcher follows in rofi\u0026rsquo;s footsteps and gives you quick access to emojis. Using rofimoji is easy:\nBind a key combination to run rofimoji (I use Mod+E) Type in a search term to find the perfect emoji Press enter to input it directly in the active window or shift+enter to copy it to the clipboard 🎉 Depending on the application you\u0026rsquo;re using, you might need to mess around with roflmoji\u0026rsquo;s --action parameter. Some applications will take the emoji directly as if you typed it from a keyboard, but most of the ones I use seem to like a copy/paste method via the clipboard. 📋\nI use the --action clipboard parameter and it works well across browsers and terminals. Here\u0026rsquo;s the line from the i3 configuration file:\nbindsym $mod+e exec --no-startup-id rofimoji --skin-tone light --action clipboard --rofi-args=\u0026#39;-theme solarized -font \u0026#34;hack 12\u0026#34; -width 800\u0026#39; RPMs for Fedora, CentOS, and RHEL #At the moment, rofimoji is not packaged for Fedora, CentOS, or Red Hat Enterprise Linux (RHEL). However, you can install it from my COPR packages repository:\nsudo dnf copr enable mhayden/packages sudo dnf install python3-rofimoji Enjoy! 🍰\nPhoto credit: Kelvin Yan on Unsplash\n","date":"15 May 2021","permalink":"/p/efficient-emojis-with-rofimoji/","section":"Posts","summary":"Emojis brighten up any message or document. 🌻 Search, select, and use emojis quickly on Linux with rofimoji. 🤗","title":"Efficient emojis with rofimoji"},{"content":"","date":null,"permalink":"/tags/rpm/","section":"Tags","summary":"","title":"Rpm"},{"content":"Every new trader in the stock market must wade through a myriad of tools, platforms, and websites that claim to have the best stock market research around. 
Some are free, some are not, and some cost so much that they eat into plenty of your profits.\nThis post rounds up all of the best research options out there that you can have for free (or almost free). I\u0026rsquo;m sure I missed a few, but these are the ones I rely on most often.\nCharting #It\u0026rsquo;s difficult to beat the charting features on Yahoo Finance. Load up a particular stock, fund, or index and click on Full screen above the small chart. Take a look at the chart for SPY and click Indicators at the top left. You get tons of options for the basics, such as moving averages and relative strength index (RSI), and you get plenty of more advanced indicators as well. You can share the charts directly from the site.\nMy runner-ups in charting include:\nStockCharts: The basic charts from StockCharts are great and the new ACP platform is useful, too. (ACP has limitations without a paid account.) TradingView: Charts on TradingView are gorgeous, but there are limitations on the number of indicators on a free account. Finviz: The automated technical analysis charts are helpful when you\u0026rsquo;re in a hurry. Research #My favorite research choice may be surprising, but it\u0026rsquo;s Fidelity. They have tons of research, guidance, screeners, analyst reports, and more. Finding all of the information can be challenging at times because the interface changes depending on the information you are viewing. It feels like some of the pages are left over from an old system, but the information is still very useful.\nFidelity will require you to have an account before you can use the research tools, but this is probably as simple as setting up a brokerage account and making a small deposit. I already have a couple of accounts there for work-related investments and this gives me full access to their research tools.\nThere\u0026rsquo;s no API access and some of the data is a little sluggish to be updated, but the information is useful even for shorter term investors and traders.\nAs for the research runner-ups:\nBarchart: Stock research, earnings information, charting, technical analysis, and more are available on Barchart. You can access plenty of data for free, but they have paid accounts that give you more access to screeners. If you aren\u0026rsquo;t able to get a Fidelity account where you live, Barchart will give you almost the same amount of information. Finviz: There\u0026rsquo;s plenty of data per page on Finviz and the screeners are extremely fast and useful. Stockrow: It feels like a copy of Finviz in quite a few places, but it presents data in some different ways. They offer paid accounts, but the free account has plenty of data. Tradytics: It delivers a modern view of the overall market, the options flow, and plenty of other data. This one has paid accounts, too, but there are lots of free features that can help you make better decisions. fintel.io: This site gives you insight into what big hedge funds are doing in the market. Just keep in mind that the data is delayed (based on reports submitted from the funds themselves), and it\u0026rsquo;s difficult to tell which side of the trade the hedge funds are on when they submit their forms. I like it better than HedgeFollow and HedgeMind, but those can be useful as well. APIs #Most APIs for stock market data are so expensive that they\u0026rsquo;re not worth considering. However, if you don\u0026rsquo;t need up-to-the-second data, there are some free options with great data.\nYahoo! 
Finance has a great API for stock quotes, company information, and options quotes. The API also has historical data that normally costs hundreds or more from other APIs. If you use Python, the yfinance module is great. You can pull it into pandas so you can slice and dice the data all you want.\nAlthough Finviz doesn\u0026rsquo;t have an official API, the finvizfinance Python module allows you to retrieve plenty of corporate data, quotes, and charts from Finviz.\niexcloud has one of the best APIs around. You can get a free account with a decent amount of credit (if you use it wisely) and their paid plans start at $9 per month.\nNews #Almost all of the previously mentioned sites have some sort of news flow available on their site. However, Benzinga has a feature where you can load up your watchlist and get near real-time data in your email inbox. You can select the stocks you care about and the types of news and filings for each. Their paid plan has more access to news that you won\u0026rsquo;t see elsewhere, but it\u0026rsquo;s expensive.\nThe SEC Edgar Filing Tracker is a great way to track SEC filings for the stocks you care about. You can search through filings via a simple interface and you can set up email alerts for certain stocks.\nIf you are a TD Ameritrade customer, the ThinkOrSwim platform has tons of real-time news that you can filter from within the application itself.\nWhy is all of this important? #You should trade stocks for companies you love and the companies that make products you love. However, it\u0026rsquo;s critical to understand if these companies are generating positive cash flow, making products that build a wider moat, and drawing the interest of institutional investors. These are just a few factors to consider when choosing to invest or trade.\nFortunately for you, almost all of the data you need to make these decisions can be acquired for free or at a very low cost. Good luck!\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\n","date":"22 April 2021","permalink":"/p/free-resources-for-the-stock-market/","section":"Posts","summary":"Investing in stock or trading options is complicated, but there are plenty of free resources available to make research easier.","title":"Free resources for the stock market"},{"content":"","date":null,"permalink":"/tags/investing/","section":"Tags","summary":"","title":"Investing"},{"content":"🤔 This is another post in a set of posts on options trading. If you are new to options trading, you may want to start with some of my earlier posts.\nAfter writing my original blog post about selling options contracts in the stock market, I received some feedback saying that the post jumped over too many steps and used too much terminology. I went back, re-read the post, and found lots of room for improvement.\nIf you understood the original post, then this post probably won\u0026rsquo;t give you much additional insight. However, if the original post left you perplexed, I hope this post helps.\nStarting out with an asset #Let\u0026rsquo;s say you love classic cars. Many older cars that are one of a kind or very well maintained can keep their value for a long time. Sometimes they even go up in value over time due to demand!\nYou find a great deal on a gorgeous car for $15,000. It\u0026rsquo;s perfect! 
Also, demand has gone up for the car recently because it was featured in a popular movie, so it might be worth more after a while.\nSo you invested $15,000 in the car. But wait, what if the car is stolen or involved in a collision. Would it be worth $15,000 then? Likely not. Let\u0026rsquo;s assume that the car would lose half its value if it was involved in a collision.\nHow do you protect it? Insurance.\nYou call your insurance company and ask how much it costs for a six month auto policy on the car valued at $15,000. They offer you a deal of $500 for six months. You give them $500 and they insure your car.\nLet\u0026rsquo;s convert this to options terminology:\nYou bought the car (which is like buying stock). You\u0026rsquo;re worried about the value of the car going down, so you bought insurance (which is like buying a put option contract). The insurance company sold you the policy (like selling a put). The movie is a hit! #The movie that featured the same kind of car as yours was a huge success and now the demand for your classic car is way up. You can easily sell the car for $30,000 now!\nBut what about that insurance policy when the car was worth $15,000? It\u0026rsquo;s worth less now because the value of the car has increased so much. If you got in a wreck now and the price of the car was cut in half to $15,000, your insurance policy would still be there, but the value of the car is insured at $15,000.\nSo you call your insurance company and ask for a new policy. They agree to make a new policy for six months for $1,250. But wait, the car doubled in value and the insurance policy went up by 2.5x. Why is that?\nThe increase in price of the insurance policy is due to the value going up, but now the value is volatile. It\u0026rsquo;s more difficult for the insurance company to gauge the value of the car, how much the parts cost, and how much the labor would be to fix it.\nYou decide to sell your original $500 policy and buy the new one for $1,250 that protects your car valued at $30,000.\nBack to options terminology:\nThe value of the car (stock) went up. When you needed insurance for the higher value (a put with a higher strike price), the insurance company wanted more money to cover the cost and the volatility of the value of the car (similar to implied volatility for put options). The insurance company bought back your old insurance plan (like buying a sold put). You bought the new insurance plan (like buying a new put). Too fast, too furious #You\u0026rsquo;re on the freeway in your classic car and you weren\u0026rsquo;t paying attention to the speed limit. A police officer saw you, pulled you over, and issued you a big ticket which included reckless driving.\nIt\u0026rsquo;s time to renew your insurance policy again. But now, the insurance company wants $1,500 to insure your $30,000 car. Why are you paying more for the same amount of insurance coverage?\nThe insurance company was notified about your massive speeding ticket and they believe that increased the chances of your car being involved in an accident. They want more money from you because there\u0026rsquo;s a higher chance of the value of your car going down drastically (due to an collision).\nThis happens with options trades, too. When there\u0026rsquo;s a bigger chance of a swing in the price of a stock, the implied volatility goes up. That means it\u0026rsquo;s more expensive to buy options and that selling options pays out more premium (just like the car insurance company could charge extra). 
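If you want to see that effect in market numbers instead of insurance numbers, here is a minimal sketch using the textbook Black-Scholes put formula. Every input below is made up purely for illustration; nothing is taken from a real quote:
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Cumulative distribution function of the standard normal.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_put(stock, strike, days, rate, vol):
    # Textbook Black-Scholes price of a European put, per share.
    t = days / 365.0
    d1 = (log(stock / strike) + (rate + vol * vol / 2.0) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rate * t) * norm_cdf(-d2) - stock * norm_cdf(-d1)

# Same $100 stock, same $90 strike, same 30 days -- only the volatility changes.
calm = black_scholes_put(stock=100, strike=90, days=30, rate=0.01, vol=0.30)
wild = black_scholes_put(stock=100, strike=90, days=30, rate=0.01, vol=0.60)
print(f"premium at 30% volatility: about ${calm * 100:.0f} per contract")
print(f"premium at 60% volatility: about ${wild * 100:.0f} per contract")
With nothing changed but the volatility input, the premium jumps from roughly $43 to roughly $267 per contract.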
The insurance seller (put seller) is taking more risk by selling insurance (puts) and demands to be paid more for it.\nIt wasn\u0026rsquo;t my fault! #You learned from your speeding ticket, which is good, but you ran a red light in town and collided with another car! Luckily everyone was okay, but your car is in terrible shape. It will need a lot of repairs.\nYou call your insurance company to tell them about it and they send you to a repair shop. The estimate to repair the car is $10,000. The insurance company pays for the repairs, but they have lost money on your policy ($1,500 premium minus $10k in repairs).\nOnce the repairs are finished, your insurance company says if you want to get insured again, it\u0026rsquo;s going to be $3,000 for six months. Again, volatility is in play here. Your car went from $15k to $30k to $20k in the eyes of the insurance company and that\u0026rsquo;s plenty of volatility.\nConclusion #Selling puts in the stock market allows you to act like an insurance agent for traders and institutions that want to buy insurance for their stock. If you choose high-quality companies that have good fundamentals and desirable products, then the chances of that insurance getting a claim is relatively low.\nVery risky stocks, including those that have not been trading long (think IPOs) and those that have wild swings in value, pay out much more premium for this insurance, but as a put seller, you are on the hook for that swing in valuation.\nStriking the right balance between the amount of premium received and the amount of risk taken is a difficult one. Companies that have been around for a hundred years with steady growth are not likely going down any time soon, but there\u0026rsquo;s not much premium for puts there. On the other hand, brand new companies could swing up and down drastically and the premium is high for those stocks.\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\nPhoto credit: Linh Ha on Unsplash\n","date":"17 March 2021","permalink":"/p/selling-options-made-simpler/","section":"Posts","summary":"Feedback from my original options selling post said that the concept was too difficult to follow. Let\u0026rsquo;s use an analogy!","title":"Selling options made simpler"},{"content":"After the recent snow apocalypse that swept through Texas followed by widespread power crisis, I realized that my UPS monitoring strategy needed improvement. One had batteries that were near death and my other two had loads that were not well balanced.\nI have a few CyberPower UPS units and an old APC UPS. Although CyberPower does offer relatively expensive monitoring cards that puts the UPS on the local network, none of them worked with my 1350/1500VA units. However, all of them do have USB serial connectivity and I wondered how I could monitor them more effectively.\nEnter the Raspberry Pi Zero W #The Pi Zero W is an extension of the old Pi Zero with wireless network connectivity included. It also has a USB port available that would allow me to connect it to a UPS. It runs an older Broadcom BCM2835 and only has 512MiB of RAM, but that\u0026rsquo;s plenty to do the job.\nThe Vilros kit contains nearly everything you need for a Pi Zero W. 
I added on a 64GB SD card for $11 and that brought the total to just under $40 per UPS.\nSetting up #While the Raspberry Pi OS is popular, I\u0026rsquo;ve been using Arch Linux a lot lately and decided to use their build for the Pi Zero W. The installation instructions for Arch we perfect, except for one step: I needed wireless network connectivity as soon as the Pi booted.\nYou can enable wireless network at boot time by following the Arch Linux instructions and stopping just before you unmount the filesystems on the SD card. The first step is to add a systemd-networkd config to use DHCP on the wireless network interface:\ncat \u0026lt;\u0026lt; EOF \u0026gt;\u0026gt; root/etc/systemd/network/wlan0.network [Match] Name=wlan0 [Network] DHCP=yes EOF Next, we need to store our wpa_supplicant configuration for wlan0.\nwpa_passphrase \u0026#34;YOUR_SSID\u0026#34; \u0026#34;WIFI_PASSWORD\u0026#34; \\ \u0026gt; root/etc/wpa_supplicant/wpa_supplicant-wlan0.conf This prepares the wpa_supplicant configuration at boot time, but we need to tell systemd to start wpa_supplicant when the Pi boots:\nln -s \\ /usr/lib/systemd/system/wpa_supplicant@.service \\ root/etc/systemd/system/multi-user.target.wants/wpa_supplicant@wlan0.service We\u0026rsquo;ve now enabled DHCP at boot time, stored the wireless connection credentials, and enabled wpa_supplicant at boot time. Follow the remaining Arch Linux installation instructions starting with unmounting the boot and root filesystems.\nPop the SD card into the Pi, connect your UPS\u0026rsquo; USB cable, and connect it to power. Once it boots, be sure to follow the last two steps from the Arch Linux installation instructions:\npacman-key --init pacman-key --populate archlinuxarm Nuts and bolts #When it comes to monitoring UPS devices in Linux, it\u0026rsquo;s hard to beat Network UPS Tools, or nut. Install it on your Pi:\npacman -S nut usbutils Start by running lsusb to ensure your USB is connected and recognized:\n$ lsusb Bus 001 Device 003: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub If you can\u0026rsquo;t see the UPS, double check your USB cable. You may need to disconnect and reconnect it after installing nut to pick up udev changes.\nNext, it\u0026rsquo;s time to update some configuration files. Start by opening /etc/nut/nut.conf and setting the server mode to netserver so that nut can listen on the network:\nMODE=netserver Open /etc/nut/ups.conf and tell nut how to talk to your UPS:\n[amd-desktop] driver = usbhid-ups port = auto desc = \u0026#34;CyberPower 1500VA AMD Desktop\u0026#34; Almost all modern UPS units will use the usbhid-ups driver. The name in the section header (amd-desktop in my example) is how the UPS is named when you query nut for status.\nNow we need to tell nut to listen on the network. I trust my local network, so I open it up to the LAN. Edit /etc/nut/upsd.conf and adjust LISTEN:\n# LISTEN \u0026lt;address\u0026gt; [\u0026lt;port\u0026gt;] # LISTEN 127.0.0.1 3493 # LISTEN ::1 3493 LISTEN 0.0.0.0 3493 # # This defaults to the localhost listening addresses and port 3493. # In case of IP v4 or v6 disabled kernel, only the available one will be used. The next step is to set up an admin user for nut. This is completely optional, but you will need this if you want to tell nut to execute certain commands on the UPS, such as disabling the beeping alarm or running self tests. 
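Those commands get sent with upscmd once the admin user exists (we set that user up in the next step). A quick sketch, reusing the amd-desktop name from ups.conf above; the commands available vary by UPS model, so list them before assuming any particular one exists:
upscmd -l amd-desktop@localhost
upscmd -u admin -p ihavethepower amd-desktop@localhost beeper.disable
upscmd -u admin -p ihavethepower amd-desktop@localhost test.battery.start.quick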
Edit /etc/nut/upsd.users and add a user:\n[admin] password = ihavethepower actions = SET instcmds = ALL We\u0026rsquo;re ready to start the service and ensure it comes up on reboots:\nsystemctl enable --now nut-server We can test it:\n$ upsc -l amd-desktop $ upsc amd-desktop@localhost battery.charge: 100 battery.charge.low: 10 battery.charge.warning: 20 battery.mfr.date: CPS battery.runtime: 1875 battery.runtime.low: 300 battery.type: PbAcid battery.voltage: 13.5 battery.voltage.nominal: 12 device.mfr: CPS device.model: CP 1350C device.type: ups driver.name: usbhid-ups driver.parameter.pollfreq: 30 driver.parameter.pollinterval: 2 driver.parameter.port: auto driver.parameter.synchronous: no driver.version: 2.7.4 driver.version.data: CyberPower HID 0.4 driver.version.internal: 0.41 input.transfer.high: 140 input.transfer.low: 90 input.voltage: 124.0 input.voltage.nominal: 120 output.voltage: 124.0 ups.beeper.status: enabled ups.delay.shutdown: 20 ups.delay.start: 30 ups.load: 17 ups.mfr: CPS ups.model: CP 1350C ups.productid: 0501 ups.realpower.nominal: 298 ups.status: OL ups.test.result: Done and passed ups.timer.shutdown: -60 ups.timer.start: 0 ups.vendorid: 0764 Sweet! You can test connectivity from another system on your network by specifying the IP address instead of localhost.\nAdding HomeAssistant #Setting up HomeAssistant is well outside the scope of this post, but it can monitor all kinds of things on your home network and allow you to run certain automations when devices get into a certain state. You can put a sensor on your garage door and get a text when it opens or closes. You can lower your thermostat when your CPU temperature gets too hot.\nFortunately, you can also monitor UPS devices and create alerts! Follow these steps to add your UPS to HomeAssistant:\nFrom the main HomeAssistant screen, click Configuration. Click Integrations. Click Add Integration at the bottom right. Search for nut in the list and add it. In the next window, specify your Pi\u0026rsquo;s IP address and port for nut. Add your username and password that you configured in upsd.users earlier. Click Submit. Choose all of the aspects of your UPS you want to monitor. I keep an eye on load, battery voltage, input voltage, and runtime. Click Submit again and your UPS should appear in the integrations list! Once HomeAssistant monitors your UPS for a while, you should have some useful data! Here\u0026rsquo;s a graph of my UPS load during my workday:\nGraph of UPS load from HomeAssistant You can see that my workday starts just after 6AM and ends after 4PM. Using this data, you can set up all kinds of automations when UPS load is too high, input voltage is too low (brownout/blackout), or the runtime falls to a low level (could be dying batteries).\nThe Pi Zero W draws a tiny amount of power and can monitor your UPS for an extended period without having an impact on the runtime or your wallet! 💸\nPhoto credit: Johannes Plenio on Unsplash\n","date":"15 March 2021","permalink":"/p/monitor-ups-with-raspberry-pi-zero-w/","section":"Posts","summary":"Monitor nearly any uninterruptible power supply (UPS) with a Raspberry Pi Zero W and HomeAssistant","title":"Monitor a UPS with a Raspberry Pi Zero W"},{"content":"","date":null,"permalink":"/tags/raspberrypi/","section":"Tags","summary":"","title":"Raspberrypi"},{"content":"🤔 This is another post in a set of posts on options trading. If you are new to options trading, you may want to start with some of my earlier posts.\nYou did your research. 
You waited for a good time to sell a put option. You received a decent premium for your option trade. Now your trade has turned into a loss. What can you do now?\nThere are good choices for these situations, but there\u0026rsquo;s something important to remember. If you made the trade with strong conviction, you know your maximum loss, and you like the stock, then sticking with your original plan might be the best option.\nThis post will cover some common options scenarios and what you can do to work through a losing trade.\nLosses in the first hour after making the trade #It\u0026rsquo;s very common to sell an option, either puts or calls, and see the trade show up as a loss immediately. This is unsettling as you first get started with trading.\nOften times, this happens when you sell near the bid side of the bid/ask spread. Remember that these spreads have a few properties:\nBid: What the buyers are offering to pay to buy an option. Ask: Price at which sellers are willing to sell their option. Mark: Halfway in between the bid and ask price. Last: The price of the last trade that was made. Almost all brokerage software shows the current price for a stock or option as the mark price. If the bid/ask spread is $1.00 and $1.10, then the mark is $1.05.\nIf you sell an option at the bid price of $1.00, your trading software knows the mark price is $1.05. You will see a $0.05 loss on your trade. That doesn\u0026rsquo;t mean that you lost money, but it does mean that you could have made an extra $5 in premium on the trade. To avoid this, make your next trade at the mark price or higher. Everything has a downside, and the downside of making a trade above the bid price is that your order fill may be delayed or may not occur at all.\nKeep in mind that when you sell options, you make more money on volatile stocks. Volatile stocks have a high probability of larger price changes. After you sell your option, the underlying stock can move downwards a little, and this makes your sold put more valuable. Stick with the conviction you had when you originally made the trade and give it a little more time.\nThe underlying stock price moved down closer to the put strike price #Let\u0026rsquo;s say you\u0026rsquo;ve sold a $100 put on your favorite stock, and a few days later, the stock moves down to around $102. You will see a loss in your trading software. The put option gained value as the stock moved closer to the strike price.\nAt this point, you still have your premium in your pocket and the stock is still over the strike price. You are in a good position to exit the trade profitable and the best option here is to wait.\nOn the flip side, if you sell a covered call at $110 and the stock creeps up to $108, your best option could be to wait.\nThe stock price moved just under the put strike price #This was a spooky situation when I first started trading. Let\u0026rsquo;s say you sold a $100 put for $2.50 premium and the stock moved down to $98. Your put is now \u0026ldquo;in the money\u0026rdquo; and it\u0026rsquo;s a lot more valuable than it was when you first sold it. You will see a loss in your trading software.\nYour breakeven on this trade is $97.50 since the strike is $100 and you received $2.50 in premium. If the stock is at $98, then you are still sitting on a $50 profit.\nWhen my trades reach this point, I will usually hold and wait. 
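If you want to double-check the math while you wait, here is a minimal sketch using the numbers from this example (profit at expiration, ignoring commissions):
def short_put_profit_at_expiration(strike, premium, stock_price, contracts=1):
    # What the put is worth to its buyer at expiration (intrinsic value only).
    intrinsic = max(strike - stock_price, 0)
    # As the seller, you keep the premium and pay out the intrinsic value.
    return (premium - intrinsic) * 100 * contracts

print(short_put_profit_at_expiration(strike=100, premium=2.50, stock_price=98))     # 50.0   -> still a $50 profit at $98
print(short_put_profit_at_expiration(strike=100, premium=2.50, stock_price=97.50))  # 0.0    -> breakeven at $97.50
print(short_put_profit_at_expiration(strike=100, premium=2.50, stock_price=90))     # -750.0 -> the paper loss if the stock slides to $90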
Best case, the stock will go back over the strike price and I can buy back my put for a profit.\nIf it expires in the money, but I am above the breakeven, I get a little less profit (still a profit!) and I hold 100 shares. I can then sell covered calls to collect more premium.\nThe stock price moved well under the strike price #In our previous example, we sold a $100 put for $2.50 in premium. What if the stock price moves down to $90?\nAt this point, you have a loss on paper of $750 since your breakeven is $97.50. Your put is \u0026ldquo;in the money\u0026rdquo; and you will likely be assigned 100 shares. Buying back the put doesn\u0026rsquo;t make much sense here unless you have a very strong conviction that the stock is going to go down a significant amount.\nThe goal is to reduce your cost basis. Your cost basis is how much you spent to get the shares you have. In this example, your cost basis is $97.50 when the stock price falls to $90.\nBuy shares. You can reduce the cost basis by buying shares at $90. Do your research and ensure that the stock has some clear indicators that it is bottoming out. Use indicators like the 14 day RSI under 50%, stock price touching 50 or 200 day moving averages, or positive news about the company. Be careful here because you don\u0026rsquo;t want to throw good money after bad.\nIf you know you\u0026rsquo;ll have 100 shares at a cost basis of $97.50, buy 100 more at $90. That lowers your cost basis down to $93.75. If the stock price begins moving up, you\u0026rsquo;ll collect those gains and be able to sell two covered calls instead of just one.\nSell more puts. Selling puts allows you to collect more premium or potentially get assigned at a lower price. These can help you reduce your cost basis further. As said before, be sure to do your research to ensure the stock is bottoming out. Taking a loss on a second put will be frustrating.\nIf you sell an $85 put for $3.00, the stock stays at $90, and you get assigned on your first put at a cost basis of $97.50, your cost basis drops to $94.50 with the collected premium from the put.\nClosing deep thought: volatile stocks are volatile #The more volatile the stock, the higher the premium when you sell options. That\u0026rsquo;s because the options buyer is paying a premium to transfer some risk to you, the seller. There\u0026rsquo;s always a risk of getting assigned, and sometimes those assignments are done at a loss.\nWhen your option trade turns red, keep in mind that the volatility that sent the stock price down could easily send the stock price back up. You entered the trade with a strong conviction and often the best action to take is no action at all.\nStay focused on your cost basis and think about what you can do to lower it. Buy shares when the stock falls and begins to bottom out or sell more options to collect premium.\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\nPhoto credit: Yuriy Bogdanov on Unsplash\n","date":"10 February 2021","permalink":"/p/defending-losing-options-trades/","section":"Posts","summary":"You did your research and made a great options trade, but now it is a losing trade. What can you do now?","title":"Defending losing options trades"},{"content":"🤔 This is another post in a set of posts on options trading. 
If you are new to options trading, you may want to start with some of my earlier posts.\nOne of my sports coaches in high school used to say: \u0026ldquo;It\u0026rsquo;s not the tool, it\u0026rsquo;s the fool.\u0026rdquo; This was his reminder that when something goes wrong in the game, it\u0026rsquo;s usually the fault of the person and not the equipment.\nHowever, when it comes to investing, your choice of brokerage can be critical. There are many factors to consider. My focus in this post will be brokerages for US-based traders.\nIn the end, every broker gives you access to trade various things in the market, including options, equity, futures contracts, and more. They are often different in many areas.\nData and research #Start by looking at the data that a brokerage provides for your trades. I usually look for the following:\nStock calendars that include earnings dates, product events, and other stock events. Information about company financials and SEC filings. Real-time quotes for equities and options. (Extremely important.) Charting with technical analysis, such as moving averages. Watchlists for your favorite stocks. Try to look for brokerages that put as much of this data in a place that is easy to access. For example, a brokerage that puts earnings event data in the watchlist or close to where you make a trade can help you avoid making mistakes around binary events.\nKeep in mind that some brokerages charge extra for detailed research data or they may charge for real-time quotes (especially real-time options quotes). (Costs are coming up later in the post.)\nTrading inferfaces #Be sure that the brokerage allows you to trade where you want to trade. If you can\u0026rsquo;t trade at work on a computer, be sure your brokerage has a good Android or iPhone application. If you can only use a web browser to trade, be sure that the broker has a first-class browser-based experience.\nSome brokers have full desktop applications that provide the best experience. Be sure to try out their applications on your computer. Almost every broker with a desktop client works on Windows, most work on Mac, and a few work on Linux. Ensure you have a strong enough CPU with enough memory to run the application well.\nFor trading, ensure that the broker allows a variety of order types, such as day, good til canceled (GTC), trailing stop, and order cancels other (OCO). Day and GTC orders are what I use about 99% of the time, but trailing stops can be handy for swing trades. If you can do a simultaneous cancel/replace (similar to being able to edit an active order), that\u0026rsquo;s also a plus.\nI enjoy brokers that provide a full confirmation step of the order that allows me to read my potential order in plain English along with potential risks and profits. Everyone has made mistakes on an order from time to time, but you certainly don\u0026rsquo;t want to get caught selling a call when you meant to sell a put. 😬\nContact options #Most traders won\u0026rsquo;t need to use the customer service team at most brokerages since these platforms are built for traders to do as many things as possible on their own. After all, it is expensive to staff a large customer service team to do things that traders should be able to do by themselves.\nSometimes things do go wrong and that\u0026rsquo;s when things get complicated. 
Early assignments on sold options have caught me by surprise in the past and I\u0026rsquo;ve used customer service agents to ensure I handled the exercise process carefully.\nWhen you get into a difficult situation and lots of your money is on the line, do you want someone you can call immediately? Is an email ticketing system enough for you? How about live chat as an in between option? Decide what is important to you and choose a broker with the right customer service options.\nLive chat is acceptable for me but I prefer a toll-free number to call.\nMoving money #Look for brokers that make it easy to connect a bank account for deposits and withdrawals. Some brokers will give you a portion of your deposit to use while the remainder clears. Some may give you all of your deposit to trade immediately, but you can only buy stock with it until it clears.\nRead the fine print for your broker to ensure you know how long it takes for deposits and withdrawals to clear. Be sure to read about any potential fees for moving money. If you have more than one account with the broker, find out what is involved with moving money (and stocks) between your accounts.\nTrade execution #Find out if the broker sends orders directly to the exchange or through a third party. Third parties can reduce cost, but they can sometimes add delays to trades or cause them to be executed at prices which are not the best.\nWhen you are buying and selling stocks with limit orders, the difference here could be a few pennies or less. However, if you\u0026rsquo;re buying stocks in groups of 100 or more, those pennies add up quickly. For options, a $0.05 price difference is $5 worth of profit (or loss)!\nPoor trade execution can cost a whole lot more than the fees from a brokerage with better execution. 💸\nCost #I saved the discussion of cost for last because there are a lot of factors involved here. Many low cost (or no cost) brokers are a great deal for certain trades. Others can be terrible. Lower cost doesn\u0026rsquo;t always make sense.\nWhen I analyze a brokerage\u0026rsquo;s costs, I ask questions like these:\nCan I get detailed company research for free? How much does it cost to get real-time quotes for stocks and options? What is the cost per equity or option trade? If I get assigned stock on a sold option, what fees are involved? What fees to I pay to exercise an option? Will my costs go down as I make more trades? Some brokerages have special fee structures where you can avoid certain fees based on how you trade. As example, Fidelity does not charge me an options fee if I buy back a sold option for $0.65 or less. If I sell an option for $1.50, I enter a buy order for $0.75 most of the time (to collect 50% profit). However, I can save about $0.70 per trade if I bump that buy order down to $0.65. Read all of the fine print! 🤓\nMy experience with brokerages #I\u0026rsquo;ve tried quite a few brokerages as I\u0026rsquo;ve learned to trade options and here are my thoughts for each with my most preferred brokerages at the top of the list. I currently use TD for almost all of my trades and I use Fidelity to trade options in my HSA.\nTD Ameritrade / ThinkOrSwim #TD checks a lot of boxes for me. The full ThinkOrSwim desktop client works solidly on Linux, their Android applications are easy to use, and their web interfaces are straightforward. Options trades are $0.65 each, but the direct trade execution is totally worth it. 
There\u0026rsquo;s no fee for being assigned.\nThe desktop application is full of data and it is completely and utterly overwhelming at first. It takes time to learn the system and where you can find all of the things you need. Over time, it becomes much easier to find information and make trades. The charting is incredibly detailed and quick to render.\nThe trade confirmation process at TD is really good and I\u0026rsquo;ve caught some mistakes in the confirmation process. Real-time quotes are included for free and the updates are fast. You can configure the update rate to make it less dramatic.\nTD and ThinkOrSwim give you a one-stop shop. You can do all of your trading, screening, and research all in one place. Their customer service is a toll-free phone call away or you can use the live chat that\u0026rsquo;s built into the ThinkOrSwim application.\nTastyworks #If your goal is to purely trade options, Tastyworks is a great platform. You can buy stocks, too, but that\u0026rsquo;s not their top priority. Their order entry process for options (buying and selling) is superb and I\u0026rsquo;ve never found one that I liked better.\nOnce you place your trade, you can put in a 50% profit order with a couple of clicks. Tracking your trade\u0026rsquo;s process is done with a handy progress bar.\nTrade execution is really fast and the pricing structure is interesting. You pay $1 per option trade to open with a $10 maximum fee. Closing the trade is free. If you get assigned, there\u0026rsquo;s a $5 fee.\nThe real downside about Tastyworks is the mobile experience. The Android application is really difficult to use and everything oddly abbreviated. One could argue that you shouldn\u0026rsquo;t do much of your trading from a mobile device, but I do like to adjust trades on the go from my phone.\nAnother complaint I have is that transferring money or equities between multiple accounts must be done with signed paper documents. It\u0026rsquo;s not possible to do the process online. With other brokers, such as TD or Fidelity, you can do this instantly via self-service processes on their websites.\nYou will need to do your research somewhere else. Tastyworks has charting and tracking of earnings dates, but that\u0026rsquo;s about it.\nFidelity #Fidelity has full desktop client, but it does not run on Linux, so I haven\u0026rsquo;t used it yet. Their website works well, although it is pretty basic.\nTheir trade execution is quite fast and I often find that they improve my limit order price by $1-3 on each option contract. The fees per option trade are around $0.69 and that\u0026rsquo;s fairly close to the industry standard $0.65.\nThere are some research and charting tools on Fidelity\u0026rsquo;s website, but I prefer to do my research elsewhere.\nRobinhood #Say what you will about Robinhood, but it\u0026rsquo;s an all around decent brokerage. You can get extra research data, real-time quotes, and margin for just $5 per month. Stock trades and options trades are free (but there are some small $0.01-$0.02 clearing fees for some trades). Even OTC trades, such as NTDOY, are free.\nMy main complaint about Robinhood is around trade execution. I usually set my limit orders on options trades near the mark price (halfway between bid and ask). I often have a difficult time getting an option trade to fill unless I adjust the price close to the bid. This isn\u0026rsquo;t a problem on TD or Fidelity. 
Spending $0.65 for TD\u0026rsquo;s better execution makes sense when I\u0026rsquo;m losing $1-2 per trade on Robinhood due to bad fills.\nRobinhood\u0026rsquo;s customer service has always been superb, but it\u0026rsquo;s only available via email. I was sweating through an assignment on a put credit spread one Saturday morning and I was able to get a reply in a few hours with the right advice.\nTheir Android and website interfaces are superb and fast. When trading gets really busy, the website often lags and it can be difficult to enter trades. I\u0026rsquo;ve found it difficult to get trades done during the first 15-30 minutes of the day. In some situations, my trade has already executed but the website doesn\u0026rsquo;t show it.\nWebull #Webull is another low/no-cost brokerage. Their Android and website interfaces provide some great ways to get research donw fast. They present a ton of information in a small amount of space that would normally require a lot of legwork at other brokerages. I often use my Android Webull app for quick research and charting.\nAs with Robinhood, trade execution is a challenge. Also, I got into a few situations where I couldn\u0026rsquo;t enter a GTC order for options trades.\nYou can\u0026rsquo;t queue orders outside of certain hours either. Sometimes I would want to adjust an order right before going to sleep but I wasn\u0026rsquo;t allowed to change my queued order until the next morning just before the market\u0026rsquo;s open.\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\nPhoto credit: Edan Cohen on Unsplash\n","date":"24 January 2021","permalink":"/p/which-stock-broker-should-you-use/","section":"Posts","summary":"Not all stock brokerages are the same. Think about your requirements, shop around, and read the fine print.","title":"Which stock broker should you use?"},{"content":"🤔 This is another post in a set of posts on options trading. If you are new to options trading, you may want to start with some of my earlier posts.\nYou know your terminology, you know your max loss, and you are ready to start. Now you\u0026rsquo;re faced with the difficult question all new options traders face:\nHow do I choose which option to sell?\nAt first, you learn to look for the trades that are right for you. This post explains some of the things I look for when I make a trade, and how I compare different trades to see which one is right for me.\nDetermine how much you want to risk #If you\u0026rsquo;re starting the wheel strategy, then you are selling a put contract on a stock at a particular strike price. Options contracts involve 100 shares of the underlying stock, so you can multiply the strike price by 100 to know how much money you are risking on the trade.\nFor example, if you sell a $90 put option on AMD, then you are risking $9,000 on the trade. The chances of AMD rocketing to $0 is extremely unlikely, but anything is possible.\nCompare the size of your trade to the size of your account. Risking $9,000 in a $10,000 account may not be the best idea for new traders since you are putting 90% of your capital at risk. The goal is to avoid \u0026ldquo;blowing up\u0026rdquo; \u0026ndash; when you lose so much that you don\u0026rsquo;t have enough capital left for good trades. 
💣\nIf my conviction on a trade is very strong (I know the company, I know the stock\u0026rsquo;s personality, and I\u0026rsquo;ve done my homework), then risking 15-20% of my account on the trade seems reasonable. For companies I know less about, I won\u0026rsquo;t go past about 5-10% of my account on the trade.\nWatch your stock\u0026rsquo;s calendar #There a certain events in the lifetime of a stock that can cause wild swings in price. Selling options around the time of these events creates increased risk because you really don\u0026rsquo;t know what happens during the event. Some of these events include:\nDividends. These are announced far in advance and stocks will often creep up just before the dividend date and then fall a bit after. If your option buyer wants the dividend badly, you may get assigned early.\nSplits. Stock splits are usually safe for options selling since your contract is automatically adjusted without you doing anything. However, if a stock does a significant split, like a 5:1, then a new influx of buyers with a new buying style may get into the stock. The stock\u0026rsquo;s \u0026ldquo;personality\u0026rdquo; can change quickly.\nProduct events and announcements. Launch events, such as Apple or Google\u0026rsquo;s new phone launches, or Tesla\u0026rsquo;s battery days, can have a huge impact on the stock price. The stock can fall even if the launch looks spectacular.\nEarnings. Definitely watch out for these. Stock prices do some idiotic things around earnings time. I avoid these in almost all conditions unless I have a lot of conviction about the stock.\nI\u0026rsquo;ve written about earnings in previous options trading post, but it bears repeating: earnings are unpredictable. Even if the earnings report is stellar, you may see the price fall off drastically. Why? Perhaps investors want to take profit. Perhaps investors were expecting more on earnings and the expectations were already \u0026ldquo;priced in\u0026rdquo;.\nDon\u0026rsquo;t sell options around earnings unless you know what you\u0026rsquo;re doing. Even if you think you know what you\u0026rsquo;re doing, you probably don\u0026rsquo;t. 😉\nChoosing a strike price #Once you know the stocks that fit your trading style, it\u0026rsquo;s now time to choose your strike price. IThere\u0026rsquo;s a critical options calculation to know here: delta.\nDelta runs from 0 to 1 and some software represents it as a percentage. It describes how much the option price moves as the underlying price moves. Here are some examples:\n1.00 delta (or 100%): The option price moves at the same rate as the underlying price. If the stock price goes up $10, the option price goes up $10. 0.50 delta (or 50%): The option price moves at half the rate of the underlying stock. If the stock goes up $10, the option price goes up $5. 0.25 delta (or 25%): If the stock goes up $10, the option price goes up $2.50. Most of my puts are sold near the 0.25 delta mark. This means that there is roughly a 75% chance that my put will finish out of the money (I keep the premium and there is no loss). There is a 25% chance that my put finishes in the money and I will be assigned stock (possibly at a loss).\nWe could spend all afternoon talking about delta and theta, but a good start is to sell your puts somewhere near 0.25 delta. 
I will sometimes move towards 0.30-0.35 delta if I feel very bullish about a stock or I will move towards 0.20 delta if my conviction is less strong.\nCalculate your return #We wouldn\u0026rsquo;t sell options if we didn\u0026rsquo;t expect a return! It\u0026rsquo;s a good idea to know your potential return for any trade you make. Let\u0026rsquo;s take an example trade and calculate our maximum loss and potential return.\nAMD is trading at $92.79. You choose to sell a put at the $85 strike (.26 delta) that gives $2.42 premium and expires 2021-02-19. Maximum loss is strike price - premium received. That\u0026rsquo;s 8500 - 242 = $8,258 in the absolute worst case the world is ending scenario. Your breakeven point is $82.58. As long as AMD stays above that price, you make money. You can calculate your return with: bid / (strike - bid) * 100. For our trade, that\u0026rsquo;s: 2.42 / (85 - 2.42) * 100 = 2.9%. If AMD stays over $85 through the life of the contract, you get a 2.9% return.\nThere\u0026rsquo;s another angle we can use to analyze the trade: an annualized return. Annualized returns consider how long you had to tie up your capital on a trade while you wait for your return. Would you rather make a 2.9% return in one week or one year? I\u0026rsquo;d much rather make it in a week.\nThe annualized return calculation extends the calculation we\u0026rsquo;ve already done above: (potential return / days held) * 365. We know we have a 2.9% potential return and 28 days to expiration, so we can calculate the annualized return: (2.9 / 28) * 365 = 37.8%\nIf you did this trade successfully over and over again all year long, you could possibly get a 37.8% return at the end of the year on these trades. Don\u0026rsquo;t take this as a given, though. I usually use this to compare different trades against each other to see which one is a better use of my money.\nComparing trades #Let\u0026rsquo;s say you like the AMD trade from the previous section, but you\u0026rsquo;re also looking at another stock that you really like. How do you choose which one to sell? I usually consider the annualized return. If I can make more money with the same amount of capital, I\u0026rsquo;ll go for that trade.\nFUBO is another stock I follow and I have strong conviction on it as well. Here\u0026rsquo;s a trade there:\nFUBO is trading at $37.72. You choose to sell three puts at the $30 strike (.22 delta) that gives $2.13 premium each and expires 2021-02-19. Maximum loss is strike price - premium received. That\u0026rsquo;s 3000 - 213 = $2,787 Since we are selling three contracts, max loss is $8,361. Your breakeven point is $27.87. As long as FUBO stays above that price, you make money. Return: 2.13 / (30 - 2.13) * 100 = 7.6% Annualized return: (7.6 / 28) * 365 = 99% The FUBO put has a much higher annualized return, which gets my attention. However, FUBO is riskier for me since the stock has a much shorter history and it is much more volatile. Sure, I can make more money with this trade, but the risk is substantially higher.\nAfter checking the calendars, I found that AMD has earnings in the next week! That breaks my trading rules since I avoid earnings under almost all conditions!\nMaking the trade #Every stock and options trade has a bid/ask spread. Buyers say \u0026ldquo;I will pay $1.00 for this trade\u0026rdquo; and that\u0026rsquo;s the bid. Sellers say \u0026ldquo;I will only sell if someone pays me $2.00\u0026rdquo; and that\u0026rsquo;s the ask.\nUnder most situations, I sell at the midpoint of the bid/ask spread. 
This means that on a spread of 1.00-1.10, I will sell at 1.05. The trade executes within 30 seconds unless the stock price is moving quickly.\nIf the stock is moving fast or if you really want to be sure your trade gets in, sell at the bid price. That almost always guarantees that a buyer is waiting to buy your contract as soon as you write it.\nYou can sell at the ask or even higher if you really want to, but your execution could be delayed or it may never execute.\nCongratulations! You sold your first option. What do you do now? (Keep reading.)\nAfter the trade #Your trade may turn red immediately after it executes: don\u0026rsquo;t panic! Most brokers show the midpoint price (or \u0026ldquo;mark\u0026rdquo;) and if you sold at the bid price, your trade will be red at first. Also, if the stock price dipped a little, your trade will be red for a short while. Take a deep breath.\nOne of the first things I do (after logging my trades on thetagang.com), is to put in a buy order at a 50% gain. Consider our AMD trade earlier ($85 put with $2.42 premium). I would enter a buy order for $1.21 with \u0026ldquo;good til canceled\u0026rdquo; (GTC). There\u0026rsquo;s no more work to do! As soon as I can get a 50% gain, the trade executes and I keep my $121 profit!\nWhy stop at 50%? I have several reasons:\nIt reduces my chance of loss because my money is at risk for less time. Emotion is removed from the trade. My trade is closed the moment 50% gains are reached. I have work and kids. I don\u0026rsquo;t have time to hover over my broker interface all day long. It almost always leads to a better annualized return. A better return with a smaller profit? How can that be? 🤔\nLet\u0026rsquo;s take an example from a PLTR trade I did last month. I sold two $25 puts for $2.04 premium each that had a 128% annualized return. However, the 50% trigger closed the trade in two days when PLTR rocketed upward unexpectedly.\nIf I held the trade from January 20 to February 12 and PLTR finished above $25, I would have made $408 (128% annualized return). Instead, I made $204 in two days (Jan 20-22). What\u0026rsquo;s our annualized return here?\nPotential return: 2.04 / (25 - 2.04) * 100 = 8.9% Annualized return: 8.9 / 2 * 365 = 1,624% Wow! 🤯\nThat\u0026rsquo;s 1,624% versus our original 128% annualized return and my money was only at risk for two days. What if PLTR suddenly drops next week? I avoid that risk by collecting profit early and freeing up capital for another trade.\nOne big part of risk management in the market is to keep your capital at risk for the shortest amount of time possible to make the returns you want.\nGood luck! 🍀\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\nPhoto credit: Tomas Sobek on Unsplash\n","date":"23 January 2021","permalink":"/p/choosing-options-to-sell/","section":"Posts","summary":"Taking the leap and selling your first options contract takes a lot of thought and preparation.","title":"Choosing options to sell"},{"content":"🤔 This is another post in a set of posts on options trading on the blog. If you are confused on terminology, go back and start with the first post and the series and work your way to this one.\nMy options trading journey started back in September 2020 and I learned a lot in a short amount of time. 
This post covers some of the lessons I\u0026rsquo;ve learned along the way.\nEdwin Lefèvre says it best as Larry Livingston in Reminiscences of a Stock Operator:\nWhenever I have lost money in the stock market I have always considered that I have learned something; that if I have lost money I have gained experience, so that the money really went for a tuition fee.\nMake a set of rules and stick to them #This was one I learned early on from Joonie, the leader of the Theta Gang. His podcasts covered this topic frequently and the importance of this step cannot be understated. Everyone needs a strong set of rules when making trades so that emotion can be removed from trades.\nEmotion easily sneaks in and clouds your judgement whether your investments are green or red. When emotion takes over full control, you become \u0026ldquo;tilted\u0026rdquo; (as Joonie says), and you make poor choices. You can quickly over-leverage yourself and ruin a winning position. You can also throw good money after bad and make your losing positions worse.\nStart by building a set of rules that you can follow, or better yet, add to a screener (more on that next). Here is my rule list as of today for selling puts:\nThe underlying stock must be a stock I would enjoy holding for an extended period (potentially weeks or months).\nThe underlying stock must be priced higher than $10.\nChoose a trade between -.20 to -.30 delta on a monthly expiration date (no weeklies). This is roughly 70-80% chance of profit.\nMake trades on an underlying that is moving sideways or has a long upward trend above the 50 day exponential moving average (EMA), but avoid any underlying stock with spikes or gaps up that aren\u0026rsquo;t explained by solid news. (This avoid pump and dumps or other manipulative patterns.)\nThe monthly expiration should be between 21-60 days away from the current date.\nA trade should have an annualized return over 20%.\nNo earnings reports or other news should be scheduled before expiration. (Earnings are dangerous and unpredictable, even if you somehow get an advanced copy of the filing.)\nBy following your rules closely, you avoid making trades that you regret. However, market conditions could lead you to bend one of these rules. For example, if the market is fairly steady and the underlying has been moving sideways for a while, I may move closer to -.30 to -.40 delta (about 60-70% chance of profit).\nKnow exactly when you will exit the trade #This one is separate from the rules list above because it is important all by itself. When you enter any trade in the market, have an exact exit strategy in mind.\nFor my trades, I always exit when I have reached a 50% profit. If I sell a $90 put on AMD and get $2.50 in premium, I immediately enter an order to buy it back at $1.25. This takes all emotion out of the trade and it ensures that I won\u0026rsquo;t miss out on profits if I am away from the computer. It\u0026rsquo;s a great feeling to suddenly get a notification from your broker that you made money when you least expect it. 🤗\nAlways remember: profit is profit. I may make $1.25 on that trade while someone else makes $2.50, but my capital is freed up earlier for other trades and my profit is secured. I\u0026rsquo;d take a 50% gain over a loss of any size.\nWait on trade ideas to come to you #Sometimes the best trade is not to trade at all. My trading rules are fairly easy to pack into a scanner and that\u0026rsquo;s usually how I research my trades. 
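Since the rules are mechanical, they are easy to express as a simple filter. Here is a minimal Python sketch of that idea (the dictionary fields and the example candidate are hypothetical; a real screener would pull these values from your broker or a service like Barchart):
def passes_put_rules(candidate):
    # Check a short put candidate against the trading rules above
    return (
        candidate["stock_price"] > 10                      # underlying priced above $10
        and -0.30 <= candidate["delta"] <= -0.20           # roughly 70-80% chance of profit
        and 21 <= candidate["days_to_expiration"] <= 60    # monthly expiration, 21-60 days out
        and candidate["annualized_return"] > 20            # annualized return over 20%
        and not candidate["earnings_before_expiration"]    # no earnings or big news before expiration
    )

# Hypothetical candidate pulled from a screener
amd_put = {
    "stock_price": 92.79,
    "delta": -0.26,
    "days_to_expiration": 28,
    "annualized_return": 37.8,
    "earnings_before_expiration": True,   # earnings next week, which breaks the rules
}
print(passes_put_rules(amd_put))  # False
Anything that fails a rule simply drops out of the list, which keeps emotion out of the decision.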
I scan for options on Barchart and I will occasionally use finviz to find new stocks that should be on my radar.\nI get an email from Barchart about an hour after the market opens with my list of options that meet my criteria. From there, I decide on which ones to trade and which ones to skip. If there\u0026rsquo;s something good in the list, I\u0026rsquo;ll make a trade. Otherwise I\u0026rsquo;ll wait for the conditions to line up with my investment goals and rules.\nTrade outside the crazy market hours #I avoid trading during the first hour the market is open (9:30-10:30AM Eastern) and the last hour (3:00-4:00PM Eastern). The trading volume is really high around these times and it can be difficult to figure out what a stock is doing during those times. Day traders and swing traders are extremely active during this time.\nBe patient with limit orders #Always use a limit order when selling options and be patient with them. For example, if the bid/ask spread is $1.00 to $1.10 and you decide to set your limit order to $1.05, be patient. Many people will rush to lower the limit when the trade does not execute immediately, but you should stick with your order.\nSelling puts on volatile stocks allows you to collect premium, but volatile stocks have volatile options, too. I usually set my limit orders right in the middle of the bid/ask spread and wait. About 90% of the time, the order executes within a few minutes because the stock price is volatile.\nTrade where other people are trading #Be sure to find out where the most active options are being traded in the market. Options volume helps you enter trades quickly at good prices. It also helps you exit when it\u0026rsquo;s time to take a profit. Stocks with low volume options trading can provide good premiums, but it can be challenging to exit the trade when you\u0026rsquo;re ready to capture profit or limit your loss.\nUse care when you follow unusual options activity reports. It can be exciting to jump in on trades when you see lots of money pouring into puts and calls.\nHowever, many of these big options trades are hedges from larger firms who want to avoid losses or capture extra gains for their clients. There\u0026rsquo;s also some market manipulations strategies here where people buy tons of calls on a stock in the hopes that market makers will buy lots of shares.\nScrutinize skyrocketing stocks #Sometimes you\u0026rsquo;ll see a stock that has traded sideways or posted small gains day after day and then it suddenly shoots straight up (often called \u0026ldquo;mooning\u0026rdquo;). These look like great targets for selling puts at first, but you should be cautious.\nStocks sometimes do this when big news comes out about a company. For example, if a small semiconductor company makes a deal with Apple to put chips in new laptops, there\u0026rsquo;s a good chance that the small semiconductor stock will moon wildly. The market does this because the company\u0026rsquo;s valuation is now in flux. Is this company\u0026rsquo;s valuation now 50% more? Double? Quadruple? Be careful until the market decides on the new valuation.\nYou may also see situations where stocks go through the roof and there is no news, no big insider trading, and no significant industry news. This is where you should be extremely cautious. 
Prices often do this when the stock is being manipulated or when activist investors are at work.\nThe worst case is that the stock is headed into a \u0026ldquo;pump and dump\u0026rdquo; scheme where shares are rapidly being bought in the hopes that other investors will buy it up thinking that some news is about to come out (the \u0026ldquo;pump\u0026rdquo;). Once a lot of new investors pile on, the group buying the shares stops and sells all their shares (the \u0026ldquo;dump\u0026rdquo;).\nThe dump side often involves investors called \u0026ldquo;shorts\u0026rdquo; who short the stock and cause the price to go down further. This is a dangerous move for shorts if the buyers keep buying as this could lead the price to skyrocket again and force shorts out of their shares.\nAvoid trading around earnings #Earnings are some of the wildest times in the market and they\u0026rsquo;re incredibly hard to predict. I\u0026rsquo;ve seen companies post excellent earnings reports with glowing numbers, great sales, and excellent predictions. As soon as the earnings come out, the stock falls 20%. 🤯\nKeep in mind that valuation is a tricky thing and that many investors won\u0026rsquo;t agree on a valuation for a particular stock. A great example is that one of the analysts following Tesla raised the price target from $90 to $105. Tesla is trading at over $700 today. Again, valuation is in the eye of the beholder.\nI\u0026rsquo;ve also seen companies post terrible earnings reports and their stock remains flat or goes up. There\u0026rsquo;s a chance that investors already predicted the bad results and they\u0026rsquo;re priced in already.\nSomething that may look good at the moment may turn out awful later. For example, if a company releases earnings after the market close, they may only release a PDF after the market closes that includes their SEC filing data. That data may look fantastic and the stock moons immediately. Later, when the company has their conference call, they announce they\u0026rsquo;re acquiring another company and they are revising future estimates down by 20%. The stock prices falls through the floor.\nEven if you had an advanced copy of a company\u0026rsquo;s earnings filing, it would still be incredibly difficult for you to make a trade that will make a profit once the market is closed.\nTrack your trades #Keep yourself honest by tracking your trades. I track mine on thetagang.com and it\u0026rsquo;s a free way to analyze your trading strategy. You can find other people trading the same stocks and ask them questions. There is also a list of trending stocks that is updated frequently and this can help you build your own watchlist.\nYou can learn a lot from reading notes from other traders about their trades. I encourage you to leave good notes as well since this tests your conviction on the trade. If you\u0026rsquo;re not confident enough to explain to other people why you made the trade, then why make it in the first place?\nThere are a litany of spreadsheets out there for tracking trades as well, but the best one I found is the Options Tracker Spreadsheet from 2 Investing. 
It pulls stock quotes directly from Google Finance and it calculates helpful metrics, including annual return metrics.\nLike water off a duck\u0026rsquo;s back #When you win, take time to understand what worked in your favor so you can repeat it.\nWhen you lose, take time to understand what went wrong and what concrete things you plan to change.\nI was up almost $3,300 at the end of 2020 and I chased a skyrocketing stock, FUBO, much longer than I should have. It mooned without much news and I kept chasing it. That led to a loss of over $5,000 and my end of year finish was negative.\nMy mistake was that I had so many winning trades back to back that I got \u0026ldquo;tilted\u0026rdquo; and violated my rules. Initially, everything looked good, but once it turned after hours, there was nothing I could do. I continued to hold and hoped that the conditions would change, but nothing changed. The loss stopped the bleeding and I would have easily lost $2,000 more if I had not exited when I did.\nAfter this failure, I went back to my list of rules and made them more strict. I\u0026rsquo;m also working on a script that allows me to maintain a watchlist and let quality trades filter through that match all of my rules. I plan to put the script on GitHub soon once it works. I also shared my failure with other people and told them what I thought I did wrong.\nThe loss still hurts, but I\u0026rsquo;m trading again to make up the loss. My goal is still to donate a percentage of my gains to charity, so I keep that goal in mind. The key is to stay in the game.\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\nPhoto credit: Artem Kniaz on Unsplash\n","date":"4 January 2021","permalink":"/p/lessons-learned-from-selling-puts/","section":"Posts","summary":"Learn from my successes and mistakes while selling puts in the stock market in 2020.","title":"Lessons learned from selling puts"},{"content":"This is the third post in a series of posts on options trading. You can read this one out of order since it applies to almost everything you do in the stock market, including buying shares of stock.\nWhat is \u0026ldquo;max loss\u0026rdquo; #The maximum loss on a trade is the worst outcome possible. Knowing this number helps you protect your account from huge losses and avoids terrible situations such as margin calls. (This is when your broker is forced to arbitrarily sell off things in your account or demand money from you to get your account above a zero balance.)\nWhen I first started learning about options trading, many people told me it was too dangerous. They told me \u0026ldquo;just buy shares since it\u0026rsquo;s safer.\u0026rdquo; If you begin analyzing maximum loss, you quickly realize that buying shares can be more risky than certain types of options.\nMaximum loss when buying shares #As an example, let\u0026rsquo;s say you own 100 shares of Wal-Mart (WMT). It\u0026rsquo;s trading around $148 today, so that means you have $14,800 in total equity. Your maximum loss would occur if the stock fell to zero and your total loss would be $14,800.\nA major retailer losing 100% of its value in a very short period of time is highly unlikely, but it\u0026rsquo;s possible. 
There\u0026rsquo;s nothing that prevents it from happening (other than the stock being halted in the market), and there is a chance that you could lose your entire investment.\nIt took Enron about a year to drop to nearly zero, but some other drops have been much more abrupt. According to Investopedia, some stocks have had some rough days:\nFacebook lost $119B (2018-07-26) Intel lost $90B (2000-09-22) MSFT lost $80B (2000-04-03) AAPL lost $60B (2013-01-24) Buying shares is a great way to start in investing for your future, but it\u0026rsquo;s important to know your maximum amount of loss and ensure you\u0026rsquo;re prepared to hold the stock through the down times.\nMaximum loss when buying options #When you buy options, your maximum loss is the amount of premium you paid for the option. If you pay $200 for a call on a stock, your max loss is $200. The same goes for puts.\nThe maximum loss scenario for bought options is when the option expires out of the money. For a call, this means the stock price was under your strike price at the expiration time. For a put, the stock price would need to be above your strike price.\nMaximum loss when selling a put #There are two types of puts:\ncash secured puts - You sell a put when you have cash to to cover the strike price. naked puts - You sell a put without having cash to cover the strike price. Some of this was covered in the The Dark Side: Selling Options post, but it\u0026rsquo;s worth talking about it here again. When you sell a put, you are obligated to purchase the stock at the strike price.\nThe max loss scenario here happens when the stock price falls to zero. Sure, that is unlikely, but it is possible. You will keep your premium from selling the put, however.\nAn important thing to remember is that your options are capped. If you sell a put at a $100 strike price and collect $200 in premium, your losses are capped at $9,800. The stock can\u0026rsquo;t go below zero (thankfully).\nWith a cash secured put, you have cash on hand to deal with the loss and purchase the stock. However, with a naked put, you don\u0026rsquo;t have cash to cover the loss. Your broker will issue a margin call and begin selling off other equities in your account to cover the loss. If that fails, they will demand that you deposit money to cover the loss.\n💣 Never sell naked puts. Ever. Seriously.\nMaximum loss when selling a call #Here\u0026rsquo;s where things can become horrifying. When you sell a call, there are two main types:\ncovered call - You own 100 shares of the stock and use them as collateral to sell a call option. naked call - You sell a call for a stock but you don\u0026rsquo;t own any shares. Let\u0026rsquo;s work with the covered call example first. Let\u0026rsquo;s say you own 100 shares of XYZ and that stock is trading at $100 per share. You think the stock might go up a little but not a lot, so you sell a call at the $105 strike price.\nThe best outcome would be for the stock to stay between $100-$105 so you don\u0026rsquo;t lose money on the shares you own and your call is not exercised. There\u0026rsquo;s no losses there.\nWhat if the stock flew to $125? You get to keep the premium for selling your call option, but you have to sell those shares for $105 (since that was your strike price). Sure, you did lose $20 per share of profit ($2,000 total), but you still made money. You received the premium and your stock gained $5 per share ($500). You just missed out on some of the gains.\nWhat if the stock falls to $50? 
You still keep your premium, but your $10,000 of equity (100 shares x $100) is now $5,000 of equity. You took a $5,000 loss. It\u0026rsquo;s the same loss you would have had if you bought shares and didn\u0026rsquo;t sell any options. However, you are getting the small benefit of the premium you received when you sold the call option.\nIn all of these scenarios, your losses are capped. You know what you can possibly lose, but you know the limit of the loss.\nNever sell naked calls #This loss is potentially so bad that it needs its own section. Covered calls are nice because your collateral appreciates in value to cap your upside losses. What if you didn\u0026rsquo;t own the stock?\nLet\u0026rsquo;s say you sold a call on XYZ for $105 again and it was trading at $100 per share. However, you don\u0026rsquo;t own any shares this time.\nIf the stock climbs to $104.99, your option expires out of the money and you keep your premium. Nice!\nIf the stock falls to $50, your option expires out of the money and you keep your premium. Nice!\nIf the stock climbs to $125, you have a problem. You agreed to sell 100 shares to the buyer for $105 per share. You now have to buy 100 shares at $125 each (since that\u0026rsquo;s the current stock price) and you have to sell them for $105 each. That\u0026rsquo;s a $2,000 loss right there.\nHowever, let\u0026rsquo;s say that XYZ is a biotechnology firm and they report that they have a treatment for almost any cancer in the human body with a 14 day home treatment with almost no side effects. This is a life-altering change for people suffering from cancer. The stock climbs to $450 and investors say it will go higher. By the time the option expiration hits, the stock is worth $600.\nWhat now? You sold a call option for $105. That means you have to buy 100 shares of the stock at its current price ($600 each) and sell that stock for $105 each. The math start to hurt:\nYou purchase 100 shares at $600 each = $60,000. You sell 100 shares at $105 each = $10,500. You lose $49,500. 🤯 💣 Never sell naked calls. Ever. Seriously.\nKnow your max loss #Before you enter a trade, know what the worst case scenario will bring and ensure your account is prepared for it.\nI entered a trade earlier this year where I had a 85% percent chance of profit and the trade looked fantastic in my account. I had money to cover the loss but I doubted I would need it.\nThe company suddenly dropped news within 48 hours of entering the trade. The news said that one of their acquisitions was a little slower and more expensive than they thought and their Q3 number would be revised down. The numbers were revised down a lot. My nice gain turned into a $660 loss that I was forced to look at every day for a month until the expiration. It was a daily reminder of two things:\nAnything can happen in the market. I was fully prepared for maximum loss. 💸\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 
😜\nPhoto credit: Andre Benz on Unsplash\n","date":"9 December 2020","permalink":"/p/know-your-max-loss/","section":"Posts","summary":"Knowing your maximum amount of loss on trade is the difference between taking a calculated risk and blowing up your account.","title":"Know your max loss"},{"content":"I\u0026rsquo;ve started a series of posts on options trading and if you haven\u0026rsquo;t read yesterdays post, Options trading introduction, you should start there first.\nRecap: Buying calls and puts #Just to recap the introductory post:\nBuying a call gives you the right to buy a stock at a certain strike price Buying a put gives you the right to sell a stock at a certain strike price Buying calls is a great way to capture gains from a stock that is trending upwards an buying puts provides an insurance policy in case your stock drops.\nYou can make money in multiple ways:\nExercising a call option to buy stock at a reduced price (without owning the stock first) Exercising a put option to sell your stock at a higher price (and reduce losses on your owned stock) The stock moves towards (and hopefully, past) your strike price and you can sell the option to someone else for a profit But there\u0026rsquo;s an entirely other world out there and it involves selling options.\nRights versus Obligations #So far, we\u0026rsquo;ve only talked about rights. When you buy an options contract, you have the right to do nothing, exercise the option, or sell it away to someone else.\nSelling options means you must fulfill an obligation. We aren\u0026rsquo;t talking about selling an option you purchased \u0026ndash; that\u0026rsquo;s simply called selling to close. What we\u0026rsquo;re talking about here is a different concept: selling to open. This is sometimes called writing a put or writing a call.\nWhen you sell a contract, you are paid a sum of money called a premium. The buyer is the one paying the premium to you. Let\u0026rsquo;s dig into what\u0026rsquo;s happening here:\nYou are going out into the market and saying \u0026ldquo;I\u0026rsquo;d like to get $200 to sell this put that expires on December 31\u0026rdquo; A buyer can choose to take you up on your offer, give you $200, and the contract is written The contract is written in stone until the expiration date It\u0026rsquo;s key to remember here that there are only two ways out of an options contract that you sell:\nYou buy it back (hopefully at a profit). This is called buying to close. The option expires. There are three warnings we must cover before we go any further:\n💣 Never sell an options contract that you do not intend to fulfill. Sure, it may sound amazing to receive $4,000 in an instant to sell an option, but eventually that option expires and that can sometimes be a disaster.\n💣 Never assume that you will be able to buy to close your sold contract before expiration. The market does weird things and you may be stuck with holding your option through expiration or taking a loss before expiration.\n💣 Never sell options contracts if you don\u0026rsquo;t have stock or cash to cover them. Getting stuck in a margin call (or worse) with your broker is not a good place to be.\nWith that said, about 95% of my current trades involve selling options. More on that later.\nSelling puts #We should talk about selling puts first because that concept is a little easier. 
When you sell a put, you specify a few things:\nUnderlying stock ticket (such as AMD, WMT, or TM) Expiration date How much premium you want (this is the ask portion of the bid/ask spread) The strike price Selling puts can make money in a few different ways. The stock can go up, go sideways (where it hovers around a certain price), or go down a little.\nLet\u0026rsquo;s go through an example. You think AMD is likely going to keep its same price or go up over the next few weeks. If AMD is trading at $100 now, you decide to sell a put for $90 and collect $200 in premium.\n🛑 Before entering this trade, ensure you have at least $9,000 of cash in your account. 100 shares of AMD could potentially cost you $9,000 if you are assigned \u0026ndash; more on that below.\nNow what?\nAMD skyrockets up to $125 Your put loses value rapidly and you can buy it back for only $25 after a few days. You just made $175 in profit since you originally received $200 for selling the contract and paid $25 to close it. You can let your put expire and keep the $200 premium. (Risky!) AMD gets stuck at $100 Over time, your put loses value and you can buy it back at a cheaper rate before it expires (optional). You can let your put expire and keep the $200 premium. (Risky!) AMD falls to $85 You\u0026rsquo;re now in the money and that\u0026rsquo;s not great for a sold option. The buyer will likely exercise the option they bought from you and you will be assigned at a loss. (Keep reading) Assignment on puts #Some traders hear about assignment and they run off to hide. Assignment means that the buyer of your contract wants to use their right to exercise.\nLet\u0026rsquo;s go back to the very last example above where we sold a put for $90 and AMD fell to $85. This situation feels scary until you break it down:\nYou keep your $200 no matter what. You sold the contract and that\u0026rsquo;s yours. You must buy 100 shares of AMD at $90 each. But wait, you have to buy AMD at $90 even though AMD fell to $85! That\u0026rsquo;s your obligation for selling the contract. That means you will lose $500 on the stock purchase. (You bought $8,500 of AMD stock but had to pay $9,000 for it.)\nAll together, you lost $300. You paid $500 extra for the AMD shares but you kept your $200 premium for selling the contract. But, now you have 100 AMD shares worth $8,500 in your account!\nWhat do you do with those?\nSelling calls #Now you have 100 shares of AMD in your account worth $8,500 and you\u0026rsquo;re still down $300. It\u0026rsquo;s time to talk about selling calls. Selling calls and puts involve the same concepts of specifying a strike price, how much premium you want, and an expiration date.\nYou take a look at the options chain and you see that you can sell an AMD call for the $90 strike price for $250 and it expires in a few weeks. This is called a covered call since you own 100 shares as collateral for the contract.\nYou sell the call and now there are some potential outcomes:\nAMD falls to $80 Your call loses value rapidly and you can buy it back for a tiny amount. If you buy your call back at $25, then you made $225. ($250 premium originally received minus $25 you paid to close it) Go back and sell another call, collect more premium, and increase your profit. Allow the call to expire worthless and keep your full $250 premium. (Risky!) AMD gets stuck at $85 Your call gradually loses value and you can buy it back for a tiny amount. Go back and sell another call, collect more premium, and increase your profit. 
AMD climbs to $95 You\u0026rsquo;re now in the money here. When the option expires, the buyer of your contract will likely want to exercise and they will buy your shares at $90 each (your strike price). Your shares are gone. (But keep reading.) Assignment on calls #Getting assigned on puts means you have to buy the stock at your stike price. Getting assigned on calls is a little different. You sell your stock at the strike price to the person who bought your call option contract.\nLet\u0026rsquo;s go back to the previous example where you sold a $90 call option for $250 and AMD went to $95 per share:\nYou keep your $250 premium since you sold the contract. You sell your 100 AMD shares at $90 each (your strike price) even though AMD is trading at $95. Technically, you lost out on $500 of profit that you could have received if you had not sold the call. However, you keep the $250 of premium and you can begin selling puts again.\nWhy not just buy stock and hold it? #I have accounts where I buy stock and hold it for a long time. However, selling options allows you to collect premium and get profits even in a market that isn\u0026rsquo;t moving much.\nIf a stock gets stuck in a sideways pattern where it doesn\u0026rsquo;t move much for weeks, buying and holding the stock might give you a very small gain. During that time, you may have been able to get a better return selling options on that stock and collecting the premium.\nWhat\u0026rsquo;s the worst that can happen? #It\u0026rsquo;s always important to know what your max loss is for any trade. Losses begin when the stock price breaches your breakeven price:\nPut breakeven = strike price - premium received Call breakeven = strike price + premium received If you sell a $90 put for $200 and the stock falls to $89, you are still up $100. If you sell a $90 call for $200 and the stock goes up to $91.50, you are still up $50.\nHowever, if you are selling puts or calls, there is a chance that the stock could completely crater and go to zero. The chances of that happening to a well-known company that has been on the market for years is very low, but it\u0026rsquo;s important to know max loss.\nLet\u0026rsquo;s say you sold a put on company XYZ at the $100 strike and received $500 in premium. The stock somehow falls to $1 per share. Your max loss here is ((100 x $100) - (100 * x $1)) - $500 = $9,400. The buyer of your options contract will surely want to exercise so they can sell their XYZ stock at $100 instead of $1.\nOn the call side, let\u0026rsquo;s say you sold a $110 call on XYZ for $500 and the stock fell to $1. The 100 shares of XYZ you own would now be worth $1 per share, but you do keep your $500 premium. If you had owned the shares without selling that call at that time, you would have suffered the same fate (without getting $500 in premium).\nThe Wheel #The strategy shown here is called the wheel strategy. It involves these steps:\nSell puts and collect premium If unassigned, go back to step 1 If assigned, sell calls and collect premium If unassigned, go back to step 3 If assigned, go back to step 1 This strategy will be covered in the next post!\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 
😜\nPhoto credit: Ricardo Gomez Angel on Unsplash\n","date":"7 December 2020","permalink":"/p/the-dark-side-selling-options/","section":"Posts","summary":"Selling options puts you on the other side of the options contract from buyers, but it comes with obligations.","title":"The Dark Side: Selling Options"},{"content":"I\u0026rsquo;m always on the hunt for new things to learn and one of my newer interests is around trading options on the stock market. Much like technology, it becomes much easier to understand when you peel back the layers of archaic terminology, snake oil salespeople, and debunked data.\nStay tuned for additional posts that go into greater detail. This post covers the basics about what an option is (and is not) and how you can compare it to other investment vehicles, like stocks.\nInvesting in stocks #Most of us are likely familiar with how investing in stocks works to some extent. The process goes something like this:\nDeposit money into a brokerage account Find a company that has good qualities for value and growth Buy shares Wait If the company performs well and other investors also see value in the company, your investment should gain value. For example, If you buy 10 shares of a company for $100 each ($1,000 total investment), and the stock value goes up by 10%, you will have 10 shares at $110 each ($1,100 total investment).\nStocks go up very often, but they go down, too. If the stock lost 10% of its value, you would own 10 shares of a $90 stock and your total investment would fall to $900.\nSometimes the stock market does downright silly things and companies with great fundamentals go down, while others with horrible numbers keep going up.\nA. Gary Shilling once said:\nMarkets can remain irrational a lot longer than you and I can remain solvent.\nNothing could be more true.\nHow are stocks traded? #This could be an entire post in itself, but I\u0026rsquo;ll keep things simple. When you buy a stock, you\u0026rsquo;re buying it from someone else. That could be another investor, a hedge fund,a mutual fund maintainer, or a massive company.\nThe other entity will set the price they are willing to sell some stock. This is called the ask. Buyers in the market set the price at which they want to buy, called the bid. When you look at the highest bid and the lowest ask for a stock at one time, that\u0026rsquo;s called the bid/ask spread or often just the spread.\nIf nobody sets a bid at or equal to anyone\u0026rsquo;s ask price, no stock changes hands. Either the buyer needs to raise the price that they are willing to pay or the sellers need to come down on their asking price.\nLook for stocks with a narrow bid/ask spread under $0.05-$0.10. This means the stock is liquid (easily bought and sold) and you\u0026rsquo;ll avoid a weird situation called slippage, where the bid/ask spread is really wide and your trade may not execute as a good price. (Limit orders help here. More on that later.)\nWhat is similar about options? #Options have bid/ask spreads, just like stock. Options are called derivatives because they are derived from a certain stock. That stock is called the underlying, since the options depend heavily on the stock\u0026rsquo;s performance. (Not every stock on the market has options, though.)\nWhat is different about options? #Options are contracts. Every options contract has rights, obligations, and an expiration date.\nBuyers of options contracts have rights (things they can choose to do) and sellers have obligations (things they must do). 
Everything starts with a buyer or seller who wants to open a contract. Once it\u0026rsquo;s open, it can be closed via different methods.\nIt revolves around two main types of contracts: calls and puts.\nBuying calls #If you buy a call option contract, you have the right to buy shares at the price you specify, which is called the strike price. (You can sell calls, too, but let\u0026rsquo;s keep this simple for now.)\nAMD is a popular stock for options traders and we can use it as an example. Let\u0026rsquo;s say that AMD is trading at $100 per share and you think their next chip launch will be really successful. A call could help you catch some of the potential upswing in the stock.\nLet\u0026rsquo;s say you buy a call at a strike price of $105 for $200 and the contract expires December 13. What have you done here and what are your rights?\nYou made a buy to open order for a call contract that cost you $200. You cannot trade this option after the market close on December 13. What are the potential outcomes before expiration? Let\u0026rsquo;s analyze what AMD could do during that time:\nBest outcome: AMD rises to $110 You are in the money Your options contract will increase in value and you can sell it before expiration. You can choose to exercise the option, which means you are using your right to buy 100 shares of AMD at $105 each, even though the stock is trading at $110! Your profit would be (100 * $110) - (100 * $105) = $500 Good outcome: AMD rises to $104.99 You are at the money During this time, your call option goes up in value since the stock price has moved really close to your strike price of $105. You can sell your option contract for a profit before expiration. Worst outcome: AMD falls below $100 You are out of the money Your options contract will be worth a lot less since the stock price has moved away from your strike price. If your options contract still has some value left, you could sell it to get back some of your losses. If AMD falls really far below $100, you may find that your option has no value. This is called max loss and it hurts. 💣 Just in case you missed it: you can easily lose all of your investment with options.\nWith stocks, a company can take a heavy loss, but you still have some value left. With options, that same scenario means your original $200 investment is completely gone.\nBuying puts #Buying a put, called a long put, is a different thing entirely. You are choosing a price where you are willing to sell 100 shares of stock you own. It\u0026rsquo;s a great insurance policy for stock that you own.\nGoing back to the AMD example, let\u0026rsquo;s say you own 100 shares of AMD at $100 each. If you buy a put at $90, you have the right to sell your stock at $90 even if AMD is trading lower than $90.\nLet\u0026rsquo;s go back to our $100 per share AMD example and you buy a $90 put for AMD that costs $200 and expires December 13. There are some possible outcomes:\nAMD rises to $110 Your put option will lose lots of value, but it was your insurance policy anyway. You can sell your put as the expiration gets closer, or you can let it expire on December 13th. It\u0026rsquo;s worthless at that point, so nothing happens. AMD drops to $90.01 Your put will increase in value since the stock price has moved close to your strike price. You can choose to sell it before it expires to get a profit. (Keep in mind that the stock you own has lost value, too.) 
AMD drops to $80 You can exercise your put contract and sell your 100 shares of AMD for $90 each, even though the current price is $80 per share! This will net you $10 per share, or $1,000 total. Your losses from the stock\u0026rsquo;s fall are limited. You might be asking: What happens if I think a stock will go down, I buy a put, but I don\u0026rsquo;t own the underlying stock?\nIf the stock drops, you can sell the put for a profit as the value goes up. However, if you choose to exercise it, you will have a short position in your brokerage account. This means you\u0026rsquo;ve borrowed stock from your broker and they will likely charge fees and/or interest for this.\nIf AMD\u0026rsquo;s price quickly rises afterwards, you will owe your brokerage the difference. This is painful! Just imagine borrowing 100 shares at $80 each from your broker only to see the stock fly up to $120 per share. You will owe your broker that $4,000 difference. 🥵\nExercising #In your personal life, I usually recommend exercising regularly. When it comes to options, exercising may not be your best bet. There are plenty of situations where exercising an options contract is the worst idea.\nLet\u0026rsquo;s go back to our original example about buying a call on AMD for $105 when the stock is at $100. As an example, the stock price climbs to $110 before expiration and stays there on Friday afternoon. You are excited about your profits and you tell your broker to exercise your contract.\nYour broker submits the exercise request to CBOE, the options clearinghouse. The CBOE has some detailed rules about exercising that I highly recommend reading.\nLet\u0026rsquo;s say that something happens after hours at AMD that turns out to be really bad. Maybe there\u0026rsquo;s a delay on new chips, an accounting irregularity, or something else. The stock crashes after hours after you have asked for the exercise. The stock could be down to $80 by that point.\nSince you asked for the exercise, the stock you expected to get (100 shares at $105 each) will actually be 100 shares at $80 each by the time your exercise is finished. This is an extreme example, but always consider what can happen during the exercise process.\nFor a detailed explanation of potential outcomes here, and how to avoid them, I highly recommend the Options Expire Saturday episode of the Theta Gang Podcast. (The podcast is excellent overall.)\nWhat\u0026rsquo;s next? #👶🏻 This introduction barely scratches the surface of options contracts, so don\u0026rsquo;t start trading yet.\nI\u0026rsquo;m planning future posts about selling options contracts, how to choose the options to sell and buy, as well as many other ways you can gain (and lose) money with options contracts.\nDisclaimer: Keep in mind that I am not an investment professional and you should make your own decisions around stock research and trades. Investing comes with plenty of risk and I\u0026rsquo;m the last person who should be giving anyone investment advice. 😜\nPhoto credit: Dave Hoefler on Unsplash\n","date":"6 December 2020","permalink":"/p/options-trading-introduction/","section":"Posts","summary":"Trading options contracts feels incredibly daunting, but you can learn the terminology and make good choices in the market.","title":"Options trading introduction"},{"content":"The AMIs provided by most Linux distributions in AWS work well for most use cases. 
However, there are those times when you need a customized image to support a certain configuration or to speed up CI processes.\nYou can get a customized image via a few methods:\nBuild from an existing AMI, customize it, and snapshot it. Use an automated tool, such as Packer, to automate #1. Build your own image locally in KVM, VMware, or Virtualbox and upload the image into S3, import it into an EC2, and create an AMI from the snapshot. My preferred option is the last method since the installation happens locally and the image is first booted in AWS. This ensures that log files and configurations are clean on first boot. Although this method produces the best result, it has plenty of steps that can go wrong.\nImporting an image into AWS (the hard way) #AWS has documentation for importing an image and the basic steps include:\nInstall into a VM locally and customize it. Snapshot the image and upload it into an S3 bucket. Create an IAM role for vmimport so that EC2 can pull the image from S3 and import it. Run aws ec2 import-snapshot to tell EC2 to import the image. Monitor the output of aws ec2 describe-import-snapshot-tasks until the snapshot fully imports into EC2. It might fail to import, so you need to be prepared for that. (If that happens, go back to step 4.) Get the snapshot ID from the import. Run aws ec2 register-image to create the AMI from the snapshot ID. This is a lot of manual work. 😩\nUsing Image Builder to make images #Image Builder has two main components:\nosbuild-composer takes an image configuration and generates instructions for the image build stages (and optionally uploads an image to a cloud) osbuild takes those instructions and builds an image The support for uploading to clouds first arrived in Fedora 32 and this post will use that release for generating images.\nTo get started, install osbuild-composer along with composer-cli, a command line interface to create images. Start the socket for osbuild-composer as well:\n# dnf -y install composer-cli osbuild-composer # systemctl enable --now osbuild-composer.socket Verify that everything is working:\n# composer-cli sources list fedora updates fedora-modular updates-modular We now need an image blueprint. A blueprint is a TOML file that provides some basic specifications for the image, such as which packages to install, which services should start at boot time, and the system\u0026rsquo;s time zone. Refer to the Lorax composer documentation for a full list of options.\nIn this example, we will build a small image with nginx to serve a website. Here\u0026rsquo;s the TOML file:\nname = \u0026#34;aws-nginx\u0026#34; description = \u0026#34;AWS nginx image\u0026#34; version = \u0026#34;0.0.1\u0026#34; [[packages]] name = \u0026#34;chrony\u0026#34; [[packages]] name = \u0026#34;cloud-utils-growpart\u0026#34; [[packages]] name = \u0026#34;nginx\u0026#34; [customizations.kernel] append = \u0026#34;no_timer_check console=hvc0 LANG=en_US.UTF-8\u0026#34; [customizations.services] enabled = [\u0026#34;chronyd\u0026#34;, \u0026#34;nginx\u0026#34;] [customizations.timezone] timezome = \u0026#34;UTC\u0026#34; Our specification says:\nBuild an image with nginx and ensure it starts at boot time Install chrony for time synchronization, set the time zone to UTC, and start it at boot time. 
Install cloud-utils-growpart so that cloud-init can automatically grow the root filesystem on the first boot Add some kernel boot parameters to ensure the serial console works in AWS Push the blueprint into osbuild-composer and ensure the packages are available. (The depsolve check is optional, but I recommend it so you can find any typos in your package names.)\n# composer-cli blueprints push aws-image.toml # composer-cli blueprints depsolve aws-nginx blueprint: aws-nginx v0.0.1 acl-2.2.53-5.fc32.x86_64 alternatives-1.11-6.fc32.x86_64 audit-libs-3.0-0.19.20191104git1c2f876.fc32.x86_64 ... We can now build the image:\n# composer-cli --json compose start aws-nginx ami { \u0026#34;build_id\u0026#34;: \u0026#34;285c1ee8-6b9e-4725-9c4c-346eafae86de\u0026#34;, \u0026#34;status\u0026#34;: true } # composer-cli --json compose status 285c1ee8-6b9e-4725-9c4c-346eafae86de [ { \u0026#34;id\u0026#34;: \u0026#34;285c1ee8-6b9e-4725-9c4c-346eafae86de\u0026#34;, \u0026#34;blueprint\u0026#34;: \u0026#34;aws-nginx\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0.0.1\u0026#34;, \u0026#34;compose_type\u0026#34;: \u0026#34;ami\u0026#34;, \u0026#34;image_size\u0026#34;: 0, \u0026#34;status\u0026#34;: \u0026#34;RUNNING\u0026#34;, \u0026#34;created\u0026#34;: 1592578852.962228, \u0026#34;started\u0026#34;: 1592578852.987541, \u0026#34;finished\u0026#34;: null } ] Our image is building! After a few minutes, the image is ready:\n# composer-cli --json compose status 285c1ee8-6b9e-4725-9c4c-346eafae86de [ { \u0026#34;id\u0026#34;: \u0026#34;285c1ee8-6b9e-4725-9c4c-346eafae86de\u0026#34;, \u0026#34;blueprint\u0026#34;: \u0026#34;aws-nginx\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0.0.1\u0026#34;, \u0026#34;compose_type\u0026#34;: \u0026#34;ami\u0026#34;, \u0026#34;image_size\u0026#34;: 6442450944, \u0026#34;status\u0026#34;: \u0026#34;FINISHED\u0026#34;, \u0026#34;created\u0026#34;: 1592578852.962228, \u0026#34;started\u0026#34;: 1592578852.987541, \u0026#34;finished\u0026#34;: 1592579061.3364012 } ] # composer-cli compose image 285c1ee8-6b9e-4725-9c4c-346eafae86de 285c1ee8-6b9e-4725-9c4c-346eafae86de-image.vhdx: 1304.00 MB # ls -alh 285c1ee8-6b9e-4725-9c4c-346eafae86de-image.vhdx -rw-r--r--. 1 root root 1.3G Jun 19 15:12 285c1ee8-6b9e-4725-9c4c-346eafae86de-image.vhdx We can take this image, upload it to S3 and import it into AWS using the process mentioned earlier in this post. Or, we can have osbuild-composer do this for us.\nPreparing for automatic AWS upload #Start by making a bucket in S3 in your preferred region. Mine is called mhayden-image-uploads:\n# aws --region us-east-2 s3 mb s3://mhayden-image-uploads make_bucket: mhayden-image-uploads Now we need a role that allows EC2 to import images for us. Save this file as vmimport.json:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Service\u0026#34;: \u0026#34;vmie.amazonaws.com\u0026#34; }, \u0026#34;Action\u0026#34;: \u0026#34;sts:AssumeRole\u0026#34;, \u0026#34;Condition\u0026#34;: { \u0026#34;StringEquals\u0026#34;:{ \u0026#34;sts:Externalid\u0026#34;: \u0026#34;vmimport\u0026#34; } } } ] } We now need a policy to apply to the vmimport role that allows EC2 to use the role to download the image, import it, and register an AMI (replace the bucket name with your S3 bucket). 
Save this as vmimport-policy.json:\n{ \u0026#34;Version\u0026#34;:\u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;:[ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetBucketLocation\u0026#34;, \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:ListBucket\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:s3:::mhayden-image-uploads\u0026#34;, \u0026#34;arn:aws:s3:::mhayden-image-uploads/*\u0026#34; ] }, { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;ec2:ModifySnapshotAttribute\u0026#34;, \u0026#34;ec2:CopySnapshot\u0026#34;, \u0026#34;ec2:RegisterImage\u0026#34;, \u0026#34;ec2:Describe*\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; } ] } Add the role and the policy to IAM:\n# aws iam create-role --role-name vmimport \\ --assume-role-policy-document \u0026#34;file://vmimport.json\u0026#34; # aws iam put-role-policy --role-name vmimport --policy-name vmimport \\ --policy-document \u0026#34;file://vmimport-policy.json\u0026#34; Building an image with automatic upload #We can use our same TOML blueprint we created earlier and provide one additional TOML file that provides AWS configuration and credentials. Create an aws-config.toml file with the following content:\nprovider = \u0026#34;aws\u0026#34; [settings] accessKeyID = \u0026#34;***\u0026#34; secretAccessKey = \u0026#34;***\u0026#34; bucket = \u0026#34;mhayden-image-uploads\u0026#34; region = \u0026#34;us-east-2\u0026#34; key = \u0026#34;fedora-32-image-from-my-blog-post\u0026#34; Add your AWS credentials here along with your S3 bucket, preferred AWS region, and an image key. The image key is the name applied to the snapshot and the resulting AMI.\nNow we can build our AMI and have it automatically uploaded:\n# composer-cli --json compose start aws-nginx ami fedora-32-image-from-my-blog-post aws-config.toml { \u0026#34;build_id\u0026#34;: \u0026#34;f343b20d-70f9-467a-9157-f9b4fc90ee87\u0026#34;, \u0026#34;status\u0026#34;: true } # composer-cli --json compose info f343b20d-70f9-467a-9157-f9b4fc90ee87 { \u0026#34;id\u0026#34;: \u0026#34;f343b20d-70f9-467a-9157-f9b4fc90ee87\u0026#34;, \u0026#34;config\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;blueprint\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;aws-nginx\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;AWS nginx image\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0.0.1\u0026#34;, \u0026#34;packages\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;chrony\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;cloud-utils-growpart\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;nginx\u0026#34; } ], \u0026#34;modules\u0026#34;: [], \u0026#34;groups\u0026#34;: [], \u0026#34;customizations\u0026#34;: { \u0026#34;kernel\u0026#34;: { \u0026#34;append\u0026#34;: \u0026#34;no_timer_check console=hvc0 LANG=en_US.UTF-8\u0026#34; }, \u0026#34;timezone\u0026#34;: {}, \u0026#34;services\u0026#34;: { \u0026#34;enabled\u0026#34;: [ \u0026#34;chronyd\u0026#34;, \u0026#34;nginx\u0026#34; ] } } }, \u0026#34;commit\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;deps\u0026#34;: { \u0026#34;packages\u0026#34;: [] }, \u0026#34;compose_type\u0026#34;: \u0026#34;ami\u0026#34;, \u0026#34;queue_status\u0026#34;: \u0026#34;RUNNING\u0026#34;, \u0026#34;image_size\u0026#34;: 6442450944, \u0026#34;uploads\u0026#34;: [ { \u0026#34;uuid\u0026#34;: \u0026#34;e747be78-87e2-48b9-b0d2-cc1bb393a9e4\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;RUNNING\u0026#34;, 
\u0026#34;provider_name\u0026#34;: \u0026#34;aws\u0026#34;, \u0026#34;image_name\u0026#34;: \u0026#34;fedora-32-image-from-my-blog-post\u0026#34;, \u0026#34;creation_time\u0026#34;: 1592580775.438667, \u0026#34;settings\u0026#34;: { \u0026#34;region\u0026#34;: \u0026#34;us-east-2\u0026#34;, \u0026#34;accessKeyID\u0026#34;: \u0026#34;***\u0026#34;, \u0026#34;secretAccessKey\u0026#34;: \u0026#34;***\u0026#34;, \u0026#34;bucket\u0026#34;: \u0026#34;mhayden-image-uploads\u0026#34;, \u0026#34;key\u0026#34;: \u0026#34;fedora-32-image-from-my-blog-post\u0026#34; } } ] } The output now shows an uploads section with the AWS upload details included. This process may take some time, especially if your upload speed is low. You can follow along with composer-cli --json compose info or you can monitor the system journal:\n# journalctl -af -o cat -u osbuild-worker@1.service Running job f343b20d-70f9-467a-9157-f9b4fc90ee87 2020/06/19 15:57:37 [AWS] 🚀 Uploading image to S3: mhayden-image-uploads/fedora-32-image-from-my-blog-post 2020/06/19 15:58:03 [AWS] 📥 Importing snapshot from image: mhayden-image-uploads/fedora-32-image-from-my-blog-post 2020/06/19 15:58:03 [AWS] ⏱ Waiting for snapshot to finish importing: import-snap-0f4baff3e1eb945a8 2020/06/19 16:04:50 [AWS] 🧹 Deleting image from S3: mhayden-image-uploads/fedora-32-image-from-my-blog-post 2020/06/19 16:04:51 [AWS] 📋 Registering AMI from imported snapshot: snap-0cf822f1441f9e407 2020/06/19 16:04:51 [AWS] 🎉 AMI registered: ami-0d0873cc888ab12a2 I ran this job on a small instance at Vultr and the whole process took about 10 minutes. The AWS image import process can vary a bit, but it\u0026rsquo;s usually in the range of 5-15 minutes.\nAt this point, I can take my new AMI (in my case, it\u0026rsquo;s ami-0d0873cc888ab12a2) and build instances at EC2! 🎉\nWrapping up #Although there is some work involved in laying the groundwork for importing images into EC2, this work only needs to be done one time. You can re-use your existing AWS credentials TOML file over and over for new images that are made from different blueprints.\nYou can also do almost all of this work via the cockpit web interface using the cockpit-composer package if you prefer. The only downside to that method is that some image customizations cannot be made through cockpit and some TOML blueprint editing with composer-cli is needed. Look for that in a future blog post.\nPhoto credit: Wikimedia Commons\n","date":"19 June 2020","permalink":"/p/build-aws-images-with-imagebuilder/","section":"Posts","summary":"Build a customized image for AWS with Image Builder and use the built-in automatic uploader and importer.","title":"Build AWS images with Image Builder"},{"content":"","date":null,"permalink":"/tags/lifestyle/","section":"Tags","summary":"","title":"Lifestyle"},{"content":"I\u0026rsquo;ve talked about some of my experiences with altering my diet on Twitter and many people have asked about my experiences with keto so far. Note that I don\u0026rsquo;t call it \u0026ldquo;the keto diet\u0026rdquo; because it\u0026rsquo;s much more than a diet: it\u0026rsquo;s a lifestyle change. Sure, you alter what you eat, but you begin to think differently about how you fuel your body.\nBefore we start, I\u0026rsquo;d like to note that:\nI am definitely not a medical professional or a dietician. If you plan to make any big changes to what you eat, talk to an expert first. 
Certain people with certain conditions could be harmed by a drastic change in diet or lifestyle.\nEveryone reacts differently to lifestyle changes. Your experiences are not guaranteed to match mine. Yours could easily be better (or worse).\nI have nothing to sell you. The keto lifestyle is effectively open source since there\u0026rsquo;s nothing to buy and no memberships to maintain. There are tons of different ways to start and maintain it, and you can choose the right method for you.\nWhat is keto? #When you adopt a keto lifestyle, you change what fuel is used primarily in your body. It\u0026rsquo;s a high fat, moderate protein, low carbohydrate approach.\nLow carbohydrate diets are not new. They were studied heavily in the early 1900\u0026rsquo;s as a potential remedy for epilepsy. Doctors noticed that patients who fasted had fewer problems, but the patients couldn\u0026rsquo;t maintain fasting for extended periods. Once they put the patients on a high fat diet and the body went into ketosis (where it burns fat for fuel), the patients had fewer problems.\nThese diets are still used today for children and adults with seizure disorders and other brain-related conditions. To learn more about these patients and how fat works in your body, read Dr. Mark Hyman\u0026rsquo;s Eat Fat Get Thin. He provides tons of easy to understand reasons why a high fat diet is better and he provides links to hundreds of studies and publications about it.\nTime for bacon all day! #Hold on for a moment.\nAs a friend explained to me, doing keto the right way is like getting the right octane gasoline from a reputable gas station. It burns clean and make your car run well. You can choose a different octane or choose a shady gas station and your car will still run, but it might make noise, run slower, or send you to the mechanic more often.\nPeople often talk about \u0026ldquo;dirty\u0026rdquo; and \u0026ldquo;clean\u0026rdquo; keto. If you focus purely on your carbohydrate target and use any foods you want to fill up your calories from fat, you will end up in ketosis. However, some of these foods will slow your weight loss progress (if that\u0026rsquo;s your goal) and they may rob some of the additional keto health benefits from you.\nI thought it was just weight loss? #There are two big benefits here: weight loss and health benefits.\nFirst, let\u0026rsquo;s talk about weight loss. You can check Reddit\u0026rsquo;s r/keto for some amazing weight loss stories. People have loss hundreds of pounds and kept it off!\nI started around 195 lbs (88.5 kg) and I am 6'1\u0026quot; (1.85 m) tall. After starting keto, my weight increased (as did my blood pressure). I was frustrated as the food change was difficult and I felt like I was doing a lot of work for nothing. Online forums reassured me that this was part of the process and that I should keep pushing.\nSo I kept pushing and stayed with it. Around the 3-4 week mark, the weight slowly began to come down. Within two months, I was down to 180 (81.6 kg). Within three months, I was at 174 (78.9 kg) and realized I didn\u0026rsquo;t want to be that thin! After increasing my fat intake a bit more, I\u0026rsquo;m able to hover around 180 lbs and that\u0026rsquo;s very comfortable for me.\nEven at 180 lbs, I looked different. Some of the troublesome fat that I dealt with (love handles!) shrank some. 
I moved from a 36 in most pants down to a 34 (and a 32 in some).\nThe best part about all of this is that my body stays between 178-182 lbs reliably without needing to take any drastic action. It is easy to maintain.\nWhat are the health benefits? #I\u0026rsquo;ve had mild high blood pressure for a few years and I was taking medication (lisinopril) to reduce it. The medicine worked fairly well and kept me in a better range. However, I had spells where I would stand from sitting and I felt like I was going to pass out. I felt tired more often and felt lightheaded after working out.\nI\u0026rsquo;ve also had asthma for years and I\u0026rsquo;ve taken various medications (montelukast, albuterol, olopatadine, flonase) to deal with that and my seasonal allergies. My asthma has put me in the ER more than once.\nAfter about four months of keto, my lightheaded spells got worse and my doctor told me to stop taking blood pressure medication. I watched my blood pressure closely and found that I no longer needed medication for it! (That was one medication I was really glad to throw away.)\nI\u0026rsquo;m about 8 months in now and I was able to stop taking all allergy and asthma medications about a month ago. This really makes me think that something was in my diet that was causing me harm.\nWhat do I stop eating? #Looking at keto as a \u0026ldquo;What do I stop eating?\u0026rdquo; question will make it much more difficult to maintain. Here are some of the things I eat regularly:\nAvocados Berries (raspberries, blackberries, strawberries) Butter Cheese (mostly the harder ones) Coffee Eggs (add chorizo for more flavor) Fish (salmon, trout, limited tuna) Ghee (butter with dairy removed; goes great in coffee) Meats (beef, chicken, pork, and yes, bacon) MCT oil (medium chain triglycerides from coconuts) Mushrooms Nuts (walnuts, pecans, almonds, limited peanuts) Olive oil Vegetables (lettuce, zucchini, eggplant, celery, many more) This is just a small subset of what I like to eat. If you want a comprehensive keto grocery shopping list, the experts at Diet Doctor have you covered.\nBut yes, there are some things that I avoid as much as possible:\nAnything that says \u0026ldquo;diet\u0026rdquo; on the package Beer 😭 Chips and crackers Gluten (difficult to totally avoid) High-fructose corn syrup (HFCS) Maltitol (used as a sugar substitute but it spikes blood sugar 🤦🏻‍♂️) Potatoes Sugar White wine To succeed at keto, you need to keep your body in sustained ketosis. Anything that spikes your blood sugar (sometimes called high glycemic load) must go.\nDon\u0026rsquo;t fear: some alcohol is okay! Avoid beer, sugary mixed drinks, and white wines. Red wines from Europe and South America are great, and you can have low sugar liquor like gin, bourbon, whiskey, scotch, tequila, and vodka. Too much alcohol will stunt your progress towards weight loss and other health benefits.\n💣 PLEASE NOTE: Hangovers are significantly worse and longer lasting on keto. Studies are underway to figure out why this is the case, but Dr. Hyman speculates in his book that it\u0026rsquo;s related to how the liver operates in a lower-carb setting.\nOkay, it can\u0026rsquo;t all be perfect. #And you\u0026rsquo;re right. Getting into ketosis is not fun at all.\nMany people talk about the \u0026ldquo;keto flu\u0026rdquo;, which is a period that lasts anywhere from a few days to a week where you feel pretty terrible. Your energy level will drop, exercising will be difficult, and you will be irritable. 
This is the period where your body has exhausted its supply of easily accessible glucose and it\u0026rsquo;s changing gears to burn fat.\nI\u0026rsquo;ve gone through it three times (more on that later), and it\u0026rsquo;s not fun. Here\u0026rsquo;s a list of what I went through:\nexhaustion increased trips to the bathroom irritability lack of focus muscle cramps temporary weight gain To make this as easy as possible (and short in duration), do these things:\neat consistently (do not fast) exercise consistently, but make it lighter than usual get electrolytes daily (low carb, of course) keep fat intake high (you need to encourage your body to burn it) sleep at least 8 hours The fog will slowly begin to lift and you will feel great when you reach the other side.\nWhat happens when I eat bad things? #This happens to all of us. I was two months in on keto when Thanksgiving arrived. That\u0026rsquo;s a big holiday in the US where we generally eat a lot, and a lot of what we eat is carbs. Some potatoes, pumpkin pie, and beer tempted me and I woke up the next morning feeling awful. My stomach cramped and I felt nauseated.\nMany people suggested getting right back into keto with high fat foods and exercise. I persevered with that and felt better by the end of the day.\nIf you do make a bad choice and spike your blood sugar, your body will crave more sugar and put you into a keto-busting spiral. Your digestive system biome that changed to work with a high-fat diet will get confused with a sudden influx of carbs and it will cramp. However, if you get back on track quickly, those problems will subside.\nPeople make me feel guilty for not eating things. #I\u0026rsquo;ve heard things like these a hundred times:\n\u0026ldquo;It\u0026rsquo;s not like one piece of cake is going to screw up your whole diet.\u0026rdquo; \u0026ldquo;She spent a lot of time cooking that and now you\u0026rsquo;re not going to eat it?\u0026rdquo; \u0026ldquo;You\u0026rsquo;re thin. Why do you need to do keto?\u0026rdquo; \u0026ldquo;I\u0026rsquo;ve eaten bread all my life and I\u0026rsquo;m just fine.\u0026rdquo; Remember that this is a choice you make for your body. Don\u0026rsquo;t ever say \u0026ldquo;I can\u0026rsquo;t eat that\u0026rdquo; because your brain will see your lifestyle change as limitation. I usually say \u0026ldquo;Thank you, but I don\u0026rsquo;t eat that\u0026rdquo; or \u0026ldquo;I love what you made, but that\u0026rsquo;s not something I eat.\u0026rdquo;\nWhat do I do when I get invited out for dinner? #This is probably the toughest part. What do you do if you get asked to go to dinner or if you\u0026rsquo;re on the road and you need to get food? You don\u0026rsquo;t really notice how much of the American diet is made from carbohydrates until you start avoiding them. They are everywhere.\nVegetarians and vegans have a similar problem here. Try to look at the menu ahead of time and figure out the things that you can eat.\nThere are two approaches here:\nChoose something you want to eat and ask the server if you can substitute anything you don\u0026rsquo;t want to eat. Order something and eat what you like. Leave the rest. I\u0026rsquo;ve had lots of luck by asking for different vegetables to replace potatoes as a side dish. There are some of those times where I can\u0026rsquo;t substitute and I end up leaving food behind. I feel bad about wasting food, but I\u0026rsquo;ll choose that option if it\u0026rsquo;s the only one I have.\nWhen everyone starts to order dessert, get yourself a low carb treat! 
Sometimes I\u0026rsquo;ll ask for a cup of coffee or some scotch as a dessert. Everyone does dessert differently.\nI have more questions! #Great! There is only so much content I can put into this post. Send me a message on Twitter or email me at major at mhtx dot net.\nHere are some great resources that I use:\nBulletproof produces a diet that is similar to keto but it may help people with digestive disorders. It aligns with the FODMAP diet which may help you if you suffer from conditions like IBS and you want to lose weight. Their products are also top notch. Diet Doctor has tons of useful information on various diets that benefit your health and their guidance is easy to understand. Dr. Hyman is an expert on low-carb diets and their health benefits. He has a great podcast and lots of free resources to guide you. Reddit\u0026rsquo;s r/keto has inspiring stories and plenty of guidance for you if you get stuck. Photo credit: Pexels\n","date":"11 June 2020","permalink":"/p/my-experience-with-keto-so-far/","section":"Posts","summary":"Moving to the keto lifestyle is a big change. It\u0026rsquo;s more than just a diet and I\u0026rsquo;ll share my ups and downs from my journey.","title":"My experience with keto so far"},{"content":"","date":null,"permalink":"/tags/language/","section":"Tags","summary":"","title":"Language"},{"content":"Diacritics are all of the small things that are added on, above, or below certain characters in various languages. Some examples include tildes (ñ), accents (á), or other marks (š). These marks are little hints that help you know how to pronounce a word properly (and they sometimes change the definition of a word entirely).\nThey are often skipped by non-native language speakers, and sometimes even by native speakers, but I have done my best to make a habit of including them when I can.\nPronunciation can change drastically with a certain mark. For example, a common Czech name is Tomaš. The š on the end makes a sh sound instead of the normal sss sound.\nIn Spanish, the word for Spain is España. The ñ has an n sound followed by a yah sound. If you leave off the ñ, you end up with a sound like ana on the end instead of an-yah.\nLeaving out diacritics can also lead to terrible results, such as a famous Spanish mistake:\nMi papá tiene cincuenta años (My Dad is fifty years old) Mi papa tiene cincuenta anos (My potato has fifty anuses) This could obviously lead to some confusion. 🤭\nFirst attempts (failures) #At first, I found myself going to online character maps and I would copy/paste the character I wanted. Typing in Spanish quickly became painful with the constant back and forth to copy certain characters.\nI knew there had to be a better way.\nAltGr #After some research, I found that there are some keyboards with a special Alt key on the right side of the space bar called the AltGr key. 
It\u0026rsquo;s a special modifier key that lets you type characters that are not easy with your keyboard layout.\nLuckily, you can tell your computer to pretend like you have an AltGr key to the right of the keyboard and you get access to all of the international characters via key combinations.\nFor Linux, you can run this in any terminal:\nsetxkbmap us -variant altgr-intl I add this command in my ~/.config/i3/config for i3:\nexec_always --no-startup-id \u0026#34;setxkbmap us -variant altgr-intl\u0026#34; Most window managers give you the option to change the keyboard layout for your session in the window manager settings.\nIn GNOME, open Settings, click Region \u0026amp; Language, and click the plus (+) below the list of layouts. Choose English and then choose English (US, alt. intl.) from the list. You can switch from layout to layout in GNOME, but AltGr works well for me as a default.\nTrying it out #Once you have AltGr enabled, here are some quick things to try:\nAltGr + Shift + ~, release keys, press n: ñ AltGr + \u0026lsquo;, release keys, press a: á AltGr + Shift + ., release keys, press s: š AltGr + s: ß Take a look at AltGr on Wikipedia for lots more combinations.\n","date":"13 February 2020","permalink":"/p/make-diacritics-easy-in-linux/","section":"Posts","summary":"Making an effort to use diacritics is always a good idea, but how can you make it easier in Linux?","title":"Make diacritics easy in Linux"},{"content":"","date":null,"permalink":"/tags/brno/","section":"Tags","summary":"","title":"Brno"},{"content":"Come to the Czech Republic and discover the beautiful city of Brno. I just wrapped up my third visit to the city and I can\u0026rsquo;t wait to come back! The city is full of history, culture, and delicious food.\nHere\u0026rsquo;s my travel guide to Brno!\nGetting to Brno #Brno has an airport, but the flights are limited and sometimes expensive. Some coworkers have found good deals on these flights (especially via Ryanair from London\u0026rsquo;s Stansted Airport), but I prefer the train.\nI prefer to fly into Vienna, Austria and catch the Regiojet train from Vienna\u0026rsquo;s main train station (Hauptbanhof, or Hbf.) to Brno\u0026rsquo;s main train station (hlavní nádraží). The train is usually €10 or less and it takes about 90 minutes. You can get snacks or a full meal on the train. They also have beer, wine, and coffee (the coffee is free!). They take credit cards on the train for anything you buy.\nThe only downside of the Regiojet train is that you will need to take a train from Vienna\u0026rsquo;s airport train station (Flughafen) to Vienna\u0026rsquo;s main train station (Wien Hauptbanhof). It\u0026rsquo;s a quick 15 minute trip that costs €5 or less.\nThere is also an ÖBB Railjet train that leaves directly from Vienna\u0026rsquo;s airport train station (Flughafen Hbf.). It\u0026rsquo;s convenient since you don\u0026rsquo;t need to take a train from the airport to the main train station, but it usually costs €20 or more.\nGetting around Brno #Brno has an extensive tram, bus, and train system that runs 24 hours a day. 
You have plenty of options for getting tickets for trams and buses:\nAt most stops, you can get a 24 hour ticket with coins (more on money later) The main train station has passes for multiple days (5 days, 14 days, and longer) Purchase tickets on BrnoID and attach the ticket to a contactless credit or debit card Some buses and trams have a contactless card terminal on them (but don\u0026rsquo;t count on it being there) Send SMS to buy tickets (see signs at stops, requires Czech phone number) Don\u0026rsquo;t get on without a valid ticket! When they check for tickets (and they do!), you could get hit with a fine of €15-€40. That\u0026rsquo;s downright silly when a two week ticket is usually around €11.\nBuying tickets in Brno\u0026rsquo;s main train station is generally easy, but it costs more than using BrnoID.\nAs for tram and bus etiquette, I\u0026rsquo;ve learned a few things:\nTry move away from the doors when you board (sometimes this is difficult when it\u0026rsquo;s crowded) Make room for people in wheelchairs and with baby carriages in the doorway areas Offer up your seat to the elderly or to people who really need to sit down Make your way towards the door early and press the green button on the poles near the door when you want to get off If you ride in the late evening or night time, you may hear na znamení \u0026ndash; if you do, then it means your stop will be skipped if you don\u0026rsquo;t push the button to get off (pay attention!) When you want to board the train, be sure to push the buttons next to the doors on the outside of the tram so they open (no need to do this at the big stops like the main train station or Česka since all of the doors open anyway) The cars in the rear of the trams seem to be the least crowded If you only remember one thing, remember this: trams always have the right of way. As my coworker in Brno says \u0026ldquo;cars stop, trams do not\u0026rdquo;. Do not assume that the trams will stop in front of you, even at stations. Stay out of the way until the tram has fully passed you or fully stopped.\nThe trams stop just before 11PM, so be sure to check the schedules or Google Maps prior to heading out at night time. The night buses run all night long but there are some long gaps between stops late at night. The night buses are quite lively, so please don\u0026rsquo;t get distracted and forget about my na znamení note above. 😀\nSpeaking Czech #I\u0026rsquo;m an American who speaks English and limited Spanish, and Czech is a difficult language for me. There are sounds in Czech that are totally new to you and they will take a lot of practice before you get them right. However, Czech people are really pleased when you make an attempt to speak some Czech, so it\u0026rsquo;s worth knowing a few things:\ndobrý den: good day dobré ráno: good morning ahoj: hello (and goodbye) for a friend, someone you know (sounds like \u0026ldquo;ahoy!\u0026rdquo;) děkuji: thank you prosím: please, can I help you, here you go, casual version of \u0026ldquo;you\u0026rsquo;re welcome\u0026rdquo; nemáš zač: you\u0026rsquo;re welcome (more formal) pivo: beer (follow with prosím) vino: wine (just like Spanish!) If these look difficult, do your best to work on dobrý den and děkuji. In my experience, most people I met would put on a big smile when they heard me try some basic Czech. 
One of the workers at my hotel saw me every morning and after a few days of me saying \u0026ldquo;dobrý den\u0026rdquo; to her before breakfast, she finally said \u0026ldquo;getting better, good job!\u0026rdquo; 🤗\nThe vowels in Czech are almost exactly the same as Spanish (including accented ones). If you see a consonant with a hat, like č, add an h after it. A č sounds like ch in chair. A š sounds like sh in shut.\nIf you see a hat on a vowel, like ě, try to mix in a y sound like in yarn. These are difficult.\nI\u0026rsquo;m told the most difficult letter is the ř. It\u0026rsquo;s a sound we don\u0026rsquo;t have in Latin languages. I seem to get better at it after a few beers.\nA friend in Brno told me that you speak Czech like you care a lot about the first syllable and you don\u0026rsquo;t care about the others. Put most of your emphasis on the early part of the word and avoid doing that in the middle and end.\nFood and drinks #You had better come hungry (and thirsty) because the food here is delicious. Brno has plenty of delicious meats, vegetables, pastries, beer, and wine. Being vegetarian or vegan in Brno isn\u0026rsquo;t exactly easy, but it\u0026rsquo;s entirely possible.\nMy absolute favorite items are bramboraky (thick potato pancakes), trdelník (cake made on a spit), and kolače (round pastry with plum in the middle). Czech people also love cheesecake because you can find it everywhere. Meat dishes like pork knee, rump steak, and beef goulash are top notch.\nAs for drinks, pilsner beer is really popular. You can find beers like Starobrno and Pilsner Urquell everywhere, but I\u0026rsquo;d encourage you to look for some other beers like Chotěboř (it\u0026rsquo;s a tough one to say). There are lots of microbreweries all over town.\nWine in Brno is delicious. Moravia (southern Czech Republic) is full of wineries and they extend into Slovakia and Croatia. If you enjoy red wine, you are in for a treat. Try a local Cabernet or Frankovka. Even the Merlot wine here tastes amazing. As for white wines, my favorite is the Pálava.\nIf you tire of Czech food, Brno has excellent food from around the world, especially Indian, Italian, and Thai food.\nBreakfasts include lots of familiar foods, including eggs, bacon, beans, fruit, pastries, coffee, and tea. Filtered coffee (American-style) is hard to come by, but you can order an americano at most coffee shops even if it\u0026rsquo;s not on the menu. Try a good espresso at least once. You can finish it fast and you get a good jolt in a few minutes.\nSafety #I\u0026rsquo;ve felt safer in Brno than in some cities in Texas. As with any big city, travel with groups when you can and be sure you know where you\u0026rsquo;re going before you go. Czech people try to keep to themselves on streets and public transportation, so if you\u0026rsquo;re minding your own business, you are most likely never going to be bothered.\nMoney #The Czechs use Koruna (\u0026ldquo;crowns\u0026rdquo; in English). Amounts are usually shown in full units (no cents like with US dollars) and there are coins for 50 CZK and lower. The bills start at 100 CZK.\nI recommend using an ATM when you arrive since you\u0026rsquo;ll get the best rate. However, stay away from the ATMs that are very close to the train station. I\u0026rsquo;ve seen fees as high as €10 at ATMs near the station! The city center is a few blocks away with plenty of ATMs with little or no fees.\nAs with most of Europe, cards are widely accepted. Chip cards are required and contactless cards are really helpful. 
Most payment terminals will allow you to pay with a quick tap of your contactless card and it\u0026rsquo;s quite handy. American cards still require a signature and you may find that Czech people are stunned when their payment terminal demands that they collect a signature from you.\nBefore you travel, be sure to let your credit and debit card companies know about your travel so that you won\u0026rsquo;t trigger fraud alerts.\nEnjoy #I\u0026rsquo;ve probably missed a lot of things in this post, but these are the things that come to mind right now. Thanks for reading the post and I hope you get to enjoy a trip to Brno soon!\n","date":"30 January 2020","permalink":"/p/my-travel-guide-to-brno/","section":"Posts","summary":"Brno is a beautiful city in the Czech Republic. Learn some travel tips from my experiences as an American in Brno!","title":"My Travel Guide to Brno"},{"content":"","date":null,"permalink":"/tags/redhat/","section":"Tags","summary":"","title":"Redhat"},{"content":"","date":null,"permalink":"/tags/travel/","section":"Tags","summary":"","title":"Travel"},{"content":"I wrote about installing Linux on the Lenovo ThinkPad T490 last month and one of the biggest challenges was getting graphics working properly. The T490 comes with an option where you can get a discrete Nvidia MX250 GPU and it packs plenty of power in a small footprint.\nIt also brings along a few issues.\nAwful battery life #There are many times where it would be helpful to fully disable the Nvidia card to extend battery life when graphics processing is not needed. The MX250 is a Pascal family GPU and those GPUs require signed drivers, so nouveau will not work.\nThere is a handy kernel feature called VGA Switcheroo (yes, that is the name). It gives you a quick method for turning the GPU on and off. Unfortunately, that does require the nouveau module to work with the card.\nThe Nvidia drivers attempt to take the card into a low power mode called P8, but it\u0026rsquo;s not low enough. Removing the nvidia module causes the card to run with full power and that makes things even worse.\nDarn. It\u0026rsquo;s time to fix some other problems. 😟\nSuspend and resume #There are issues with suspend and resume with the Nvidia drivers after Linux 4.8. If you close the lid on the laptop, the laptop suspends properly and you can see the pulsating LED light on the lid.\nOpen the lid after a few seconds and you will see a black screen (possibly with a kernel trace) that looks like this:\n[ 51.435212] ACPI: Waking up from system sleep state S3 [ 51.517986] ACPI: EC: interrupt unblocked [ 51.567244] nvidia 0000:2d:00.0: Refused to change power state, currently in D3 The laptop will lock up and the fans will spin up shortly after. The only remedy is a hard power off.\nThis is related to an Nvidia driver bug that surfaced after Linux 4.8 added per-port PCIe power management. That feature allows the kernel to handle PCIe power management for each port individually. It helps certain PCIe devices (or portions of those devices) to go into various power saving modes independently.\nYou can work around this issue by adding pcie_port_pm=off to your kernel command line. I added it and my suspend/resume worked well after a reboot.\nThis leads to another problem:\nEven worse battery life #Getting suspend and resume back was a nice improvement, but I noticed that my battery life dropped significantly. I went from 6 hours (which was not great) down to 3-4 hours. 
That\u0026rsquo;s terrible.\nI booted my laptop into i3wm and ran powertop in a terminal. The idle power usage bounced between 10-12 watts with a single terminal open and i3status updating my status line.\nSo I was left with a choice:\nLeave the Nvidia card enabled with pcie_port_pm=off set, enjoy my suspend/resume, and suffer through terrible battery life 😫\nRemove pcie_port_pm=off, save battery life, and deal with hard lockups if I attempt to suspend 😭\nBoth options were terrible.\nI knew there was only one good choice: find a way to disable the Nvidia card by default and only enable it when I need it.\nDigging deep #If you can\u0026rsquo;t control your hardware well enough in the OS, and you can control it in the BIOS, the only option remaining is to examine your ACPI tables. This requires dumping the DSDT and SSDT tables from the laptop. These tables provide a map of instructions for taking all kinds of actions with the hardware on the laptop, including turning devices on and off.\n🔥 DISCLAIMER: Tinkering with DSDT and SSDT files can damage your machine if you are not familiar with the process. All changes in these files must be made with extreme care and you should try the smallest possible change first to reduce the risks.\nWe need some tools to dump the ACPI tables and decompile them into a DSL that we can read as humans:\ndnf install acpica-tools Make a directory to hold the files and dump the ACPI tables:\nmkdir ~/dsdt cd ~/dsdt sudo acpidump -b You should have plenty of files ending in .dat in the directory. These are the compiled ACPI tables and they are difficult to read unless you love hex. You can decompile them with iasl and move the compiled files out of the way:\niasl -d *.dat mkdir raw mv *.dat raw/ You can find the decompiled files in my T490 DSDT repository on GitLab.\nWe need to find some details on the discrete GPU. Running a grep on the .dsl files in the directory shows some mentions in the ssdt10.dsl:\n{ Local0 [One] = 0x03 TGPU = \\_SB.PCI0.LPCB.EC.HKEY.GPTL /* External reference */ Local0 [0x08] = TGPU /* \\_SB_.PCI0.RP09.PEGP.TGPU */ Return (Local0) } So the GPU is represented in the ACPI tables as SB_.PCI0.RP09.PEGP. Let\u0026rsquo;s grep for that:\n$ grep -l SB_.PCI0.RP09.PEGP *.dsl dsdt.dsl ssdt10.dsl ssdt11.dsl ssdt14.dsl So the card appears in ssdt11.dsl. Examine that file and you will find:\nMethod (_ON, 0, Serialized) // _ON_: Power On { D8XH (Zero, 0x11) If ((TDGC == One)) { If ((DGCX == 0x03)) { _STA = One \\_SB.PCI0.RP09.PEGP.GC6O () } ElseIf ((DGCX == 0x04)) { _STA = One \\_SB.PCI0.RP09.PEGP.GC6O () } TDGC = Zero DGCX = Zero } ElseIf ((OSYS != 0x07D9)) { PCMR = 0x07 PWRS = Zero Sleep (0x10) \\_SB.PCI0.HGON () // \u0026lt;---- This is where it turns on! _STA = One } D8XH (Zero, 0x12) } When the _ON method is called, it calls \\_SB.PCI0.HGON () and that turns on the card. There\u0026rsquo;s another method called \\_SB.PCI0.HGOF () that turns off the card.\nLet\u0026rsquo;s try changing any instances of HGON to HGOF. It\u0026rsquo;s dirty, but it just might work. There are two calls to HGON in ssdt11.dsl and I changed both to HGOF. This should cause the card to be turned off when the system boots (and the _INI methods are called).\nWe need to make one more change so that the kernel will know our patched SSDT file is newer than the one in the BIOS. 
Look for this line at the top of ssdt11.dsl:\nDefinitionBlock (\u0026#34;\u0026#34;, \u0026#34;SSDT\u0026#34;, 2, \u0026#34;LENOVO\u0026#34;, \u0026#34;SgRpSsdt\u0026#34;, 0x00001000) Change the number at the very end so that it is incremented by one:\nDefinitionBlock (\u0026#34;\u0026#34;, \u0026#34;SSDT\u0026#34;, 2, \u0026#34;LENOVO\u0026#34;, \u0026#34;SgRpSsdt\u0026#34;, 0x00001001) Now we need to compile the SSDT\niasl -tc ssdt11.dsl The easiest method for loading the SSDT table is to patch it during the initrd step. We need to pack the file into a cpio archive:\nmkdir -p /tmp/fix-nvidia/kernel/firmware/acpi cd /tmp/fix-nvidia cp ~/dsdt/ssdt11.aml kernel/firmware/acpi find kernel | cpio -H newc --create \u0026gt; acpi_override sudo cp acpi_override /boot/ Now we can carefully edit the bootloader options by adding initrd /acpi_override to our current kernel entry. These are found in /boot/loader/entries and are named based on the kernel they load. In my case, the bootloader config for 5.4.12 is in /boot/loader/entries/d95743f260b941dcb518e3fcd3a02fa9-5.4.12-200.fc31.x86_64.conf.\nThe file should look like this afterwards:\ntitle Fedora (5.4.12-200.fc31.x86_64) 31 (Thirty One) version 5.4.12-200.fc31.x86_64 linux /vmlinuz-5.4.12-200.fc31.x86_64 initrd /acpi_override initrd /initramfs-5.4.12-200.fc31.x86_64.img options $kernelopts grub_users $grub_users grub_arg --unrestricted grub_class kernel The initrd /acpi_override line is the one I added.\nReboot your laptop. After the boot, look for the SSDT lines in dmesg:\n$ dmesg | egrep -i \u0026#34;ssdt|dsdt\u0026#34; [ 0.018597] ACPI: SSDT ACPI table found in initrd [kernel/firmware/acpi/ssdt11.aml][0xe28] [ 0.018813] ACPI: Table Upgrade: override [SSDT-LENOVO-SgRpSsdt] [ 0.018816] ACPI: SSDT 0x000000008780E000 Physical table override, new table: 0x0000000086781000 Now look for Nvidia:\n$ nvidia-smi NVIDIA-SMI has failed because it couldn\u0026#39;t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. Success! My laptop is now hovering around 4.5-5.5 watts. That\u0026rsquo;s half of what it was before! 🎊 🎉 🥳\nBut sometimes I want my dGPU #Okay, there are some times where the discrete GPU is nice. Let\u0026rsquo;s edit the SSDT table once more to add an option to enable it at boot time with a kernel command line option.\nHere are the changes needed for ssdt11.dsl:\ndiff --git a/ssdt11.dsl b/ssdt11.dsl index fd9042f05376aa80e3b94c1d6313e69cbb495c34..f75b43f57655553c5ced7a2595ad2b48f26b2c10 100644 --- a/ssdt11.dsl +++ b/ssdt11.dsl @@ -337,7 +337,17 @@ DefinitionBlock (\u0026#34;\u0026#34;, \u0026#34;SSDT\u0026#34;, 2, \u0026#34;LENOVO\u0026#34;, \u0026#34;SgRpSsdt\u0026#34;, 0x00001000) PCMR = 0x07 PWRS = Zero Sleep (0x10) - \\_SB.PCI0.HGON () + + // Set this ACPI OSI flag to enable the dGPU. + If (\\_OSI (\u0026#34;T490-Hybrid-Graphics\u0026#34;)) + { + \\_SB.PCI0.HGON () + } + Else + { + \\_SB.PCI0.HGOF () + } + _STA = One } @@ -449,7 +459,15 @@ DefinitionBlock (\u0026#34;\u0026#34;, \u0026#34;SSDT\u0026#34;, 2, \u0026#34;LENOVO\u0026#34;, \u0026#34;SgRpSsdt\u0026#34;, 0x00001000) Method (_ON, 0, Serialized) // _ON_: Power On { - \\_SB.PCI0.HGON () + // Set this ACPI OSI flag to enable the dGPU. + If (\\_OSI (\u0026#34;T490-Hybrid-Graphics\u0026#34;)) + { + \\_SB.PCI0.HGON () + } + Else + { + \\_SB.PCI0.HGOF () + } Return (Zero) } Follow the same steps as before to compile the SSDT, pack it into a cpio archive, and copy it to /boot/acpi_override. 
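As a quick recap, the rebuild-and-reload cycle can be repeated as one short shell session. This is just a condensed restating of the commands already shown above (run iasl from your ~/dsdt directory and adjust the ssdt11 names or paths if yours differ):\niasl -tc ssdt11.dsl mkdir -p /tmp/fix-nvidia/kernel/firmware/acpi cp ~/dsdt/ssdt11.aml /tmp/fix-nvidia/kernel/firmware/acpi/ cd /tmp/fix-nvidia find kernel | cpio -H newc --create \u0026gt; acpi_override sudo cp acpi_override /boot/ The initrd /acpi_override line you added to the bootloader entry earlier will pick up the refreshed archive on the next boot.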
Now you can add acpi_osi='T490-Hybrid-Graphics' to your kernel command line whenever you want to use your Nvidia card. You won\u0026rsquo;t need to mess with SSDT tables again to make it work.\nI hope this guide was helpful! Keep in mind that future BIOS updates may change your ACPI tables and this fix may stop working. You may need to look around for the changes and adjust your changes to match.\n","date":"24 January 2020","permalink":"/p/disable-nvidia-gpu-thinkpad-t490/","section":"Posts","summary":"The Lenovo ThinkPad T490 is a great laptop, but it comes with some discrete GPU challenges.","title":"Disable Nvidia GPU on the Thinkpad T490"},{"content":"","date":null,"permalink":"/tags/nvidia/","section":"Tags","summary":"","title":"Nvidia"},{"content":"Way back in 2012 when Fedora releases had names, there was one release that many of us in the Fedora community will never forget. Fedora 17\u0026rsquo;s code name was \u0026ldquo;Beefy Miracle\u0026rdquo; and it caused plenty of giggles and lots of consternation (especially in vegetarian and vegan circles).\nNo matter how you feel about the code name, the mascot was really good:\nMajor and the beefy miracle in 2012 The mustard #I was told several times that \u0026ldquo;the mustard indicates progress.\u0026rdquo; That didn\u0026rsquo;t make a lot of sense to me until I saw the Plymouth boot splash. During the boot-up, the mustard moves from bottom to top to indicate how much of the boot process has completed.\nYou can try out the hot dog boot splash yourself with a few quick commands on Fedora.\nFirst off, install the hot-dog plymouth theme:\nsudo dnf install plymouth-theme-hot-dog Set the theme as the default and rebuild the initrd to ensure that the boot screen is updated after you reboot:\nsudo plymouth-set-default-theme --rebuild-initrd hot-dog This step takes a few moments to finish since it causes dracut to rebuild the entire initrd with the new plymouth theme. Once it finishes, reboot your computer and you should get something like this:\nHot dog boot splash ","date":"16 December 2019","permalink":"/p/bring-back-fedora-beefy-miracle-boot-splash/","section":"Posts","summary":"Fedora 17\u0026rsquo;s code name was Beefy Miracle and it had a great mascot. You can see it at boot time with a few quick changes.","title":"Bring Back Fedora's Beefy Miracle boot splash"},{"content":" 🔨 WORK IN PROGRESS! I\u0026rsquo;m still finding some additional issues and I\u0026rsquo;ll write those up here as soon as I find some solutions.\nWith my 4th Gen X1 Carbon beginning to age (especially the battery), it was time for an upgrade. I now have a T490 with a 10th gen Intel CPU and a discrete NVIDIA MX250 GPU. This laptop spec was just released on Black Friday!\nAs with any new technology, there are bound to be some quirks in Linux that require some workarounds. This laptop is no exception!\nThis post will grow over time as I find more workarounds and fixes for this laptop.\nInstalling Fedora #Start by downloading whichever installation method of Fedora you prefer. Since this laptop is fairly new, I went with the network installation (included in the Server ISOs) and chose to apply updates during installation.\nOn the first boot, wait for the LUKS screen to appear and ask for your password to decrypt the drive. Hit CTRL-ALT-DEL at the password prompt and wait for the grub screen to appear on reboot.\nWhy are we issuing the three finger salute? If you allow the laptop to fully boot, it will hang when it starts gdm. 
There are some nouveau issues in the system journal that provide hints but I haven\u0026rsquo;t made sense of them yet. By preventing the system from fully booting, the grub success flag won\u0026rsquo;t be set and you will see the grub menu at the next boot that is normally hidden from you.\nPress e on the first line of the grub menu. Find the longest line (it usually has rhgb quiet) and add this text to the end:\nrd.driver.blacklist=nouveau Press CTRL-X to boot the system. Enter your LUKS password when asked and you should boot straight into gdm!\nYou have two options here:\nBlacklist nouveau until bugs are fixed. (Not recommended) This will force your laptop to use the integrated Intel GPU on the CPU, but it may or may not shut off the NVIDIA GPU. This could cause a significant battery drain.\nInstall NVIDIA\u0026rsquo;s proprietary drivers. (Recommended) You will have much better control over the power state of the NVIDIA GPU and the installation process will automatically blacklist nouveau for you.\nI\u0026rsquo;m going to install NVIDIA\u0026rsquo;s proprietary drivers that have power management and optimus support built in already. All of these steps here come from RPM Fusion\u0026rsquo;s excellent NVIDIA documentation.\nStart by installing RPMFusion\u0026rsquo;s repository configuration:\nsudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm Next, install the proprietary drivers.\nsudo dnf install akmod-nvidia When the packages are installed, run akmods to build the nvidia kernel module (this took about a minute on my laptop):\n# sudo akmods Checking kmods exist for 5.3.15-300.fc31.x86_64 [ OK ] Reboot your laptop. If all goes well, your laptop should boot right up without interaction (except for entering your LUKS password).\nNVIDIA power management #You can enable some power management features for the NVIDIA GPU by following the (somewhat lengthy) documentation about PCI-Express Runtime D3 (RTD3) Power Management. I\u0026rsquo;ve enabled the most aggressive setting by adding the following to /etc/modprobe.d/nvidia.conf:\noptions nvidia \u0026#34;NVreg_DynamicPowerManagement=0x02\u0026#34; Reboot your laptop for the change to take effect.\nBIOS Updates #My laptop was shipped to me with the 1.04 BIOS, but 1.06 is the latest (as of this writing). Follow these steps to update:\nOpen the Software application Go to the Updates tab Look for a firmware update (usually at the end of the list) Click update and wait for a notification Reboot The firmware capsule is found on the first reboot and then the laptop reboots to install the new firmware. You\u0026rsquo;ll see some screens about backing up the BIOS and some self-health related things. They take a while to complete, but my laptop came right up on 1.06 without any problems!\n","date":"12 December 2019","permalink":"/p/thinkpad-t490-fedora-install-tips/","section":"Posts","summary":"My new T490 with a 10th generation Intel CPU and a discrete NVIDIA MX250 has arrived! Installing Linux creates some interesting challenges.","title":"Thinkpad T490 Fedora install tips"},{"content":"Moving applications into an entirely containerized deployment, such as OpenShift or Kubernetes, requires care and attention. One aspect of both that is often overlooked is scheduled jobs, or cron jobs. 
⏰\nCron jobs in OpenShift allow you to run certain containers on a regular basis and execute certain applications or scripts in those containers. You can use them to trigger GitLab CI pipelines, run certain housekeeping tasks in web applications, or run backups.\nThis post will cover a quick example of a cron job and how to monitor it.\nNote: Almost all of these commands will work in a Kubernetes deployment by changing oc to kubectl, but your mileage may vary based on your Kubernetes version. All of these commands were tested on OpenShift 3.11.\nAdd a job #Here is a really simple cron job that gets the current date:\n# cronjob.yml apiVersion: batch/v1beta1 kind: CronJob metadata: name: get-date spec: schedule: \u0026#34;*/1 * * * *\u0026#34; jobTemplate: spec: template: spec: containers: - name: get-date image: docker.io/library/fedora:31 command: - date The job definition says:\nStart a Fedora 31 container every minute Run date in the container Kill the container Load this into OpenShift with: oc apply -f cronjob.yml\nIf you want to make more complex jobs, review the OpenShift documentation on cron job objects. The cron job API documentation has much more detail.\nBad things happen to good cron jobs #Cron jobs come with certain limitations and these are explained in the Kuberntes documentation on cron jobs. If a cron job is missed for a certain period of time, the scheduler will think something has gone horribly wrong and it won\u0026rsquo;t schedule new jobs.\nThese situations include:\nthe container takes too long to start (check .spec.startingDeadlineSeconds)\none run of the job takes a very long time and another job can\u0026rsquo;t start (usually when concurrencyPolicy is set to Forbid)\nIf 100 of the jobs are missed, the scheduler will not start any new jobs. This could be a disaster for your application and it\u0026rsquo;s a good place to add monitoring.\nMonitor missed cron jobs with bash #Luckily, OpenShift makes an API available for checking on these situations where cron jobs are missed. The API sits under the following URI: /apis/batch/v1beta1/namespaces/$NAMESPACE/cronjobs/$JOBNAME\nFor our get-date example above, this would be: /apis/batch/v1beta1/namespaces/$NAMESPACE/cronjobs/get-date\nWe can monitor this job using two handy tools: curl and jq.\n#!/bin/bash # Get unix time stamp of a last job run. LAST_RUN_DATE=$( curl -s -H \u0026#34;Authorization: Bearer $YOUR_BEARER_TOKEN\u0026#34; \\ https://openshift.example.com/apis/batch/v1beta1/namespaces/$NAMESPACE/cronjobs/get-date | \\ jq \u0026#34;.status.lastScheduleTime | strptime(\\\u0026#34;%Y-%m-%dT%H:%M:%SZ\\\u0026#34;) | mktime\u0026#34; ) # Get current unix time stamp CURRENT_DATE=$(date +%s) # How many minutes since the last run? MINUTES_SINCE_LAST_RUN=$((($CURRENT_DATE - $LAST_RUN_DATE) / 60)) DETAIL=\u0026#34;(last run $MINUTES_SINCE_LAST_RUN minute(s) ago)\u0026#34; if [[ $MINUTES_SINCE_LAST_RUN -ge 2 ]]; then echo -n \u0026#34;FAIL ${DETAIL}\u0026#34; exit 1 else echo -n \u0026#34;OK ${DETAIL}\u0026#34; exit 0 fi Note: Getting tokens for the curl request is covered in OpenShift\u0026rsquo;s Authentication documentation.\nIf the cron job is running normally, the script output should be:\n$ ./check-cron-job.sh OK (last run 0 minute(s) ago) $ echo $? 0 And when things go wrong:\n$ ./check-cron-job.sh FAIL (last run 22 minute(s) ago) $ echo $? 
1 ","date":"18 November 2019","permalink":"/p/monitoring-openshift-cron-jobs/","section":"Posts","summary":"Openshift (and Kubernetes) allow you to run jobs on schedule, but these jobs can fail from time to time. You can monitor them from bash!","title":"Monitoring OpenShift cron jobs"},{"content":"","date":null,"permalink":"/tags/openshift/","section":"Tags","summary":"","title":"Openshift"},{"content":"","date":null,"permalink":"/tags/shell/","section":"Tags","summary":"","title":"Shell"},{"content":"I have a CyberPower CP1350AVRLCD under my desk at home and I use it to run my computer, monitors, speakers, and a lamp. My new computer is a little more power hungry than my old one since I just moved to to a Ryzen 3700x and Nvidia GeForce 2060 and I like to keep tabs on how much energy it is consuming.\nSome power supplies offer a monitoring interface where you can watch your power consumption in real time, but I\u0026rsquo;m not willing to spend that much money. Most CyberPower UPS units offer some pretty decent power monitoring features right out of the box, and fortunately for us, they work quite well in Linux.\nIn this post, we will set up the Linux communication with the UPS and make it easy to monitor via scripts. Also, we will add it to an existing polybar configuration so we can monitor it right from the desktop environment.\nInstalling powerpanel #CyberPower offers software called PowerPanel that runs on most Linux distributions. It has a daemon (pwrstatd) and a client (pwrstat) that allows you to monitor the UPS and take actions automatically when the power is disrupted.\nDownload the PowerPanel RPM and install it:\nsudo dnf install ~/Downloads/powerpanel-132-0x86_64.rpm As I noted in my post called Troubleshooting CyberPower PowerPanel issues in Linux, we need to tell pwrstatd where it should communicate with the UPS. If you skip this step, the daemon hangs without much explanation of what is happening.\nOpen /etc/pwrstatd.conf with your favorite text editor and change the allowed_device_nodes line to point to the right USB device:\n# For example: restrict to use libusb device. # allowed-device-nodes = libusb allowed-device-nodes = /dev/usb/hiddev0 Unfortunately, CyberPower doesn\u0026rsquo;t ship a systemd unit file for pwrstatd. Write this unit file to /etc/systemd/system/pwrstatd.service:\n[Unit] Description=pwrstatd [Service] Group=wheel UMask=0002 ExecStart=/usr/sbin/pwrstatd [Install] WantedBy=multi-user.target The wheel group should be fine here if your user is already in that group and uses sudo. You can also change that to a different group, like power, and then add your user to the power group.\nNow we can reload systemd, start pwrstatd, and ensure it comes up at boot time:\nsystemctl daemon-reload systemctl enable --now pwrstatd Testing the client #The pwrstat client is installed in /usr/sbin by default, but since this is my home computer and I trust what happens there, I want to be able to run this command as my regular user. Move the client to /usr/bin instead:\nmv /usr/sbin/pwrstat /usr/bin/pwrstat Let\u0026rsquo;s try getting a current status:\n$ pwrstat -status The UPS information shows as following: Properties: Model Name................... CP 1350C Firmware Number.............. BFE5107.B23 Rating Voltage............... 120 V Rating Power................. 810 Watt Current UPS status: State........................ Normal Power Supply by.............. Utility Power Utility Voltage.............. 124 V Output Voltage............... 124 V Battery Capacity............. 
100 % Remaining Runtime............ 38 min. Load......................... 137 Watt(17 %) Line Interaction............. None Test Result.................. Unknown Last Power Event............. None Just the wattage, please #Awesome! Let\u0026rsquo;s make a really short script that will dump just the wattage for us:\n#!/bin/bash pwrstat -status | grep -oP \u0026#34;Load\\.* \\K([0-9]+)(?= Watt)\u0026#34; Now we can test the script:\n$ ~/bin/ups_wattage.sh 137 My computer (and accessories) are using 137 watts.\nAdding it to polybar #I use polybar as my status bar, and it\u0026rsquo;s easy to add a custom command to the bar. Here\u0026rsquo;s my configuration section for my ups_wattage.sh script:\n[module/wattage] type = custom/script exec = ~/bin/ups_wattage.sh label = \u0026#34; %output%W\u0026#34; interval = 15 format-padding = 1 Add that to your bar (mine is on the right side):\n[bar/primary] ---SNIP--- modules-right = weather cpu memory gpu filesystem wattage uptime ---SNIP--- There\u0026rsquo;s live power monitoring right there in my polybar!\n","date":"8 November 2019","permalink":"/p/monitor-cyberpower-ups-wattage/","section":"Posts","summary":"Monitor the power consumption of your CyberPower UPS and display the live output in your Linux desktop\u0026rsquo;s status bar.","title":"Monitor CyberPower UPS wattage"},{"content":"","date":null,"permalink":"/tags/chromium/","section":"Tags","summary":"","title":"Chromium"},{"content":"UPDATE: The chromium-vaapi package is now chromium-freeworld. This post was updated on 2019-11-06 to include the change. See the end of the post for the update steps.\nIf you use a web browser to watch videos on a laptop, you\u0026rsquo;ve probably noticed that some videos play without much impact on the battery. Other videos cause the fans to spin wildly and your battery life plummets.\nIntel designed a specification called VA API, often called VAAPI (without the space), and it offers up device drivers to applications running on your system. It provides a pathway for those applications to access certain parts of the graphics processing hardware directly. This increases performance, lowers CPU usage, and increases battery life.\nIn this post, you will learn how to get VAAPI working on your Fedora 30 system and how to use it along with a Chromium build that has VAAPI patches already included. There are some DRM-related workarounds as well toward the end.\nNote: Keep in mind that some videos are in formats that are difficult to accelerate with a GPU and some applications support acceleration with some formats but not others. You may find that your favorite site still uses the same amount of CPU as it did before you completed this guide. 😢\nGetting started with VAAPI #You will need a few packages before you get started, and some of these depend on the type of GPU that is present in your system. 
In my case, I\u0026rsquo;m on a 4th generation Lenovo X1 Carbon, and it has an Skylake GPU:\n$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07) Fedora 30 has quite a few VAAPI packages available:\n$ sudo dnf list all | grep libva | awk \u0026#39;{print $1}\u0026#39; libva.x86_64 libva-intel-driver.x86_64 libva-intel-hybrid-driver.x86_64 libva-utils.x86_64 libva-vdpau-driver.x86_64 libva.i686 libva-devel.i686 libva-devel.x86_64 libva-intel-driver.i686 libva-intel-hybrid-driver.i686 libva-vdpau-driver.i686 My Intel GPU requires these packages:\n$ sudo dnf install libva libva-intel-driver \\ libva-vdpau-driver \\ libva-utils At this point, you should be able to run vainfo to ensure that everything is working:\n$ vainfo libva info: VA-API version 1.4.1 libva info: va_getDriverName() returns 0 libva info: Trying to open /usr/lib64/dri/i965_drv_video.so libva info: Found init function __vaDriverInit_1_4 libva info: va_openDriver() returns 0 vainfo: VA-API version: 1.4 (libva 2.4.1) vainfo: Driver version: Intel i965 driver for Intel(R) Skylake - 2.3.0 vainfo: Supported profile and entrypoints VAProfileMPEG2Simple :\tVAEntrypointVLD VAProfileMPEG2Simple :\tVAEntrypointEncSlice VAProfileMPEG2Main :\tVAEntrypointVLD VAProfileMPEG2Main :\tVAEntrypointEncSlice VAProfileH264ConstrainedBaseline:\tVAEntrypointVLD VAProfileH264ConstrainedBaseline:\tVAEntrypointEncSlice VAProfileH264ConstrainedBaseline:\tVAEntrypointEncSliceLP VAProfileH264ConstrainedBaseline:\tVAEntrypointFEI VAProfileH264ConstrainedBaseline:\tVAEntrypointStats VAProfileH264Main :\tVAEntrypointVLD VAProfileH264Main :\tVAEntrypointEncSlice VAProfileH264Main :\tVAEntrypointEncSliceLP VAProfileH264Main :\tVAEntrypointFEI VAProfileH264Main :\tVAEntrypointStats VAProfileH264High :\tVAEntrypointVLD VAProfileH264High :\tVAEntrypointEncSlice VAProfileH264High :\tVAEntrypointEncSliceLP VAProfileH264High :\tVAEntrypointFEI VAProfileH264High :\tVAEntrypointStats VAProfileH264MultiviewHigh :\tVAEntrypointVLD VAProfileH264MultiviewHigh :\tVAEntrypointEncSlice VAProfileH264StereoHigh :\tVAEntrypointVLD VAProfileH264StereoHigh :\tVAEntrypointEncSlice VAProfileVC1Simple :\tVAEntrypointVLD VAProfileVC1Main :\tVAEntrypointVLD VAProfileVC1Advanced :\tVAEntrypointVLD VAProfileNone :\tVAEntrypointVideoProc VAProfileJPEGBaseline :\tVAEntrypointVLD VAProfileJPEGBaseline :\tVAEntrypointEncPicture VAProfileVP8Version0_3 :\tVAEntrypointVLD VAProfileVP8Version0_3 :\tVAEntrypointEncSlice VAProfileHEVCMain :\tVAEntrypointVLD VAProfileHEVCMain :\tVAEntrypointEncSlice VAProfileVP9Profile0 :\tVAEntrypointVLD If you run into a problem like this one, try installing the libva-intel-hybrid-driver:\n$ vainfo libva info: VA-API version 1.4.1 libva info: va_getDriverName() returns 0 libva info: Trying to open /usr/lib64/dri/i965_drv_video.so libva info: va_openDriver() returns -1 vaInitialize failed with error code -1 (unknown libva error),exit Installing Chromium with VAAPI support #Now that we have a pathway for applications to talk to our GPU, we can install Chromium with VAAPI support:\n$ sudo dnf -y install chromium-freeworld Run chromium-freeworld to ensure Chromium starts properly. Visit chrome://flags in the Chromium browser and search for ignore-gpu-blacklist. Choose Enabled in the dropdown and press Relaunch Now in the bottom right corner.\nAfter the relaunch, check some common video sites, like YouTube or DailyMotion. 
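If you want to see the difference for yourself, one simple sanity check (a generic approach, not specific to VAAPI) is to leave top running in another terminal, sorted by CPU usage, and compare how much CPU the chromium-freeworld processes use while the same video plays with and without acceleration enabled:\ntop -o %CPU\nWith hardware decoding in use, the browser processes should generally rack up noticeably less CPU time during playback.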
The CPU usage may be a bit lower on these, but you can lower it further by installing the h264ify extension. It forces some sites to provide h264 video rather than other CPU hungry formats.\nDealing with DRM #The only remaining problem is DRM. Some sites, like Netflix or YouTube TV, require that the browser can handle DRM content. The Widevine DRM module is required for some of these sites, but it is automatically bundled with Chrome. The regular Chrome (not Chromium) package contains the module at: /opt/google/chrome/libwidevinecdm.so.\nFirst, ensure Chromium is not running. Then copy that module over to Chromium\u0026rsquo;s directory:\nsudo cp /opt/google/chrome/libwidevinecdm.so /usr/lib64/chromium-freeworld/ Start chromium-freeworld one more time and try out some DRM-protected sites like Netflix and they should be working properly.\nAs I mentioned at the start of the guide, some applications support acceleration with certain video formats and not others, so your results may vary.\nNew package: chromium-freeworld #When this post was first written, the chromium package was called chromium-vaapi. It is now chromium-freeworld. The upgrade is seamless since the new package obsoletes the old one, but you need one extra step to bring over the DRM module to the new chromium library directory:\nsudo cp /usr/lib64/chromium-vaapi/libwidevinecdm.so /usr/lib64/chromium-freeworld Restart chromium-freeworld and you\u0026rsquo;re good to go again.\n","date":"20 October 2019","permalink":"/p/install-chromium-with-vaapi-on-fedora-30/","section":"Posts","summary":"Lower your CPU usage and increase battery life when you watch certain videos by using Chromium with VAAPI support.","title":"Install Chromium with VAAPI on Fedora 30"},{"content":"i3 has been my window manager of choice for a while and I really enjoy its simplicity and ease of use. I use plenty of gtk applications, such as Firefox and Evolution, and configuring them within i3 can be confusing.\nThis post covers a few methods to change configurations for GNOME and gtk applications from i3.\nlxappearance #Almost all of the gtk theming settings are available in lxappearance. You can change fonts, mouse cursors, icons, and colors. The application makes the changes easy to preview and you can install more icon sets if you wish.\nFedora already has lxappearance packaged and ready to go:\n$ sudo dnf install lxappearance $ lxappearance Although style changes are immediately applied in lxappearance, you need to restart all gtk applications to see the style changes there.\nlxappearance writes GTK 2.0 and GTK 3.0 configuration files:\nGTK 2.0: ~/.gtkrc-2.0 GTK 3.0: ~/.config/gtk-3.0/settings.ini gnome-control-center #Recent versions of GNOME bundle all of the system settings into a single application called gnome-control-center. This normally starts right up in GNOME, but i3 is a little trickier since it doesn\u0026rsquo;t have some of the same environment variables set:\n$ gnome-control-center ** ERROR:../shell/cc-shell-model.c:458:cc_shell_model_set_panel_visibility: assertion failed: (valid) [1] 837 abort (core dumped) gnome-control-center The problem is a missing environment variable: XDG_CURRENT_DESKTOP. We can set that on the command line and everything works:\nenv XDG_CURRENT_DESKTOP=GNOME gnome-control-center gnome-tweaks #The gnome-tweaks application has been around for a long time and it works well from i3. 
Install it in Fedora and run it:\n$ sudo dnf install gnome-tweaks $ gnome-tweaks Although many of the configurations inside gnome-tweaks match up with lxappearance, gnome-tweaks offers an added benefit: it changes the configuration inside GNOME\u0026rsquo;s key-based configuration system (dconf). This is required for some applications, such as Firefox.\nYou can also open up dconf-editor and make these changes manually in /org/gnome/desktop/interface, but gnome-tweaks has a much more user-friendly interface.\n","date":"22 September 2019","permalink":"/p/customize-gnome-from-i3/","section":"Posts","summary":"All of your GNOME and gtk applications are configured in i3 with a few simple tricks.","title":"Customize GNOME from i3"},{"content":"Monit is a tried-and-true method for monitoring all kinds of systems, services, and network endpoints. Deploying monit is easy. There\u0026rsquo;s only one binary daemon to run and it reads monitoring configuration from files in a directory you specify.\nMost Linux distributions have a package for monit and the package usually contains some basic configuration along with a systemd unit file to run the daemon reliably.\nHowever, this post is all about how to deploy it inside OpenShift. Deploying monit inside OpenShift allows you to monitor services inside OpenShift that might not have a route or a NodePort configured, but you can monitor systems outside OpenShift, too.\nMonit in a container #Before we can put monit into a container, we need to think about what it requires. At the most basic level, we will need:\nthe monit daemon binary a very basic config, the .monitrc a directory to hold lots of additional monitoring configs any packages needed for running monitoring scripts In my case, some of the scripts I want to run require curl, httpie (for complex HTTP/JSON requests), and jq (for parsing json). I\u0026rsquo;ve added those, along with some requirements for the monit binary, to my container build file:\nFROM fedora:latest # Upgrade packages and install monit. RUN dnf -y upgrade RUN dnf -y install coreutils httpie jq libnsl libxcrypt-compat RUN dnf clean all # Install monit. RUN curl -Lso /tmp/monit.tgz https://bitbucket.org/tildeslash/monit/downloads/monit-5.26.0-linux-x64.tar.gz RUN cd /tmp \u0026amp;\u0026amp; tar xf monit.tgz RUN mv /tmp/monit-*/bin/monit /usr/local/bin/monit RUN rm -rf /tmp/monit* # Remove monit user/group. RUN sed -i \u0026#39;/^monit/d\u0026#39; /etc/passwd RUN sed -i \u0026#39;/^monit/d\u0026#39; /etc/group # Work around OpenShift\u0026#39;s arbitrary UID/GIDs. RUN chmod g=u /etc/passwd /etc/group # The monit server listens on 2812. EXPOSE 2812 # Set up a volume for /config. VOLUME [\u0026#34;/config\u0026#34;] # Start monit when the container starts. ENV HOME=/tmp COPY extras/start.sh /opt/start.sh RUN chmod +x /opt/start.sh CMD [\u0026#34;/opt/start.sh\u0026#34;] Let\u0026rsquo;s break down what\u0026rsquo;s here in the container build file:\nInstall some basic packages that we need in the container Download monit and install it to /usr/local/bin/monit Remove the monit user/group (more on this later) Make /etc/passwd and /etc/group writable by the root group (more on this later) Expose the default monit port Run our special startup script The last three parts help us run with OpenShift\u0026rsquo;s strict security requirements.\nStartup script #Monit has some strict security requirements for startup. 
It requires that the monit daemon is started with the same user/group combination that owns the initial configuration file (.monitrc). That's why we removed the monit user/group and made /etc/passwd and /etc/group writable during the build step. We need to add those back in once the container starts and we've received our arbitrary UID from OpenShift.\n(For more on OpenShift's arbitrary UIDs, read my other post about Running Ansible in OpenShift with arbitrary UIDs.)\nHere's the startup script:\n#!/bin/bash set -euxo pipefail echo "The home directory is: ${HOME}" # Work around OpenShift's arbitrary UID/GIDs. if [ -w '/etc/passwd' ]; then echo "monit:x:`id -u`:`id -g`:,,,:${HOME}:/bin/bash" >> /etc/passwd fi if [ -w '/etc/group' ]; then echo "monit:x:$(id -G | cut -d' ' -f 2)" >> /etc/group fi # Make a basic monitrc. echo "set daemon 30" >> "${HOME}"/monitrc echo "include /config/*" >> "${HOME}"/monitrc chmod 0700 "${HOME}"/monitrc # Ensure the UID/GID mapping works. id # Run monit. /usr/local/bin/monit -v -I -c "${HOME}"/monitrc Let's talk about what is happening in the script:\nAdd the monit user to /etc/passwd with the arbitrary UID Do the same for the monit group in /etc/group Create a very basic .monitrc that is owned by the monit user and group Run monit in verbose mode in the foreground with our .monitrc OpenShift will make an emptyDir volume in /config that we can modify since we specified a volume in the container build file.\nDeploying monit #Now that we have a container and a startup script, it's time to deploy monit in OpenShift.\napiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: generation: 1 labels: app: monit name: monit spec: replicas: 1 revisionHistoryLimit: 10 selector: app: monit deploymentconfig: monit strategy: activeDeadlineSeconds: 21600 resources: {} rollingParams: intervalSeconds: 1 maxSurge: 25% maxUnavailable: 25% timeoutSeconds: 600 updatePeriodSeconds: 1 type: Rolling template: metadata: labels: app: monit deploymentconfig: monit spec: containers: - image: registry.gitlab.com/majorhayden/container-monit/monit:latest imagePullPolicy: Always name: monit resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config name: monit-config - mountPath: /scripts name: monit-scripts dnsPolicy: ClusterFirst hostname: monit-in-openshift restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 0420 name: monit-config name: monit-config - configMap: defaultMode: 0755 name: monit-scripts name: monit-scripts test: false triggers: - type: ConfigChange There is a lot of text here, but there are two important parts:\nThe container image is pre-built from my monit GitLab repository (feel free to use it!)
The volumes refer to the OpenShift configmaps that hold the monit configurations as well as the scripts that are called for monitoring Next comes the service (which allows the monit web port to be exposed inside the OpenShift cluster):\napiVersion: v1 kind: Service metadata: labels: app: monit name: monit spec: ports: - port: 2812 protocol: TCP targetPort: 2812 selector: app: monit deploymentconfig: monit sessionAffinity: None type: ClusterIP And finally, the route (which exposes the monit web port service outside the OpenShift cluster):\napiVersion: route.openshift.io/v1 kind: Route metadata: labels: app: monit name: monit spec: tls: insecureEdgeTerminationPolicy: Redirect termination: edge host: monit.openshift.example.com to: kind: Service name: monit weight: 100 wildcardPolicy: None Monitoring configuration and scripts #The deploymentConfig for monit refers to a configMap called monit-config. This config map contains all of the additional monitoring configuration for monit outside of the .monitrc. Here is a basic configMap for checking that icanhazheaders.com is accessible:\napiVersion: v1 kind: ConfigMap metadata: name: monit-config data: config: | set daemon 30 set httpd port 2812 allow 0.0.0.0/0 set alert me@example.com set mailserver smtp.example.com check host \u0026#34;icanhazheaders responding\u0026#34; with address icanhazheaders.com if failed port 80 for 2 cycles then alert check program \u0026#34;icanhazheaders header check\u0026#34; with path \u0026#34;/scripts/header-check.sh ACCEPT-ENCODING \u0026#39;gzip\u0026#39;\u0026#34; if status gt 0 then exec \u0026#34;/scripts/irc-notification.sh\u0026#34; else if succeeded then exec \u0026#34;/scripts/irc-notification.sh\u0026#34; This configuration will check icanhazheaders.com and only alert if the check fails for two check periods. Each check period is 30 seconds, so the site would need to be inaccessible for 60 seconds before an alert would be sent.\nAlso, there is a second check that runs a script. Let\u0026rsquo;s deploy the script to OpenShift as well:\napiVersion: v1 kind: ConfigMap metadata: name: monit-scripts data: header-check.sh: | #!/bin/bash set -euo pipefail URL=\u0026#34;http://icanhazheaders.com\u0026#34; HEADER=$1 EXPECTED_VALUE=$2 HEADER_VALUE=$(curl -s ${URL} | jq -r ${HEADER}) if [[ $HEADER_VALUE == $EXPECTED_VALUE ]]; then exit 0 else exit 1 fi Use oc apply to deploy all of these YAML files to your OpenShift cluster and monit should be up and running within seconds!\n","date":"11 September 2019","permalink":"/p/deploy-monit-in-openshift/","section":"Posts","summary":"Monit is a tried-and-true monitoring daemon that is easy to deploy. Add it to OpenShift to make monitoring even easier.","title":"Deploy monit in OpenShift"},{"content":"","date":null,"permalink":"/tags/monit/","section":"Tags","summary":"","title":"Monit"},{"content":"","date":null,"permalink":"/tags/monitoring/","section":"Tags","summary":"","title":"Monitoring"},{"content":"","date":null,"permalink":"/tags/buildah/","section":"Tags","summary":"","title":"Buildah"},{"content":"When you build tons of kernels every day like my team does, you look for speed improvements anywhere you can. Caching repositories, artifacts, and compiled objects makes kernel builds faster and it reduces infrastructure costs.\nNeed for speed #We use GitLab CI in plenty of places, and that means we have a lot of gitlab-runner configurations for OpenShift (using the kubernetes executor) and AWS (using the docker-machine executor). 
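For reference, a docker-machine runner is described in the runner's config.toml. This is a trimmed sketch with placeholder names and tokens rather than our real configuration, but it shows where the pieces discussed in this post live:
concurrent = 10

[[runners]]
  name = "kernel-builder"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker+machine"
  [runners.docker]
    image = "registry.fedoraproject.org/fedora:30"
  [runners.cache]
    Type = "s3"
    Shared = true
  # The [runners.machine] block that actually spawns AWS instances is omitted here.
The tmpfs setting discussed below goes in the [runners.docker] section of this same file.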
The runner\u0026rsquo;s built-in caching makes it easy to upload and download cached items from object storage repositories like Google Cloud Storage or Amazon S3.\nHowever, there\u0026rsquo;s an often overlooked feature hiding in the configuration for the docker executor that provides a great performance boost: mounting tmpfs inside your container. Not familiar with tmpfs? Arch Linux has a great wiki page for tmpfs and James Coyle has a well-written blog post about what makes it unique from the older ramfs.\nRAM is much faster than your average cloud provider\u0026rsquo;s block storage. It also has incredibly low latency relative to most storage media. There\u0026rsquo;s a great interactive latency page that allows you to use a slider to travel back in time to 1990 and compare all kinds of storage performance numbers. (It\u0026rsquo;s really fun! Go drag the slider and be amazed.)\nBetter yet, many cloud providers give you lots of RAM per CPU on their instances, so if your work isn\u0026rsquo;t terribly memory intensive, you can use a lot of this RAM for faster storage.\nEnabling tmpfs in Docker containers #⚠️ Beware of the dangers of tmpfs before adjusting your runner configuration! See the warnings at the end of this post.\nThis configuration is buried in the middle of the docker executor documentation. You will need to add some extra configuration to your [runners.docker] section to make it work:\n[runners.docker] [runners.docker.tmpfs] \u0026#34;/ramdisk\u0026#34; = \u0026#34;rw,noexec\u0026#34; This configuration mounts a tmpfs volume underneath /ramdisk inside the container. By default, this directory will be mounted with noexec, but if you need to execute scripts from that directory, change noexec to exec:\n[runners.docker] [runners.docker.tmpfs] \u0026#34;/ramdisk\u0026#34; = \u0026#34;rw,exec\u0026#34; In our case, compiling kernels requires executing scripts, so we use exec for our tmpfs mounts.\nYou must be specific for exec! As an example, this tmpfs volume will be mounted with noexec since that is the default:\n[runners.docker] [runners.docker.tmpfs] \u0026#34;/ramdisk\u0026#34; = \u0026#34;rw\u0026#34; Extra speed #For even more speed, we moved the objects generated by ccache to the ramdisk. The seek times are much lower and this allows ccache to look for its cached objects much more quickly.\nGit repositories are also great things to stash on tmpfs. Big kernel repositories are usually 1.5GB to 2GB in size with tons of files. Checkouts are really fast when they\u0026rsquo;re done in tmpfs.\nDangers are lurking #⚠️ As mentioned earlier, beware of the dangers of tmpfs.\nAll of the containers on the machine will share the same amount of RAM for their tmpfs volumes. Be sure to account for how much each container will use and how many containers could be present on the same machine.\nBe aware of how much memory your tests will use when they run. In our case, kernel compiles can consume 2-4GB of RAM, depending on configuration, so we try our best to leave some memory free.\nThese volumes also have no limits on how much data can go into the volume. However, if you put too much data into the tmpfs volume and your system runs critically low on available RAM, you could see a huge drop in performance, system instability, or even a crash. 
🔥\n","date":"16 August 2019","permalink":"/p/get-faster-gitlab-runners-with-a-ramdisk/","section":"Posts","summary":"Many cloud providers give you lots of memory with each instance and you can speed up tests and builds by using a ramdisk.","title":"Get faster GitLab runners with a ramdisk"},{"content":"","date":null,"permalink":"/tags/gitlab/","section":"Tags","summary":"","title":"Gitlab"},{"content":"Buildah and podman make a great pair for building, managing and running containers on a Linux system. You can even use them with GitLab CI with a few small adjustments, namely the switch from the overlayfs to vfs storage driver.\nI have some regularly scheduled GitLab CI jobs that attempt to build fresh containers each morning and I use these to get the latest packages and find out early when something is broken in the build process. A failed build appeared in my inbox earlier this week with the following error:\n+ buildah bud -f builds/builder-fedora30 -t builder-fedora30 . vfs driver does not support overlay.mountopt options My container build script1 is fairly basic, but it does include a change to use the vfs storage driver:\n# Use vfs with buildah. Docker offers overlayfs as a default, but buildah # cannot stack overlayfs on top of another overlayfs filesystem. export STORAGE_DRIVER=vfs The script doesn\u0026rsquo;t change any mount options during the build process. A quick glance at the /etc/containers/storage.conf revealed a possible problem:\n[storage.options] # Storage options to be passed to underlying storage drivers # mountopt specifies comma separated list of extra mount options mountopt = \u0026#34;nodev,metacopy=on\u0026#34; These mount options make sense when used with an overlayfs filesystem, but they are not used with vfs. I commented out the mountopt option, saved the file, and ran a test build locally. Success!\nFixing the build script involved a small change to the storage.conf just before building the container:\n# Use vfs with buildah. Docker offers overlayfs as a default, but buildah # cannot stack overlayfs on top of another overlayfs filesystem. export STORAGE_DRIVER=vfs # Newer versions of podman/buildah try to set overlayfs mount options when # using the vfs driver, and this causes errors. sed -i \u0026#39;/^mountopt =.*/d\u0026#39; /etc/containers/storage.conf My containers are happily building again in GitLab.\nThe original build script is no longer available, but the remainder of the repository still exists.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"13 August 2019","permalink":"/p/buildah-error-vfs-driver-does-not-support-overlay-mountopt-options/","section":"Posts","summary":"Buildah and podman work well with the vfs storage driver, but the default mount options can cause problems.","title":"buildah error: vfs driver does not support overlay.mountopt options"},{"content":"Fedora 30 is my primary operating system for desktops and servers, so I usually try to take it everywhere I go. I was recently doing some benchmarking for kernel compiles on different cloud plaforms and I noticed that Fedora isn\u0026rsquo;t included in Google Compute Engine\u0026rsquo;s default list of operating system images.\n(Note: Fedora does include links to quick start an Amazon EC2 instance with their pre-built AMI\u0026rsquo;s. They are superb!)\nFirst try #Fedora does offer cloud images in raw and qcow2 formats, so I decided to give that a try. 
Start by downloading the image, decompressing it, and then repackaging the image into a tarball.\n$ wget http://mirrors.kernel.org/fedora/releases/30/Cloud/x86_64/images/Fedora-Cloud-Base-30-1.2.x86_64.raw.xz $ xz -d Fedora-Cloud-Base-30-1.2.x86_64.raw.xz $ mv Fedora-Cloud-Base-30-1.2.x86_64.raw disk.raw $ tar cvzf fedora-30-google-cloud.tar.gz disk.raw Once that\u0026rsquo;s done, create a bucket on Google storage and upload the tarball.\n$ gsutil mb gs://fedora-cloud-base-30-image $ gsutil cp fedora-30-google-cloud.tar.gz gs://fedora-cloud-image-30/ Uploading 300MB on my 10mbit/sec uplink was a slow process. When that\u0026rsquo;s done, tell Google Compute Engine that we want a new image made from this raw disk we uploaded:\n$ gcloud compute images create --source-uri \\ gs://fedora-cloud-image-30/fedora-30-google-cloud.tar.gz \\ fedora-30-google-cloud After a few minutes, a new custom image called fedora-30-google-cloud will appear in the list of images in Google Compute Engine.\n$ gcloud compute images list | grep -i fedora fedora-30-google-cloud major-hayden-20150520 PENDING $ gcloud compute images list | grep -i fedora fedora-30-google-cloud major-hayden-20150520 PENDING $ gcloud compute images list | grep -i fedora fedora-30-google-cloud major-hayden-20150520 READY I opened a browser, ventured to the Google Compute Engine console, and built a new VM with my image.\nProblems abound #However, there are problems when the instance starts up. The serial console has plenty of errors:\nDataSourceGCE.py[WARNING]: address \u0026#34;http://metadata.google.internal/computeMetadata/v1/\u0026#34; is not resolvable Obviously something is wrong with DNS. It\u0026rsquo;s apparent that cloud-init is stuck in a bad loop:\nurl_helper.py[WARNING]: Calling \u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39; failed [87/120s]: bad status code [404] url_helper.py[WARNING]: Calling \u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39; failed [93/120s]: bad status code [404] url_helper.py[WARNING]: Calling \u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39; failed [99/120s]: bad status code [404] url_helper.py[WARNING]: Calling \u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39; failed [105/120s]: bad status code [404] url_helper.py[WARNING]: Calling \u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39; failed [112/120s]: bad status code [404] url_helper.py[WARNING]: Calling \u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39; failed [119/120s]: unexpected error [Attempted to set connect timeout to 0.0, but the timeout cannot be set to a value less than or equal to 0.] DataSourceEc2.py[CRITICAL]: Giving up on md from [\u0026#39;http://169.254.169.254/2009-04-04/meta-data/instance-id\u0026#39;] after 126 seconds Those are EC2-type metadata queries and they won\u0026rsquo;t work here. The instance also has no idea how to set up networking:\nCloud-init v. 17.1 running \u0026#39;init\u0026#39; at Wed, 07 Aug 2019 18:27:07 +0000. Up 17.50 seconds. ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++ ci-info: +--------+-------+-----------+-----------+-------+-------------------+ ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | ci-info: +--------+-------+-----------+-----------+-------+-------------------+ ci-info: | eth0: | False | . | . | . | 42:01:0a:f0:00:5f | ci-info: | lo: | True | 127.0.0.1 | 255.0.0.0 | . | . 
| ci-info: | lo: | True | . | . | d | . | ci-info: +--------+-------+-----------+-----------+-------+-------------------+ This image is set up well for Amazon, but it needs some work to work at Google.\nFixing up the image #Go back to the disk.raw that we made in the first step of the blog post. We need to mount that disk, mount some additional filesystems, and chroot into the Fedora 30 installation on the raw disk.\nStart by making a loop device for the raw disk and enumerating its partitions:\n$ sudo losetup /dev/loop0 disk.raw $ kpartx -a /dev/loop0 Make a mountpoint and mount the first partition on that mountpoint:\n$ sudo mkdir /mnt/disk $ sudo mount /dev/mapper/loop0p1 /mnt/disk We need some extra filesystems mounted before we can run certain commands in the chroot:\n$ sudo mount --bind /dev /mnt/disk/dev $ sudo mount --bind /sys /mnt/disk/sys $ sudo mount --bind /proc /mnt/disk/proc Now we can hop into the chroot:\n$ sudo chroot /mnt/disk From inside the chroot, remove cloud-init and install google-compute-engine-tools to help with Google cloud:\n$ dnf -y remove cloud-init $ dnf -y install google-compute-engine-tools $ dnf clean all The google-compute-engine-tools package has lots of services that help with running on Google cloud. We need to enable each one to run at boot time:\n$ systemctl enable google-accounts-daemon google-clock-skew-daemon \\ google-instance-setup google-network-daemon \\ google-shutdown-scripts google-startup-scripts To learn more about these daemons and what they do, head on over to the GitHub page for the package.\nExit the chroot and get back to your main system. Now that we have this image just like we want it, it\u0026rsquo;s time to unmount the image and send it to the cloud:\n$ sudo umount /mnt/disk/dev /mnt/disk/sys /mnt/disk/proc $ sudo umount /mnt/disk $ sudo losetup -d /dev/loop0 $ tar cvzf fedora-30-google-cloud-fixed.tar.gz disk.raw $ gsutil cp fedora-30-google-cloud-fixed.tar.gz gs://fedora-cloud-image-30/ $ gcloud compute images create --source-uri \\ gs://fedora-cloud-image-30/fedora-30-google-cloud-fixed.tar.gz \\ fedora-30-google-cloud-fixed Start a new instance with this fixed image and watch it boot in the serial console:\n[ 10.379253] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer [ 10.381350] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules [ 10.382487] RAPL PMU: hw unit of domain package 2^-0 Joules [ 10.383415] RAPL PMU: hw unit of domain dram 2^-16 Joules [ 10.503233] EDAC sbridge: Ver: 1.1.2 Fedora 30 (Cloud Edition) Kernel 5.1.20-300.fc30.x86_64 on an x86_64 (ttyS0) instance-2 login: Yes! A ten second boot with networking is exactly what I needed.\n","date":"7 August 2019","permalink":"/p/fedora-30-on-google-compute-engine/","section":"Posts","summary":"Fedora 30 is a great Linux distribution for cloud platforms, but it needs a little work to perform well on Google Compute Engine.","title":"Fedora 30 on Google Compute Engine"},{"content":"","date":null,"permalink":"/tags/google/","section":"Tags","summary":"","title":"Google"},{"content":"Welcome! #This page is a work in progress!\nThe world of amateur, or ham, radio is huge and it\u0026rsquo;s what you want to make of it. The itch struck me in the middle of 2017 and I learned a lot since then.\nPeople often ask me questions about all kinds of amateur radio topics and I decided to compile all of the answers into a big page that I can update over time. 
If your question isn't answered here, please send me an email and I'll get it answered!\nKeep in mind that the vast majority of the topics presented here will be applicable to amateur radio all over the world, but much of the discussion around licensing and rules is very specific to the United States.\nAs with anything else, everyone has their own opinions about what makes a hobby special. As long as you enjoy your time working with a hobby, it does not matter what anyone else says, so long as you avoid getting in the way of their enjoyment. (More on that later.)\nTable of Contents # The very very basics Getting a license Choosing your first radio The very very basics #This section covers the absolute basic questions around the hobby itself and why it can be interesting for many people.\nAmateur radio is like Citizen's Band (CB) radio, right? #This is an entirely valid question, but be forewarned – it can cause some amateur radio operators to start fuming. It's like going to Barcelona and asking: "So Catalan is basically just a Spanish dialect, right?" 😡\nAuthor's note: Please don't try this in Barcelona. I do not recommend it.\nYes, CB radios are similar in some ways. They are restricted to certain bands and you have a variety of radios to choose from if you want to talk to people.\nHowever, talking on a CB radio has very few restrictions and no licensing requirements. People often talk about topics that you wouldn't want your children to hear and many people willfully disobey what few rules and restrictions actually exist. Also, CB is only one band. Amateur radio has tons of bands to choose from based on the size of the radio you want to carry, how far you want to communicate, and your license level.\nAmateur radio offers tons of different operating modes, such as morse code, digital modes, meteor scatter, satellite operations, and plain old AM/FM/SSB voice communication. You can talk to someone on the other side of the Earth with amateur radio, but CB radio range tops out at 20-25 miles.\nLong story short: they are tremendously different.\nI have a mobile phone. Why do I need amateur radio? #This might not be the hobby for you, and that's okay!\nKeep in mind that mobile phones are just small radios that transmit and receive digital data all day long. If that interests you and you want to tinker with similar technologies, amateur radio might be good for you. Using a radio to call long distances (say, across the USA) on a frequent basis can be frustrating if you just need something that works all the time.\nHowever, consider those situations where mobile phones do not work, such as natural disasters. Communication falls back to radios during these difficult times and you may be able to assist with emergencies or get information for your family with some knowledge of amateur radio.\nI thought amateur radio was just for old people. #That's a common misconception, but there are many older people involved in amateur radio for a variety of reasons.\nMany military veterans had to work with radios during their military careers and they find that amateur radio is a fun way to keep their skills sharp. It can also be a fun way to talk to other veterans on the radio and at club meetings.\nAmateur radio is something you can do even if you have physical limitations caused by injury, illness, or old age. Some hams have large towers with complex wiring and unique antennas.
Others plop down a small vertical or magnetic loop on a desk and transmit from there.\nThere\u0026rsquo;s a great benefit to older folks being involved with amateur radio: they can teach you a lot. Often called elmers in ham radio vernacular, these are people who can prevent you from making costly mistakes in planning your station and they can show you some unique ways to fix radio problems. If you don\u0026rsquo;t mind taking a little direction, many of these experienced hams will overwhelm you with radio knowledge that you can put to use immediately.\nWith that said, there are plenty of younger people getting involved with ham radio. There are plenty of newer technologies, especially digital modes, that allow newer operators to mix over-the-air radio operations with functionality over the internet.\nCan I just listen and see if it\u0026rsquo;s interesting? #Of course! In the United States, you can listen to any amateur radio transmission anytime with very inexpensive equipment. No licensing or expertise is required to listen.\nYou can pick up a shortwave radio and listen to long-distance transmissions, or you can buy a handheld transceiver (often called an HT) to listen to local discussions.\nYour local hams might have a repeater set up to rebroadcast local radio transmissions over a long distance. Head over to RepeaterBook to find your local repeaters and listen! I wrote a lengthy post about repeaters in 2018.\nIf you want to go a bit further, find your local radio club online and go to one of their meetings! Tell them that you\u0026rsquo;re new to the hobby and I\u0026rsquo;m sure they will be happy to show you some of their uses for radio. You might even discover a use that you never considered.\nGetting a license #Licensing is very specific to the country where you live. This section is specific to getting licensed in the United States.\nI heard radio operators have license levels. What\u0026rsquo;s in each level? #There are three main licensing levels:\nTechnician General Extra You must pass a test to move up to the next level.\nTechnician gives you access to lots of frequencies, but the amount of things you can do below the 10 meter band is very limited. General opens the door to many of the frequencies below 10 meters, but Extra class licensees have the most access to those bands.\nTo learn more about what access each level gets, consult the ARRL Band Plan or review the handy graphical band chart (which is good for printing).\nWhat\u0026rsquo;s on the test? #At a very high level, each test consists of questions in these areas:\nProcedures, rules, process Electrical circuits Safety Inner workings of radios and antennas The great thing is that every single test (and its answers) is available to you online! Review the question pools at any time!\nHow should I study for the test? #You have plenty of choices and hams will often argue which is best. 🤓\nHamStudy is my go-to resource for studying and for reference after the exam. You can review content and take practice exams right on your computer for free. They also offer some mobile apps (for a fee, totally worth it) so you can study from wherever you are. Their site works well on Android devices right in the browser, so that may work for you as well.\nAnother option is to get a book! The ARRL offers study guides and reference material for each license level. Some people learn better from offline books than electronic screens, and this could be the right option for you. 
Just be sure that the book you order matches the license level and the test currently being used. The tests are rotated out about every four years, so make sure the book is up to date.\nYou could also get the question and answer pools (see the previous question above) and go over all of those. This requires some brute force memorization and you may not learn the theory behind the questions.\nKeep in mind that rote memorization will get you past the test, but then you could make quite a few expensive or dangerous mistakes as you try to get on the air.\nDon\u0026rsquo;t just try to beat the test. Learn the theory! You will thank me later when you\u0026rsquo;re trying to get your SWR in check on a dipole using the wrong feed line that has a high impedance. 🤦‍♂️\nWhere do I go to take the test? #Once again, ARRL has you covered. Fill in your location information and search for a test near you. In the San Antonio area, there are 3-4 testing locations that run tests on Saturdays. There\u0026rsquo;s a test almost every weekend.\nKeep in mind that some groups will only test if someone sends them a note to say they are coming to take a test. It requires a minimum of three volunteer examiners (VEs) to be present and they are exactly that \u0026ndash; volunteers! Most groups have an email address or phone number for you to contact if you want to take a test.\nBe sure to note the fee for testing! It\u0026rsquo;s usually $15, but it may be different in some places.\nIf you\u0026rsquo;re in the San Antonio, Texas area, I highly recommend the ROOST. They are a friendly bunch of folks in a relaxed shack and they keep the examinees relaxed with plenty of jokes. You may spot a cat poke its head in the shack. I\u0026rsquo;m told that\u0026rsquo;s a good omen. 😺\nIt\u0026rsquo;s test day! What do I bring? #Before you leave the house, you will need a FCC Registration Number (FRN) if you want things to go as smoothly as possible. Test results are submitted by hand on paper that goes through the postal mail. Yes, I know.\nLuckily, the FCC has good documentation on getting your FRN. By getting your FRN ahead of time, you will ensure that your test results are processed as quickly as humanly possible once they reach the FCC.\nNow that you\u0026rsquo;re ready to go, grab these items:\nGovernment-issued photo ID (driver\u0026rsquo;s license, passport, military ID, etc) Good pencil Small basic calculator (optional, but better be safe than sorry) Testing fee (see previous section) Your FRN (you did get your FRN, didn\u0026rsquo;t you?) Good luck on your test!\nWhat should I expect when I take the exam? #Most ham radio exams are extremely relaxed and the VEs are often very experienced radio operators who want more people to join the hobby. You are definitely with good people.\nOnce you get signed in and you complete some paperwork (which includes your FRN, which I\u0026rsquo;m sure you brought with you), you will pay the testing fee. There are various versions of the test and you\u0026rsquo;ll get one from the pile and sit down to take your test.\nHere\u0026rsquo;s each test, its length, and required passing grade:\nTechnician: 35 questions; must get 26 or more correct General: same as Technician Extra: 50 questions; must get 37 or more correct You\u0026rsquo;ll turn in your exam and get your grade right there! You have two options after that:\nIf you passed, you can take the General exam at no cost. There\u0026rsquo;s no harm in trying it. 
Some people are able to pass it if they studied some of the material!\nIf you failed, don\u0026rsquo;t worry. It happens to the best of us. You can take it one more time at no additional cost! You can do it!\nWhen everything is over and you pass at least one of the exams, the VEs will complete a form called the Certificate of Successful Completion of Examination (or CSCE). Your CSCE is your record of passing the exam. Put your copy in a safe place! If it gets lost in the mail, you may need to provide that copy.\nYour VEs will submit that CSCE along with other paperwork to the FCC. This can take 7-10 business days to complete, or longer if you didn\u0026rsquo;t create an FRN first. Log in at the License Manager (ULS) with your FRN and password. As soon as your license is available, you\u0026rsquo;ll see it there!\nI got my license and my new callsign, but my callsign is terrible! #You can apply for a vanity callsign with the FCC for free so long as the callsign you want is available and it\u0026rsquo;s of the appropriate format for your license class. There is some great documentation on applying for a vanity callsign on ARRL\u0026rsquo;s site.\nChoosing your first radio #Picking your first radio is an important first step towards learning the skills for your license or putting your new license to use! However, finding a new radio can be tricky.\nRadios come in all shapes and sizes. How do I choose? #First things first, you need to consider how and where you want to use your radio. That determines the type of radio you want to look for. For example:\nAt home: any radio, including handheld (HT), mobile, or base, should work In the car: go for a HT or a mobile radio On you: HT for sure There are benefits and drawbacks to each type of radio:\nBase station radios. These are typically radios that are meant to stay in one place and they may often be quite heavy or bulky. They usually have the largest feature set, highest power (usually 100-200W), and most connections (for extra antennas and accessories). Lugging these to the car or on a hike will be frustrating for most units unless you purchase a small one.\nExamples of base station radios I\u0026rsquo;ve used:\nIcom 746: old and rock solid, but 19.6 lbs 😦 (I have this one) Icom 7300: extremely popular right now with a great touch screen Yaesu FT-991A: a base station radio that is pretty portable at 9.7 lbs! Mobile radios. These are smaller than base station radios and they can be used at home or in the car. Many of these top out at 50W but they often have a good set of features. They often come with handy features like GPS (for APRS) or removable displays. Removable displays allow you to mount the radio in a hidden place (such as under your seat) and mount the display somewhere else in the car.\nExamples of mobile radios I\u0026rsquo;ve used or love:\nKenwood TM-D710A: great mobile radio with detachable display, APRS, and a nice mic with buttons (I have this one) Kenwood TM-281A: cheaper radio that is incredibly durable, but no removable display Handheld radios. HTs are excellent for portable operations when you\u0026rsquo;re away from your home or car. They usually have a more limited feature set, lower power (usually 5-10W), and can operate on fewer radio bands. 
Most have great removable batteries that last all day.\nExample of HTs I\u0026rsquo;ve used or love:\nKenwood TH-K20A: really durable, simple to operate, inexpensive (I have this one) Yaesu VX-8: durable with tons of extra features, like APRS (I have this one) Baofeng UV-5R: extremely cheap, almost disposable radio (I have this one) 🚨 Now that we\u0026rsquo;ve mentioned Baofeng here, I feel obliged to ask you read the following section:\nI was told never to buy one of those Chinese clone radios. Why? #There is some truth and some myth here. Some of the manufacturers of really cheap radios, like Baofeng, are often ridiculed.\nSometimes it\u0026rsquo;s for valid reasons, such as Baeofeng\u0026rsquo;s roger beep. That\u0026rsquo;s a small tone made when you finish talking and let go of PTT. Amateur radio operators will often call you out for doing that and ask you to turn it off. You hear these beeps occasionally on commercial radios, especially trunked ones, but it has no place in amateur radio bands.\nThe FCC did recently publish FCC Enforcement Advisotry DA 18-980 and they are cracking down on some imported radios that don\u0026rsquo;t follow the most basic of FCC rules. Problems happen when unlicensed or licensed operators get one (really cheaply) and cause all kinds of problems on local radio repeaters during nets, emergencies, or other important events.\nBaofengs will allow you to transmit on frequencies you shouldn\u0026rsquo;t such as Family Radio Service (FRS) or General Mobile Radio Service (GMRS) bands. It\u0026rsquo;s illegal to use a radio that is not approved for those bands, and GMRS requires its own license that is entirely exclusive of the amateur radio license. (There\u0026rsquo;s no test, but it costs $70 for 10 years of access.)\nIn summary, these imported radios can allow good people (and bad people) to do bad things more easily than most other radios. However, the radios do work quite well for their price range, but don\u0026rsquo;t expect too much. Some radios will transmit slightly off frequency from time to time and many of them overmodulate at times.\nAt $35 each for the Baofeng, it\u0026rsquo;s not a bad idea to buy a couple as emergency backups just in case.\nRadios are expensive! How can I afford this hobby? #There are many ways to get a great radio at a price you can afford. You have two main options:\nGet something brand new. There\u0026rsquo;s nothing wrong with that! You will get the latest and greatest features in the smallest size with that new electronics smell. If you love that feeling of pulling plastic from screens and you don\u0026rsquo;t want something anyone else has touched, this is a good option for you. However, there\u0026rsquo;s a steep cost associated with it!\nI recommend shopping with Main Trading Company if you can. They\u0026rsquo;re a small shop in Paris, Texas (not France), and they have a great selection. They\u0026rsquo;ve been really helpful for me when dealing with out of stock items.\nGet something used. This is my favorite plan. Buying new is good, but you might skip on features to save price. A used radio might have all of the features you want, but it might also have some scratches and dings. 
Also, if the radio is fairly old (maybe 10+ years), someone has probably taken good care of it and it\u0026rsquo;s a reliable rig.\nNo matter what or how you buy, I highly recommend doing the following ahead of time:\nCarefully read the list of features, connections, radio bands, and power requirements to ensure it has what you need.\nRead lots of review on eHam, including people who loved and hated the radio.\nTalk to radio operators at your local radio club about what they\u0026rsquo;ve used and what might be for sale! Most clubs have swapmeets (sometimes they\u0026rsquo;re online, like classified ads), where you can go meet the ham, test the radio, and buy it!\nGo to a store and play with the same or similar radio.\nRead the radio documentation. You can find out a lot more about the features from there.\nI want to buy a used radio. How do I do it? #My requirement is that I only buy radios from another radio operator. Most are trustworthy and they usually want you to understand the radio\u0026rsquo;s capabilities before you buy.\nThere are ham radio conventions of various sizes scattered around the USA each month and these usually have big swapmeets. Be prepared to see a lot of junk. This is stuff that won\u0026rsquo;t power on, could barely be used for parts, and looks like it barely survived a nuclear war. Keep your eye out for the more well-maintained items there.\nIf you have your license, be sure to check out the online swapmeet on QRZ! That\u0026rsquo;s where I got my Icom 746. The spouse of a silent keyer (a ham radio operator who passed away 😔) was trying to make ends meet by selling some equipment and I got a great deal on the radio.\neBay is okay for some things (especially ham-made gear, like antennas), but there some occasional scams there that can be painful.\nYour local radio club meetings likely have a short section of the meeting dedicated to buy/sell/trade, so be sure to ask there. If you participate in a local radio net, perhaps on your local repeater, you can mention that you\u0026rsquo;re looking for a radio. You might get recommendations for something to look for or you may get a lead on a good radio to buy! (FCC Rule note: It\u0026rsquo;s okay to buy/sell/trade on the airwaves, but don\u0026rsquo;t turn it into an every day habit.)\nWhat does it mean when some radios say all-mode and others say FM? #An all-mode radio typically means that it supports a lot of different radio modes. This is especially handy for long-range high frequency radio bands and low power (or QRP) VHF bands. Most radios will include a set like this:\nAM CW (morse code) FM SSB (single sideband) RTTY An all-mode radio is typically bigger, has higher power output, and is more expensive.\nIf you see an FM radio, that means it only supports frequency modulation for voice communication. That\u0026rsquo;s plain old voice transmissions. This is really handy for mobile operations since you\u0026rsquo;ll be using your voice most often there. However, this limits your fun on the high frequency bands.\nSome FM radios have some fancy extra features, such as APRS or AX.25, but those come with an added expense.\nFM radios are typically smaller, have low to medium power output, and are cheaper.\nSome radios cover all of the bands. Why doesn\u0026rsquo;t everyone get those? #There are some great radios on the market, like the Yaesu FT-991A or the Icom 746 that pack tons of bands into one receiver. If you\u0026rsquo;re looking for a \u0026ldquo;shack-in-a-box\u0026rdquo;, then look no further! 
You can do everything with one radio.\nThere are some downsides to this, too. More electronics in the box increases weight and cost. It also increases the amount of things that can fail. In addition, a radio that tries to be good at many things sometimes can\u0026rsquo;t be great on all of them. Jamming a 144 MHz (2 meter) transceiver into a crowded HF radio means compromises must be made somewhere.\nThere are advantages to getting a radio that does a subset of HF, VHF and UHF bands. The manufacturer can specalize solely in those bands and make them perform really well. I haven\u0026rsquo;t heard it myself, but many hams swear that radios that are dedicated to a subset of bands have better sound, better noise reduction, and better range.\nDedicated radios requires more radios to cover the bands you care about, and that means more expense. Also consider that you\u0026rsquo;ll need more DC power for multiple radios and sharing antennas can be frustrating without additional equipment (and expense).\nEveryone tells me digital modes are great. How do I get a radio for FT8? #Digital modes are great and they\u0026rsquo;re an excellent way to learn more about how your radio works. FT8 has two main requirements:\nA radio that can transmit single sideband (SSB). A radio with sound input and output. Be sure to find an all-mode transceiver (discussion on that above) that has some type of audio or control interface. On my old Icom 746, there is a small remote connector on the back so the computer can control the radio itself. There is also an unusual 8-pin accessory plug that handles audio input and output. (I use an awesome sound card cable from xggcomms to transmit audio into my Icom.)\nNewer radios have a USB port right there on the back with audio and radio control built in! The Icom 7300 has this feature and it works extremely well. All you need is a cheap USB cable that you can buy anywhere. The audio drivers show up just fine in Linux!\nBe sure to connect your audio via USB or some kind of accessory port. Some radios have microphone ports on the front, but that doesn\u0026rsquo;t work well with FT8. Speech processing, auto gain control, and other fancy features that work wonders for voice transmissions can cause problems for FT8.\n","date":"6 June 2019","permalink":"/p/ham-radio-faq/","section":"Posts","summary":"Welcome! #This page is a work in progress!","title":"Ham Radio FAQ"},{"content":"","date":null,"permalink":"/tags/technology/","section":"Tags","summary":"","title":"Technology"},{"content":"","date":null,"permalink":"/tags/conferences/","section":"Tags","summary":"","title":"Conferences"},{"content":"Another Texas Linux Fest has come and gone! The 2019 Texas Linux Fest was held in Irving at the Irving Convention Center. It was a great venue surrounded by lots of shops and restaurants.\nIf you haven\u0026rsquo;t attended one of these events before, you really should! Attendees have varying levels of experience with Linux and the conference organizers (volunteers) work really hard to ensure everyone feels included.\nThe event usually falls on a Friday and Saturday. Fridays consist of longer, deeper dive talks on various topics \u0026ndash; technical and non-technical. Saturdays are more of a typical conference format with a keynote in the morning and 45-minute talks through the day. 
Saturday nights have lightning talks as well as \u0026ldquo;Birds of a Feather\u0026rdquo; events for people with similar interests.\nHighlights #Steve Ovens took us on a three hour journey on Friday to learn more about our self-worth. His talk, \u0026ldquo;You\u0026rsquo;re Worth More Than You Know, Matching your Skills to Employers\u0026rdquo;, covered a myriad of concepts such as discovering what really motivates you, understanding how to value yourself (and your skills), and how to work well with different personality types.\nI\u0026rsquo;ve attended these types of talks before and they sometimes end up a bit fluffy without items that you can begin using quickly. Steve\u0026rsquo;s talk was the opposite. He gave us concrete ways to change how we think about ourselves and use that knowledge to advance ourselves at work. I learned a lot about negotiation strategies for salary when getting hired or when pushing for a raise. Steve stopped lots of times to answer questions and it was clear that he was really interested in this topic.\nThomas Cameron kicked off Saturday with his \u0026ldquo;Linux State of the Union\u0026rdquo; talk. He talked a lot about his personal journey and how he has changed along the way. He noted quite a few changes to Linux (not the code, but the people around it) that many of us had not noticed. We learned more about how we can make the Linux community more diverse, inclusive, and welcoming. We also groaned through some problems from the good old days with jumpers on SATA cards and the joys of winmodems.\nAdam Miller threw us into a seat of a roller coaster and gave a whirlwind talk about all the ways you can automate (nearly) everything with Ansible.\nHe covered everything from simple configuration management tasks to scaling up software deployments over thousands of nodes. Adam also explained the OCI image format as being \u0026ldquo;sweet sweet tarballs with a little bit of metadata\u0026rdquo; and the audience was rolling with laughter. Adam\u0026rsquo;s talks are always good and you\u0026rsquo;ll be energized all the way through.\nJosé Miguel Parrella led a great lightning talk in the evening about how Microsoft uses Linux in plenty of places:\nThe audience was shocked by how much Debian was used at Microsoft and it made it more clear that Microsoft is really making a big shift towards open source well. Many of us knew that already but we didn\u0026rsquo;t know the extent of the work being done.\nMy talks #My first talk was about my team at Red Hat, the Continuous Kernel Integration team. I shared some of the challenges involved with doing CI for the kernel at scale and how difficult it is to increase test coverage of subsystems within the kernel. There were two kernel developers in the audience and they had some really good questions.\nThe discussion at the end was quite productive. The audience had plenty of questions about how different pieces of the system worked, and how well GitLab was working for us. We also talked a bit about how the kernel is developed and if there is room for improvement. One attendee hoped that some of the work we\u0026rsquo;re doing will change the kernel development process for the better. I hope so, too.\nMy second talk covered the topic of burnout. 
I have delivered plenty of talks about impostor syndrome in the past and I was eager to share more ideas around \u0026ldquo;soft\u0026rdquo; skills that become more important to technical career development over time.\nThe best part of these types of talks for me is the honesty that people bring when they share their thoughts after the talk. A few people from the audience shared their own personal experiences (some were very personal) and you could see people in the audience begin to understand how difficult burnout recovery can be. Small conferences like these create environments where people can talk honestly about difficult topics.\nIf you\u0026rsquo;re looking for the slides from these talks, you can view them in Google Slides (for the sake of the GIFs!):\nContinuous Kernel Integration I was too burned out to name this talk Google Slides also allows you to download the slides as PDFs. Just choose File \u0026gt; Download as \u0026gt; PDF.\nBoF: Ham Radio and OSS #The BoFs were fairly late in the day and everyone was looking tired. However, we had a great group assemble for the Ham Radio and OSS BoF. We had about 15-20 licensed hams and 5-6 people who were curious about the hobby.\nWe talked about radios, antennas, procedures, how to study, and the exams. The ham-curious folks who joined us looked a bit overwhelmed by the help they were getting, but they left the room with plenty of ideas on how to get started.\nI also agreed to write a blog post about everything I\u0026rsquo;ve learned so far that has made the hobby easier for me and I hope to write that soon. There is so much information out there for studying and finding equipment that it can become really confusing for people new to the hobby.\nFinal thoughts #If you get the opportunity to attend a local Linux fest in your state, do it! The Texas one is always good and people joined us from Arkansas, Oklahoma, Louisiana, and Arizona. Some people came as far as Connecticut and the United Kingdom! These smaller events have a much higher signal to noise ratio and there is more real discussion rather than marketing from industry giants.\nThanks to everyone who put the Texas Linux Fest together this year!\n","date":"2 June 2019","permalink":"/p/texas-linux-fest-2019-recap/","section":"Posts","summary":"Another Texas Linux Fest has come and gone!","title":"Texas Linux Fest 2019 Recap"},{"content":"My team at Red Hat depends heavily on GitLab CI and we build containers often to run all kinds of tests. Fortunately, GitLab offers up CI to build containers and a container registry in every repository to hold the containers we build.\nThis is really handy because it keeps everything together in one place: your container build scripts, your container build infrastructure, and the registry that holds your containers. Better yet, you can put multiple types of containers underneath a single git repository if you need to build containers based on different Linux distributions.\nBuilding with Docker in GitLab CI #By default, GitLab offers up a Docker builder that works just fine. The CI system clones your repository, builds your containers and pushes them wherever you want. 
There\u0026rsquo;s even a simple CI YAML file that does everything end-to-end for you.\nHowever, I have two issues with the Docker builder:\nLarger images: The Docker image layering is handy, but the images end up being a bit larger, especially if you don\u0026rsquo;t do a little cleanup in each stage.\nAdditional service: It requires an additional service inside the CI runner for the dind (\u0026ldquo;Docker in Docker\u0026rdquo;) builder. This has caused some CI delays for me several times.\nBuilding with buildah in GitLab CI #On my local workstation, I use podman and buildah all the time to build, run, and test containers. These tools are handy because I don\u0026rsquo;t need to remember to start the Docker daemon each time I want to mess with a container. I also don\u0026rsquo;t need sudo.\nAll of my containers are stored beneath my home directory. That\u0026rsquo;s good for keeping disk space in check, but it\u0026rsquo;s especially helpful on shared servers since each user has their own unique storage. My container pulls and builds won\u0026rsquo;t disrupt anyone else\u0026rsquo;s work on the server and their work won\u0026rsquo;t disrupt mine.\nFinally, buildah offers some nice options out of the box. First, when you build a container with buildah bud, you end up with only three layers by default:\nOriginal OS layer (example: fedora:30) Everything you added on top of the OS layer Tiny bit of metadata This is incredibly helpful if you use package managers like dnf, apt, and yum that download a bunch of metadata before installing packages. You would normally have to clear the metadata carefully for the package manager so that your container wouldn\u0026rsquo;t grow in size. Buildah takes care of that by squashing all the stuff you add into one layer.\nOf course, if you want to be more aggressive, buildah offers the --squash option which squashes the whole image down into one layer. This can be helpful if disk space is at a premium and you change the layers often.\nGetting started #I have a repository called os-containers in GitLab that maintains fully updated containers for Fedora 29 and 30. The .gitlab-ci.yml file calls build.sh for two containers: fedora29 and fedora30. Open the build.sh file and follow along here:\n# Use vfs with buildah. Docker offers overlayfs as a default, but buildah # cannot stack overlayfs on top of another overlayfs filesystem. export STORAGE_DRIVER=vfs First off, we need to tell buildah to use the vfs storage driver. Docker uses overlayfs by default and stacking overlay filesystems will definitely lead to problems. Buildah won\u0026rsquo;t let you try it.\n# Write all image metadata in the docker format, not the standard OCI format. # Newer versions of docker can handle the OCI format, but older versions, like # the one shipped with Fedora 30, cannot handle the format. export BUILDAH_FORMAT=docker By default, buildah uses the oci container format. This sometimes causes issues with older versions of Docker that don\u0026rsquo;t understand how to parse that type of metadata. By setting the format to docker, we\u0026rsquo;re using a format that almost all container runtimes can understand.\n# Log into GitLab\u0026#39;s container repository. export REGISTRY_AUTH_FILE=${HOME}/auth.json echo \u0026#34;$CI_REGISTRY_PASSWORD\u0026#34; | buildah login -u \u0026#34;$CI_REGISTRY_USER\u0026#34; --password-stdin $CI_REGISTRY Here we set a path for the auth.json that contains the credentials for talking to the container repository. 
We also use buildah to authenticate to GitLab\u0026rsquo;s built-in container repository. GitLab automatically exports these variables for us (and hides them in the job output), so we can use them here.\nbuildah bud -f builds/${IMAGE_NAME} -t ${IMAGE_NAME} . We\u0026rsquo;re now building the container and storing it temporarily as the bare image name, such as fedora30. This is roughly equivalent to docker build.\nCONTAINER_ID=$(buildah from ${IMAGE_NAME}) buildah commit --squash $CONTAINER_ID $FQ_IMAGE_NAME Now we are making a reference to our container with buildah from and using that reference to squash that container down into a single layer. This keeps the container as small as possible.\nThe commit step also tags the resulting image using our fully qualified image name (in this case, it\u0026rsquo;s registry.gitlab.com/majorhayden/os-containers/fedora30:latest)\nbuildah push ${FQ_IMAGE_NAME} This is the same as docker push. There\u0026rsquo;s not much special to see here.\nMaintaining containers #GitLab allows you to take things to the next level with CI schedules. In my repository, there is a schedule to build my containers once a day to catch the latest updates. I use these containers a lot and they need to be up to date before I can run tests.\nIf the container build fails for some reason, GitLab will send me an email to let me know.\n","date":"24 May 2019","permalink":"/p/build-containers-in-gitlab-ci-with-buildah/","section":"Posts","summary":"My team at Red Hat depends heavily on GitLab CI and we build containers often to run all kinds of tests.","title":"Build containers in GitLab CI with buildah"},{"content":"","date":null,"permalink":"/tags/ansible/","section":"Tags","summary":"","title":"Ansible"},{"content":"My team at Red Hat builds a lot of kernels in OpenShift pods as part of our work with the Continuous Kernel Integration (CKI) project. We have lots of different pod sizes depending on the type of work we are doing and our GitLab runners spawn these pods based on the tags in our GitLab CI pipeline.\nCompiling with make #When you compile a large software project, such as the Linux kernel, you can use multiple CPU cores to speed up the build. GNU\u0026rsquo;s make does this with the -j argument. Running make with -j10 means that you want to run 10 jobs while compiling. This would keep 10 CPU cores busy.\nSetting the number too high causes more contention from the CPU and can reduce performance. Setting the number too low means that you are spending more time compiling than you would if you used all of your CPU cores.\nEvery once in a while, we adjusted our runners to use a different amount of CPUs or memory and then we had to adjust our pipeline to reflect the new CPU count. This was time consuming and error prone.\nMany people just use nproc to determine the CPU core count. It works well with make:\nmake -j$(nproc) Problems with containers #The handy nproc doesn\u0026rsquo;t work well for OpenShift. If you start a pod on OpenShift and limit it to a single CPU core, nproc tells you something very wrong:\n$ nproc 32 We applied the single CPU limit with OpenShift, so what\u0026rsquo;s the problem? The issue is how nproc looks for CPUs. Here\u0026rsquo;s a snippet of strace output:\nsched_getaffinity(0, 128, [0, 1, 2, 3, 4, 5]) = 8 fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x6), ...}) = 0 write(1, \u0026#34;6\\n\u0026#34;, 26 ) = 2 The sched_getaffinity syscall looks to see which CPUs are allowed to run the process and returns a count of those. 
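You can see the same thing without strace by reading the affinity mask straight from /proc. This is just a quick check I find handy, not part of the original investigation:
# The mask lists every CPU the process is allowed to run on. In an
# OpenShift pod this covers the whole host, not the pod's CPU limit.
$ grep Cpus_allowed_list /proc/self/status
Cpus_allowed_list: 0-31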
OpenShift doesn\u0026rsquo;t prevent us from seeing the CPUs of the underlying system (the VM or bare metal host underneath our containers), but it uses cgroups to limit how much CPU time we can use.\nReading cgroups #Getting cgroup data is easy! Just change into the /sys/fs/cgroup/ directory and look around:\n$ cd /sys/fs/cgroup/ $ ls -al cpu/ ls: cannot open directory \u0026#39;cpu/\u0026#39;: Permission denied Ouch. OpenShift makes this a little more challenging. We\u0026rsquo;re not allowed to wander around in the land of cgroups without a map to exactly what we want.\nMy Fedora workstation shows a bunch of CPU cgroup settings:\n$ ls -al /sys/fs/cgroup/cpu/ total 0 dr-xr-xr-x. 2 root root 0 Apr 5 01:40 . drwxr-xr-x. 14 root root 360 Apr 5 01:40 .. -rw-r--r--. 1 root root 0 Apr 5 13:08 cgroup.clone_children -rw-r--r--. 1 root root 0 Apr 5 01:40 cgroup.procs -r--r--r--. 1 root root 0 Apr 5 13:08 cgroup.sane_behavior -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.stat -rw-r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage_all -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage_percpu -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage_percpu_sys -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage_percpu_user -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage_sys -r--r--r--. 1 root root 0 Apr 5 13:08 cpuacct.usage_user -rw-r--r--. 1 root root 0 Apr 5 09:10 cpu.cfs_period_us -rw-r--r--. 1 root root 0 Apr 5 13:08 cpu.cfs_quota_us -rw-r--r--. 1 root root 0 Apr 5 09:10 cpu.shares -r--r--r--. 1 root root 0 Apr 5 13:08 cpu.stat -rw-r--r--. 1 root root 0 Apr 5 13:08 notify_on_release -rw-r--r--. 1 root root 0 Apr 5 13:08 release_agent -rw-r--r--. 1 root root 0 Apr 5 13:08 tasks OpenShift uses the Completely Fair Scheduler (CFS) to limit CPU time. Here\u0026rsquo;s a quick excerpt from the kernel documentation:\nQuota and period are managed within the cpu subsystem via cgroupfs.\ncpu.cfs_quota_us: the total available run-time within a period (in microseconds) cpu.cfs_period_us: the length of a period (in microseconds) cpu.stat: exports throttling statistics [explained further below]\nThe default values are: cpu.cfs_period_us=100ms cpu.cfs_quota=-1\nA value of -1 for cpu.cfs_quota_us indicates that the group does not have any bandwidth restriction in place, such a group is described as an unconstrained bandwidth group. This represents the traditional work-conserving behavior for CFS.\nWriting any (valid) positive value(s) will enact the specified bandwidth limit. The minimum quota allowed for the quota or period is 1ms. There is also an upper bound on the period length of 1s. Additional restrictions exist when bandwidth limits are used in a hierarchical fashion, these are explained in more detail below.\nWriting any negative value to cpu.cfs_quota_us will remove the bandwidth limit and return the group to an unconstrained state once more.\nAny updates to a group\u0026rsquo;s bandwidth specification will result in it becoming unthrottled if it is in a constrained state.\nLet\u0026rsquo;s see if inspecting cpu.cfs_quota_us can help us:\n$ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us 10000 Now we\u0026rsquo;re getting somewhere. But what does 10000 mean here? OpenShift operates on the concept of millicores of CPU time, or 1/1000 of a CPU. 500 millicores is half a CPU and 1000 millicores is a whole CPU.\nThe pod in this example is assigned 100 millicores. 
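The arithmetic behind that number uses both the quota and the period. Assuming the default 100ms (100000 microsecond) period described in the documentation above, a quick one-liner shows it (my own sketch, not from the original post):
# millicores = cfs_quota_us / cfs_period_us * 1000
# 10000 / 100000 * 1000 = 100 millicores
echo $(( $(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us) * 1000 / $(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us) ))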
Now we know that we can take the output of /sys/fs/cgroup/cpu/cpu.cfs_quota_us, divide by 100, and get our millicores.\nWe can make a script like this:\nCFS_QUOTA=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us) if [ $CFS_QUOTA -lt 100000 ]; then CPUS_AVAILABLE=1 else CPUS_AVAILABLE=$(expr ${CFS_QUOTA} / 100 / 1000) fi echo \u0026#34;Found ${CPUS_AVAILABLE} CPUS\u0026#34; make -j${CPUS_AVAILABLE} ... The script checks for the value of the quota and divides by 100,000 to get the number of cores. If the share is set to something less than 100,000, then a core count of 1 is assigned. (Pro tip: make does not like being told to compile with zero jobs.)\nReading memory limits #There are other limits you can read and inspect in a pod, including the available RAM. As we found with nproc, free is not very helpful:\n# An OpenShift pod with 200MB RAM $ free -m total used free shared buff/cache available Mem: 32008 12322 880 31 18805 19246 Swap: 0 0 0 But the cgroups tell the truth:\n$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes 209715200 If you run Java applications in a container, like Jenkins (or Jenkins slaves), be sure to use the -XX:+UseCGroupMemoryLimitForHeap option. That will cause Java to look at the cgroups to determine its heap size.\nPhoto credit: Wikipedia\n","date":"5 April 2019","permalink":"/p/inspecting-openshift-cgroups-from-inside-the-pod/","section":"Posts","summary":"My team at Red Hat builds a lot of kernels in OpenShift pods as part of our work with the Continuous Kernel Integration (CKI) project.","title":"Inspecting OpenShift cgroups from inside the pod"},{"content":"My work at Red Hat involves testing lots and lots of kernels from various sources and we use GitLab CE to manage many of our repositories and run our CI jobs. Those jobs run in thousands of OpenShift containers that we spawn every day.\nOpenShift has some handy security features that we like. First, each container is mounted read-only with some writable temporary space (and any volumes that you mount). Also, OpenShift uses arbitrarily assigned user IDs (UIDs) for each container.\nConstantly changing UIDs provide some good protection against container engine vulnerabilities, but they can be a pain if you have a script or application that depends on being able to resolve a UID or GID back to a real user or group account.\nAnsible and UIDs #If you run an Ansible playbook within OpenShift, you will likely run into a problem during the fact gathering process:\n$ ansible-playbook -i hosts playbook.yml PLAY [all] ********************************************************************* TASK [Gathering Facts] ********************************************************* An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: \u0026#39;getpwuid(): uid not found: 1000220000\u0026#39; fatal: [localhost]: FAILED! 
=\u0026gt; {\u0026#34;msg\u0026#34;: \u0026#34;Unexpected failure during module execution.\u0026#34;, \u0026#34;stdout\u0026#34;: \u0026#34;\u0026#34;} to retry, use: --limit @/major-ansible-messaround/playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 This exception is telling us that getpwuid() was not able to find an entry in /etc/passwd for our UID (1000220000 in this container).\nOne option would be to adjust the playbook so that we skip the fact gathering process:\n- hosts: all gather_facts: no tasks: - name: Run tests command: ./run_tests.sh However, this might not be helpful if you need facts to be gathered for your playbook to run. In that case, you need to make some adjustments to your container image first.\nUpdating the container #Nothing in the container image is writable within OpenShift, but we can change certain files to be group writable for the root user since every OpenShift user has an effective GID of 0.\nWhen you build your container, add a line to your Dockerfile to allow the container user to have group write access to /etc/passwd and /etc/group:\n# Make Ansible happy with arbitrary UID/GID in OpenShift. RUN chmod g=u /etc/passwd /etc/group Once your container has finished building, the permissions on both files should look like this:\n$ ls -al /etc/passwd /etc/group -rw-rw-r--. 1 root root 514 Mar 20 18:12 /etc/group -rw-rw-r--. 1 root root 993 Mar 20 18:12 /etc/passwd Make a user account #Now that we\u0026rsquo;ve made these files writable for our user in OpenShift, it\u0026rsquo;s time to change how we run our GitLab CI job. My job YAML currently looks like this:\nansible_test: image: docker.io/major/ansible:fedora29 script: - ansible-playbook -i hosts playbook.yml We can add two lines that allow us to make a temporary user and group account for our OpenShift user:\nansible_test: image: docker.io/major/ansible:fedora29 script: - echo \u0026#34;tempuser:x:$(id -u):$(id -g):,,,:${HOME}:/bin/bash\u0026#34; \u0026gt;\u0026gt; /etc/passwd - echo \u0026#34;tempuser:x:$(id -G | cut -d\u0026#39; \u0026#39; -f 2)\u0026#34; \u0026gt;\u0026gt; /etc/group - id - ansible-playbook -i hosts playbook.yml Note that we want the second GID returned by id since the first one is 0. The id command helps us check our work when the container starts. 
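As a quick aside, once those echo lines have run you can confirm that UID resolution works with getent, which reads /etc/passwd the same way Ansible's fact gathering does. This check is my own addition rather than part of the original job:
# Resolve the container's UID back to the temporary account we created.
$ getent passwd "$(id -u)"
tempuser:x:1000220000:0:,,,:<your home directory>:/bin/bash
The UID and home directory will vary from project to project.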
When the CI job starts, we should see some better output:\n$ echo \u0026#34;tempuser:x:$(id -u):$(id -g):,,,:${HOME}:/bin/bash\u0026#34; \u0026gt;\u0026gt; /etc/passwd $ echo \u0026#34;tempuser:x:$(id -G | cut -d\u0026#39; \u0026#39; -f 2)\u0026#34; \u0026gt;\u0026gt; /etc/group $ id uid=1000220000(tempuser) gid=0(root) groups=0(root),1000220000(tempuser) $ ansible-playbook -i hosts playbook.yml PLAY [all] ********************************************************************* TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Download kernel source] ************************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 Success!\n","date":"22 March 2019","permalink":"/p/running-ansible-in-openshift-with-arbitrary-uids/","section":"Posts","summary":"My work at Red Hat involves testing lots and lots of kernels from various sources and we use GitLab CE to manage many of our repositories and run our CI jobs.","title":"Running Ansible in OpenShift with arbitrary UIDs"},{"content":"After writing my last post on my IPv6 woes with my Pixel 3, some readers asked how I\u0026rsquo;m handling IPv6 on my router lately. I wrote about this previously when Spectrum was Time Warner Cable and I was using Mikrotik network devices.\nThere is a good post from 2015 on the blog and it still works today:\nTime Warner Road Runner, Linux, and large IPv6 subnets I am still using that same setup today, but some readers found it difficult to find the post since Time Warner Cable has renamed to Spectrum. Don\u0026rsquo;t worry \u0026ndash; the old post still works. :)\n","date":"19 March 2019","permalink":"/p/get-a-slash-56-from-spectrum-using-wide-dhcpv6/","section":"Posts","summary":"After writing my last post on my IPv6 woes with my Pixel 3, some readers asked how I\u0026rsquo;m handling IPv6 on my router lately.","title":"Get a /56 from Spectrum using wide-dhcpv6"},{"content":"We have two Google Pixel phones in our house: a Pixel 2 and a Pixel 3. Both of them drop off our home wireless network regularly. It causes lots of problems with various applications on the phones, especially casting video via Chromecast.\nAt the time when I first noticed the drops, I was using a pair of wireless access points (APs) from Engenius:\nEAP600 EAP1200H Also, here\u0026rsquo;s what I knew at the time:\nMac and Linux computers had no Wi-Fi issues at all The signal level from both APs was strong Disabling one AP made no improvement Disabling one band (2.4 or 5GHz) on the APs made no improvement Clearing the bluetooth/Wi-Fi data on the Pixel had no effect Assigning a static IP address on the Pixel made no improvement Using unencrypted SSIDs made no improvement At this point, I felt strongly that the APs had nothing to do with it. I ordered a new NetGear Orbi mesh router and satellite anyway. 
The Pixels still dropped off the wireless network even with the new Orbi APs.\nReading logs #I started reading logs from every source I could find:\ndhcpd logs from my router syslogs from my APs (which forwarded into the router) output from tcpdump on my router Several things became apparent after reading the logs:\nThe Wi-Fi drop occurred usually every 30-60 seconds The DHCP server received requests for a new IP address after every drop None of the network traffic from the phones was being blocked at the router The logs from the APs showed the phone disconnecting itself from the network; the APs were not forcing the phones off the network All of the wireless and routing systems in my house seemed to point to a problem in the phones themselves. They were voluntarily dropping from the network without being bumped off by APs or the router.\nGetting logs from the phone #It was time to get some logs from the phone itself. That would require connecting the phone via USB to a computer and enabling USB debugging on the phone.\nFirst, I downloaded the Android SDK. The full studio release isn\u0026rsquo;t needed \u0026ndash; scroll down and find the Command line tools only section. Unzip the download and find the tools/bin/sdkmanager executable. Run it like this:\n# Fedora 29 systems may need to choose the older Java version for sdkmanager # to run properly. export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-2.fc29.x86_64/jre # Install the android-28 platform tools ./sdkmanager \u0026#34;platform-tools\u0026#34; \u0026#34;platforms;android-28\u0026#34; Now we need to enable USB debugging on the phone itself. Be sure to disable USB debugging when you are done! Follow these steps:\nGo into the phone\u0026rsquo;s settings and choose About Phone from the bottom of the list. Scroll to the bottom and tap the Build number section repeatedly until a message appears saying that you are now a developer. Go back one screen and tap System. Click Advanced to show the additional options and tap Developer Options. In the Debugging section, tap USB Debugging to enable USB debugging. Connect the phone to your computer via USB and run:\nsudo platform-tools/adb logcat Your screen will fill with logs from your phone.\nNuggets in the log #I watched the logs and waited for the Wi-Fi to drop. As soon as it dropped, I saw some interesting log messages:\nI wpa_supplicant: wlan0: CTRL-EVENT-AVOID-FREQ ranges=5785-5825 I chatty : uid=1000(system) IpClient.wlan0 expire 3 lines I chatty : uid=1000 system_server expire 1 line D CommandListener: Setting iface cfg E cnss-daemon: wlan_service_update_sys_param: unable to open /proc/sys/net/ipv4/tcp_use_userconfig I chatty : uid=1000(system) android.fg expire 1 line I wpa_supplicant: wlan0: CTRL-EVENT-DISCONNECTED bssid=88:dc:96:4a:b6:75 reason=3 locally_generated=1 I chatty : uid=10025 com.google.android.gms.persistent expire 7 lines V NativeCrypto: Read error: ssl=0x7b349e2d08: I/O error during system call, Software caused connection abort V NativeCrypto: Write error: ssl=0x7b349e2d08: I/O error during system call, Broken pipe V NativeCrypto: Write error: ssl=0x7b349e2d08: I/O error during system call, Broken pipe V NativeCrypto: SSL shutdown failed: ssl=0x7b349e2d08: I/O error during system call, Success D ConnectivityService: reportNetworkConnectivity(158, false) by 10025 The line with CTRL-EVENT-AVOID-FREQ isn\u0026rsquo;t relevant because it\u0026rsquo;s simply a hint to the wireless drivers to avoid certain frequencies not used in the USA. 
The CTRL-EVENT-DISCONNECTED shows where wpa_supplicant received the disconnection message. The last line with ConnectivityService was very interesting. Something in the phone believes there is a network connectivity issue. That could be why the Pixel is hopping off the wireless network.\nFrom there, I decided to examine only the ConnectivityService logs:\nsudo platform-tools/adb logcat \u0026#39;ConnectivityService:* *:S\u0026#39; This logcat line tells adb that I want all logs from all log levels about the ConnectivityService, but all of the other logs should be silenced. I started seeing some interesting details:\nD ConnectivityService: NetworkAgentInfo [WIFI () - 148] validation failed D ConnectivityService: Switching to new default network: NetworkAgentInfo{ ni{[type: MOBILE[LTE]... D ConnectivityService: Sending DISCONNECTED broadcast for type 1 NetworkAgentInfo [WIFI () - 148] isDefaultNetwork=true D ConnectivityService: Sending CONNECTED broadcast for type 0 NetworkAgentInfo [MOBILE (LTE) - 100] isDefaultNetwork=true D ConnectivityService: handleNetworkUnvalidated NetworkAgentInfo [WIFI () - 148] ... Wait, what is this \u0026ldquo;validation failed\u0026rdquo; message? The Pixel was making network connections successfully the entire time as shown by tcpdump. This is part of Android\u0026rsquo;s [network connecivity checks] for various networks.\nThe last few connections just before the disconnect were to connectivitycheck.gstatic.com (based on tcpdump logs) and that\u0026rsquo;s Google\u0026rsquo;s way of verifying that the wireless network is usable and that there are no captive portals. I connected to it from my desktop on IPv4 and IPv6 to verify:\n$ curl -4 -i https://connectivitycheck.gstatic.com/generate_204 HTTP/2 204 date: Sun, 17 Mar 2019 15:00:30 GMT alt-svc: quic=\u0026#34;:443\u0026#34;; ma=2592000; v=\u0026#34;46,44,43,39\u0026#34; $ curl -6 -i https://connectivitycheck.gstatic.com/generate_204 HTTP/2 204 date: Sun, 17 Mar 2019 15:00:30 GMT alt-svc: quic=\u0026#34;:443\u0026#34;; ma=2592000; v=\u0026#34;46,44,43,39\u0026#34; Everything looked fine.\nHeading to Google #After a bunch of searching on Google, I kept finding posts talking about disabling IPv6 to fix the Wi-Fi drop issues. I shrugged it off and kept searching. Finally, I decided to disable IPv6 and see if that helped.\nI stopped radvd on the router, disabled Wi-Fi on the phone, and then re-enabled it. As I watched, the phone stayed on the wireless network for two minutes. Three minutes. Ten minutes. There were no drops.\nAt this point, this is still an unsolved mystery for me. Disabling IPv6 is a terrible idea, but it keeps my phones online. I plan to put the phones on their own VLAN without IPv6 so I can still keep IPv6 addresses for my other computers, but this is not a good long term fix. 
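For anyone who wants to try the same workaround, disabling IPv6 in my case simply meant stopping router advertisements so the phones never configure an IPv6 address. On a Linux router running radvd that is roughly the following; adjust for whatever routing software you actually run:
# Stop advertising IPv6 prefixes on the LAN.
sudo systemctl stop radvd
# Keep it from starting again at the next reboot while testing.
sudo systemctl disable radvd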
If anyone has any input on why this helps and how I can get IPv6 re-enabled, please let me know!\nUpdate 2019-03-18 #Several readers wanted to see what was happening just before the Wi-Fi drop, so here\u0026rsquo;s a small snippet from tcpdump:\n07:26:06.736900 IP6 2607:f8b0:4000:80d::2003.443 \u0026gt; phone.41310: Flags [F.], seq 3863, ack 511, win 114, options [nop,nop,TS val 1288800272 ecr 66501414], length 0 07:26:06.743101 IP6 2607:f8b0:4000:80d::2003.443 \u0026gt; phone.41312: Flags [F.], seq 3864, ack 511, win 114, options [nop,nop,TS val 1778536228 ecr 66501414], length 0 07:26:06.765444 IP6 phone.41312 \u0026gt; 2607:f8b0:4000:80d::2003.443: Flags [R], seq 4183481455, win 0, length 0 07:26:06.765454 IP6 phone.41310 \u0026gt; 2607:f8b0:4000:80d::2003.443: Flags [R], seq 3279990707, win 0, length 0 07:26:07.487180 IP6 2607:f8b0:4000:80d::2003.443 \u0026gt; phone.41316: Flags [F.], seq 3863, ack 511, win 114, options [nop,nop,TS val 4145292968 ecr 66501639], length 0 07:26:07.537080 IP6 phone.41316 \u0026gt; 2607:f8b0:4000:80d::2003.443: Flags [R], seq 4188442452, win 0, length 0 That IPv6 address is at a Google PoP in Dallas, TX:\n$ host 2607:f8b0:4000:80d::2003 3.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.d.0.8.0.0.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa domain name pointer dfw06s49-in-x03.1e100.net. I haven\u0026rsquo;t been able to intercept the traffic via man-in-the-middle since Google\u0026rsquo;s certificate checks are very strict. However, checks from my own computer work without an issue:\n$ curl -ki \u0026#34;https://[2607:f8b0:4000:80d::2003]/generate_204\u0026#34; HTTP/2 204 date: Mon, 18 Mar 2019 12:35:18 GMT alt-svc: quic=\u0026#34;:443\u0026#34;; ma=2592000; v=\u0026#34;46,44,43,39\u0026#34; ","date":"17 March 2019","permalink":"/p/pixel-3-wifi-drops-constantly/","section":"Posts","summary":"We have two Google Pixel phones in our house: a Pixel 2 and a Pixel 3.","title":"Pixel 3 Wi-Fi drops constantly"},{"content":"I recently picked up a Dell Optiplex 7060 and I\u0026rsquo;m using it as my main workstation now. The Fedora installation was easy, but I noticed a variety of \u0026ldquo;pop\u0026rdquo; or clicking sounds when audio played, especially terminal bells.\nIf everything was quiet and I triggered a terminal bell, I would hear a loud pop just before the terminal bell sound. However, if I played music and then triggered a terminal bell, the pop was gone.\nA quick Google search told me that the likely culprit was power saving settings on my Intel HD Audio chipset:\n$ lspci | grep Audio 00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10) Fixing it #There\u0026rsquo;s a handy power saving tunable available at /sys/module/snd_hda_intel/parameters/power_save that can be usd to adjust the timeout or disable power savings entirely. In my case, the delay was set to one second.\n$ cat /sys/module/snd_hda_intel/parameters/power_save 1 That would be good for a laptop use case, but my workstation is always plugged in. I disabled it by setting it to zero:\n# echo 0 \u0026gt; /sys/module/snd_hda_intel/parameters/power_save $ cat /sys/module/snd_hda_intel/parameters/power_save 0 And the pops are gone! My Klipsch speakers have a built in amplifier and it was likely the abrupt changes in current that was causing the popping noises.\nThis setting will last until you reboot. 
You can make it permanent by adding this text to /etc/modprobe.d/audio_disable_powersave.conf:\noptions snd_hda_intel power_save=0 If you\u0026rsquo;re a laptop user and you want power savings but fewer pops, you could increase the delay to a more acceptable number. For example, setting it to 60 would mean that the card will power down after 60 seconds of silence. Just remember that you\u0026rsquo;ll get a nice pop when the 60 seconds has passed and a new sound is played.\nLearning more #Diving into the kernel code reveals the tunable in /sound/pci/hda/hda_intel.c:\nstatic int power_save = CONFIG_SND_HDA_POWER_SAVE_DEFAULT; module_param(power_save, xint, 0644); MODULE_PARM_DESC(power_save, \u0026#34;Automatic power-saving timeout \u0026#34; \u0026#34;(in second, 0 = disable).\u0026#34;); The default comes from a kernel config option: CONFIG_SND_HDA_POWER_SAVE_DEFAULT. Most kernel packages on most distributions provide access to the kernel config file that was used to build the kernel originally. It\u0026rsquo;s often found in /boot (named the same as the kernel version) or it might be available at /proc/config.gz.\nFor Fedora, the kernel config is provided in /boot whenever a new kernel is is installed. I inspected mine and found:\n$ grep HDA_POWER_SAVE_DEFAULT /boot/config-4.20.13-200.fc29.x86_64 CONFIG_SND_HDA_POWER_SAVE_DEFAULT=1 The power_save setting is applied in /sound/pci/hda/hda_codec.c:\n/** * snd_hda_set_power_save - reprogram autosuspend for the given delay * @bus: HD-audio bus * @delay: autosuspend delay in msec, 0 = off * * Synchronize the runtime PM autosuspend state from the power_save option. */ void snd_hda_set_power_save(struct hda_bus *bus, int delay) { struct hda_codec *c; list_for_each_codec(c, bus) codec_set_power_save(c, delay); } EXPORT_SYMBOL_GPL(snd_hda_set_power_save); We can look where codec_set_power_save is defined in the same file to learn more:\n#ifdef CONFIG_PM static void codec_set_power_save(struct hda_codec *codec, int delay) { struct device *dev = hda_codec_dev(codec); if (delay == 0 \u0026amp;\u0026amp; codec-\u0026gt;auto_runtime_pm) delay = 3000; if (delay \u0026gt; 0) { pm_runtime_set_autosuspend_delay(dev, delay); pm_runtime_use_autosuspend(dev); pm_runtime_allow(dev); if (!pm_runtime_suspended(dev)) pm_runtime_mark_last_busy(dev); } else { pm_runtime_dont_use_autosuspend(dev); pm_runtime_forbid(dev); } } This logic looks to see if CONFIG_PM is set to know if power management is desired at all. From there, it checks if we disabled power saving but there\u0026rsquo;s a discrete graphics card involved (codec-\u0026gt;auto_runtime_pm). This check is important because the discrete graphics card cannot power down unless the HDA card suspends at the same time.\nNext, there\u0026rsquo;s a check to see if the delay is greater than 0. This would be the case if CONFIG_SND_HDA_POWER_SAVE_DEFAULT was set to 1 (Fedora\u0026rsquo;s default). If so, the proper auto suspend delays are set.\nIf the delay is 0, then autosuspend is disabled and removed from power management entirely. This is the option I chose and it\u0026rsquo;s working great.\n","date":"4 March 2019","permalink":"/p/stop-audio-pops-on-intel-hd-audio/","section":"Posts","summary":"I recently picked up a Dell Optiplex 7060 and I\u0026rsquo;m using it as my main workstation now.","title":"Stop audio pops on Intel HD Audio"},{"content":"The i3 window manager is a fast window manager that helps you keep all of your applications in the right place. 
It automatically tiles windows and can manage those tiles across multiple virtual desktops.\nHowever, there are certain applications that I really prefer in a floating window. Floating windows do not get tiled and they can easily be dragged around with your mouse. They\u0026rsquo;re the type of windows you expect to see on other non-tiling desktops such as GNOME or KDE.\nConvert a window to floating temporarily #If you have an existing window that you prefer to float, select that window and press Mod + Shift + Space bar. The window will pop up in front of the tiled windows and you can easily move it with your mouse.\nDepending on your configuration, you may be able to resize it by grabbing a corner of the window with your mouse. You can also assign a key combination for resizing in your i3 configuration file (usually ~/.config/i3/config):\n# resize window (you can also use the mouse for that) mode \u0026#34;resize\u0026#34; { bindsym Left resize shrink width 10 px or 10 ppt bindsym Down resize grow height 10 px or 10 ppt bindsym Up resize shrink height 10 px or 10 ppt bindsym Right resize grow width 10 px or 10 ppt bindsym Return mode \u0026#34;default\u0026#34; bindsym Escape mode \u0026#34;default\u0026#34; bindsym $mod+r mode \u0026#34;default\u0026#34; } bindsym $mod+r mode \u0026#34;resize\u0026#34; With this configuration, simply press Mod + r and use the arrow keys to grow or shrink the window\u0026rsquo;s borders.\nAlways float certain windows #For those windows that you always want to be floating no matter what, i3 has a solution for that, too. Just tell i3 how to identify your windows and ensure floating enable appears in the i3 config:\nfor_window [window_role=\u0026#34;About\u0026#34;] floating enable for_window [class=\u0026#34;vlc\u0026#34;] floating enable for_window [title=\u0026#34;Authy\u0026#34;] floating enable In the example above, I have a few windows always set to be floating:\n[window_role=\u0026quot;About\u0026quot;] - Any of the \u0026ldquo;About\u0026rdquo; windows in various applications that are normally opened by Help -\u0026gt; About. [class=\u0026quot;vlc\u0026quot;] - The VLC media player can be a good one to float if you need to stuff it away in a corner. [title=\u0026quot;Authy\u0026quot;] - Authy\u0026rsquo;s chrome extension looks downright silly as a tiled window. Any time these windows are spawned, they will automatically appear as floating windows. You can always switch them back to tiled manually by pressing Mod + Shift + Space bar.\nIdentifying windows #Identifying windows in the way that i3 cares about can be challenging. Knowing when to use window_role or class for a window isn\u0026rsquo;t very intuitive. Fortunately, there\u0026rsquo;s a great script from an archived i3 faq thread that makes this easy:\nDownload this script to your system, make it executable (chmod +x i3-get-window-criteria), and run it. As soon as you do that, a plus (+) icon will replace your normal mouse cursor. 
Click on the window you care about and look for the output in your terminal where you ran the i3-get-window-criteria script.\nOn my system, clicking on a terminator terminal window gives me:\n[class=\u0026#34;Terminator\u0026#34; id=37748743 instance=\u0026#34;terminator\u0026#34; title=\u0026#34;major@indium:~\u0026#34;] If I wanted to float all terminator windows, I could add this to my i3 configuration file:\nfor_window [class=\u0026#34;Terminator\u0026#34;] floating enable Float in a specific workspace #Do you need a window to always float on a specific workspace? i3 can do that, too!\nLet\u0026rsquo;s go back to the example with VLC. Let\u0026rsquo;s consider that we have a really nice 4K display where we always want to watch movies and that\u0026rsquo;s where workspace 2 lives. We can tell i3 to always float the VLC window on workspace 2 with this configuration:\nset $ws1 \u0026#34;1: main\u0026#34; set $ws2 \u0026#34;2: 4kdisplay\u0026#34; for_window [class=\u0026#34;vlc\u0026#34;] floating enable for_window [class=\u0026#34;vlc\u0026#34;] move to workspace $ws2 Restart i3 to pick up the new changes (usually Mod + Shift + R) and start VLC. It should appear on workspace 2 as a floating window!\n","date":"8 February 2019","permalink":"/p/automatic-floating-windows-in-i3/","section":"Posts","summary":"The i3 window manager is a fast window manager that helps you keep all of your applications in the right place.","title":"Automatic floating windows in i3"},{"content":"","date":null,"permalink":"/tags/devconf/","section":"Tags","summary":"","title":"Devconf"},{"content":"DevConf.CZ 2019 wrapped up last weekend and it was a great event packed with lots of knowledgeable speakers, an engaging hallway track, and delicious food. This was my first trip to any DevConf and it was my second trip to Brno.\nLots of snow showed up on the second day and more snow arrived later in the week!\nFirst talk of 2019 #I co-presented a talk with one of my teammates, Nikolai, about some of the fun work we\u0026rsquo;ve been doing at Red Hat to improve the quality of the Linux kernel in an automated way. The room was full and we had lots of good questions at the end of the talk. We also received some feedback that we could take back to the team to change how we approached certain parts of the kernel testing.\nOur project, called Continuous Kernel Integration (CKI), has a goal of reducing the amount of bugs that are merged into the Linux kernel. This requires lots of infrastructure, automation, and testing capabilities. We shared information about our setup, the problems we\u0026rsquo;ve found, and where we want to go in the future.\nFeel free to view our slides and watch the video (which should be up soon.)\nGreat talks from DevConf #My favorite talk of the conference was Laura Abbott\u0026rsquo;s \u0026ldquo;Monsters, Ghosts, and Bugs.\u0026rdquo;\nIt\u0026rsquo;s the most informative, concise, and sane review of how all the Linux kernels on the planet fit together. From the insanity of linux-next to the wild world of being a Linux distribution kernel maintainer, she helped us all understand the process of how kernels are maintained. She also took time to help the audience understand which kernels are most important to them and how they can make the right decisions about the kernel that will suit their needs. There are plenty of good points in my Twitter thread about her talk.\nDan Walsh gave a detailed overview of how to use Podman instead of Docker. 
He talked about the project\u0026rsquo;s origins and some of the incorrect assumptions that many people have (that running containers means only running Docker). Running containers without root has plenty of benefits. In addition, a significant amount of work has been done to speed up container pulls and pushes in Podman. I took some notes on Dan\u0026rsquo;s talk in a thread on Twitter.\nThe firewalld package has gained some new features recently and it\u0026rsquo;s poised to fully take advantage of nftables in Fedora 31! Using nftables means that firewall updates are done faster with fewer hiccups in busy environments (think OpenStack and Kubernetes). In addition, nftables can apply rules to IPv4 and IPv6 simultaneously, depeending on your preferences. My firewalld Twitter thread has more details from the talk.\nThe cgroups v2 subsystem was a popular topic in a few of the talks I visited, including the lightning talks. There are plenty of issues to get it working with Kubernetes and container management systems. It\u0026rsquo;s also missing the freezer capability from the original cgroups implementation. Without that, pausing a container, or using technology like CRIU, simply won\u0026rsquo;t work. Nobody could name a Linux distribution that has cgroups v2 enabled at the moment, and that\u0026rsquo;s not helping the effort move forward. Look for more news on this soon.\nOpenShift is quickly moving towards offering multiple architectures as a first class product feature. That would incluve aarch64, ppc64le, and s390x in addition to the existing x86_64 support. Andy McCrae and Jeff Young had a talk detailing many of the challenges along with lots of punny references to various \u0026ldquo;arches\u0026rdquo;. I made a Twitter thread of the main points from the OpenShift talk.\nSome of the other news included:\nreal-time linux patches are likely going to be merged into mainline. (only 15 years in the making!) Fedora, CentOS, RHEL and EPEL communities are eager to bring more of their processes together and make it easier for contributors to join in. Linux 5.0 is no more exciting than 4.20. It would have been 4.21 if Linus had an extra finger or toe. DevConf.US Boston 2019 #The next DevConf.US is in Boston, USA this summer. I hope to see you there!\n","date":"31 January 2019","permalink":"/p/devconf/","section":"Posts","summary":"DevConf.CZ 2019 wrapped up last weekend and it was a great event packed with lots of knowledgeable speakers, an engaging hallway track, and delicious food.","title":"DevConf.CZ 2019 Recap"},{"content":"","date":null,"permalink":"/tags/events/","section":"Tags","summary":"","title":"Events"},{"content":"","date":null,"permalink":"/tags/red-hat/","section":"Tags","summary":"","title":"Red Hat"},{"content":"","date":null,"permalink":"/tags/performance/","section":"Tags","summary":"","title":"Performance"},{"content":"Fedora 29 now has kernel 4.20 available and it has lots of new features. One of the more interesting and easy to use features is the pressure stall information interface.\nLoad average #We\u0026rsquo;re all familiar with the load average measurement on Linux machines, even if the numbers do seem a bit cryptic:\n$ w 10:55:46 up 11 min, 1 user, load average: 0.42, 0.39, 0.26 The numbers denote how many processes were active over the last one, five and 15 minutes. In my case, I have a system with four cores. My numbers above show that less than one process was active in the last set of intervals. 
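A rough way to put those numbers in context is to compare them against the core count; this is my usual rule of thumb rather than anything from the kernel documentation:
# Load average counts runnable (and uninterruptibly sleeping) tasks.
# A 1 minute load of 0.42 against 4 cores is only about 10% of capacity.
$ nproc
4
$ awk '{print $1}' /proc/loadavg
0.42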
That means that my system isn\u0026rsquo;t doing very much and processes are not waiting in the queue.\nHowever, if I begin compiling a kernel with eight threads (double my core count), the numbers change dramatically:\n$ w 11:00:28 up 16 min, 1 user, load average: 4.15, 1.89, 0.86 The one minute load average is now over four, which means some processes are waiting to be served on the system. This makes sense because I am using eight threads to compile a kernel on a system with four cores.\nMore detail #We assume that the CPU is the limiting factor in the system since we know that compiling a kernel takes lots of CPU time. We can verify (and quantify) that with the pressure stall information available in 4.20.\nWe start by taking a look in /proc/pressure:\n$ head /proc/pressure/* ==\u0026gt; /proc/pressure/cpu \u0026lt;== some avg10=71.37 avg60=57.25 avg300=23.83 total=100354487 ==\u0026gt; /proc/pressure/io \u0026lt;== some avg10=0.17 avg60=0.13 avg300=0.24 total=8101378 full avg10=0.00 avg60=0.01 avg300=0.16 total=5866706 ==\u0026gt; /proc/pressure/memory \u0026lt;== some avg10=0.00 avg60=0.00 avg300=0.00 total=0 full avg10=0.00 avg60=0.00 avg300=0.00 total=0 But what do these numbers mean? The shortest explanation is in the patch itself:\nPSI aggregates and reports the overall wallclock time in which the tasks in a system (or cgroup) wait for contended hardware resources.\nThe numbers here are percentages, not time itself:\nThe averages give the percentage of walltime in which one or more tasks are delayed on the runqueue while another task has the CPU. They\u0026rsquo;re recent averages over 10s, 1m, 5m windows, so you can tell short term trends from long term ones, similarly to the load average.\nWe can try to apply some I/O pressure by making a big tarball of a kernel source tree:\n$ head /proc/pressure/* ==\u0026gt; /proc/pressure/cpu \u0026lt;== some avg10=1.33 avg60=10.07 avg300=26.83 total=262389574 ==\u0026gt; /proc/pressure/io \u0026lt;== some avg10=40.53 avg60=13.27 avg300=3.46 total=20451978 full avg10=37.44 avg60=12.40 avg300=3.21 total=16659637 ==\u0026gt; /proc/pressure/memory \u0026lt;== some avg10=0.00 avg60=0.00 avg300=0.00 total=0 full avg10=0.00 avg60=0.00 avg300=0.00 total=0 The CPU is still under some stress here, but the I/O is now the limiting factor.\nThe output also shows a total= number, and that is explained in the patch as well:\nThe total= value gives the absolute stall time in microseconds. This allows detecting latency spikes that might be too short to sway the running averages. It also allows custom time averaging in case the 10s/1m/5m windows aren\u0026rsquo;t adequate for the usecase (or are too coarse with future hardware).\nThe total number can be helpful for machines that run for a long time, especially when you graph them and you monitor them for trends.\n","date":"27 January 2019","permalink":"/p/using-the-pressure-stall-information-interface-in-kernel-4/","section":"Posts","summary":"Fedora 29 now has kernel 4.","title":"Using the pressure stall information interface in kernel 4.20"},{"content":"","date":null,"permalink":"/tags/homeassistant/","section":"Tags","summary":"","title":"Homeassistant"},{"content":"The Home Assistant project provides a great open source way to get started with home automtion that can be entirely self-contained within your home. 
It already has plenty of integrations with external services, but it can also monitor Z-Wave devices at your home or office.\nHere are my devices:\nMonoprice Z-Wave Garade Door Sensor Aeotec Z-Stick Gen5 (ZW090) Fedora Linux server with Docker installed Install the Z-Wave stick #Start by plugging the Z-Stick into your Linux server. Run lsusb and it should appear in the list:\n# lsusb | grep Z-Stick Bus 003 Device 006: ID 0658:0200 Sigma Designs, Inc. Aeotec Z-Stick Gen5 (ZW090) - UZB The system journal should also tell you which TTY is assigned to the USB stick (run journalctl --boot and search for ACM):\nkernel: usb 3-3.2: USB disconnect, device number 4 kernel: usb 3-1: new full-speed USB device number 6 using xhci_hcd kernel: usb 3-1: New USB device found, idVendor=0658, idProduct=0200, bcdDevice= 0.00 kernel: usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 kernel: cdc_acm 3-1:1.0: ttyACM0: USB ACM device kernel: usbcore: registered new interface driver cdc_acm kernel: cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters In my case, my device is /dev/ttyACM0. If you have other serial devices attached to your system, your Z-Stick may show up as ttyACM1 or ttyACM2.\nUsing Z-Wave in the Docker container #If you use docker-compose, simply add a devices section to your existing YAML file:\nversion: \u0026#39;2\u0026#39; services: home-assistant: ports: - \u0026#34;8123:8123/tcp\u0026#34; network_mode: \u0026#34;host\u0026#34; devices: - /dev/ttyACM0 volumes: - /etc/localtime:/etc/localtime:ro - /mnt/raid/hass/:/config:Z image: homeassistant/home-assistant restart: always You can add the device to manual docker run commands by adding --device /dev/ttyACM0 to your existing command line.\nPairing #For this step, always refer to the instructions that came with your Z-Wave device since some require different pairing steps. In my case, I installed the battery, pressed the button inside the sensor, and paired the device:\nGo to the Home Assistant web interface Click Configuration on the left Click Z-Wave on the right Click Add Node and follow the steps on screen Understanding how the sensor works #Now that the sensor has been added, we need to understand how it works. One of the entities the sensor provides is an alarm_level. It has two possible values:\n0: the sensor is tilted vertically (garage door is closed) 255: the sensor is tilted horizontally (garage door is open) If the sensor changes from 0 to 255, then someone opened the garage door. Closing the door would result in the sensor changing from 255 to 0.\nAdding automation #Let\u0026rsquo;s add automation to let us know when the door is open:\nClick Configuration on the left Click Automation on the right Click the plus (+) at the bottom right Set a good name (like \u0026ldquo;Garage door open\u0026rdquo;) Under triggers, look for Vision ZG8101 Garage Door Detector Alarm Level and select it Set From to 0 Set To to 255 Leave the For spot empty Now that we can detect the garage door being open, we need a notification action. I love PushBullet and I have an action set up for PushBullet notifications already. 
Here\u0026rsquo;s how to use an action:\nSelect Call Service for Action Type in the Actions section Select a service to call when the trigger occurs Service data should contain the json that contains the notification message and title Here\u0026rsquo;s an example of my service data:\n{ \u0026#34;message\u0026#34;: \u0026#34;Someone opened the garage door at home.\u0026#34;, \u0026#34;title\u0026#34;: \u0026#34;Garage door opened\u0026#34; } Press the orange and white save icon at the bottom right and you are ready to go! You can tilt the sensor in your hand to test it or attach it to your garage door and test it there.\nIf you want to know when the garage door is closed, follow the same steps above, but use 255 for From and 0 for To.\n","date":"14 January 2019","permalink":"/p/running-home-assistant-in-a-docker-container-with-zwave-usb-stick/","section":"Posts","summary":"The Home Assistant project provides a great open source way to get started with home automtion that can be entirely self-contained within your home.","title":"Running Home Assistant in a Docker container with a Z-Wave USB stick"},{"content":"Managing iptables gets a lot easier with firewalld. You can manage rules for the IPv4 and IPv6 stacks using the same commands and it provides fine-grained controls for various \u0026ldquo;zones\u0026rdquo; of network sources and destinations.\nQuick example #Here\u0026rsquo;s an example of allowing an arbitrary port (for netdata) through the firewall with iptables and firewalld on Fedora:\n## iptables iptables -A INPUT -j ACCEPT -p tcp --dport 19999 ip6tables -A INPUT -j ACCEPT -p tcp --dport 19999 service iptables save service ip6tables save ## firewalld firewall-cmd --add-port=19999/tcp --permanent In this example, firewall-cmd allows us to allow a TCP port through the firewall with a much simpler interface and the change is made permanent with the --permanent argument.\nYou can always test a change with firewalld without making it permanent:\nfirewall-cmd --add-port=19999/tcp ## Do your testing to make sure everything works. firewall-cmd --runtime-to-permanent The --runtime-to-permanent argument tells firewalld to write the currently active firewall configuration to disk.\nAdding a port range #I use mosh with most of my servers since it allows me to reconnect to an existing session from anywhere in the world and it makes higher latency connections less painful. Mosh requires a range of UDP ports (60000 to 61000) to be opened.\nWe can do that easily in firewalld:\nfirewall-cmd --add-port=60000-61000/udp --permanent We can also see the rule it added to the firewall:\n# iptables-save | grep 61000 -A IN_public_allow -p udp -m udp --dport 60000:61000 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT # ip6tables-save | grep 61000 -A IN_public_allow -p udp -m udp --dport 60000:61000 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT If you haven\u0026rsquo;t used firewalld yet, give it a try! There\u0026rsquo;s a lot more documentation on common use cases in the Fedora firewalld documentation.\n","date":"4 January 2019","permalink":"/p/allow-port-range-with-firewalld/","section":"Posts","summary":"Managing iptables gets a lot easier with firewalld.","title":"Allow a port range with firewalld"},{"content":"Firefox has some great features, but one of my favorites is the ability to disable autoplay for videos. 
We\u0026rsquo;ve all had one of those moments: your speakers are turned up and you browse to a website with an annoying advertisement that plays immediately.\nI just want it to stop This feature stopped working for me somewhere in the Firefox 65 beta releases. Also, the usual setting in the preference page (under Privacy \u0026amp; Security) seems to be missing.\nLuckily we can edit Firefox\u0026rsquo;s configuration directly to get this feature working again. Open up a new browser tab, go to about:config, and adjust these settings:\nSet media.autoplay.default to 1 to disable video autoplay for all sites\nSet media.autoplay.allow-muted to false to disable video autoplay even for muted videos\nThose changes take effect for any new pages that you open after making the change.\n","date":"18 December 2018","permalink":"/p/disable-autoplay-for-videos-in-firefox-65/","section":"Posts","summary":"Firefox has some great features, but one of my favorites is the ability to disable autoplay for videos.","title":"Disable autoplay for videos in Firefox 65"},{"content":"","date":null,"permalink":"/tags/amateur-radio/","section":"Tags","summary":"","title":"Amateur Radio"},{"content":"Amateur radio is a fun way to mess around with technology, meet new people, and communicate off the grid. Talking directly to another radio on a single frequency (also called simplex) is the easiest way to get started. However, it can be difficult to communicate over longer distances without amplifiers, proper wiring, and antennas. This is where a radio repeater can help.\nWhat\u0026rsquo;s in scope #This post is focused on fairly local communication on VHF/UHF bands. The most common frequencies for local communication in these bands are:\n2 meters (~144-148MHz)* 70 centimeters (~420-450MHz)* * NOTE: Always consult the band plan for your area to see which part of the frequency band you could and should use.\nOf course, you can do some amazing things with weak signal VHF (which can be used to commuinicate over great distances), but we\u0026rsquo;re not talking about that here. The HAMSter Amateur Radio Group is a great place to get started with that.\nWe\u0026rsquo;re also not talking about radio bands longer than 2 meters (which includes high frequency (HF) bands). Some of those bands require advanced FCC licensing that takes additional studying and practice.\nKeeping it simple(x) #Simplex radio involves communication where radios are tuned to a single frequency and only one radio can transmit at a time. This is like a simple walkie-talkie. If one person is transmitting, everyone else listens. If someone else tries to transmit at the same time, then the waves will be garbled and nobody will be able to hear either person. This is often called \u0026ldquo;doubling up\u0026rdquo;.\nThis method works well when radios are in range of each other without a bunch of objects in between. However, it\u0026rsquo;s difficult to talk via simplex over great distances or around big obstables, such as mountains or hills.\nRepeaters #Repeaters are a little more complex to use, but they provide some great benefits. A repeater usually consists of one or two radios, one or two antennas, duplexers, and some other basic equipment. They receive a signel on one frequency and broadcast that same signal on another frequency. They often are mounted high on towers and this gives them a much better reach than antennas on your car or home.\nI enjoy using a repeater here in San Antonio called KE5HBB. 
The repeater has this configuration:\nDownlink: 145.370 Uplink: 144.770 Offset: -0.6 MHz Uplink Tone: 114.8 Downlink Tone: 114.8 Let\u0026rsquo;s make sense of this data:\nDownlink: This is the frequency that the repeater uses to transmit. In other words, when people talk on this repeater, this is the frequency you use to hear them.\nUplink: The receiver listens on this frequency. If you want to talk to people who are listening to this repeater, you need to transmit on this frequency.\nOffset: This tells you how to calculate the uplink frequency if it is not shown. This repeater has a negative 0.6 offset, so we can calculate the uplink frequency if it was not provided:\n145.370 - 0.600 = 144.770 Uplink/Downlink Tones: Your radio must transmit this tone to open the squelch on the repeater (more on this in a moment). The repeater will use the same tone to transmit, so we can configure our radio to listen for that tone and only open our squelch when it is detected. Opening the squelch #Transmitting radio waves uses a lot of power and it creates a lot of heat. There are parts of a radio that will wear out much more quickly if a radio is transmitting constantly. This is why receivers have a squelch. This means that a radio must transmit something strong enough on the frequency (or use a tone) to let the repeater know that it needs to repeat something.\nYou may come across repeaters with no tones listed (sometimes shown as PL). This means that you can just transmit on the uplin frequency and the repeater will repeat your signal. These repeaters are easy to use, but they can create problems.\nImagine if you\u0026rsquo;re traveling through an area and you\u0026rsquo;re using a frequency to talk to a friend in another car. As you\u0026rsquo;re driving, you move in range of a repeater that is listening on that frequeny. Suddenly your conversation is now being broadcasted through the repeater and everyone listening to that repeater must listen to you. This isn\u0026rsquo;t what you expected and it could be annoying to other listeners.\nAlso, in crowded urban areas, there\u0026rsquo;s always a chance that signals might end up on the repeater\u0026rsquo;s listening frequency unintentionally. That would cause the repeater to start transmitting and it would increase wear.\nTwo repeaters might be relatively close (or just out of range) and the tone helps each repeater identify its own valid radio traffic.\nTuning the tones #Most repeaters have a tone squelch. That means you can blast them with 100 watts of radio waves and they won\u0026rsquo;t repeat a thing until you transmit an inaudible tone at the beginning of your transmission.\nAs an example, in the case of KE5HBB, this tone is 114.8. You must configure a CTCSS tone on your radio so that the tone is transmitted as soon as you begin transmitting. That signals the repeater that it\u0026rsquo;s time to repeat. These signals aren\u0026rsquo;t audible to humans.\nIf you know you\u0026rsquo;re tuned to the right frequency to transmit (the uplink frequency), but the repeater won\u0026rsquo;t repeat your traffic, then you are most likely missing a tone. There\u0026rsquo;s also a chance that you programmed the uplink and downlink tones into your radio in reverse, so check that, too.\nRepeater transmit tone #Some receivers will transmit a tone when they broadcast back to you, but some won\u0026rsquo;t. 
If you can transmit but you can\u0026rsquo;t hear anyone else when they talk, double check your radio\u0026rsquo;s settings for a tone squelch on the receiving side. Your radio can also listen for these tones and only open its squelch when it hears them.\nI usually disable receiver squelch for tones on my radio since the repeater operator could disable that feature at any time and I wouldn\u0026rsquo;t be able to hear any transmissions since my radio would be waiting for the tone.\nTesting a repeater #First off, please don\u0026rsquo;t test a repeater unless you have a proper amateur radio license in your jurisdiction. In the United States, that\u0026rsquo;s the FCC. Don\u0026rsquo;t skip this step.\nOnce you get your repeater\u0026rsquo;s frequencies programmed into your radio properly and you\u0026rsquo;ve double checked the settings for sending tones, you can try \u0026ldquo;breaking the squelch.\u0026rdquo;\nPress the transmit button on your radio briefly for about half second and release. You should hear something when you do this. For some repeaters, you may hear a KERRRCHUNK noise. That\u0026rsquo;s the sound of the repeater squelch closing the transmission now that you\u0026rsquo;re done with your transmission. On other repeaters, you may hear some audible tones or beeps as soon as you release the transmit button.\nOnce you have it working properly, stop breakng the squelch and introduce yourself! For example, when I\u0026rsquo;m in my car, I might say: \u0026ldquo;W5WUT mobile and monitoring.\u0026rdquo; That lets people on the repeater know that I\u0026rsquo;m there and that I\u0026rsquo;m moving (so I might not be on for a very long time).\nGood luck on the radio waves! 73\u0026rsquo;s from W5WUT.\n","date":"13 December 2018","permalink":"/p/getting-started-with-ham-radio-repeaters/","section":"Posts","summary":"Repeaters are a great way to get into ham radio, but they can be tricky to use for new amateur radio operators. This post explains how to get started.","title":"Getting started with ham radio repeaters"},{"content":"","date":null,"permalink":"/tags/ham-radio/","section":"Tags","summary":"","title":"Ham Radio"},{"content":"OpenShift deployments allow you to take a container image and run it within a cluster. You can easily add extra items to the deployment, such as environment variables or volumes.\nThe best practice for sensitive environment variables is to place them into a secret object rather than directly in the deployment configuration itself. 
Although this keeps the secret data out of the deployment, the environment variable is still exposed to the running application inside the container.\nCreating a secret #Let\u0026rsquo;s start with a snippet of a deploymentConfig that has a sensitive environment variable in plain text:\nspec: containers: - env: - name: MYAPP_SECRET_TOKEN value: vPWps5E7KO8KPlljaD3eXED3f6jmLsV5mQ image: \u0026#34;fedora:29\u0026#34; name: my_app The first step is to create a secret object that contains our sensitive environment variable:\napiVersion: v1 kind: Secret metadata: name: secret-token-for-my-app stringData: SECRET_TOKEN: vPWps5E7KO8KPlljaD3eXED3f6jmLsV5mQ Save this file as secret-token.yml and deploy it to OpenShift:\noc apply -f secret-token.yml Query OpenShift to ensure the secret is ready to use:\n$ oc get secret/secret-token-for-my-app NAME TYPE DATA AGE secret-token-for-my-app Opaque 1 1h Using the secret #We can adjust the deployment configuration to use this new secret:\nspec: containers: - env: - name: MYAPP_SECRET_TOKEN valueFrom: secretKeyRef: key: SECRET_TOKEN name: secret-token-for-my-app image: \u0026#34;fedora:29\u0026#34; name: my_app This configuration tells OpenShift to look inside the secret object called secret-token-for-my-app for a key matching SECRET_TOKEN. The value will be passed into the MYAPP_SECRET_TOKEN environment variable and it will be available to the application running in the container.\nSecurity note: If someone has access to change the deployment configuration object, they could get access to the value of the secret without having direct access to the secret object itself. It would be trivial to change the startup command in the container to print all of the environment variables in the container (using the env command) and view them in the container logs.\n","date":"6 December 2018","permalink":"/p/use-secret-as-environment-variable-in-openshift-deployments/","section":"Posts","summary":"Environment variables are easy to add to OpenShift deployments, but a more secure way to add these variables is by referencing a secret.","title":"Use a secret as an environment variable in OpenShift deployments"},{"content":"","date":null,"permalink":"/tags/irc/","section":"Tags","summary":"","title":"Irc"},{"content":"As I make the move from the world of GNOME to i3, I found myself digging deeper into the terminator preferences to make it work more like gnome-terminal.\nI kept running into an issue where I couldn\u0026rsquo;t move up and down between buffers using alt and arrow keys. My workaround was to call the buffer directly with alt-8 (for buffer #8) or alt-j 18 (buffer #18). However, that became tedious. Sometimes I just wanted to quickly hop up or down one or two buffers.\nTo fix this problem, right click anywhere inside the terminal and choose Preferences. Click on the Keybindings tab and look for go_up and go_down. These are almost always set to Alt-Up and Alt-Down by default. That\u0026rsquo;s the root of the problem: terminator is grabbing those keystrokes before they can make it down into weechat.\nUnfortunately, it\u0026rsquo;s not possible to clear a keybinding within the preferences dialog. Close the window and open ~/.config/terminator/config in a terminal.\nIf you\u0026rsquo;re new to terminator, you might not have a [keybindings] section in your configuration file. If that\u0026rsquo;s the case, add the whole section below the [global_config] section. 
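For reference, here is a rough sketch of how the finished file can look once the section is added (the [global_config] contents below are only a placeholder for whatever options you already have):

[global_config]
  # your existing global options stay here
[keybindings]
  go_up = None
  go_down = None
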
Otherwise, just ensure your [keybindings] section contains these lines:\n[keybindings] go_down = None go_up = None Close all of the terminator windows (on all of your workspaces). This is a critical step! Terminator only loads the config file when it is first started, not when additional terminals are opened.\nOpen a terminator terminal, start weechat, and test your alt-arrow keys! You should be moving up and down between buffers easily. If that doesn\u0026rsquo;t work, check your window manager\u0026rsquo;s settings to ensure that another application hasn\u0026rsquo;t stolen that keybinding from your terminals.\n","date":"6 September 2018","permalink":"/p/make-alt-arrow-keys-work-with-terminator-and-weechat/","section":"Posts","summary":"\u003cp\u003eAs I make the move from the world of GNOME to i3, I found myself digging deeper into the \u003ca href=\"https://terminator-gtk3.readthedocs.io/en/latest/\" target=\"_blank\" rel=\"noreferrer\"\u003eterminator\u003c/a\u003e preferences to make it work more like \u003ca href=\"https://help.gnome.org/users/gnome-terminal/stable/\" target=\"_blank\" rel=\"noreferrer\"\u003egnome-terminal\u003c/a\u003e.\u003c/p\u003e","title":"Make alt-arrow keys work with terminator and weechat"},{"content":"","date":null,"permalink":"/tags/terminator/","section":"Tags","summary":"","title":"Terminator"},{"content":"","date":null,"permalink":"/tags/weechat/","section":"Tags","summary":"","title":"Weechat"},{"content":"","date":null,"permalink":"/tags/guidance/","section":"Tags","summary":"","title":"Guidance"},{"content":"I\u0026rsquo;m at the 2018 Red Hat Summit this week in San Francisco and I am enjoying the interactions between developers, executives, vendors, and engineers. It\u0026rsquo;s my seventh Summit (though my first as a Red Hat employee!), but I regularly meet people who are attending their first technical conference.\nThe question inevitably comes up: \u0026ldquo;I\u0026rsquo;m so tired. How do you survive these events?\u0026rdquo;\nOne attendee asked me to write a blog post on my tips and tricks. This is the post that explains how to thrive, not just survive, at conferences. Beware - these tips are based on my experiences and your mileage may vary depending on your personality, the event itself, and your caffeine intake.\nDiscover the area #Traveling to a conference is an awesome way to experience more of the world! Take time to enjoy the tourist sites but also find out where the locals like to go. Any hotel concierge should be able to give you advice on where to go to truly experience the location.\nTake some time to learn the area around your hotel and the venue. Be sure you can navigate between the two and find some important spots nearby, like pharmacies and coffee shops.\nFood, water, and sleep #These conferences can often feel overwhelming and you may find yourself forgetting to eat the right foods, stay hydrated, and get some rest.\nTake every opportunity to eat healthier foods during the week that will give you energy without weighing you down. All the stuff that your Mom told you to eat is a good idea. My rule of thumb is to eat a heavy breakfast, a medium-sized lunch, and then whatever I want for dinner. Evening events often have free food (more on those events next), and that fits my travel budget well. It also allows me to splurge a bit on foods that I might not eat back home.\nTake along a water container when you travel. 
You can\u0026rsquo;t always depend on the conference for making water available and you\u0026rsquo;ll often need more than they offer anyway. I\u0026rsquo;m a big fan of Nalgene\u0026rsquo;s products since they take a beating and they have really good seals.\nSleeping is a real challenge. Early morning keynotes and late night events put a strain on anyone\u0026rsquo;s sleep schedule. Lots of people have trouble sleeping in hotels or in cities where the noise level remains high all night long. The best remedy is to be choosy about the events you attend and the time you spend there. Think about what is more valuable: more time listening to blasting music at a party or more time with your head on the pillow.\nConsider using an application on your phone that provides various types of noises, such as white noise. I love the White Noise app on Android since it has tons of options for various sounds. In my experience, brown noise works best for sleeping. Pink noise can help in extremely noisy environments (like downtown San Francisco) but it\u0026rsquo;s often too loud for me.\nKeep your devices charged #Find a way to keep your devices charged, especially your phone. I use Anker battery packs to keep my phone topped up during the day when I can\u0026rsquo;t get to a plug. A dead phone disconnects you from your friends, maps, and conference details.\nDress for success #Your clothing selection really depends on the type of conference and the company you represent. If you need to dress formally each day, then your choices are already made for you.\nPack layers of clothing so you can add or remove layers as needed. The walk to the conference center may be warm, but the keynote auditorium could feel like a freezer. This also prepares you for evening events which might be outdoors.\nWear clothing that makes you feel comfortable. You\u0026rsquo;ll find a wide range of outfits at most tech conferences and you\u0026rsquo;ll find that nobody really cares how formal or informal you are. If you\u0026rsquo;re there to listen, learn, and contribute, then dress casually. If you\u0026rsquo;re looking for a new job, doing a talk, or if you\u0026rsquo;ll be on camera, choose something a little more formal.\nThe hallway track #You won\u0026rsquo;t find the hallway track on any agenda, but it is often the most valuable part of any gathering. The hallway track encompasses those brief encounters you have with other people at the event. Turn those mundane events, such as waiting in line, eating lunch, or between talks, into opportunities to meet other people.\nYes, this does mean that you must do something to come out of your shell and start a conversation. This is still difficult for me. Here are some good ways to start a conversation with someone around you:\n\u0026ldquo;Hello, my name is Major\u0026rdquo; (put out your hand for a handshake) \u0026ldquo;Where do you work?\u0026rdquo; \u0026ldquo;What do you work on?\u0026rdquo; \u0026ldquo;Man, this line is really long.\u0026rdquo; \u0026ldquo;vim or emacs?\u0026rdquo; (just kidding) The secret is to find something that makes you and the other person feel comfortable. There are situations where you might be met with a cold shoulder, and that\u0026rsquo;s okay. I\u0026rsquo;ve found that sometimes people need some space or the issue could be a language barrier. Making the attempt is what matters.\nThese are excellent opportunities for learning, for listening, and for sharing. 
These new contacts will show up again and again at the event (more on parties/networking next), and you can talk to them again when you feel the tendency to become a wallflower again.\nParties and networking events #Evening events at conferences are a great way to keep the hallway track going while taking some time to relax as well. Some of the best conversations I\u0026rsquo;ve had at conferences were during evening events or vendor parties. People are more candid since the conference demands are often reduced.\nHowever, it\u0026rsquo;s incredibly easy to make some spectacularly bad decisions at these events. This list should help you navigate these events and get value from them:\nEnjoy an open bar responsibly #Early in my career, I looked at an open bar as a magical oasis. Free drinks! As many as I want! This is heaven! (Narrator: It was not heaven. It was something else.)\nI think about open bars much like I think about a trip to Las Vegas. Before I go, I think about how much money I feel like losing, and I only bet that much. Once the money is gone, I\u0026rsquo;m done.\nGo into the event knowing how much or how little you want to consume. Zero is an entirely valid answer. Keep in mind that the answer to \u0026ldquo;Why aren\u0026rsquo;t you drinking anything?\u0026rdquo; does not have to be \u0026ldquo;I guess I\u0026rsquo;ll get something.\u0026rdquo; Nobody needs to know why you\u0026rsquo;re not drinking and you shouldn\u0026rsquo;t feel pressured to do something you don\u0026rsquo;t want to do.\nThink about how you want to feel in the morning. Is a massive hangover worth another round of shots? Is it worth it to ruin your talk the next day? Is it worth it to get belligerent and say something that may be difficult to take back? Think about these things ahead of time and make a plan before you begin drinking.\nLeave when you want #Some evening events can last much too late and this could derail your plans for the morning. If the party runs from 7-10PM, don\u0026rsquo;t feel obligated to stay until 10PM. If you\u0026rsquo;re not meeting the right people or if you\u0026rsquo;re not having a good time: leave. It\u0026rsquo;s better to abandon an event early than suffer through it and crawl through the next morning.\nTurn down an uninteresting invitation #The conference may host various events or a vendor may invite you to an event. These are just invitations and your attendance is not required (unless you work for the vendor throwing the party). Feel free to do something else with your time if the event or the venue seem uninteresting or unsafe. (More on safety next.)\nGet a party buddy #Remember those people you talked to in the hallway and during lunch? Find those people at the event and tell them you enjoyed the conversation from earlier. I\u0026rsquo;ve been to conferences before where I\u0026rsquo;ve been the only one from my company and after letting the other person know that, they invited me to hang out with them or their group at the event.\nThis is a good idea for two reasons. First, it gives you someone to talk to. More importantly, it helps you stay safe.\nDealing with harassment #This gets its own section. It has happened to me and it will likely happen to you.\nNobody ever wants it to happen, but people are often harassed in one way or another at these events. It\u0026rsquo;s inevitable: there are drinks, people are away from home, and they\u0026rsquo;re enjoying time away from work. 
For some people, this is a combination of factors that leads them to make bad choices at these events.\nHarassment comes in many forms, but nobody should put up with it. If you see someone being treated badly, step in. If you\u0026rsquo;re being treated badly, get help. If you\u0026rsquo;re treating someone badly, apologize and remove yourself from the situation. This is where a party buddy can be extremely helpful.\nHarassment is not a women-only or men-only problem. I have been touched in unwelcome ways and verbally harassed at evening events. It is not fun. In my experience, telling the other person to \u0026ldquo;Please stop\u0026rdquo; or \u0026ldquo;That is not okay\u0026rdquo; is usually enough to defuse the situation.\nThis may not always work. Grab your buddy and get help from conference staffers or a security guard if a situation continues to escalate.\nMore ideas #These are some ideas that help me thrive at conferences and make the most of my time traveling. Feel free to leave some of your ideas below in the comments section!\n","date":"9 May 2018","permalink":"/p/how-to-thrive-at-a-tech-conference/","section":"Posts","summary":"I\u0026rsquo;m at the 2018 Red Hat Summit this week in San Francisco and I am enjoying the interactions between developers, executives, vendors, and engineers.","title":"How to thrive at a technical conference"},{"content":"","date":null,"permalink":"/tags/impostor-syndrome/","section":"Tags","summary":"","title":"Impostor Syndrome"},{"content":"","date":null,"permalink":"/tags/openstack/","section":"Tags","summary":"","title":"Openstack"},{"content":"","date":null,"permalink":"/tags/rackspace/","section":"Tags","summary":"","title":"Rackspace"},{"content":"Walt Disney said it best:\nWe keep moving forward, opening new doors, and doing new things, because we\u0026rsquo;re curious and curiosity keeps leading us down new paths.\nThe world of technology is all about change. We tear down the old things that get in our way and we build new technology that takes us to new heights. Tearing down these old things can often be hard, and that forces us to make difficult choices.\nRackspace has been a great home for me for over 11 years. I\u0026rsquo;ve made the incredibly difficult choice to leave Rackspace on March 9th to pursue new challenges.\nHumble beginnings #I came to Rackspace as an entry-level Linux administrator and was amazed by the culture generated by Rackers. The dedication to customers, technology, and quality was palpable from the first few minutes I spent with my team. Although I didn\u0026rsquo;t know it at the time, I had landed at the epicenter of a sink-or-swim technology learning experience. My team had some very demanding customers with complex infrastructures and it forced me to take plenty of notes (and hard knocks). My manager and teammates supported me through it all.\nFrom there, I served in several different roles. I was a manager of technicians on a support team and had the opportunity to learn how to mentor. One of my favorite leaders said that \u0026ldquo;good managers know when to put their arm around people and when to put a boot in their rear.\u0026rdquo; I reluctantly learned how to do both and I watched my people grow into senior engineers and great leaders.\nDatapoint office closing in 2011\nI was pulled to Mosso, Rackspace\u0026rsquo;s first cloud offering, shortly after that and discovered an entirely new world. 
Rackers force-fed me \u0026ldquo;Why\u0026rsquo;s (Poignant) Guide to Ruby\u0026rdquo; and I started building scripts and web front-ends for various services. Rackspace acquired Slicehost after that and I jumped at the chance to work as an operations engineer on the new infrastructure. That led to a lot of late nights diagnosing problems with Xen hypervisors and rails applications. I met some amazing people and began to realize that St. Louis has some pretty good barbecue (but Texas still has them beat).\nSlicehost humor in 2009\nNot long after that, I found myself managing an operations team that cared for Slicehost\u0026rsquo;s infrastructure and Rackspace\u0026rsquo;s growing Cloud Servers infrastructure. OpenStack appeared later and I jumped at the chance to do operations there. It was an extremely rough experience in the Diablo release, but it taught me a lot. My start with OpenStack involved fixing lots of broken Keystone tests that didn\u0026rsquo;t run on Python 2.6.\nWorking on OpenStack in 2012\nIf you\u0026rsquo;ve attended some of my talks on impostor syndrome, you may know what came next. We had a security issue and I sent some direct feedback to our CSO about how it was handled. I expected to be told to \u0026ldquo;pack a box\u0026rdquo; after that, but I was actually asked to lead a security architecture team in the corporate security group. It was definitely a surprise. I accepted and joined the team as Chief Security Architect. My coworkers called it \u0026ldquo;joining the dark side\u0026rdquo;, but I did my best to build bridges between security teams and the rest of the company.\nTalking at Rackspace::Solve in 2015\nThis role really challenged me. I had never operated at the Director level before and our team had a ton of work to do. I found myself stumbling (and floundering) fairly often and I leaned on other leaders in the business for advice. This led me to take some courses on critical thinking, accounting, finance, and tough conversations. I\u0026rsquo;ve never had a role as difficult as this one.\nOur cloud team came calling and asked me to come back and help with some critical projects in the public cloud. We worked on some awesome skunkworks projects that could really change the business. Although they didn\u0026rsquo;t get deployed in one piece, we found ways to take chunks of the work and optimize different areas of our work. An opportunity came up to bring public cloud experience to the private cloud team and I jumped on that one. I discovered the awesome OpenStack-Ansible project and a strong set of Rackers who were dedicated to bringing high-touch service to customers who wanted OpenStack in their own datacenter.\nImpostor syndrome talk at the Boston OpenStack Summit in 2017\nDuring this time, I had the opportunity to deliver several conference talks about OpenStack, Fedora, security, and Ansible. My favorite topic was impostor syndrome and I set out on a mission to help people understand it. My first big talk was at the Fedora Flock conference in Rochester in 2015. This led to deep conversations with technical people in conference hallways, evening events, and even airport terminals about how impostor syndrome affects them. I took those conversations and refined my message several times over.\nTalking about impostor syndrome at Fedora Flock 2015 (Photo credit: Kushal Das)\nGratitude #I couldn\u0026rsquo;t even begin to name a list of Rackers who have helped me along the way. I wouldn\u0026rsquo;t be where I am now without the help of hundreds of Rackers. 
They\u0026rsquo;ve taught me how to build technology, how to navigate a business, and how to be a better human. They have made me who I am today and I\u0026rsquo;m eternally grateful. I\u0026rsquo;ve had an incredible number of hugs this week at the office and I\u0026rsquo;ve tried my best not to get a face full of tears in the process.\nI\u0026rsquo;d also like to thank all of the people who have allowed me to mentor them and teach them something along the way. One of the best ways to understand something is to teach it to someone else. I relish any opportunity to help someone avoid a mistake I made, or at least be able to throw something soft under them to catch their fall. These people put up with my thick Texas accent, my erratic whiteboard diagrams, and worst of all, my dad jokes.\nAnother big \u0026ldquo;thank you\u0026rdquo; goes out to all of the members of the open source communities who have mentored me and dealt with my patches.\nThe first big community I joined was the Fedora Linux community. I\u0026rsquo;ve been fortunate to serve on the board and participate in different working groups. Everyone has been helpful and accommodating, even when I pushed broken package builds. I plan to keep working in the community as long as they will have me!\nThe OpenStack community has been like family. Everyone - from developers to foundation leaders - has truly been a treat to work with over several years. My work on Rackspace\u0026rsquo;s public and private clouds has pushed me into various projects within the OpenStack ecosystem and I\u0026rsquo;ve found everyone to be responsive. OpenStack events are truly inspiring and it is incredible to see so many people from so many places who dedicate themselves to the software and the people that make cloud infrastructure work.\nThe next adventure #I plan to talk more on this later, but I will be working from home on some projects that are entirely different from what I\u0026rsquo;m working on now. That adventure starts on March 19th after a week of \u0026ldquo;funemployment.\u0026rdquo; I\u0026rsquo;m incredibly excited about the new opportunity and I\u0026rsquo;ll share more details when I can.\nTop photo credit: Wikipedia\n","date":"7 March 2018","permalink":"/p/reaching-the-fork-in-the-road/","section":"Posts","summary":"Walt Disney said it best:","title":"Reaching the fork in the road"},{"content":"","date":null,"permalink":"/tags/dnf/","section":"Tags","summary":"","title":"Dnf"},{"content":"If you\u0026rsquo;re on the latest Fedora release, you\u0026rsquo;re already running lots of modern packages. However, there are those times when you may want to help with testing efforts or try out a new feature in a newer package.\nMost of my systems have the updates-testing repository enabled in one way or another. This repository contains packages that package maintainers have submitted to become the next stable package in Fedora. For example, if there is a bug fix for nginx, the package maintainer submits the changes and publishes a release. That release goes into the testing repositories and must sit for a waiting period or receive sufficient karma (\u0026ldquo;works for me\u0026rdquo; responses) to move into stable repositories.\nGetting started #One of the easiest ways to get started is to allow a small number of packages to be installed from the testing repository on a regular basis. 
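If you only want to grab one update for a quick test without editing any repository files, dnf can also enable the testing repository for a single transaction. A rough example (swap the kernel glob for whatever package you want to try):

$ sudo dnf --enablerepo=updates-testing upgrade 'kernel*'
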
Fully enabling the testing repository for all packages can lead to trouble on occasion, especially if a package maintainer discovers a problem and submits a new testing package.\nTo get started, open /etc/yum.repos.d/fedora-updates-testing.repo in your favorite text editor (using sudo). This file tells yum and dnf where it should look for packages. The stock testing repository configuration looks like this:\n[updates-testing] name=Fedora $releasever - $basearch - Test Updates failovermethod=priority #baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/ metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever\u0026amp;arch=$basearch enabled=0 repo_gpgcheck=0 type=rpm gpgcheck=1 metadata_expire=6h gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch skip_if_unavailable=False By default, the repository is not enabled (enabled=0).\nIn this example, let\u0026rsquo;s consider a situation where you want to test the latest kernel packages as soon as they reach the testing repository. We need to make two edits to the repository configuration:\nenabled=1 - Allow yum/dnf to use the repository includepkgs=kernel* - Only allow packages matching kernel* to be installed from the testing repository The repository configuration should now look like this:\n[updates-testing] name=Fedora $releasever - $basearch - Test Updates failovermethod=priority #baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/ metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever\u0026amp;arch=$basearch enabled=1 repo_gpgcheck=0 type=rpm gpgcheck=1 metadata_expire=6h gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch skip_if_unavailable=False includepkgs=kernel* Getting testing packages #Running dnf upgrade kernel* should now pull a kernel from the updates-testing repository. You can verify this by checking the Repository column in the dnf output.\nIf you feel more adventurous later, you can add additional packages (separated by spaces) to the includepkgs line. The truly adventurous users can leave the repo enabled but remove includepkgs altogether. This will pull all available packages from the testing repository as soon as they are available.\nPackage maintainers need feedback! #One final note: package maintainers need your feedback on packages. Positive or negative feedback is very helpful. You can search for the package on Bodhi and submit feedback there, or use the fedora-easy-karma script via the fedora-easy-karma package. The script will look through your installed package list and query you for feedback on each one.\nSubmitting lots of feedback can earn you some awesome Fedora Badges!\nPhoto credit: US Air Force\n","date":"28 February 2018","permalink":"/p/install-testing-kernels-in-fedora/","section":"Posts","summary":"If you\u0026rsquo;re on the latest Fedora release, you\u0026rsquo;re already running lots of modern packages.","title":"Install testing kernels in Fedora"},{"content":"","date":null,"permalink":"/tags/kernel/","section":"Tags","summary":"","title":"Kernel"},{"content":"","date":null,"permalink":"/tags/yum/","section":"Tags","summary":"","title":"Yum"},{"content":"The Overland Expo in Asheville last year was a great event, and one of my favorite sessions covered the basics about radio communications while overlanding. 
The instructors shared their radios with us and taught us some tips and tricks for how to save power and communicate effectively on the trail.\nBack at the office, I was surprised to discover how many of my coworkers had an FCC license already. They gave me tips on getting started and how to learn the material for the exam. I took some of my questions to Twitter and had plenty of help pouring in quickly.\nThis post covers how I studied, what the exam was like, and what I\u0026rsquo;ve learned after getting on the air.\nThe basics #FCC licenses in the US for amateur radio operators have multiple levels. Everything starts with the Technician level and you get the most basic access to radio frequencies. From there, you can upgrade (with another exam) to General and then Extra. Each license upgrade opens up more frequencies and privileges.\nStudying #A coworker recommended the official ARRL book for the Technician exam and I picked up a paper copy. The content is extremely dry. It was difficult to remain focused for long periods.\nThe entire exam question pool is available in the public domain, so you can actually go straight to the questions that you\u0026rsquo;ll see on the exam and study those. I flipped to the question section in the ARRL book and found the questions I could answer easily (mostly about circuits and electrical parts). For each one that was new or difficult, I flipped back in the ARRL book to the discussion in each chapter and learned the material.\nI also used HamStudy.org to quickly practice and keep track of my progress. The site has some handy graphs that show you how many questions you\u0026rsquo;ve seen and what your knowledge level of different topics really is. I kept working through questions on the site until I was regularly getting 90% or higher on the practice tests.\nTesting #Before you test, be sure to get an FCC Registration Number (commonly called an FRN). It is free to get and it ensures that you get your license (often called your \u0026rsquo;ticket\u0026rsquo;) as soon as possible. I was told that some examiners won\u0026rsquo;t offer you a test if you don\u0026rsquo;t have your FRN already.\nThe next step is to find an amateur radio exam in your area. Exams are available in the San Antonio area every weekend and they are held by different groups. I took mine with the Radio Operators of South Texas and the examiners were great! Some examiners require you to check in with them so they know you are coming to test, but it\u0026rsquo;s a good idea to do this anyway. Ask how they want to be paid (cash, check, etc.), too.\nBe sure to take a couple of pencils, a basic calculator, your government issued ID, your payment, and your FRN to the exam. I forgot the calculator but the examiners had a few extras. The examiners complete some paperwork before your exam, and you select one of the available test versions. Each test contains a randomly selected set of 35 questions from the pool of 350.\nGo through the test, carefully read each question, and fill in the answer sheet. Three examiners will grade it when you turn it in, and they will fill out your Certificate of Successful Completion of Examination (CSCE). Hold onto this paper just in case something happens with your FCC paperwork.\nThe examiners will send your paperwork to the FCC and you should receive a license within two weeks. Mine took about 11-12 business days, but I took it just before Thanksgiving. 
The FCC will send you a generic email stating that there is a new license available and you can download it directly from the FCC\u0026rsquo;s website.\nLessons learned on the air #Once I passed the exam and keyed up for the first transmission, I feared a procedural misstep more than anything. What if I say my callsign incorrectly? What if I\u0026rsquo;m transmitting at a power level that is too high? What power level is too high? What am I doing?!\nEveryone has to start somewhere and you\u0026rsquo;re going to make mistakes. Almost 99.9% of my radio contacts so far have been friendly, forgiving, and patient. I\u0026rsquo;ve learned a lot from listening to other people and from the feedback I get from radio contacts. Nobody will yell at you for using a repeater when simplex should work. Nobody will yell at you if you blast a repeater with 50 watts when 5 would be fine.\nI\u0026rsquo;m on VHF most often and I\u0026rsquo;ve found many local repeaters on RepeaterBook. Most of the repeaters in the San Antonio area are busiest during commute times (morning and afternoon) as well as lunchtime. I\u0026rsquo;ve announced my callsign when the repeater has been quiet for a while and often another radio operator will call back. It\u0026rsquo;s a good idea to mention that you\u0026rsquo;re new to amateur radio since that will make it easier for others to accept your mistakes and provide feedback.\nWhen I\u0026rsquo;m traveling long distances, I monitor the national simplex calling frequency (146.520). That\u0026rsquo;s the ham equivalent of CB channel 19, where you can announce yourself and have conversations. In busy urban areas, it\u0026rsquo;s best to work out another frequency with your contact to keep the calling frequency clear.\nMy equipment #My first purchase was a (cheap) BTECH UV-5X3. The price is fantastic, but the interface is rough to use. Editing saved channels is nearly impossible and navigating the menus requires a good manual to decipher the options. The manual that comes with it is surprisingly brief. There are some helpful how-to guides from other radio operators on various blogs.\nI picked up a Kenwood TM-D710G mobile radio from a coworker and mounted it in the car. I wired it up with Anderson Powerpole connectors and that makes things incredibly easy (and portable). The interface on the Kenwood is light years ahead of the BTECH, but the price is 10x more.\nMy car has the Comet SBB-5NMO antenna mounted with a Comet CP-5NMO lip mount. It fits well on the rear of the 4Runner.\nManaging a lot of repeater frequencies is challenging with both radios (exponentially more so with the BTECH), but the open source CHIRP software works well. I installed it on my Fedora laptop and could manage both radios easily. The BTECH radio requires you to download the entire current configuration, edit it, and upload it to the radio. The Kenwood allows you to make adjustments to the radio in real time (which is excellent for testing).\nMore questions? 
#If you have more questions about any part of the process, let me know!\n","date":"6 January 2018","permalink":"/p/takeaways-from-my-foray-into-amateur-radio/","section":"Posts","summary":"The Overland Expo in Asheville last year was a great event, and one of my favorite sessions covered the basics about radio communications while overlanding.","title":"Takeaways from my foray into amateur radio"},{"content":"After a recent OpenStack-Ansible (OSA) deployment on CentOS, I found that keepalived was not starting properly at boot time:\nKeepalived_vrrp[801]: Cant find interface br-mgmt for vrrp_instance internal !!! Keepalived_vrrp[801]: Truncating auth_pass to 8 characters Keepalived_vrrp[801]: VRRP is trying to assign ip address 172.29.236.11/32 to unknown br-mgmt interface !!! go out and fix your conf !!! Keepalived_vrrp[801]: Cant find interface br-mgmt for vrrp_instance external !!! Keepalived_vrrp[801]: Truncating auth_pass to 8 characters Keepalived_vrrp[801]: VRRP is trying to assign ip address 192.168.250.11/32 to unknown br-mgmt interface !!! go out and fix your conf !!! Keepalived_vrrp[801]: VRRP_Instance(internal) Unknown interface ! systemd[1]: Started LVS and VRRP High Availability Monitor. Keepalived_vrrp[801]: Stopped Keepalived[799]: Keepalived_vrrp exited with permanent error CONFIG. Terminating OSA deployments have a management bridge for traffic between containers. These containers run the OpenStack APIs and other support services. By default, this bridge is called br-mgmt.\nThe keepalived daemon is starting before NetworkManager can bring up the br-mgmt bridge and that is causing keepalived to fail. We need a way to tell systemd to wait on the network before bringing up keepalived.\nWaiting on NetworkManager #There is a special systemd target, network-online.target, that is not reached until all networking is properly configured. NetworkManager comes with a handy service called NetworkManager-wait-online.service that must be complete before the network-online target can be reached:\n# rpm -ql NetworkManager | grep network-online /usr/lib/systemd/system/network-online.target.wants /usr/lib/systemd/system/network-online.target.wants/NetworkManager-wait-online.service Start by ensuring that the NetworkManager-wait-online service starts at boot time:\nsystemctl enable NetworkManager-wait-online.service Using network-online.target #Next, we tell the keepalived service to wait on network-online.target. Bring up an editor for overriding the keepalived.service unit:\nsystemctl edit keepalived.service Once the editor appears, add the following text:\n[Unit] Wants=network-online.target After=network-online.target Save the file in the editor and reboot the server. The keepalived service should come up successfully after NetworkManager signals that all of the network devices are online.\nLearn more by reading the upstream NetworkTarget documentation.\n","date":"15 December 2017","permalink":"/p/ensuring-keepalived-starts-network-ready/","section":"Posts","summary":"After a recent OpenStack-Ansible (OSA) deployment on CentOS, I found that keepalived was not starting properly at boot time:","title":"Ensuring keepalived starts after the network is ready"},{"content":"","date":null,"permalink":"/tags/network/","section":"Tags","summary":"","title":"Network"},{"content":"The latest release of the Red Hat Enterprise Linux Security Technical Implementation Guide (STIG) was published last week. 
This release is Version 1, Release 3, and it contains four main changes:\nV-77819 - Multifactor authentication is required for graphical logins V-77821 - Datagram Congestion Control Protocol (DCCP) kernel module must be disabled V-77823 - Single user mode must require user authentication V-77825 - Address space layout randomization (ASLR) must be enabled Deep dive #Let\u0026rsquo;s break down this list to understand what each one means.\nV-77819 - Multifactor authentication is required for graphical logins #This requirement improves security for graphical logins and extends the existing requirements for multifactor authentication for logins (see V-71965, V-72417, and V-72427). The STIG recommends smartcards (since the US Government often uses CAC cards for multifactor authentication), and this is a good idea for high security systems.\nI use Yubikey 4\u0026rsquo;s as smartcards in most situations and they work anywhere you have available USB slots.\nV-77821 - Datagram Congestion Control Protocol (DCCP) kernel module must be disabled #DCCP is often used as a congestion control mechanism for UDP traffic, but it isn\u0026rsquo;t used that often in modern networks. There have been vulnerabilities in the past that are mitigated by disabling DCCP, so it\u0026rsquo;s a good idea to disable it unless you have a strong reason for keeping it enabled.\nThe ansible-hardening role has been updated to disable the DCCP kernel module by default.\nV-77823 - Single user mode must require user authentication #Single user mode is often used in emergency situations where the server cannot boot properly or an issue must be repaired without a fully booted server. This mode can only be used at the server\u0026rsquo;s physical console, serial port, or via out-of-band management (DRAC, iLO, and IPMI). Allowing single-user mode access without authentication is a serious security risk.\nFortunately, every distribution supported by the ansible-hardening role already has authentication requirements for single user mode in place. The ansible-hardening role does not make any adjustments to the single user mode unit file since any untested adjustment could cause a system to have problems booting.\nV-77825 - Address space layout randomization (ASLR) must be enabled #ASLR is a handy technology that makes it more difficult for attackers to guess where a particular program is storing data in memory. It\u0026rsquo;s not perfect, but it certainly raises the difficulty for an attacker. There are multiple settings for this variable and the kernel documentation for sysctl has some brief explanations for each setting (search for randomize_va_space on the page).\nEvery distribution supported by the ansible-hardening role is already setting kernel.randomize_va_space=2 by default, which applies randomization for the basic parts of process memory (such as shared libraries and the stack) as well as the heap. The ansible-hardening role will ensure that the default setting is maintained.\nansible-hardening is already up to date #If you\u0026rsquo;re already using the ansible-hardening role\u0026rsquo;s master branch, these changes are already in place! 
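If you are not using the role yet and want to see how these controls behave on a host of your own, a minimal playbook is enough. Here is a sketch (the host group name is just an example, and individual controls can be adjusted or skipped with the variables that the role provides):

---
- hosts: servers_to_harden
  become: yes
  roles:
    - ansible-hardening
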
Try out the new updates and open a bug report if you find any problems.\n","date":"2 November 2017","permalink":"/p/changes-in-rhel-7-security-technical-implementation-guide-version-1-release-3/","section":"Posts","summary":"The latest release of the Red Hat Enterprise Linux Security Technical Implementation Guide (STIG) was published last week.","title":"Changes in RHEL 7 Security Technical Implementation Guide Version 1, Release 3"},{"content":"","date":null,"permalink":"/tags/debian/","section":"Tags","summary":"","title":"Debian"},{"content":"","date":null,"permalink":"/tags/information-security/","section":"Tags","summary":"","title":"Information Security"},{"content":"","date":null,"permalink":"/tags/opensuse/","section":"Tags","summary":"","title":"Opensuse"},{"content":"","date":null,"permalink":"/tags/suse/","section":"Tags","summary":"","title":"Suse"},{"content":"","date":null,"permalink":"/tags/ubuntu/","section":"Tags","summary":"","title":"Ubuntu"},{"content":"I\u0026rsquo;ve been working through some patches to OpenStack-Ansible lately to optimize how we configure yum repositories in our deployments. During that work, I ran into some issues where pgp.mit.edu was returning 500 errors for some requests to retrieve GPG keys.\nAnsible was returning this error:\ncurl: (22) The requested URL returned error: 502 Proxy Error error: http://pgp.mit.edu:11371/pks/lookup?op=get\u0026amp;search=0x61E8806C: import read failed(2) How does the rpm command know which keyserver to use? Let\u0026rsquo;s use the --showrc argument to show how it is configured:\n$ rpm --showrc | grep hkp -14: _hkp_keyserver http://pgp.mit.edu -14: _hkp_keyserver_query %{_hkp_keyserver}:11371/pks/lookup?op=get\u0026amp;search=0x How do we change this value temporarily to test a GPG key retrieval from a different server? There\u0026rsquo;s an argument for that as well: --define:\n$ rpm --help | grep define -D, --define=\u0026#39;MACRO EXPR\u0026#39; define MACRO with value EXPR We can assemble that on the command line to set a different keyserver temporarily:\n# rpm -vv --define=\u0026#34;%_hkp_keyserver http://pool.sks-keyservers.net\u0026#34; --import 0x61E8806C -- SNIP -- D: adding \u0026#34;63deac79abe7ad80e147d671c2ac5bd1c8b3576e\u0026#34; to Sha1header index. 
-- SNIP -- Let\u0026rsquo;s verify that our new key is in place:\n# rpm -qa | grep -i gpg-pubkey-61E8806C gpg-pubkey-61e8806c-5581df56 # rpm -qi gpg-pubkey-61e8806c-5581df56 Name : gpg-pubkey Version : 61e8806c Release : 5581df56 Architecture: (none) Install Date: Wed 20 Sep 2017 10:17:11 AM CDT Group : Public Keys Size : 0 License : pubkey Signature : (none) Source RPM : (none) Build Date : Wed 17 Jun 2015 03:57:58 PM CDT Build Host : localhost Relocations : (not relocatable) Packager : CentOS Virtualization SIG (http://wiki.centos.org/SpecialInterestGroup/Virtualization) \u0026lt;security@centos.org\u0026gt; Summary : gpg(CentOS Virtualization SIG (http://wiki.centos.org/SpecialInterestGroup/Virtualization) \u0026lt;security@centos.org\u0026gt;) Description : -----BEGIN PGP PUBLIC KEY BLOCK----- Version: rpm-4.11.3 (NSS-3) mQENBFWB31YBCAC4dFmTzBDOcq4R1RbvQXLkyYfF+yXcsMA5kwZy7kjxnFqBoNPv aAjFm3e5huTw2BMZW0viLGJrHZGnsXsE5iNmzom2UgCtrvcG2f65OFGlC1HZ3ajA 8ZIfdgNQkPpor61xqBCLzIsp55A7YuPNDvatk/+MqGdNv8Ug7iVmhQvI0p1bbaZR 0GuavmC5EZ/+mDlZ2kHIQOUoInHqLJaX7iw46iLRUnvJ1vATOzTnKidoFapjhzIt i4ZSIRaalyJ4sT+oX4CoRzerNnUtIe2k9Hw6cEu4YKGCO7nnuXjMKz7Nz5GgP2Ou zIA/fcOmQkSGcn7FoXybWJ8DqBExvkJuDljPABEBAAG0bENlbnRPUyBWaXJ0dWFs aXphdGlvbiBTSUcgKGh0dHA6Ly93aWtpLmNlbnRvcy5vcmcvU3BlY2lhbEludGVy ZXN0R3JvdXAvVmlydHVhbGl6YXRpb24pIDxzZWN1cml0eUBjZW50b3Mub3JnPokB OQQTAQIAIwUCVYHfVgIbAwcLCQgHAwIBBhUIAgkKCwQWAgMBAh4BAheAAAoJEHrr voJh6IBsRd0H/A62i5CqfftuySOCE95xMxZRw8+voWO84QS9zYvDEnzcEQpNnHyo FNZTpKOghIDtETWxzpY2ThLixcZOTubT+6hUL1n+cuLDVMu4OVXBPoUkRy56defc qkWR+UVwQitmlq1ngzwmqVZaB8Hf/mFZiB3B3Jr4dvVgWXRv58jcXFOPb8DdUoAc S3u/FLvri92lCaXu08p8YSpFOfT5T55kFICeneqETNYS2E3iKLipHFOLh7EWGM5b Wsr7o0r+KltI4Ehy/TjvNX16fa/t9p5pUs8rKyG8SZndxJCsk0MW55G9HFvQ0FmP A6vX9WQmbP+ml7jsUxtEJ6MOGJ39jmaUvPc= =ZzP+ -----END PGP PUBLIC KEY BLOCK----- Success!\nIf you want to override the value permanently, create a ~/.rpmmacros file and add the following line to it:\n%_hkp_keyserver http://pool.sks-keyservers.net Photo credit: Wikipedia\n","date":"20 September 2017","permalink":"/p/import-rpm-repository-keys-from-other-keyservers-temporarily/","section":"Posts","summary":"I\u0026rsquo;ve been working through some patches to OpenStack-Ansible lately to optimize how we configure yum repositories in our deployments.","title":"Import RPM repository GPG keys from other keyservers temporarily"},{"content":"","date":null,"permalink":"/tags/mail/","section":"Tags","summary":"","title":"Mail"},{"content":"","date":null,"permalink":"/tags/thunderbird/","section":"Tags","summary":"","title":"Thunderbird"},{"content":"Thunderbird is a great choice for a mail client on Linux systems if you prefer a GUI, but I had some problems with fonts in the most recent releases. The monospace font used for plain text messages was difficult to read.\nI opened Edit \u0026gt; Preferences \u0026gt; Display and clicked Advanced to the right of Fonts \u0026amp; Colors. The default font for monospace text was \u0026ldquo;Monospace\u0026rdquo;, and that one isn\u0026rsquo;t terribly attractive. I chose \u0026ldquo;DejaVu Sans Mono\u0026rdquo; instead, and closed the dialog boxes.\nThe fonts in monospace messages didn\u0026rsquo;t change. I quit Thunderbird, opened it again, and still didn\u0026rsquo;t see a change. 
The strange part is that a small portion of my monospaced messages were opening with the updated font while the majority were not.\nI went back into Thunderbird\u0026rsquo;s preferences and took another look:\nthunderbird fonts and colors panel Everything was set as I expected. I started with some Google searches and stumbled upon a Mozilla Bug: Changing monospace font doesn\u0026rsquo;t affect all messages. One of the participants in the bug mentioned that any emails received without ISO-8859-1 encoding would be unaffected since Thunderbird allows you to set fonts for each encoding.\nI clicked the dropdown where \u0026ldquo;Latin\u0026rdquo; was selected and I selected \u0026ldquo;Other Writing Systems\u0026rdquo;. After changing the monospace font there, the changes went into effect for all of my monospaced messages!\n","date":"2 August 2017","permalink":"/p/thunderbird-changes-fonts-messages-not/","section":"Posts","summary":"Thunderbird is a great choice for a mail client on Linux systems if you prefer a GUI, but I had some problems with fonts in the most recent releases.","title":"Thunderbird changes fonts in some messages, not all"},{"content":"","date":null,"permalink":"/tags/serial/","section":"Tags","summary":"","title":"Serial"},{"content":"I have a CyberPower BRG1350AVRLCD at home and I\u0026rsquo;ve just connected it to a new device. However, the pwrstat command doesn\u0026rsquo;t retrieve any useful data on the new system:\n# pwrstat -status The UPS information shows as following: Current UPS status: State........................ Normal Power Supply by.............. Utility Power Last Power Event............. None I disconnected the USB cable and ran pwrstat again. Same output. I disconnected power from the UPS itself and ran pwrstat again. Same output. This can\u0026rsquo;t be right.\nChecking the basics #A quick look at dmesg output shows that the UPS is connected and the kernel recognizes it:\n[ 65.661489] usb 3-1: new full-speed USB device number 7 using xhci_hcd [ 65.830769] usb 3-1: New USB device found, idVendor=0764, idProduct=0501 [ 65.830771] usb 3-1: New USB device strings: Mfr=3, Product=1, SerialNumber=2 [ 65.830772] usb 3-1: Product: BRG1350AVRLCD [ 65.830773] usb 3-1: Manufacturer: CPS [ 65.830773] usb 3-1: SerialNumber: xxxxxxxxx [ 65.837801] hid-generic 0003:0764:0501.0004: hiddev0,hidraw0: USB HID v1.10 Device [CPS BRG1350AVRLCD] on usb-0000:00:14.0-1/input0 I checked the /var/log/pwrstatd.log file to see if there were any errors:\n2017/07/25 12:01:17 PM Daemon startups. 2017/07/25 12:01:24 PM Communication is established. 2017/07/25 12:01:27 PM Low Battery capacity is restored. 2017/07/25 12:05:19 PM Daemon stops its service. 2017/07/25 12:05:19 PM Daemon startups. 2017/07/25 12:05:19 PM Communication is established. 2017/07/25 12:05:22 PM Low Battery capacity is restored. 2017/07/25 12:06:27 PM Daemon stops its service. The pwrstatd daemon can see the device and communicate with it. This is unusual.\nDigging into the daemon #If the daemon can truly see the UPS, then what is it talking to? 
I used lsof to examine what the pwrstatd daemon is doing:\n# lsof -p 3975 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME pwrstatd 3975 root cwd DIR 8,68 224 96 / pwrstatd 3975 root rtd DIR 8,68 224 96 / pwrstatd 3975 root txt REG 8,68 224175 134439879 /usr/sbin/pwrstatd pwrstatd 3975 root mem REG 8,68 2163104 134218946 /usr/lib64/libc-2.25.so pwrstatd 3975 root mem REG 8,68 1226368 134218952 /usr/lib64/libm-2.25.so pwrstatd 3975 root mem REG 8,68 19496 134218950 /usr/lib64/libdl-2.25.so pwrstatd 3975 root mem REG 8,68 187552 134218939 /usr/lib64/ld-2.25.so pwrstatd 3975 root 0r CHR 1,3 0t0 1028 /dev/null pwrstatd 3975 root 1u unix 0xffff9e395e137400 0t0 37320 type=STREAM pwrstatd 3975 root 2u unix 0xffff9e395e137400 0t0 37320 type=STREAM pwrstatd 3975 root 3u unix 0xffff9e392f0c0c00 0t0 39485 /var/pwrstatd.ipc type=STREAM pwrstatd 3975 root 4u CHR 180,96 0t0 50282 /dev/ttyS1 Wait a minute. The last line of the lsof output shows that pwrstatd is talking to /dev/ttyS1, but the device is supposed to be a hiddev device over USB. If you remember, we had this line in dmesg when the UPS was plugged in:\nhid-generic 0003:0764:0501.0004: hiddev0,hidraw0: USB HID v1.10 Device [CPS BRG1350AVRLCD] on usb-0000:00:14.0-1/input0 Things are beginning to make more sense now. I have a USB-to-serial device that allows my server to talk to the console port on my Cisco switch:\n[ 80.389533] usb 3-1: new full-speed USB device number 9 using xhci_hcd [ 80.558025] usb 3-1: New USB device found, idVendor=067b, idProduct=2303 [ 80.558027] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 80.558028] usb 3-1: Product: USB-Serial Controller D [ 80.558029] usb 3-1: Manufacturer: Prolific Technology Inc. [ 80.558308] pl2303 3-1:1.0: pl2303 converter detected [ 80.559937] usb 3-1: pl2303 converter now attached to ttyUSB0 It appears that pwrstatd is trying to talk to my Cisco switch (through the USB-to-serial adapter) rather than my UPS! I\u0026rsquo;m sure they could have a great conversation together, but it\u0026rsquo;s hardly productive.\nFixing it #The /etc/pwrstatd.conf has a relevant section:\n# The pwrstatd accepts four types of device node which includes the \u0026#39;ttyS\u0026#39;, # \u0026#39;ttyUSB\u0026#39;, \u0026#39;hiddev\u0026#39;, and \u0026#39;libusb\u0026#39; for communication with UPS. The pwrstatd # defaults to enumerate all acceptable device nodes and pick up to use an # available device node automatically. But this may cause a disturbance to the # device node which is occupied by other software. Therefore, you can restrict # this enumerate behave by using allowed-device-nodes option. You can assign # the single device node path or multiple device node paths divided by a # semicolon at this option. All groups of \u0026#39;ttyS\u0026#39;, \u0026#39;ttyUSB\u0026#39;, \u0026#39;hiddev\u0026#39;, or # \u0026#39;libusb\u0026#39; device node are enumerated without a suffix number assignment. # Note, the \u0026#39;libusb\u0026#39; does not support suffix number only. # # For example: restrict to use ttyS1, ttyS2 and hiddev1 device nodes at /dev # path only. # allowed-device-nodes = /dev/ttyS1;/dev/ttyS2;/dev/hiddev1 # # For example: restrict to use ttyS and ttyUSB two groups of device node at # /dev,/dev/usb, and /dev/usb/hid paths(includes ttyS0 to ttySN and ttyUSB0 to # ttyUSBN, N is number). # allowed-device-nodes = ttyS;ttyUSB # # For example: restrict to use hiddev group of device node at /dev,/dev/usb, # and /dev/usb/hid paths(includes hiddev0 to hiddevN, N is number). 
# allowed-device-nodes = hiddev # # For example: restrict to use libusb device. # allowed-device-nodes = libusb allowed-device-nodes = We need to explicitly tell pwrstatd to talk to the UPS on /dev/usb/hiddev0:\nallowed-device-nodes = /dev/usb/hiddev0 Let\u0026rsquo;s restart the pwrstatd daemon and see what we get:\n# systemctl restart pwrstatd # pwrstat -status The UPS information shows as following: Properties: Model Name................... BRG1350AVRLCD Firmware Number.............. Rating Voltage............... 120 V Rating Power................. 810 Watt(1350 VA) Current UPS status: State........................ Normal Power Supply by.............. Utility Power Utility Voltage.............. 121 V Output Voltage............... 121 V Battery Capacity............. 100 % Remaining Runtime............ 133 min. Load......................... 72 Watt(9 %) Line Interaction............. None Test Result.................. Unknown Last Power Event............. None Success!\nPhoto credit: Wikipedia\n","date":"25 July 2017","permalink":"/p/troubleshooting-cyberpower-powerpanel-issues-in-linux/","section":"Posts","summary":"I have a CyberPower BRG1350AVRLCD at home and I\u0026rsquo;ve just connected it to a new device.","title":"Troubleshooting CyberPower PowerPanel issues in Linux"},{"content":"Tons of improvements made their way into the ansible-hardening role in preparation for the OpenStack Pike release next month. The role has a new name, new documentation, and extra tests. 
Here\u0026rsquo;s what I\u0026rsquo;m looking for:\nFirstname Lastname \u0026lt;firstname.lastname@domain.tld\u0026gt; In older Thunderbird versions, setting ldap_2.servers.SERVER_NAME.autoComplete.nameFormat to displayName was enough. However, this option isn\u0026rsquo;t used in recent versions of Thunderbird.\nDigging in #After a fair amount of searching the Thunderbird source code with awk, I found a mention of DisplayName in nsAbLDAPAutoCompleteSearch.js that looked promising:\n// Create a minimal map just for the display name and primary email. this._attributes = Components.classes[\u0026#34;@mozilla.org/addressbook/ldap-attribute-map;1\u0026#34;] .createInstance(Components.interfaces.nsIAbLDAPAttributeMap); this._attributes.setAttributeList(\u0026#34;DisplayName\u0026#34;, this._book.attributeMap.getAttributeList(\u0026#34;DisplayName\u0026#34;, {}), true); this._attributes.setAttributeList(\u0026#34;PrimaryEmail\u0026#34;, this._book.attributeMap.getAttributeList(\u0026#34;PrimaryEmail\u0026#34;, {}), true); } Something is unusual here. The LDAP field is called displayName, but this attribute is called DisplayName (note the capitalization of the D). Just before that line, I see a lookup in an attributes map of some sort. There may be a configuration option that is called DisplayName.\nIn Thunderbird, I selected Edit \u0026gt; Preferences. I clicked the Advanced tab and then Config Editor. A quick search for DisplayName revealed an interesting configuration option:\nldap_2.servers.default.attrmap.DisplayName: cn,commonname Fixing it #That\u0026rsquo;s the problem! This needs to map to displayName in my case, and not cn,commonname (which returns a user\u0026rsquo;s username). There are two different ways to fix this:\n# Change it for just one LDAP server ldap_2.servers.SERVER_NAME.attrmap.DisplayName: displayName # Change it for all LDAP servers by default (careful) ldap_2.servers.default.attrmap.DisplayName: displayName After making the change, quit Thunderbird and relaunch it. Compose a new email and start typing in the email address field. The user\u0026rsquo;s first and last name should appear!\n","date":"18 July 2017","permalink":"/p/customize-ldap-autocompletion-format-in-thunderbird/","section":"Posts","summary":"Thunderbird can connect to an LDAP server and autocomplete email addresses as you type, but it needs some adjustment for some LDAP servers.","title":"Customize LDAP autocompletion format in Thunderbird"},{"content":"","date":null,"permalink":"/tags/ldap/","section":"Tags","summary":"","title":"Ldap"},{"content":"The interest in the openstack-ansible-security role has taken off faster than I expected, and one piece of constant feedback I received was around the name of the role. Some users were unsure if they needed to use the role in an OpenStack cloud or if the OpenStack-Ansible project was required.\nThe role works everywhere - OpenStack cloud or not. I started a mailing list thread on the topic and we eventually settled on a new name: ansible-hardening! The updated documentation is already available.\nThe old openstack-ansible-security role is being retired and it will not receive any additional updates. 
Moving to the new role is easy:\nInstall ansible-hardening with ansible-galaxy (or git clone) Change your playbooks to use the ansible-hardening role There\u0026rsquo;s no need to change any variable names or tags - they are all kept the same in the new role.\nAs always, if you have questions or comments about the role, drop by #openstack-ansible on Freenode IRC or open a bug in Launchpad.\n","date":"27 June 2017","permalink":"/p/old-role-new-name-ansible-hardening/","section":"Posts","summary":"The interest in the openstack-ansible-security role has taken off faster than I expected, and one piece of constant feedback I received was around the name of the role.","title":"Old role, new name: ansible-hardening"},{"content":"","date":null,"permalink":"/tags/pythons/","section":"Tags","summary":"","title":"Pythons"},{"content":"","date":null,"permalink":"/tags/apparmor/","section":"Tags","summary":"","title":"Apparmor"},{"content":"I merged some initial Debian support into the openstack-ansible-security role and ran into an issue enabling AppArmor. The apparmor service failed to start and I found this output in the system journal:\nkernel: AppArmor: AppArmor disabled by boot time parameter Digging in #That was unexpected. I was using the Debian jessie cloud image and it uses extlinux as the bootloader. The file didn\u0026rsquo;t reference AppArmor at all:\n# cat /boot/extlinux/extlinux.conf default linux timeout 1 label linux kernel boot/vmlinuz-3.16.0-4-amd64 append initrd=boot/initrd.img-3.16.0-4-amd64 root=/dev/vda1 console=tty0 console=ttyS0,115200 ro quiet I learned that AppArmor is disabled by default in Debian unless you explicitly enable it. In contrast, SELinux is enabled unless you turn it off. To make matters worse, Debian\u0026rsquo;s cloud image doesn\u0026rsquo;t have any facilities or scripts to automatically update the extlinux configuration file when new kernels are installed.\nMaking a repeatable fix #My two goals here were to:\nEnsure AppArmor is enabled on the next boot Ensure that AppArmor remains enabled when new kernels are installed The first step is to install grub2:\napt-get -y install grub2 During the installation, a package configuration window will appear that asks about where grub should be installed. I selected /dev/vda from the list and waited for apt to finish the package installation.\nThe next step is to edit /etc/default/grub and add in the AppArmor configuration. Adjust the GRUB_CMDLINE_LINUX_DEFAULT line to look like the one below:\nGRUB_DEFAULT=0 GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR=`lsb_release -i -s 2\u0026gt; /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT=\u0026#34;quiet apparmor=1 security=apparmor\u0026#34; GRUB_CMDLINE_LINUX=\u0026#34;\u0026#34; Ensure that the required AppArmor packages are installed:\napt-get -y install apparmor apparmor-profiles apparmor-utils Enable the AppArmor service upon reboot:\nsystemctl enable apparmor Run update-grub and reboot. After the reboot, run apparmor_status and you should see lots of AppArmor profiles loaded:\n# apparmor_status apparmor module is loaded. 38 profiles are loaded. 3 profiles are in enforce mode. /usr/lib/chromium-browser/chromium-browser//browser_java /usr/lib/chromium-browser/chromium-browser//browser_openjdk /usr/lib/chromium-browser/chromium-browser//sanitized_helper 35 profiles are in complain mode. 
/sbin/klogd /sbin/syslog-ng /sbin/syslogd /usr/lib/chromium-browser/chromium-browser /usr/lib/chromium-browser/chromium-browser//chromium_browser_sandbox /usr/lib/chromium-browser/chromium-browser//lsb_release /usr/lib/chromium-browser/chromium-browser//xdgsettings /usr/lib/dovecot/anvil /usr/lib/dovecot/auth /usr/lib/dovecot/config /usr/lib/dovecot/deliver /usr/lib/dovecot/dict /usr/lib/dovecot/dovecot-auth /usr/lib/dovecot/dovecot-lda /usr/lib/dovecot/imap /usr/lib/dovecot/imap-login /usr/lib/dovecot/lmtp /usr/lib/dovecot/log /usr/lib/dovecot/managesieve /usr/lib/dovecot/managesieve-login /usr/lib/dovecot/pop3 /usr/lib/dovecot/pop3-login /usr/lib/dovecot/ssl-params /usr/sbin/avahi-daemon /usr/sbin/dnsmasq /usr/sbin/dovecot /usr/sbin/identd /usr/sbin/mdnsd /usr/sbin/nmbd /usr/sbin/nscd /usr/sbin/smbd /usr/sbin/smbldap-useradd /usr/sbin/smbldap-useradd///etc/init.d/nscd /usr/{sbin/traceroute,bin/traceroute.db} /{usr/,}bin/ping 0 processes have profiles defined. 0 processes are in enforce mode. 0 processes are in complain mode. 0 processes are unconfined but have a profile defined. Final thoughts #I\u0026rsquo;m still unsure about why AppArmor is disabled by default. There aren\u0026rsquo;t that many profiles shipped by default (38 on my freshly installed jessie system versus 417 SELinux policies in Fedora 25) and many of them affect services that wouldn\u0026rsquo;t cause significant disruptions on the system.\nThere is a discussion that ended last year around how to automate the AppArmor enablement process when the AppArmor packages are installed. This would be a great first step to make the process easier, but it would probably make more sense to take the step of enabling it by default.\nPhoto credit: Max Pixel\n","date":"24 May 2017","permalink":"/p/enable-apparmor-on-a-debian-jessie-cloud-image/","section":"Posts","summary":"I merged some initial Debian support into the openstack-ansible-security role and ran into an issue enabling AppArmor.","title":"Enable AppArmor on a Debian Jessie cloud image"},{"content":"I opened up a noVNC console to a virtual machine today in my OpenStack cloud but found that the console wouldn\u0026rsquo;t take keyboard input. The Send Ctrl-Alt-Del button in the top right of the window worked just fine, but I couldn\u0026rsquo;t type anywhere in the console. This happened on an Ocata OpenStack cloud deployed with OpenStack-Ansible on CentOS 7.\nTest the network path #The network path to the console is a little deep for this deployment, but here\u0026rsquo;s a quick explanation:\nMy laptop connects to HAProxy HAProxy sends the traffic to the nova-novncproxy service nova-novncproxy connects to the correct VNC port on the right hypervisor If all of that works, I get a working console! I knew the network path was set up correctly because I could see the console in my browser.\nMy next troubleshooting step was to dump network traffic with tcpdump on the hypervisor itself. I dumped the traffic on port 5900 (which was the VNC port for this particular instance) and watched the output. Whenever I wiggled the mouse over the noVNC console in my browser, I saw a flurry of network traffic. The same thing happened if I punched lots of keys on the keyboard. At this point, it was clear that the keyboard input was making it to the hypervisor, but it wasn\u0026rsquo;t being handled correctly.\nTest the console #Next, I opened up virt-manager, connected to the hypervisor, and opened a connection to the instance. The keyboard input worked fine there. 
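If you would rather skip the GUI, virsh can also report which VNC display a guest is listening on so you can point any VNC client straight at it (substitute your guest's libvirt domain name or ID - mine happened to be 4):
# virsh vncdisplay 4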
I opened up remmina and connected via plain old VNC. The keyboard input worked fine there, too!\nInvestigate in the virtual machine #The system journal in the virtual machine had some interesting output:\nkernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). kernel: atkbd serio0: Use \u0026#39;setkeycodes 00 \u0026lt;keycode\u0026gt;\u0026#39; to make it known. kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). kernel: atkbd serio0: Use \u0026#39;setkeycodes 00 \u0026lt;keycode\u0026gt;\u0026#39; to make it known. kernel: atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). kernel: atkbd serio0: Use \u0026#39;setkeycodes 00 \u0026lt;keycode\u0026gt;\u0026#39; to make it known. kernel: atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). kernel: atkbd serio0: Use \u0026#39;setkeycodes 00 \u0026lt;keycode\u0026gt;\u0026#39; to make it known. kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). kernel: atkbd serio0: Use \u0026#39;setkeycodes 00 \u0026lt;keycode\u0026gt;\u0026#39; to make it known. kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). kernel: atkbd serio0: Use \u0026#39;setkeycodes 00 \u0026lt;keycode\u0026gt;\u0026#39; to make it known. It seems like my keyboard input was being lost in translation - literally. I have a US layout keyboard (Thinkpad X1 Carbon) and the virtual machine was configured with the en-us keymap:\n# virsh dumpxml 4 | grep vnc \u0026lt;graphics type=\u0026#39;vnc\u0026#39; port=\u0026#39;5900\u0026#39; autoport=\u0026#39;yes\u0026#39; listen=\u0026#39;192.168.250.41\u0026#39; keymap=\u0026#39;en-us\u0026#39;\u0026gt; A thorough Googling session revealed that it is not recommended to set a keymap for virtual machines in libvirt in most situations. I set the nova_console_keymap variable in /etc/openstack_deploy/user_variables.yml to an empty string:\nnova_console_keymap: \u0026#39;\u0026#39; I redeployed the nova service using the OpenStack-Ansible playbooks:\nopenstack-ansible os-nova-install.yml Once that was done, I powered off the virtual machine and powered it back on. (This is needed to ensure that the libvirt changes go into effect for the virtual machine.)\nGreat success! The keyboard was working in the noVNC console once again!\nPhoto credit: Wikipedia\n","date":"18 May 2017","permalink":"/p/fixing-openstack-novnc-consoles-that-ignore-keyboard-input/","section":"Posts","summary":"I opened up a noVNC console to a virtual machine today in my OpenStack cloud but found that the console wouldn\u0026rsquo;t take keyboard input.","title":"Fixing OpenStack noVNC consoles that ignore keyboard input"},{"content":"","date":null,"permalink":"/tags/kvm/","section":"Tags","summary":"","title":"Kvm"},{"content":"","date":null,"permalink":"/tags/vnc/","section":"Tags","summary":"","title":"Vnc"},{"content":"Although OpenStack-Ansible doesn\u0026rsquo;t fully support CentOS 7 yet, the support is almost ready. 
I have a four node Ocata cloud deployed on CentOS 7, but I decided to change things around a bit and use systemd-networkd instead of NetworkManager or the old rc scripts.\nThis post will explain how to configure the network for an OpenStack-Ansible cloud on CentOS 7 with systemd-networkd.\nEach one of my OpenStack hosts has four network interfaces and each one has a specific task:\nenp2s0 – regular network interface, carries inter-host LAN traffic enp3s0 – carries br-mgmt bridge for LXC container communication enp4s0 – carries br-vlan bridge for VM public network connectivity enp5s0 – carries br-vxlan bridge for VM private network connectivity Adjusting services #First off, we need to get systemd-networkd and systemd-resolved ready to take over networking:\nsystemctl disable network systemctl disable NetworkManager systemctl enable systemd-networkd systemctl enable systemd-resolved systemctl start systemd-resolved rm -f /etc/resolv.conf ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf LAN interface #My enp2s0 network interface carries traffic between hosts and handles regular internal LAN traffic.\n/etc/systemd/network/enp2s0.network\n[Match] Name=enp2s0 [Network] Address=192.168.250.21/24 Gateway=192.168.250.1 DNS=192.168.250.1 DNS=8.8.8.8 DNS=8.8.4.4 IPForward=yes This one is quite simple, but the rest get a little more complicated.\nManagement bridge #The management bridge (br-mgmt) carries traffic between LXC containers. We start by creating the bridge device itself:\n/etc/systemd/network/br-mgmt.netdev\n[NetDev] Name=br-mgmt Kind=bridge Now we configure the network on the bridge (I use OpenStack-Ansible\u0026rsquo;s defaults here):\n/etc/systemd/network/br-mgmt.network\n[Match] Name=br-mgmt [Network] Address=172.29.236.21/22 I run the management network on VLAN 10, so I need a network device and network configuration for the VLAN as well. This step adds the br-mgmt bridge to the VLAN 10 interface:\n/etc/systemd/network/vlan10.netdev\n[NetDev] Name=vlan10 Kind=vlan [VLAN] Id=10 /etc/systemd/network/vlan10.network\n[Match] Name=vlan10 [Network] Bridge=br-mgmt Finally, we add the VLAN 10 interface to enp3s0 to tie it all together:\n/etc/systemd/network/enp3s0.network\n[Match] Name=enp3s0 [Network] VLAN=vlan10 Public instance connectivity #My router offers up a few different VLANs for OpenStack instances to use for their public networks. We start by creating a br-vlan network device and its configuration:\n/etc/systemd/network/br-vlan.netdev\n[NetDev] Name=br-vlan Kind=bridge /etc/systemd/network/br-vlan.network\n[Match] Name=br-vlan [Network] DHCP=no We can add this bridge onto the enp4s0 physical interface:\n/etc/systemd/network/enp4s0.network\n[Match] Name=enp4s0 [Network] Bridge=br-vlan VXLAN private instance connectivity #This step is similar to the previous one. We start by defining our br-vxlan bridge:\n/etc/systemd/network/br-vxlan.netdev\n[NetDev] Name=br-vxlan Kind=bridge /etc/systemd/network/br-vxlan.network\n[Match] Name=br-vxlan [Network] Address=172.29.240.21/22 My VXLAN traffic runs over VLAN 11, so we need to define that VLAN interface:\n/etc/systemd/network/vlan11.netdev\n[NetDev] Name=vlan11 Kind=vlan [VLAN] Id=11 /etc/systemd/network/vlan11.network\n[Match] Name=vlan11 [Network] Bridge=br-vxlan We can hook this VLAN interface into the enp5s0 interface now:\n/etc/systemd/network/enp5s0.network\n[Match] Name=enp5s0 [Network] VLAN=vlan11 Checking our work #The cleanest way to apply all of these configurations is to reboot. 
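If you would rather test a change before committing to a reboot, restarting the daemon makes it re-read the files - just do it from a console or out-of-band session in case an address moves out from under you:
systemctl restart systemd-networkd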
The Adjusting services step from the beginning of this post will ensure that systemd-networkd and systemd-resolved come up after a reboot.\nRun networkctl to get a current status of your network interfaces:\n# networkctl IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 enp2s0 ether routable configured 3 enp3s0 ether degraded configured 4 enp4s0 ether degraded configured 5 enp5s0 ether degraded configured 6 lxcbr0 ether routable unmanaged 7 br-vxlan ether routable configured 8 br-vlan ether degraded configured 9 br-mgmt ether routable configured 10 vlan11 ether degraded configured 11 vlan10 ether degraded configured You should have configured in the SETUP column for all of the interfaces you created. Some interfaces will show as degraded because they are missing an IP address (which is intentional for most of these interfaces).\n","date":"13 April 2017","permalink":"/p/openstack-ansible-on-centos-7-with-systemd-networkd/","section":"Posts","summary":"Although OpenStack-Ansible doesn\u0026rsquo;t fully support CentOS 7 yet, the support is almost ready.","title":"OpenStack-Ansible networking on CentOS 7 with systemd-networkd"},{"content":"DISA\u0026rsquo;s final release of the Red Hat Enterprise Linux (RHEL) 7 Security Technical Implementation Guide (STIG) came out a few weeks ago and it has plenty of improvements and changes. The openstack-ansible-security role has already been updated with these changes.\nQuite a few duplicated STIG controls were removed and a few new ones were added. Some of the controls in the pre-release were difficult to implement, especially those that changed parameters for PKI-based authentication.\nThe biggest challenge overall was the renumbering. The pre-release STIG used an unusual numbering convention: RHEL-07-123456. The final version used the more standardized \u0026ldquo;V\u0026rdquo; numbers, such as V-72225. This change required a substantial patch to bring the Ansible role inline with the new STIG release.\nAll of the role\u0026rsquo;s documentation is now updated to reflect the new numbering scheme and STIG changes. The key thing to remember is that you\u0026rsquo;ll need to use --skip-tag with the new STIG numbers if you need to skip certain tasks.\nNote: These changes won\u0026rsquo;t be backported to the stable/ocata branch, so you need to use the master branch to get these changes.\nHave feedback? Found a bug? Let us know!\nIRC: #openstack-ansible on Freenode IRC Bugs: LaunchPad E-mail: openstack-dev@lists.rackspace.com with the subject line [openstack-ansible][security] ","date":"5 April 2017","permalink":"/p/rhel-7-stig-v1-updates-for-openstack-ansible-security/","section":"Posts","summary":"DISA\u0026rsquo;s final release of the Red Hat Enterprise Linux (RHEL) 7 Security Technical Implementation Guide (STIG) came out a few weeks ago and it has plenty of improvements and changes.","title":"RHEL 7 STIG v1 updates for openstack-ansible-security"},{"content":"It all started shortly after I joined Rackspace in December of 2006. I needed a place to dump the huge amounts of information I was learning as an entry-level Linux support technician and I wanted to store everything in a place where it could be easily shared. The blog was born!\nThe blog now has over 700 posts on topics ranging from Linux system administration to job interview preparation. 
I\u0026rsquo;ll get an email or a tweet once every few weeks from someone saying: \u0026ldquo;I ran into a problem, Googled for it, and found your blog!\u0026rdquo; Comments like that keep me going and allow me to push through the deepest writer\u0026rsquo;s block moments.\nThe post titled \u0026ldquo;Why technical people should blog (but don\u0026rsquo;t)\u0026rdquo; is one of my favorites and I get a lot of feedback about it. Many people still feel like there\u0026rsquo;s no audience out there for the things they write. Just remember that someone, somewhere, can learn something from you and from your experiences. Write from the heart about what interests you and the readers will gradually appear. It\u0026rsquo;s a Field of Dreams moment.\nThanks to everyone who has given me support over the years to keep the writing going!\n","date":"10 March 2017","permalink":"/p/reflecting-on-10-years-of-mostly-technical-blogging/","section":"Posts","summary":"It all started shortly after I joined Rackspace in December of 2006.","title":"Reflecting on 10 years of (mostly) technical blogging"},{"content":"NOTE: The opinions shared in this post are mine alone and are not related to my employer in any way.\nThe first OpenStack Project Teams Gathering (PTG) event was held this week in Atlanta. The week was broken into two parts: cross-project work on Monday and Tuesday, and individual projects Wednesday through Friday. I was there for the first two days and heard a few discussions that started the same way.\nEveryone keeps saying OpenStack is dead.\nIs it?\nOpenStack isn\u0026rsquo;t dead. It\u0026rsquo;s boring.\n\u0026ldquo;The report of my death was an exaggeration\u0026rdquo; #Mark Twain said it best, but it works for OpenStack as well. The news has plenty of negative reports that cast a shadow over OpenStack\u0026rsquo;s future. You don\u0026rsquo;t have to look far to find them:\nHPE and Cisco Moves Hurt OpenStack\u0026rsquo;s Public Cloud Story (Fortune) Is Cisco killing off its OpenStack public cloud? (Computer Business Review) OpenStack still has an enterprise problem This isn\u0026rsquo;t evidence of OpenStack\u0026rsquo;s demise, but rather a transformation. Gartner called OpenStack a \u0026ldquo;science project\u0026rdquo; in 2015 and now 451 Research Group is saying something very different:\n451 Research Group estimates OpenStack\u0026rsquo;s ecosystem to grow nearly five-fold in revenue, from US$1.27 billion market size in 2015 to US$5.75 billion by 2020.\nA 35% CAGR sounds pretty good for a product in the middle of a transformation. In Texas, we\u0026rsquo;d say that\u0026rsquo;s more than enough to \u0026ldquo;shake a stick at\u0026rdquo;.\nThe transformation #You can learn a lot about the transformation going on within OpenStack by reading analyst reports and other news online. I won\u0026rsquo;t go into that here since that data is readily available.\nInstead, I want to take a look at how OpenStack has changed from the perspective of a developer. My involvement with OpenStack started in the Diablo release in 2011 and my first OpenStack Summit was the Folsom summit in San Francisco.\nMuch of the discussion at that time was around the \u0026ldquo;minutiae\u0026rdquo; of developing software in its early forms. We discussed topics like how to test, how to handle a myriad of requirements that constantly change, and which frameworks to use in which projects. The list of projects was quite short at that time (there were only 7 main services in Grizzly). 
Lots of effort certainly poured into feature development, but there was a ton of work being done to keep the wheels from falling off entirely.\nThe discussions at this week\u0026rsquo;s PTG were very different.\nMost of the discussion was around adding new integrations, improving reliability, and increasing scale. Questions were asked about how to integrate OpenStack into existing enterprise processes and applications. Reliability discussions were centered less around making the OpenStack services reliable, but more around how to increase overall resiliency when other hardware or software is misbehaving.\nDiscussions or arguments about minutiae were difficult to find.\nBoring is good #I\u0026rsquo;m not trying to say that working with OpenStack is boring. Developing software within the OpenStack community is an enjoyable experience. The rules and regulations within most projects are there to prevent design mistakes that have appeared before and many of these sets of rules are aligned between projects. Testing code and providing meaningful reviews is also straightforward.\nHowever, the drama, both unproductive and productive, that plagued the project in the past is diminishing. It still exists in places, especially when it comes to vendor relationships. (That\u0026rsquo;s where most open source projects see their largest amounts of friction, anyway.)\nThis transformation may make OpenStack appear \u0026ldquo;dead\u0026rdquo; to some. The OpenStack community is solving different problems now. Many of them are larger and more difficult to solve. Sometimes these challenges take more than one release to overcome. Either way, many OpenStack developers are up for these new challenges, even if they don\u0026rsquo;t make the headlines.\nAs for me: bring on the boring. Let\u0026rsquo;s crush the hard stuff.\nPhoto credit: By Mike (Flickr: DSC_6831_2_3_tonemapped) [CC BY 2.0], via Wikimedia Commons\n","date":"24 February 2017","permalink":"/p/openstack-isnt-dead-its-boring-thats-a-good-thing/","section":"Posts","summary":"NOTE: The opinions shared in this post are mine alone and are not related to my employer in any way.","title":"OpenStack isn’t dead. It’s boring. That’s a good thing."},{"content":"","date":null,"permalink":"/tags/servers/","section":"Tags","summary":"","title":"Servers"},{"content":"","date":null,"permalink":"/tags/virtualization/","section":"Tags","summary":"","title":"Virtualization"},{"content":"My OpenStack cloud depends on Ubuntu, and the latest release of OpenStack-Ansible (what I use to deploy OpenStack) requires Ubuntu 16.04 at a minimum. I tried upgrading the servers in place from Ubuntu 14.04 to 16.04, but that didn\u0026rsquo;t work so well. Those servers wouldn\u0026rsquo;t boot and the only recourse was a re-install.\nOnce I finished re-installing them (and wrestling with several installer bugs in Ubuntu 16.04), it was time to set up networking. The traditional network configurations in /etc/network/interfaces are fine, but they weren\u0026rsquo;t working the same way they were in 14.04. The VLAN configuration syntax appears to be different now.\nBut wait - 16.04 has systemd 229! I can use systemd-networkd to configure the network in a way that is a lot more familiar to me. 
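One note before the configuration files: systemd-networkd has to own the interfaces first, so the legacy networking unit needs to be disabled and the systemd services enabled and started. Roughly, and best done from the console in case connectivity drops:
systemctl disable networking
systemctl enable systemd-networkd systemd-resolved
systemctl start systemd-networkd systemd-resolved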
I\u0026rsquo;ve made posts about systemd-networkd before and the simplicity in the configurations.\nI started with some simple configurations:\nroot@hydrogen:~# cd /etc/systemd/network root@hydrogen:/etc/systemd/network# cat enp3s0.network [Match] Name=enp3s0 [Network] VLAN=vlan10 root@hydrogen:/etc/systemd/network# cat vlan10.netdev [NetDev] Name=vlan10 Kind=vlan [VLAN] Id=10 root@hydrogen:/etc/systemd/network# cat vlan10.network [Match] Name=vlan10 [Network] Bridge=br-mgmt root@hydrogen:/etc/systemd/network# cat br-mgmt.netdev [NetDev] Name=br-mgmt Kind=bridge root@hydrogen:/etc/systemd/network# cat br-mgmt.network [Match] Name=br-mgmt [Network] Address=172.29.236.21/22 Here\u0026rsquo;s a summary of the configurations:\nPhysical network interface is enp3s0 VLAN 10 is trunked down from a switch to that interface Bridge br-mgmt should be on VLAN 10 (only send/receive traffic tagged with VLAN 10) Once that was done, I restarted systemd-networkd to put the change into effect:\n# systemctl restart systemd-networkd Great! Let\u0026rsquo;s check our work:\nroot@hydrogen:~# brctl show bridge name bridge id STP enabled interfaces br-mgmt 8000.0a30a9a949d9 no root@hydrogen:~# networkctl IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 enp2s0 ether routable configured 3 enp3s0 ether degraded configured 4 enp4s0 ether off unmanaged 5 enp5s0 ether off unmanaged 6 br-mgmt ether no-carrier configuring 7 vlan10 ether degraded unmanaged 7 links listed. So the bridge has no interfaces and it\u0026rsquo;s in a no-carrier status. Why? Let\u0026rsquo;s check the journal:\n# journalctl --boot -u systemd-networkd Jan 15 09:16:46 hydrogen systemd[1]: Started Network Service. Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: netdev exists, using existing without changing its parameters Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Could not append VLANs: Operation not permitted Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Failed to assign VLANs to bridge port: Operation not permitted Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Could not set bridge vlan: Operation not permitted Jan 15 09:16:59 hydrogen systemd-networkd[1903]: enp3s0: Configured Jan 15 09:16:59 hydrogen systemd-networkd[1903]: enp2s0: Configured The Could not append VLANs: Operation not permitted error is puzzling. After some searching on Google, I found a thread from Lennart:\nAfter an upgrade, systemd-networkd is broken, exactly the way descibed \u0026gt; in this issue #3876[0] Please upgrade to 231, where this should be fixed. Lennart But Ubuntu 16.04 has systemd 229:\n# dpkg -l | grep systemd ii libpam-systemd:amd64 229-4ubuntu13 amd64 system and service manager - PAM module ii libsystemd0:amd64 229-4ubuntu13 amd64 systemd utility library ii python3-systemd 231-2build1 amd64 Python 3 bindings for systemd ii systemd 229-4ubuntu13 amd64 system and service manager ii systemd-sysv 229-4ubuntu13 amd64 system and service manager - SysV links I haven\u0026rsquo;t found a solution for this quite yet. Keep an eye on this post and I\u0026rsquo;ll update it once I know more!\n","date":"15 January 2017","permalink":"/p/systemd-networkd-on-ubuntu-16-04-lts-xenial/","section":"Posts","summary":"My OpenStack cloud depends on Ubuntu, and the latest release of OpenStack-Ansible (what I use to deploy OpenStack) requires Ubuntu 16.","title":"systemd-networkd on Ubuntu 16.04 LTS (Xenial)"},{"content":"My new ThinkPad arrived this week and it is working well! 
The Fedora 25 installation was easy and all of the hardware was recognized immediately.\nHooray! pic.twitter.com/OiPSHREMLo \u0026mdash; Major Hayden (@majorhayden) January 9, 2017 However, there was a downside. The display looked washed out and had a strange tint. It seemed to be more pale than the previous ThinkPad. The default ICC profile in GNOME didn\u0026rsquo;t help much.\nThere\u0026rsquo;s a helpful review over at NotebookCheck that has a link to an ICC profile generated from a 4th generation ThinkPad X1 Carbon. This profile was marginally better than GNOME\u0026rsquo;s default, but it still looked a bit more washed out than what it should be.\nI picked up a ColorMunki Display and went through a fast calibration in GNOME\u0026rsquo;s Color Manager. The low quality run finished in under 10 minutes and the improvement was definitely noticeable. Colors look much deeper and less washed out. The display looks very similar to the previous generation ThinkPad X1 Carbon.\n","date":"11 January 2017","permalink":"/p/icc-color-profile-lenovo-thinkpad-x1-carbon-4th-generation/","section":"Posts","summary":"My new ThinkPad arrived this week and it is working well!","title":"ICC color profile for Lenovo ThinkPad X1 Carbon 4th generation"},{"content":"","date":null,"permalink":"/tags/auditd/","section":"Tags","summary":"","title":"Auditd"},{"content":"All systems running systemd come with a powerful tool for reviewing the system journal: journalctl. It allows you to get a quick look at the system journal while also allowing you to heavily customize your view of the log.\nI logged into a server recently that was having a problem and I found that the audit logs weren\u0026rsquo;t going into syslog. That\u0026rsquo;s no problem - they\u0026rsquo;re in the system journal. The system journal was filled with tons of other messages, so I decided to limit the output only to messages from the auditd unit:\n$ sudo journalctl -u auditd --boot -- Logs begin at Thu 2015-11-05 09:20:01 CST, end at Thu 2017-01-05 09:38:49 CST. -- Jan 05 07:47:04 arsenic systemd[1]: Starting Security Auditing Service... Jan 05 07:47:04 arsenic auditd[937]: Started dispatcher: /sbin/audispd pid: 949 Jan 05 07:47:04 arsenic audispd[949]: priority_boost_parser called with: 4 Jan 05 07:47:04 arsenic audispd[949]: max_restarts_parser called with: 10 Jan 05 07:47:04 arsenic audispd[949]: audispd initialized with q_depth=150 and 1 active plugins Jan 05 07:47:04 arsenic augenrules[938]: /sbin/augenrules: No change Jan 05 07:47:04 arsenic augenrules[938]: No rules Jan 05 07:47:04 arsenic auditd[937]: Init complete, auditd 2.7 listening for events (startup state enable) Jan 05 07:47:04 arsenic systemd[1]: Started Security Auditing Service. This isn\u0026rsquo;t helpful. I\u0026rsquo;m seeing messages about the auditd daemon itself. I want the actual output from the audit rules.\nThen I remembered: the kernel is the one that sends messages about audit rules to the system journal. Let\u0026rsquo;s just look at what\u0026rsquo;s coming from the kernel instead:\n$ sudo journalctl -k --boot -- Logs begin at Thu 2015-11-05 09:20:01 CST, end at Thu 2017-01-05 09:40:44 CST. -- Jan 05 07:46:47 arsenic kernel: Linux version 4.8.15-300.fc25.x86_64 (mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 6.2.1 20160916 (Red Hat 6.2.1-2 Jan 05 07:46:47 arsenic kernel: Command line: BOOT_IMAGE=/vmlinuz-4.8.15-300.fc25.x86_64 root=/dev/mapper/luks-e... 
ro rd.luks Jan 05 07:46:47 arsenic kernel: x86/fpu: Supporting XSAVE feature 0x001: \u0026#39;x87 floating point registers\u0026#39; Jan 05 07:46:47 arsenic kernel: x86/fpu: Supporting XSAVE feature 0x002: \u0026#39;SSE registers\u0026#39; Jan 05 07:46:47 arsenic kernel: x86/fpu: Supporting XSAVE feature 0x004: \u0026#39;AVX registers\u0026#39; Jan 05 07:46:47 arsenic kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 This is worse! Luckily, the system journal keeps a lot more data about what it receives than just the text of the log line. We can dig into that extra data with the verbose option:\n$ sudo journalctl --boot -o verbose After running that command, search for one of the audit log lines in the output:\n_UID=0 _BOOT_ID=... _MACHINE_ID=... _HOSTNAME=arsenic _TRANSPORT=audit SYSLOG_FACILITY=4 SYSLOG_IDENTIFIER=audit AUDIT_FIELD_HOSTNAME=? AUDIT_FIELD_ADDR=? AUDIT_FIELD_RES=success _AUDIT_TYPE=1105 AUDIT_FIELD_OP=PAM:session_open _SELINUX_CONTEXT=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 _AUDIT_LOGINUID=1000 _AUDIT_SESSION=3 AUDIT_FIELD_ACCT=root AUDIT_FIELD_EXE=/usr/bin/sudo AUDIT_FIELD_GRANTORS=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix AUDIT_FIELD_TERMINAL=/dev/pts/4 _PID=2666 _SOURCE_REALTIME_TIMESTAMP=1483631103122000 _AUDIT_ID=385 MESSAGE=USER_START pid=2666 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg=\u0026#39;op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct=\u0026#34;root\u0026#34; exe=\u0026#34;/usr/bin/sudo\u0026#34; hostname=? addr=? terminal=/dev/pts/4 res=success\u0026#39; One of the identifiers we can use is _TRANSPORT=audit. Let\u0026rsquo;s pass that to journalctl and see what we get:\n$ sudo journalctl --boot _TRANSPORT=audit -- Logs begin at Thu 2015-11-05 09:20:01 CST. -- Jan 05 09:47:24 arsenic audit[3028]: USER_END pid=3028 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg=\u0026#39;op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct=\u0026#34;root\u0026#34; exe=\u0026#34;/usr/bin/sudo\u0026#34; hostname=? addr=? terminal=/dev/pts/4 res=success\u0026#39; ... more log lines snipped ... Success! You can get live output of the audit logs by tailing the output:\nsudo journalctl -af _TRANSPORT=audit For more details on journalctl, refer to the online documentation.\n","date":"5 January 2017","permalink":"/p/display-auditd-messages-with-journalctl/","section":"Posts","summary":"All systems running systemd come with a powerful tool for reviewing the system journal: journalctl.","title":"Display auditd messages with journalctl"},{"content":"When I came back from the holiday break, I found that the openstack-ansible-security role wasn\u0026rsquo;t passing tests any longer. The Ansible playbook stopped when augenrules ran to load the new audit rules. The error wasn\u0026rsquo;t terribly helpful:\n/usr/sbin/augenrules: No change Error sending add rule data request (Rule exists) There was an error in line 5 of /etc/audit/audit.rules A duplicated rule? #I\u0026rsquo;ve been working on lots of changes to implement the Red Hat Enterprise Linux 7 Security Technical Implementation Guide (STIG) and I assumed I put in the same rule twice with an errant copy and paste.\nThat wasn\u0026rsquo;t the case. I checked the input rule file in /etc/audit/rules.d/ and found that all of the rules were unique.\nIs something missing? 
#The augenrules command works by taking files from /etc/audit/rules.d/ and joining them together into /etc/audit/audit.rules. Based on the output from augenrules, the rule file checks out fine and it determined that the existing rule doesn\u0026rsquo;t need to be updated. However, augenrules is still unable to load the new rules into auditd.\nI decided to check the first several lines of /etc/audit/rules.d/ to see if line 5 had a problem:\n## This file is automatically generated from /etc/audit/rules.d -f 1 -a always,exit -F path=/usr/bin/chsh -F perm=x -F auid\u0026gt;=1000 -F auid!=4294967295 -k RHEL-07-030525 -a always,exit -F path=/usr/bin/chage -F perm=x -F auid\u0026gt;=1000 -F auid!=4294967295 -k RHEL-07-030513 Two things looked strange to me:\nLine 5 is correct and it is unique Why are lines 2 and 3 blank? I checked another CentOS 7 server and found the following in lines 2 and 3:\n-D -b 320 The -D deletes all previously loaded rules and -b increases the buffer size for busy periods. My rules weren\u0026rsquo;t loading properly because the -D was missing! Those two lines normally come from /etc/audit/rules.d/audit.rules, but that default file was not present.\nHere\u0026rsquo;s what was going wrong:\naugenrules read rules from rules.d/ augenrules found that the rules in rules.d/ were already in the main audit.rules file and didn\u0026rsquo;t need to be updated augenrules attempted to load the rules into auditd, but that failed auditd was rejecting the rules because at least one of them (line 5) already existed in the running rule set All of this happened because the -D wasn\u0026rsquo;t handled first before new rules were loaded.\nFixing it #I decided to add the -D line explicitly in my rules file within rules.d/ to catch those situations when the audit.rules default file is missing. The augenrules command ensures that the line appears at the top of the rules when they are loaded into auditd.\n","date":"3 January 2017","permalink":"/p/augenrules-fails-with-rule-exists-when-loading-rules-into-auditd/","section":"Posts","summary":"When I came back from the holiday break, I found that the openstack-ansible-security role wasn\u0026rsquo;t passing tests any longer.","title":"augenrules fails with “rule exists” when loading rules into auditd"},{"content":"Thanks to everyone who attended my talk at the OpenStack Summit in Barcelona! I really enjoyed sharing some tips with the audience and it was great to meet some attendees in person afterwards.\nIf you weren\u0026rsquo;t able to make it, don\u0026rsquo;t fret! This post will cover some of the main points of the talk and link to the video and slides.\nPurpose #OpenStack clouds are inherently complex. Operating a cloud involves a lot of moving pieces in software, hardware, and networking. Securing complex systems can be a real challenge, especially for newcomers to the information security realm. One wrong turn can knock critical infrastructure online or lead to lengthy debugging sessions.\nHowever, securing OpenStack clouds doesn\u0026rsquo;t need to be a tremendously stressful experience. It requires a methodical, thoughful, and strategic approach. The goal of the talk is give the audience a reliable strategy that is easy to start with and that scales easily over time.\nWhy holistic? 
#The dictionary definition of holistic is:\ncharacterized by comprehension of the parts of something as intimately connected and explicable only by reference to the whole\nTo simplify things a bit, thinking about something holistically means that you understand that there are small parts that are valuable on their own, but they make much more value when combined together. Also, it\u0026rsquo;s difficult to talk about the individual parts and get a real understanding of the whole.\nIn holistic medicine, humans are considered to be a body, mind, and spirit. OpenStack clouds involve servers, software, and a business goal. Security consists of people, process, and technology. To truly understand what\u0026rsquo;s going on, you need to take a look at something with all of its parts connected.\nSecurity refresher #Get into the mindset that attackers will get in eventually. Just change each instance of if to when in your conversations. Attackers can be wrong many times, but the defenders only need to be wrong once to allow a breach to occur.\nSimply building a huge moat and tall castle walls around the outside isn\u0026rsquo;t sufficient. Attackers will have free reign to move around inside and take what they want. Multiple layers are needed, and this is the backbone of a defense-in-depth strategy.\nCloud operators need to work from the outside inwards, much like you do with utensils at a fancy dinner. Make a good wall around the outside and work on tightening down the internals at multiple levels.\nFour layers for OpenStack #During the talk, I divided OpenStack clouds into four main layers:\nouter perimeter control and data planes OpenStack services and backend services in the control plane OpenStack services For the best explanation of what to do at this level, I highly recommend reviewing the slides or the presentation video (keep scrolling).\nLinks and downloads #The slides are on SlideShare and they are licensed CC-BY-SA. Feel free to share anything from the slide deck as you wish, but please share it via a similar license and attribute the source!\nThe video of the talk (including Q\u0026amp;A) is up on YouTube:\nFeedback #I love feedback about my talks! I\u0026rsquo;m still a novice presenter and every little bit of feedback - positive or negative - really helps. Feel free to email me or talk to me on Twitter.\n","date":"31 October 2016","permalink":"/p/talk-recap-holistic-security-for-openstack-clouds/","section":"Posts","summary":"Thanks to everyone who attended my talk at the OpenStack Summit in Barcelona!","title":"Talk Recap: Holistic Security for OpenStack Clouds"},{"content":"","date":null,"permalink":"/tags/education/","section":"Tags","summary":"","title":"Education"},{"content":"There are lots of efforts underway to get students (young and old) to learn to write code. There are far-reaching efforts, like the Hour of Code, and plenty of smaller, more focused projects, such as the Design and Technology Academy (part of Northeast ISD here in San Antonio, Texas). 
These are excellent programs that enrich the education of many students.\nI often hear a question from various people about these programs:\nWhy should a student learn to write code if they aren\u0026rsquo;t going to become a software developer or IT professional?\nIt\u0026rsquo;s a completely legitimate question and I hope to provide a helpful response in this post.\nSome students will actually enter the IT field #This may seem obvious, but it\u0026rsquo;s important to note that many students may choose to enter the IT field. They may not become full-time software developers, but the experience is useful for many different IT jobs. For example, knowing some basic principles about how software works is critical for system administrators, network administrators, project managers, and people managers. These skills could give students an edge later in their IT career.\nStudents learn to measure twice and cut once #The concept of thorough planning before execution shows up in many different fields. You can find it in general engineering, architecture, medicine, and criminal justice. A failure to plan often leads to bigger challenges down the line.\nIn software development, planning is key. Where will you store your data? How much data will you need to store? Does the user need a graphical interface? What if the amount of users on the system increases by a factor of ten? How do we keep the system secure?\nI talk to computer science students at UTSA on a regular basis. One of their most challenging courses involves a group project where the students must build a fully functional application that solves a need. The students run head-first into a myriad of problems during development. They almost always learn that it\u0026rsquo;s much easier to talk through how everything will fit together before they write one line of software.\nStudents learn to think logically #Dale Carnegie said:\nWhen dealing with people, remember you are not dealing with creatures of logic, but creatures of emotion.\nHumans are swayed easily by emotional appeals and logic is often tossed aside. Computers are the total opposite. You can scream for joy, cry uncontrollably, or yell angrily at a computer and it keeps doing the same thing. It doesn\u0026rsquo;t understand that you just told it to eat your data. It doesn\u0026rsquo;t understand that you can\u0026rsquo;t figure out why an error is occurring.\nStudents can learn a lot from the way computers work. As one of my journalism professors said in school:\nYou put garbage in? You\u0026rsquo;ll get garbage out.\nStudents will learn to be explicit in their instructions and contemplate how to handle treacherous situations in software. Some smaller failures might result in an error or a poor user experience. Others could result in a buffer overflow that leads to a costly security breach. In the end, computers can only do what they\u0026rsquo;re told and it\u0026rsquo;s up to the developer to tell to the computer - in the computer\u0026rsquo;s language - in all of these situations.\nOutside of the world of computers, learning to be explicit has its benefits. It reduces confusion in conversations, leads to better results in group projects, and it encourage students to structure their thoughts into more organized communication.\nThere\u0026rsquo;s more to IT than writing code #Software doesn\u0026rsquo;t run without computer hardware. 
We live in the age of easily accessible, inexpensive cloud infrastructure where you can have a server online in seconds, but it\u0026rsquo;s still someone\u0026rsquo;s computer. Within large businesses, software developers are often asked to justify the resources they need to deploy their applications.\nThere is obviously some technical knowledge involved here, especially around the topic of sizing a server for a particular software workload. However, there are plenty of non-technical questions to ask.\nCan we re-use depreciated hardware to reduce capital expenditures? Is our datacenter space limited by power, space, or something else? Can we select a lower-wattage server to reduce power consumption (if that\u0026rsquo;s the bigger expense)? Will ordering in a larger batch allow us to drive down the hardware cost?\nSoftware developers wield significant power if they can use their technical skills to branch into the world of accounting, finance, and basic engineering. A well-rounded approach could also allow developers to get more hardware than they planned to get if they make the purchases in a smarter way.\nBasic understanding of computers is useful #Almost every technical person has fielded that awkward question from a family member at a gathering or during the holidays:\nI heard you do computer stuff? I think I have a virus - could you look at it?\nAll students should have a basic understanding of how a computer works, even if they never write software or work in IT. This knowledge helps keep all of us a bit safer online and it helps to diagnose some issues before they become a serious problem. Approaching computers with an observant and inquisitive mind will reduce security breaches and increase confidence. We will also flood the market with people who can teach others the basics about their technology\nSummary #All students could learn some important life lessons simply from learning how to write some code. Does this mean that all students must write some software before they graduate? Definitely not.\nMany of the lessons learned from writing software will easily transfer into other fields and disciplines. Gaining these skills is a lot like learning a foreign language. If you use them frequently, you\u0026rsquo;ll have a strong foundation of knowledge to build upon over time. Even if you don\u0026rsquo;t use them frequently, they could give you that small edge that you need later on in your professional career.\nPhoto credit: Pixabay[^6]\n","date":"11 October 2016","permalink":"/p/why-should-students-learn-to-write-code/","section":"Posts","summary":"There are lots of efforts underway to get students (young and old) to learn to write code.","title":"Why should students learn to write code?"},{"content":"","date":null,"permalink":"/tags/ibm/","section":"Tags","summary":"","title":"Ibm"},{"content":"","date":null,"permalink":"/tags/power/","section":"Tags","summary":"","title":"Power"},{"content":"IBM Edge 2016 is almost over and I\u0026rsquo;ve learned a lot about Power 8 this week. The performance arguments sound really interesting and some of the choices in AIX\u0026rsquo;s design seem to make a lot of sense.\nHowever, there\u0026rsquo;s one remaining barrier for me: Power 8 isn\u0026rsquo;t really accessible for a tinkerer.\nTinkering? 
#Google defines tinkering as:\nattempt to repair or improve something in a casual or desultory way,\noften to no useful effect.\n\u0026ldquo;he spent hours tinkering with the car\u0026rdquo;\nWhen I come across a new piece of technology, I really enjoy learning how it works. I like to find its strengths and its limitations. I use that information to figure out how I might use the technology later and when I would recommend the technology for someone else to use it.\nTo me, tinkering is simply messing around with something until I have a better understanding of how it works. Tinkering doesn\u0026rsquo;t have a finish line. Tinkering may not have a well-defined goal. However, it\u0026rsquo;s tinkering that leads to a more robust community around a particular technology.\nFor example, take a look at the Raspberry Pi. There were plenty of other ARM systems on the market before the Pi and there are still a lot of them now. What makes the Pi different is that it\u0026rsquo;s highly accessible. You can get the newest model for $35 and there are tons of guides for running various operating systems on it. There are even more guides for how to integrate it with other items, such as sprinkler systems, webcams, door locks, and automobiles.\nAnother example is the Intel NUC. Although the NUC isn\u0026rsquo;t the most cost-effective way to get an Intel chip on your desk, it\u0026rsquo;s powerful enough to be a small portable server that you can take with you. This opens up the door for software developers to test code wherever they are (we use them for OpenStack development), run demos at a customer location, or make multi-node clusters that fit in a laptop bag.\nWhat makes Power 8 inaccessible to tinkerers? #One of the first aspects that most people notice is the cost. The S821LC currently starts at around $6,000 on IBM\u0026rsquo;s site, which is a bit steep for someone who wants to learn a platform.\nI\u0026rsquo;m not saying this server should cost less - the pricing seems quite reasonable when you consider that it comes with dual 8-core Power 8 processors in a 1U form factor. It also has plenty of high speed interconnects ready for GPUs and CAPI chips. With all of that considered, $6,000 for a server like this sounds very reasonable.\nThere are other considerations as well. A stripped down S821LC with two 8-core CPUs will consume about 406 Watts at 50% utilization. That\u0026rsquo;s a fair amount of power draw for a tinkerer and I\u0026rsquo;d definitely think twice about running something like that at home. When you consider the cooling that\u0026rsquo;s required, it\u0026rsquo;s even more difficult to justify.\nWhat about AIX? #AIX provides some nice benefits on Power 8 systems, but it\u0026rsquo;s difficult to access as well. Put \u0026ldquo;learning AIX\u0026rdquo; into a Google search and look at the results. The first link is a thread on LinuxQuestions.org where the original poster is given a few options:\nBuy some IBM hardware Get in some legal/EULA gray areas with VMware Find an old Power 5/6 server that is coming offline at a business that is doing a refresh Having access to AIX is definitely useful for tinkering, but it could be very useful for software developers. For example, if I write a script in Python and I want to add AIX support, I\u0026rsquo;ll need access to a system running AIX. 
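Even outside of Python, the same problem shows up in the smallest platform checks: without a real AIX environment there is no honest way to confirm that a branch like this trivial sketch behaves the way you expect:
if [ "$(uname -s)" = "AIX" ]; then
  echo "running AIX-specific setup"
fi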
It wouldn\u0026rsquo;t necessarily need to be a system with tons of performance, but it would need the functionality of a basic AIX environment.\nPotential solutions #I\u0026rsquo;d suggest two solutions:\nGet AIX into an accessible format, perhaps on a public cloud Make a more tinker-friendly Power 8 hardware platform Let\u0026rsquo;s start with AIX. I\u0026rsquo;d gladly work with AIX in a public cloud environment where I pay some amount for the virtual machine itself plus additional licensing for AIX. It would still be valuable even if the version of AIX had limiters so that it couldn\u0026rsquo;t be used for production workloads. I would be able to access the full functionality of a running AIX environment.\nThe hardware side leads to challenges. However, if it\u0026rsquo;s possible to do a single Power 8 SMT2 CPU in a smaller form factor, this could become possible. Perhaps these could even be CPUs with some type of defect where one or more cores are disabled. That could reduce cost while still providing the full functionality to someone who wants to tinker with Power 8.\nSome might argue that this defeats the point of Power 8 since it\u0026rsquo;s a high performance, purpose-built chip that crunches through some of the world\u0026rsquo;s biggest workloads. That\u0026rsquo;s a totally valid argument.\nHowever, that\u0026rsquo;s not the point.\nThe point is to get a fully-functional Power 8 CPU - even if it has serious performance limitations - into the hands of developers who want to do amazing things with it. My hope would be that these small tests will later turn into new ways to utilize POWER systems.\nIt could also be a way for more system administrators and developers to get experience with AIX. Companies would be able to find more people with a base level of AIX knowledge as well.\nFinal thoughts #IBM has something truly unique with Power 8. The raw performance of the chip itself is great and the door is open for even more performance through NVlink and CAPI accelerators. These features are game changers for businesses that are struggling to keep up with customer demands. A wider audience could learn about this game-changing technology if it becomes more accessible for tinkering.\nPhoto credit: Wikipedia\n","date":"22 September 2016","permalink":"/p/power-8-to-the-people/","section":"Posts","summary":"IBM Edge 2016 is almost over and I\u0026rsquo;ve learned a lot about Power 8 this week.","title":"Power 8 to the people"},{"content":"","date":null,"permalink":"/tags/database/","section":"Tags","summary":"","title":"Database"},{"content":"OpenStack\u0026rsquo;s compute service, nova, manages all of the virtual machines within a OpenStack cloud. When you ask nova to build an instance, or a group of instances, nova\u0026rsquo;s scheduler system determines which hypervisors should run each instance. The scheduler uses filters to figure out where each instance belongs.\nHowever, there are situations where the scheduler might put more than one of your instances on the same host, especially when resources are constrained. This can be a problem when you deploy certain highly available applications, like MariaDB and Galera. If more than one of your database instances landed on the same physical host, a failure of that physical host could take down more than one database instance.\nFilters to the rescue #The scheduler offers the ServerGroupAntiAffinityFilter filter for these deployments. 
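The filter only helps if it is enabled in the scheduler's filter list, which is worth a quick check on a scheduler node before depending on it. The path below is the stock location and may differ in your deployment; in Mitaka the anti-affinity filter ships in the default list, so it is usually already on:
grep scheduler_default_filters /etc/nova/nova.conf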
This allows a user to create a server group, apply a policy to the group, and then begin adding servers to that group.\nIf the scheduler filter can\u0026rsquo;t find a way to fulfill the anti-affinity request (which often happens if the hosts are low on resources), it will fail the entire build transaction with an error. In other words, unless the entire request can be fulfilled, it won\u0026rsquo;t be deployed.\nLet\u0026rsquo;s see how this works in action on an OpenStack Mitaka cloud deployed with OpenStack-Ansible.\nCreating a server group #We can use the openstackclient tool to create our server group:\n$ openstack server group create --policy anti-affinity db-production +----------+--------------------------------------+ | Field | Value | +----------+--------------------------------------+ | id | cd234914-980a-42f2-b77c-602a7cc0080f | | members | | | name | db-production | | policies | anti-affinity | +----------+--------------------------------------+ We\u0026rsquo;ve told nova that we want all of the instances in the db-production group to land on different OpenStack hosts. I\u0026rsquo;ll copy the id to my clipboard since I\u0026rsquo;ll need that UUID for the next step.\nAdding hosts to the group #My small OpenStack cloud has four hypervisors, so I can add four instances to this server group:\n$ openstack server create \\ --flavor m1.small \\ --image \u0026#34;Fedora 24\u0026#34; \\ --nic net-id=bc8895ab-98f7-478f-a54a-36b121f7bb3f \\ --key-name personal_servers \\ --hint \u0026#34;group=cd234914-980a-42f2-b77c-602a7cc0080f\u0026#34; \\ --max 4 prod-db This server create command looks fairly standard, but I\u0026rsquo;ve added the --hint parameter to specify that we want these servers scheduled as part of the group we just created. Also, I\u0026rsquo;ve requested for four servers to be built at the same time. After a few moments, we should have four servers active:\n$ openstack server list --name prod-db -c ID -c Name -c Status +--------------------------------------+-----------+--------+ | ID | Name | Status | +--------------------------------------+-----------+--------+ | 7e7a81f3-eb02-4751-93c1-a0de999b8423 | prod-db-4 | ACTIVE | | b742fb58-8ea4-4e26-bfbf-645a698fbb26 | prod-db-3 | ACTIVE | | 78c7a43c-4deb-40da-a419-e62db37ab41a | prod-db-2 | ACTIVE | | 7b8af038-6441-40c0-87c8-0a1acced17a6 | prod-db-1 | ACTIVE | +--------------------------------------+-----------+--------+ If we check the instances, they should be on different hosts:\n$ for i in {1..4}; do openstack server show prod-db-${i} -c hostId -f shell; done hostid=\u0026#34;5fea4e5862f82f051e26caf926fe34bd3a9f1439b08a464f817b4c61\u0026#34; hostid=\u0026#34;65d87faf6d9baa110afa5f2e0308781dde4629142170b2c9af1f090b\u0026#34; hostid=\u0026#34;243f833055303efe838b3233f7ba6e1993fb28895ae11c724f10cc73\u0026#34; hostid=\u0026#34;54df76a1e66bd8585cc3c1f8f38e8f4937456394f2409daf2a8b4c2e\u0026#34; Success!\nIf we try to build one more instance, it should fail since the scheduler cannot fulfill the anti-affinity policy applied to server group:\n$ openstack server create \\ --flavor m1.small \\ --image \u0026#34;Fedora 24\u0026#34; \\ --nic net-id=bc8895ab-98f7-478f-a54a-36b121f7bb3f \\ --key-name personal_servers \\ --hint \u0026#34;group=cd234914-980a-42f2-b77c-602a7cc0080f\u0026#34; \\ --wait \\ prod-db-5 Error creating server: prod-db-5 Error creating server $ openstack server show prod-db-5 -c fault -f shell fault=\u0026#34;{u\u0026#39;message\u0026#39;: u\u0026#39;No valid host was found. 
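Nova's compute log usually lives at /var/log/nova/nova-compute.log, so searching for the instance UUID there is the fastest way to see what that host thought it was doing (adjust the path if your deployment logs somewhere else):
grep 05eef1bb-5356-43d9-86c9-4d9854d4d46b /var/log/nova/nova-compute.log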
There are not enough hosts available.\u0026#39;, u\u0026#39;code\u0026#39;: 500... The scheduler couldn\u0026rsquo;t find a valid host for a fifth server in the anti-affinity group.\nPhoto credit: \u0026ldquo;crowded houses\u0026rdquo; from jesuscm on Flickr\n","date":"9 August 2016","permalink":"/p/preventing-critical-services-from-deploying-on-the-same-openstack-host/","section":"Posts","summary":"OpenStack\u0026rsquo;s compute service, nova, manages all of the virtual machines within a OpenStack cloud.","title":"Preventing critical services from deploying on the same OpenStack host"},{"content":"I ran into an interesting problem recently in my production OpenStack deployment that runs the Mitaka release. On various occasions, instances were coming online with multiple network ports attached, even though I only asked for one network port.\nThe problem #If I issued a build request for ten instances, I\u0026rsquo;d usually end up with this:\n6 instances with one network port attached 2-3 instances with two network ports attached (not what I want) 1-2 instances with three or four network ports attached (definitely not what I want) When I examined the instances with multiple network ports attached, I found that one of the network ports would be marked as up while the others would be marked as down. However, the IP addresses associated with those extra ports would still be associated with the instance in horizon and via the nova API. All of the network ports seemed to be fully configured on the neutron side.\nDigging into neutron #The neutron API logs are fairly chatty, especially while instances are building, but I found two interesting log lines for one of my instances:\n172.29.236.41,172.29.236.21 - - [02/Aug/2016 14:03:11] \u0026#34;GET /v2.0/ports.json?tenant_id=a7b0519330ed481884431102a72dd04c\u0026amp;device_id=05eef1bb-5356-43d9-86c9-4d9854d4d46b HTTP/1.1\u0026#34; 200 2137 0.025282 172.29.236.11,172.29.236.21 - - [02/Aug/2016 14:03:15] \u0026#34;GET /v2.0/ports.json?tenant_id=a7b0519330ed481884431102a72dd04c\u0026amp;device_id=05eef1bb-5356-43d9-86c9-4d9854d4d46b HTTP/1.1\u0026#34; 200 3098 0.027803 There are two requests to create network ports for this instance and neutron is allocating ports to both requests. This would normally be just fine, but I only asked for one network port on this instance.\nThe IP addresses making the requests are unusual, though. 172.29.236.11 and 172.29.236.41 are two of the hypervisors within my cloud. Why are both of them asking neutron for network ports? Only one of those hypervisors should be building my instance, not both. After checking both hypervisors, I verified that the instance was only provisioned on one of the hosts and not both.\nLooking at nova-compute #The instance ended up on the 172.29.236.11 hypervisor once it finished building and the logs on that hypervisor looked fine:\nnova.virt.libvirt.driver [-] [instance: 05eef1bb-5356-43d9-86c9-4d9854d4d46b] Instance spawned successfully. I logged into the 172.29.236.41 hypervisor since it was the one that asked neutron for a port but it never built the instance. 
The logs there had a much different story:\n[instance: 05eef1bb-5356-43d9-86c9-4d9854d4d46b] Instance failed to spawn Traceback (most recent call last): File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/compute/manager.py\u0026#34;, line 2218, in _build_resources yield resources File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/compute/manager.py\u0026#34;, line 2064, in _build_and_run_instance block_device_info=block_device_info) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/driver.py\u0026#34;, line 2773, in spawn admin_pass=admin_password) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/driver.py\u0026#34;, line 3191, in _create_image instance, size, fallback_from_host) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/driver.py\u0026#34;, line 6765, in _try_fetch_image_cache size=size) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py\u0026#34;, line 251, in cache *args, **kwargs) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py\u0026#34;, line 591, in create_image prepare_template(target=base, max_size=size, *args, **kwargs) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/oslo_concurrency/lockutils.py\u0026#34;, line 271, in inner return f(*args, **kwargs) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py\u0026#34;, line 241, in fetch_func_sync fetch_func(target=target, *args, **kwargs) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/libvirt/utils.py\u0026#34;, line 429, in fetch_image max_size=max_size) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/images.py\u0026#34;, line 120, in fetch_to_raw max_size=max_size) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/virt/images.py\u0026#34;, line 110, in fetch IMAGE_API.download(context, image_href, dest_path=path) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/image/api.py\u0026#34;, line 182, in download dst_path=dest_path) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/image/glance.py\u0026#34;, line 383, in download _reraise_translated_image_exception(image_id) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/image/glance.py\u0026#34;, line 682, in _reraise_translated_image_exception six.reraise(new_exc, None, exc_trace) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/image/glance.py\u0026#34;, line 381, in download image_chunks = self._client.call(context, 1, \u0026#39;data\u0026#39;, image_id) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/nova/image/glance.py\u0026#34;, line 250, in call result = getattr(client.images, method)(*args, **kwargs) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/glanceclient/v1/images.py\u0026#34;, line 148, in data % urlparse.quote(str(image_id))) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/glanceclient/common/http.py\u0026#34;, line 275, in get return self._request(\u0026#39;GET\u0026#39;, url, **kwargs) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/glanceclient/common/http.py\u0026#34;, line 267, in _request resp, 
body_iter = self._handle_response(resp) File \u0026#34;/openstack/venvs/nova-13.3.0/lib/python2.7/site-packages/glanceclient/common/http.py\u0026#34;, line 83, in _handle_response raise exc.from_response(resp, resp.content) ImageNotFound: Image 8feacda9-91fd-48ce-b983-54f7b6de6650 could not be found. This is one of those occasions where I was glad to find an exception in the log. The image that couldn\u0026rsquo;t be found is an image I\u0026rsquo;ve used regularly in the environment before, and I know it exists.\nGandering at glance #First off, I asked glance what it knew about the image:\n$ openstack image show 8feacda9-91fd-48ce-b983-54f7b6de6650 +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | 8de08e3fe24ee788e50a6a508235aa64 | | container_format | bare | | created_at | 2016-08-03T01:25:34Z | | disk_format | qcow2 | | file | /v2/images/8feacda9-91fd-48ce-b983-54f7b6de6650/file | | id | 8feacda9-91fd-48ce-b983-54f7b6de6650 | | min_disk | 0 | | min_ram | 0 | | name | Fedora 24 | | owner | a7b0519330ed481884431102a72dd04c | | properties | description=\u0026#39;\u0026#39; | | protected | False | | schema | /v2/schemas/image | | size | 204590080 | | status | active | | tags | | | updated_at | 2016-08-03T01:25:39Z | | virtual_size | None | | visibility | public | +------------------+------------------------------------------------------+ If glance knows about the image, why can\u0026rsquo;t that hypervisor build an instance with that image? While I was scratching my head, Kevin Carter walked by my desk and joined in the debugging.\nHe asked about how I had deployed glance and what storage backend I was using. I was using the regular file storage backend since I don\u0026rsquo;t have swift deployed in the environment. He asked me how many glance nodes I had (I had two) and if I was doing anything to sync the images between the glance nodes.\nThen it hit me.\nAlthough both glance nodes knew about the image (since that data is in the database), only one of the glance nodes had the actual image content (the actual qcow2 file) stored. That means that if a hypervisor requests the image from a glance node that knows about the image but doesn\u0026rsquo;t have it stored, the hypervisor won\u0026rsquo;t be able to retrieve the image.\nUnfortunately, the checks go in this order on the nova-compute side:\nAsk glance if this image exists and if this tenant can use it Configure the network Retrieve the image If a hypervisor rolls through steps one and two without issues, but then fails on step 3, the network port will be provisioned but won\u0026rsquo;t come up on the instance. There\u0026rsquo;s nothing that cleans up that port in the Mitaka release, so it requires manual intervention.\nThe fix #As a temporary workaround, I took one of the glance nodes offline so that only one glance node is being used. After hundreds of builds, all of the instances came up with only one network port attached!\nThere are a few options for long-term fixes.\nI could deploy swift and put glance images into swift. That would allow me to use multiple glance nodes with the same swift backend. Another option would be to use an existing swift deployment, such as Rackspace\u0026rsquo;s Cloud Files product.\nSince I\u0026rsquo;m not eager to deploy swift in my environment for now, I decided to remove the second glance node and reconfigure nova to use only one glance node. 
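On the nova side, that reconfiguration boils down to pointing the compute nodes at a single glance endpoint. Assuming a Mitaka-era nova.conf (the option has moved between the [DEFAULT] and [glance] sections over the years, and the address below is only a placeholder for your remaining glance node), the change looks something like this:
[glance]
api_servers = http://172.29.236.20:9292
After making the change, restart nova-compute on each hypervisor so the new endpoint list takes effect.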
That means I\u0026rsquo;m running with only one glance node and a failure there could be highly annoying. However, that trade-off is fine with me until I can get around to deploying swift.\nUPDATE: I\u0026rsquo;ve opened a bug for nova so that the network ports are cleaned up if the instance fails to build.\nPhoto credit: Flickr: pascalcharest\n","date":"3 August 2016","permalink":"/p/openstack-instances-come-online-with-multiple-network-ports-attached/","section":"Posts","summary":"I ran into an interesting problem recently in my production OpenStack deployment that runs the Mitaka release.","title":"OpenStack instances come online with multiple network ports attached"},{"content":"The OpenStack Zuul system has gone through some big changes recently, and one of those changes is around how you monitor a running CI job. I work on OpenStack-Ansible quite often, and the gate jobs can take almost an hour to complete at times. It can be helpful to watch the output of a Zuul job to catch a problem or follow a breakpoint.\nNew Zuul #In the previous version of Zuul, you could access the Jenkins server that was running the CI job and monitor its progress right in your browser. Today, you can monitor the progress of a job via telnet. It\u0026rsquo;s much easier to use and it\u0026rsquo;s a lighter-weight way to review a bunch of text.\nSome of you might be saying: \u0026ldquo;It\u0026rsquo;s 2016. Telnet? Unencrypted? Seriously?\u0026rdquo;\nBefore you get out the pitchforks, all of the data is read-only in the telnet session, and nothing sensitive is transmitted. Anything that comes through the telnet session is content that exists in an open source repository within OpenStack. If someone steals the output of the job, they\u0026rsquo;re not getting anything valuable.\nI was having a lot of trouble figuring out how to set up a handler for telnet:// URL\u0026rsquo;s that I clicked in Chrome or Firefox. If I clicked a link in Chrome, it would be passed off to xdg-open. I\u0026rsquo;d press OK on the window and then nothing happened.\nCreating a script #First off, I needed a script that would take the URL coming from an application and actually do something with it. The script will receive a URL as an argument that looks like telnet://SERVER_ADDRESS:PORT and that must be handed off to the telnet executable. Here\u0026rsquo;s my basic script:\n#!/bin/bash # Remove the telnet:// and change the colon before the port # number to a space. TELNET_STRING=$(echo $1 | sed -e \u0026#39;s/telnet:\\/\\///\u0026#39; -e \u0026#39;s/:/ /\u0026#39;) # Telnet to the remote session /usr/bin/telnet $TELNET_STRING # Don\u0026#39;t close out the terminal unless we are done read -p \u0026#34;Press a key to exit\u0026#34; I saved that in ~/bin/telnet.sh. A quick test with localhost should verify that the script works:\n$ chmod +x ~/bin/telnet.sh $ ~/bin/telnet.sh telnet://127.0.0.1:12345 Trying 127.0.0.1... telnet: connect to address 127.0.0.1: Connection refused Press a key to exit Linking up with GNOME #We need a .desktop file so that GNOME knows how to run our script. 
Save a file like this to ~/.local/share/applications/telnet.desktop:\n[Desktop Entry] Version=1.0 Name=Telnet GenericName=Telnet Comment=Telnet Client Exec=/home/major/bin/telnet.sh %U Terminal=true Type=Application Categories=TerminalEmulator;Network;Telnet;Internet;BBS; MimeType=x-scheme/telnet X-KDE-Protocols=telnet Keywords=Terminal;Emulator;Network;Internet;BBS;Telnet;Client; Change the path in Exec to match where you placed your script.\nWe need to tell GNOME how to handle the x-scheme-handler/telnet mime type. We do that with xdg utilities:\n$ xdg-mime default telnet.desktop x-scheme-handler/telnet $ xdg-mime query default x-scheme-handler/telnet telnet.desktop Awesome! When you click a link in Chrome, the following should happen:\nChrome will realize it has no built-in handler and will hand off to xdg-open xdg-open will check its list of mime types for a telnet handler xdg-open will parse telnet.desktop and run the command in the Exec line within a terminal Our telnet.sh script runs with the telnet:// URI provided as an argument The remote telnet session is connected ","date":"22 July 2016","permalink":"/p/setting-up-a-telnet-handler-in-gnome-3/","section":"Posts","summary":"The OpenStack Zuul system has gone through some big changes recently, and one of those changes is around how you monitor a running CI job.","title":"Setting up a telnet handler for OpenStack Zuul CI jobs in GNOME 3"},{"content":"Most of the recent Fedora upgrades have been quite smooth. There were definitely some rough spots back in Fedora 15 and Fedora 17 with the /bin migration and the switch to systemd. The upgrade from Fedora 23 to Fedora 24 has been really easy except for one minor quirk: my two and three finger taps don\u0026rsquo;t seem to work on the touchpad.\nI use a Lenovo ThinkPad X1 Carbon (3rd gen) and it has a clickpad along with physical buttons across the top. I use the two finger taps (to do a secondary click) frequently. After the Fedora 24 upgrade, I can still do clicks with one, two or three fingers, but the taps don\u0026rsquo;t work.\nAfter a little digging in xinput, I began to narrow down the problem:\n[major@arsenic ~]$ xinput list ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)] ⎜ ↳ TPPS/2 IBM TrackPoint id=12 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Sleep Button id=8 [slave keyboard (3)] ↳ Integrated Camera id=9 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)] ↳ ThinkPad Extra Buttons id=13 [slave keyboard (3)] [major@arsenic ~]$ xinput list-props 11 | grep Tap Synaptics Tap Time (271): 180 Synaptics Tap Move (272): 252 Synaptics Tap Durations (273): 180, 100, 100 Synaptics Tap Action (285): 0, 0, 0, 0, 1, 0, 0 Hunting for a fix #It seems like this Synaptics Tap Action (285) is what I need to adjust. What do those numbers mean, anyway?\nAfter some searching, I found the answer in some Synaptics documentation:\nSynaptics Tap Action 8 bit, up to MAX_TAP values (see synaptics.h), 0 disables an element. order: RT, RB, LT, LB, F1, F2, F3. This seems like what I want, but what do those abbreviations mean at the end? 
I scrolled up on the page and found something useful:\nOption \u0026#34;RTCornerButton\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a right top corner tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; Option \u0026#34;RBCornerButton\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a right bottom corner tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; Option \u0026#34;LTCornerButton\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a left top corner tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; Option \u0026#34;LBCornerButton\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a left bottom corner tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; Option \u0026#34;TapButton1\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a non-corner one-finger tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; Option \u0026#34;TapButton2\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a non-corner two-finger tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; Option \u0026#34;TapButton3\u0026#34; \u0026#34;integer\u0026#34; Which mouse button is reported on a non-corner three-finger tap. Set to 0 to disable. Property: \u0026#34;Synaptics Tap Action\u0026#34; The last three are the ones I care about. Then the abbreviations made sense:\nF1: TapButton1 F2: TapButton2 F3: TapButton3 The TapButton1 setting was already set to 1, which means a primary tap. I need TapButton2 set to 3 (two fingers for a secondary button tap) and TapButton3 set to 2 (three fingers for a middle button tap). Let\u0026rsquo;s try with xinput directly first:\nxinput set-prop 11 \u0026#34;Synaptics Tap Action\u0026#34; 0 0 0 0 1 3 2 SUCCESS! The secondary and middle taps have returned!\nMaking it stick #Let\u0026rsquo;s make the setting permanent. You could add this to a ~/.xprofile or some other file that the display manager runs, but this isn\u0026rsquo;t helpful if you have a touchpad that could be removed or re-added (like a USB touchpad). For this, we need an extra X configuration file.\nI created a file called /etc/X11/xorg.conf.d/99-xinput-fix-multi-finger-taps.conf and added some configuration:\nSection \u0026#34;InputClass\u0026#34; Identifier \u0026#34;tap-by-default\u0026#34; MatchIsTouchpad \u0026#34;on\u0026#34; Option \u0026#34;TapButton1\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;TapButton2\u0026#34; \u0026#34;3\u0026#34; Option \u0026#34;TapButton3\u0026#34; \u0026#34;2\u0026#34; EndSection The configuration file specifies what we want to occur when one, two or three fingers tap on the pad. We\u0026rsquo;re also being careful here to match only on touchpads to avoid tinkering with a mouse or other pointer device.\nLog out of your X session and log in again. Your two and three finger taps should still be working!\n","date":"6 July 2016","permalink":"/p/bring-back-two-three-finger-taps-fedora-24/","section":"Posts","summary":"Most of the recent Fedora upgrades have been quite smooth.","title":"Bring back two and three finger taps in Fedora 24"},{"content":"","date":null,"permalink":"/tags/xinput/","section":"Tags","summary":"","title":"Xinput"},{"content":"The 2016 Red Hat Summit is underway in San Francisco this week and I delivered a talk with Robyn Bergeron earlier today. 
Our talk, When flexibility met simplicity: The friendship of OpenStack and Ansible, explained how Ansible can reduce the complexity of OpenStack environments without sacrificing the flexibility that private clouds offer.\nThe talk started at the same time as lunch began and the Partner Pavilion first opened, so we had some stiff competition for attendees\u0026rsquo; attention. However, the live demo worked without issues and we had some good questions when the talk was finished.\nThis post will cover some of the main points from the talk and I\u0026rsquo;ll share some links for the talk itself and some of the playbooks we ran during the live demo.\nIT is complex and difficult #Getting resources for projects at many companies is challenging. OpenStack makes this a little easier by delivering compute, network, and storage resources on demand. However, OpenStack\u0026rsquo;s flexibility is a double-edged sword. It makes it very easy to obtain virtual machines, but it can be challenging to install and configure.\nAnsible reduces some of that complexity without sacrificing flexibility. Ansible comes with plenty of pre-written modules that manage an OpenStack cloud at multiple levels for multiple types of users. Consumers, operators, and deployers can save time and reduce errors by using these modules and providing the parameters that fit their environment.\nAnsible and OpenStack #Ansible and OpenStack are both open source projects that are heavily based on Python. Many of the same dependencies needed for Ansible are needed for OpenStack, so there is very little additional software required. Ansible tasks are written in YAML and the user only needs to pass some simple parameters to an existing module to get something done.\nOperators are in a unique position since they can use Ansible to perform typical IT tasks, like creating projects and users. They can also assign fine-grained permissions to users with roles via reusable and extensible playbooks. Deployers can use projects like OpenStack-Ansible to deploy a production-ready OpenStack cloud.\nLet\u0026rsquo;s build something #In the talk, we went through a scenario for a live demo. In the scenario, the marketing team needed a new website for a new campaign. The IT department needed to create a project and user for them, and then the marketing team needed to build a server. This required some additional tasks, such as adding ssh keys, creating a security group (with rules) and adding a new private network.\nThe files from the live demo are up on GitHub:\nmajor/ansible-openstack-summit-demo In the operator-prep.yml, we created a project and added a user to the project. That user was given the admin role so that the marketing team could have full access to their own project.\nFrom there, we went through the tasks as if we were a member of the marketing team. The marketing.yml playbook went through all of the tasks to prepare for building an instance, actually building the instance, and then adding that instance to the dynamic inventory in Ansible. That playbook also verified the instance was up and performed additional configuration of the virtual machine itself - all in the same playbook.\nWhat\u0026rsquo;s next? #Robyn shared lots of ways to get involved in the Ansible community. 
AnsibleFest 2016 is rapidly approaching and the OpenStack Summit in Barcelona is happening this October.\nDownloads #The presentation is available in a few formats:\nPDF Slideshare ","date":"29 June 2016","permalink":"/p/talk-recap/","section":"Posts","summary":"The 2016 Red Hat Summit is underway in San Francisco this week and I delivered a talk with Robyn Bergeron earlier today.","title":"Talk recap: The friendship of OpenStack and Ansible"},{"content":"Lots of work has gone into the openstack-ansible-security Ansible role since I delivered a talk about it last month at the OpenStack Summit in Austin. Attendees asked for quite a few new features and I\u0026rsquo;ve seen quite a few bug reports (and that\u0026rsquo;s a good thing).\nHere\u0026rsquo;s a list of the newest additions since the Summit:\nNew features #Ubuntu 16.04 LTS (Xenial) support #The role now works with Ubuntu 16.04 and its newest features, including systemd. You can use the same variables as you used with Ubuntu 14.04 and it should take the same actions. Documentation updates are mostly merged with a few straggling reviews in the queue.\nCentOS 7 support #With all of the work going into the role to support Ubuntu 16.04 and systemd, CentOS 7 wasn\u0026rsquo;t a huge stretch. Many of the package names and file locations were a little different, but those are now moved out into variables files to reduce the repetition of tasks. Some of the Linux Security Module tasks needed adjustments since SELinux is a different beast than AppArmor.\nFollowing the STIG more closely #One of the common questions I had at the summit was: \u0026ldquo;Can I use this thing on my non-OpenStack environments?\u0026rdquo; You definitely can, but many of the configurations were tweaked to avoid causing problems with OpenStack environments. Some users asked if the configurations could be made more generic so that they followed the STIG more closely. This would reduce some compliance headaches and allow more people to use the role.\nSo far, I\u0026rsquo;ve been making some of these adjustments to fix more things rather than simply checking them. That should make it easier to get closer to the STIG\u0026rsquo;s requirements.\nAnother proposed idea is to create vars files that meet different criteria. For example, one vars file might be the ultra-secure, follow-the-STIG-to-the-letter configuration. This would be good for users that already know they want to apply the STIG\u0026rsquo;s requirements fully. There could be another vars file that would apply most of the STIG\u0026rsquo;s requirements, but it would steer clear of changing anything that could disrupt a production OpenStack environment.\nThe future #Here are a subset of the future plans and ideas:\nBetter reporting for users who need to feed data into vulnerability management applications or SIEMs for compliance checks Better testing, possibly with customized OpenSCAP XCCDF files Cross-referenced controls to other hardening guides, such as CIS Benchmarks If you have any other ideas, feel free to stop by #openstack-ansible or #openstack-security on Freenode. 
You can find me there as mhayden and I would really enjoy hearing about your use cases!\nPhoto credit: Mikecogh\n","date":"27 May 2016","permalink":"/p/automated-security-hardening-with-ansible-may-updates/","section":"Posts","summary":"Lots of work has gone into the openstack-ansible-security Ansible role since I delivered a talk about it last month at the OpenStack Summit in Austin.","title":"Automated security hardening with Ansible: May updates"},{"content":"","date":null,"permalink":"/tags/troubleshooting/","section":"Tags","summary":"","title":"Troubleshooting"},{"content":"NOTE: This post is a work in progress. If you find something that I missed, feel free to leave a comment. I\u0026rsquo;ve made plenty of silly mistakes, but I\u0026rsquo;m sure I\u0026rsquo;ll make a few more. :)\nCompleting a deployment of an OpenStack cloud is an amazing feeling. There is so much automation and power at your fingertips as soon as you\u0026rsquo;re finished. However, the mood quickly turns sour when you create that first instance and it never responds to pings.\nIt\u0026rsquo;s the same feeling I get when I hang Christmas lights every year only to find that a whole section didn\u0026rsquo;t light up. If you\u0026rsquo;ve ever seen National Lampoon\u0026rsquo;s Christmas Vacation, you know what I\u0026rsquo;m talking about:\nI\u0026rsquo;ve stumbled into plenty of problems (and solutions) along the way and I\u0026rsquo;ll detail them here in the hopes that it can help someone avoid throwing a keyboard across the room.\nSecurity groups #Security groups get their own section because I forget about them constantly. Security groups are a great feature that lets you limit inbound and outbound access to a particular network port.\nHowever, OpenStack\u0026rsquo;s default settings are fairly locked down. That\u0026rsquo;s great from a security perspective, but it can derail your first instance build if you\u0026rsquo;re not thinking about it.\nYou have two options to allow traffic:\nAdd more permissive rules to the default security group Create a new security group and add appropriate rules into it I usually ensure that ICMP traffic is allowed into any port with the default security group applied, and then I create a another security group specific to the class of server I\u0026rsquo;m building (like webservers). Changing a security group rule or adding a new security group to a port takes effect in a few seconds.\nSomething is broken in the instance #Try to get console access to the instance through Horizon or via the command line tools. I generally find an issue in one of these areas:\nThe IP address, netmask, or default gateway are incorrect Additional routes should have been applied, but were not applied Cloud-init didn\u0026rsquo;t run, or it had a problem when it ran The default iptables policy in the instance is overly restrictive The instance isn\u0026rsquo;t configured to bring up an instance by default Something is preventing the instance from getting a DHCP address If the network configuration looks incorrect, cloud-init may have had a problem during startup. Look in /var/log/ or in journald for any explanation of why cloud-init failed.\nThere\u0026rsquo;s also the chance that the network configuration is correct, but the instance can\u0026rsquo;t get a DHCP address. Verify that there are no iptables rules in place on the instance that might block DHCP requests and replies.\nSome Linux distributions don\u0026rsquo;t send gratuitous ARP packets when they bring an interface online. 
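If you suspect that is happening, you can announce the instance's address by hand from inside the instance. With the iputils version of arping (other implementations use different flags, and the interface name and address here are only placeholders), something like this sends a few gratuitous ARP replies:
# arping -U -I eth0 -c 3 192.0.2.15
That gives the upstream switches and routers a chance to refresh their ARP tables without waiting for normal traffic to do it.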
Tools like arping can help with these problems.\nIf you find that you can connect to almost anything from within the instance, but you can\u0026rsquo;t connect to the instance from the outside, verify your security groups (see the previous section). In my experience, a lopsided ingress/egress filter almost always points to a security group problem.\nSomething is broken in OpenStack\u0026rsquo;s networking layer #Within the OpenStack control plane, the nova service talks to neutron to create network ports and manage addresses on those ports. One of the requests or responses may have been lost along the way or you may have stumbled into a bug.\nIf your instance couldn\u0026rsquo;t get an IP address via DHCP, make sure the DHCP agent is running on the server that has your neutron agents. Restarting the agent should bring the DHCP server back online if it isn\u0026rsquo;t running.\nYou can also hop into the network namespace that the neutron agent uses for your network. Start by running:\n# ip netns list Look for a namespace that starts with qdhcp- and ends with your network\u0026rsquo;s UUID. You can run commands inside that namespace to verify that networking is functioning:\n# ip netns exec qdhcp-NETWORK_UUID ip addr # ip netns exec qdhcp-NETWORK_UUID ping INSTANCE_IP_ADDRESS If your agent can ping the instance\u0026rsquo;s address, but you can\u0026rsquo;t ping the instance\u0026rsquo;s address, there could be a problem on the underlying network - either within the virtual networking layer (bridges and virtual switches) or on the hardware layer (between the server and upstream network devices).\nTry to use tcpdump to dump traffic on the neutron agent and on the instance\u0026rsquo;s network port. Do you see any traffic at all? You may find a problem with incorrect VLAN ID\u0026rsquo;s here or you may see activity that gives you more clue (like one half of an ARP or DHCP exchange).\nSomething is broken outside of OpenStack #Diagnosing these problems can become a bit challenging since it involves logging into other systems.\nIf you are using VLAN networks, be sure that the proper VLAN ID is being set for the network. Run openstack network show and look for provider:segmentation_id. If that\u0026rsquo;s correct, be sure that all of your servers can transmit packets with that VLAN tag applied. I often remember to allow tagged traffic on all of the hypervisors and then I forget to do the same in the control plane.\nBe sure that your router has the VLAN configured and has the correct IP address configuration applied. It\u0026rsquo;s possible that you\u0026rsquo;ve configured all of the VLAN tags correctly in all places, but then fat-fingered an IP address in OpenStack or on the router.\nWhile you\u0026rsquo;re in the router, test some pings to your instance. If you can ping from the router to the instance, but not from your desk to the instance, your router might not be configured correctly.\nFor instances on private networks, ensure that you created a router on the network. This is something I tend to forget. Also, be sure that you have the right routes configured between you and your OpenStack environment so that you can route traffic to your private networks through the router. If this isn\u0026rsquo;t feasible for you, another option could be OpenStack\u0026rsquo;s VPN-as-a-service feature.\nAnother issue could be the cabling between servers and the nearest switch. 
If a cable is crossed, it could mean that a valid VLAN is being blocked at the switch because it\u0026rsquo;s coming in on the wrong port.\nWhen it\u0026rsquo;s something else #There are some situations that aren\u0026rsquo;t covered here. If you think of any, please leave a comment below.\nAs with any other troubleshooting, I go back to this quote from Dr. Theodore Woodward about diagnosing illness in the medical field:\nWhen you hear hoofbeats, think of horses not zebras.\nLook for the simplest solutions and work from the smallest domain (the instance) to the widest (the wider network). Make small changes and go back to the instance each time to verify that something changed. Once you find the solution, document it! Someone will surely appreciate it later.\n","date":"17 May 2016","permalink":"/p/troubleshooting-openstack-network-connectivity/","section":"Posts","summary":"NOTE: This post is a work in progress.","title":"Troubleshooting OpenStack network connectivity"},{"content":"When you\u0026rsquo;re ready to commit code in an OpenStack project, your patch will eventually land in a Gerrit queue for review. The web interface works well for most users, but it can be challenging to use when you have a large amount of projects to monitor. I recently became a core developer on the OpenStack-Ansible project and I searched for a better solution to handle lots of active reviews.\nThis is where gertty can help. It\u0026rsquo;s a console-based application that helps you navigate reviews efficiently. I\u0026rsquo;ll walk you through the installation and configuration process in the remainder of this post.\nInstalling gertty #The gertty package is available via pip, GitHub, and various package managers for certain Linux distributions. If you\u0026rsquo;re on Fedora, just install python-gertty via dnf.\nIn this example, we will use pip:\npip install gertty Configuration #You will need a .gertty.yaml file in your home directory for gertty to run. I have an example on GitHub that gives you a good start:\nBe sure to change the username and password parts to match your Gerrit username and password. For OpenStack\u0026rsquo;s gerrit server, you can get these credentials in the user settings area.\nGetting synchronized #Now that gertty is configured, start it up on the console:\n$ gertty Type a capital L (SHIFT + L) and wait for the list of projects to appear on the screen. You can choose projects to subscribe to (note that these are different than Gerrit\u0026rsquo;s watched projects) by pressing your \u0026rsquo;s\u0026rsquo; key.\nHowever, if you need to follow quite a few projects that match a certain pattern, there\u0026rsquo;s an easier way. Quit gertty (CTRL - q) and adjust the sqlite database that gertty uses:\n$ sqlite3 .gertty.db SQLite version 3.8.6 2014-08-15 11:46:33 Enter \u0026#34;.help\u0026#34; for usage hints. sqlite\u0026gt; SELECT count(*) FROM project WHERE name LIKE \u0026#39;%openstack-ansible%\u0026#39;; 39 sqlite\u0026gt; UPDATE project SET subscribed=1 WHERE name LIKE \u0026#39;%openstack-ansible%\u0026#39;; sqlite\u0026gt; In this example, I\u0026rsquo;ve subscribed to all projects that contain the string openstack-ansible.\nI can start gertty once more and wait for it to sync my new projects down to my local workstation. Keep an eye on the Sync: status at the top right of the screen. It will count up as it enumerates reviews to retrieve and then count down as those reviews are downloaded.\nYou can also create custom dashboards for gertty based on custom queries. 
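A dashboard is just a named query with a key binding in ~/.gertty.yaml. As a rough sketch (check the example configuration files that ship with gertty for the exact syntax in your version, and swap in whatever query matters to you):
dashboards:
  - name: "OpenStack-Ansible reviews"
    query: "status:open project:openstack/openstack-ansible"
    key: "f5"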
In my example configuration file above, I have a special dashboard that contains all OpenStack-Ansible reviews. That dashboard appears whenever I press F5. You can customize these dashboards to include any custom queries that you need for your projects.\nPhoto credit: Frank Taillandier\n","date":"11 May 2016","permalink":"/p/getting-started-gertty/","section":"Posts","summary":"When you\u0026rsquo;re ready to commit code in an OpenStack project, your patch will eventually land in a Gerrit queue for review.","title":"Getting started with gertty"},{"content":"","date":null,"permalink":"/tags/sqlite/","section":"Tags","summary":"","title":"Sqlite"},{"content":"I\u0026rsquo;ve gone on some mini-rants in other posts about starting daemons immediately after they\u0026rsquo;re installed in Ubuntu and Debian. Things are a little different in Ubuntu 16.04 and I thought it might be helpful to share some tips for that release.\nBefore we do that, let\u0026rsquo;s go over something. I still don\u0026rsquo;t understand why this is a common practice within Ubuntu and Debian.\nTake a look at the postinst-systemd-start script within the init-systems-helpers package (source link):\nif [ -d /run/systemd/system ]; then systemctl --system daemon-reload \u0026gt;/dev/null || true deb-systemd-invoke start #UNITFILES# \u0026gt;/dev/null || true fi The daemon-reload is totally reasonable. We must tell systemd that we just deployed a new unit file or it won\u0026rsquo;t know we did it. However, the next line makes no sense. Why would you immediately force the daemon to start (or restart)? The deb-systemd-invoke script does check to see if the unit is disabled before taking action on it, which is definitely a good thing. However, this automatic management of running daemons shouldn\u0026rsquo;t be handled by a package manager.\nIf you don\u0026rsquo;t want your package manager handling your daemons, you have a few options:\nThe policy-rc.d method #This method involves creating a script called /usr/sbin/policy-rc.d with a special exit code:\n# echo -e \u0026#39;#!/bin/bash\\nexit 101\u0026#39; \u0026gt; /usr/sbin/policy-rc.d # chmod +x /usr/sbin/policy-rc.d # /usr/sbin/policy-rc.d # echo $? 101 This script is checked by the deb-systemd-invoke script in the init-systems-helpers package (source link). As long as this script is in place, dpkg triggers won\u0026rsquo;t cause daemons to start, stop, or restart.\nYou can start your daemon at any time with systemctl start service_name whenever you\u0026rsquo;re ready.\nThe systemd mask method #If you need to prevent a single package from starting after installation, you can use systemd\u0026rsquo;s mask feature for that. When you run systemctl mask nginx, it will symlink /etc/systemd/system/nginx.service to /dev/null. When systemd sees that, it won\u0026rsquo;t start the daemon.\nHowever, since the package isn\u0026rsquo;t installed yet, we can just mask it with a symlink:\n# ln -s /dev/null /etc/systemd/system/nginx.service You can install nginx now, configure it to meet your requirements, and start the service. 
Just run:\n# systemctl enable nginx # systemctl start nginx ","date":"5 May 2016","permalink":"/p/preventing-ubuntu-16-04-starting-daemons-package-installed/","section":"Posts","summary":"I\u0026rsquo;ve gone on some mini-rants in other posts about starting daemons immediately after they\u0026rsquo;re installed in Ubuntu and Debian.","title":"Preventing Ubuntu 16.04 from starting daemons when a package is installed"},{"content":"Authenticating to a wired or wireless network using 802.1x is simple using NetworkManager\u0026rsquo;s GUI client. However, this gets challenging on headless servers without a graphical interface. The nmcli command isn\u0026rsquo;t able to store credentials in a keyring and this causes problems when you try to configure an interfaces with 802.1x authentication.\nIf you aren\u0026rsquo;t familiar with 802.1x, there is some light reading and heavier reading available on the topic.\nStart by setting some basic configurations on the interface using the nmcli editor shell:\n# nmcli con edit CONNECTION_NAME nmcli\u0026gt; set ipv4.method auto nmcli\u0026gt; set 802-1x.eap peap nmcli\u0026gt; set 802-1x.identity USERNAME nmcli\u0026gt; set 802-1x.phase2-auth mschapv2 nmcli\u0026gt; save nmcli\u0026gt; quit Be sure to set the 802-1x.eap and 802-1x.phase2-auth to the appropriate values for your network. You might have noticed that the password isn\u0026rsquo;t specified here. That\u0026rsquo;s because NetworkManager has no access to a keyring where it can store the password. That comes next.\nCreate a new file called /etc/NetworkManager/system-connections/CONNECTION_NAME to hold your password. If your connection name has spaces in it, be sure to maintain those spaces in the filename. Add the following to that file:\n[connection] id=CONNECTION_NAME [802-1x] password=YOUR_8021X_PASSWORD Save the file and close it. Restart NetworkManager to pick up the changes:\nsystemctl restart NetworkManager You may need to bring the interface down and up to test the new changes:\nnmcli con down CONNECTION_NAME nmcli con up CONNECTION_NAME Once the network settles down, the authentication should complete within a few seconds in most cases. Be sure to check your system journal or other NetworkManager logs for more details if the interface doesn\u0026rsquo;t work properly.\n","date":"3 May 2016","permalink":"/p/802-1x-networkmanager-using-nmcli/","section":"Posts","summary":"Authenticating to a wired or wireless network using 802.","title":"802.1x with NetworkManager using nmcli"},{"content":"Today is the second day of the OpenStack Summit in Austin and I offered up a talk on host security hardening in OpenStack clouds. You can download the slides or watch the video here:\nHere\u0026rsquo;s a quick recap of the talk and the conversations afterward:\nSecurity tug-of-war #Information security is a challenging task, mainly because it is more than just a technical problem. Technology is a big part of it, but communication, culture, and compromise are also critical. I flashed up this statement on the slides:\n\"People should feel like security is something they are part of; not something that is done to them\" @majorhayden pic.twitter.com/Blh9rZp0uL \u0026mdash; Rackspace (@Rackspace) April 26, 2016 In the end, the information security teams, the developers and the auditors must be happy. 
This can be a challenging tightrope to walk, but automating some security allows everyone to get what they want in a scalable and repeatable way.\nMeeting halfway #The openstack-ansible-security role allows information security teams to meet developers or OpenStack deployers halfway. It can easily bolt onto existing Ansible playbooks and manage host security hardening for Ubuntu 14.04 systems. The role also works in non-OpenStack environments just as well. All of the documentation, configuration, and Ansible tasks are all included with the role.\nThe role itself applies security configurations to each host in an environment. Those configurations are based on the Security Technical Implementation Guide (STIG) from the Defense Information Systems Agency (DISA), which is part of the United States Department of Defense. The role takes the configurations from the STIG and makes small tweaks to fit an OpenStack environment. All of the tasks are carefully translated from the STIG for Red Hat Enterprise Linux 6 (there is no STIG for Ubuntu currently).\nThe role is available now as part of OpenStack-Ansible in the Liberty, Mitaka, and Newton releases. Simply adjust apply_security_hardening from false to true and deploy. For other users, the role can easily be used in any Ansible playbook. (Be sure to review the configuration to ensure its defaults meet your requirements.)\nGetting involved #We need your help! Upcoming plans include Ubuntu 16.04 and CentOS support, a rebase onto the RHEL 7 STIG (which will be finalized soon), and better reporting.\nJoin us later this week for the OpenStack-Ansible design summit sessions or anytime on Freenode in #openstack-ansible. We\u0026rsquo;re on the OpenStack development mailing list as well (be sure to use the [openstack-ansible][security] tags.\nHallway conversations #Lots of people came by to chat afterwards and offered to join in the development. A few people were hoping it would have been the security \u0026ldquo;silver bullet\u0026rdquo;, and I reset some expectations.\nSome attendees has good ideas around making the role more generic and adding an \u0026ldquo;OpenStack switch\u0026rdquo; that would configure many variables to fit an OpenStack environment. That would allow people to use it easily with non-OpenStack environments.\nOther comments were around hardening inside of Linux containers. These users had \u0026ldquo;heavy\u0026rdquo; containers where the entire OS is virtualized and multiple processes might be running at the same time. Some of the configuration changes (especially the kernel tunables) don\u0026rsquo;t make sense inside a container like that, but many of the others could be useful. For more information on securing Linux containers, watch the video from Thomas Cameron\u0026rsquo;s talk here at the summit.\nThank you #I\u0026rsquo;d like to thank everyone for coming to the talk today and sharing their feedback. It\u0026rsquo;s immensely useful and I pile all of that feedback into future talks. 
Also, I\u0026rsquo;d like to thank all of the people at Rackspace who helped me review the slides and improve them.\n","date":"26 April 2016","permalink":"/p/talk-recap-automated-security-hardening-openstack-ansible/","section":"Posts","summary":"Today is the second day of the OpenStack Summit in Austin and I offered up a talk on host security hardening in OpenStack clouds.","title":"Talk Recap: Automated security hardening with OpenStack-Ansible"},{"content":"","date":null,"permalink":"/tags/colocation/","section":"Tags","summary":"","title":"Colocation"},{"content":"","date":null,"permalink":"/tags/hosting/","section":"Tags","summary":"","title":"Hosting"},{"content":"Back in 2011, I decided to try out a new method for hosting my websites and other applications: colocation. Before that, I used shared hosting, VPS providers (\u0026ldquo;cloud\u0026rdquo; wasn\u0026rsquo;t a popular thing back then), and dedicated servers. Each had their drawbacks in different areas. Some didn\u0026rsquo;t perform well, some couldn\u0026rsquo;t recover from failure well, and some were terribly time consuming to maintain.\nThis post will explain why I decided to try colocation and will hopefully help you avoid some of my mistakes.\nWhy choose colocation? #For the majority of us, hosting an application involves renting something from another company. This includes tangible things, such as disk space, servers, and networking, as well as the intangible items, like customer support. We all choose how much we rent (and hope our provider does well) and how much we do ourselves.\nColocation usually involves renting space in a rack, electricity, and network bandwidth. In these environments, the customer is expected to maintain the server hardware, perimeter network devices (switches, routers, and firewalls), and all of the other equipment downstream from the provider\u0026rsquo;s equipment. Providers generally offer good price points for these services since they only need to ensure your rack is secure, powered, and networked.\nAs an example, a quarter rack in Dallas at a low to mid-range colocation provider can cost between $200-400 per month. That normally comes with a power allotment (mine is 8 amps) and network bandwidth (more on that later). In my quarter rack, I have five 1U servers plus a managed switch. One of those servers acts as my firewall.\nLet\u0026rsquo;s consider a similar scenario at a dedicated hosting provider. I checked some clearance prices in Dallas while writing this post and found pricing for servers which have similar CPUs as my servers, but with much less storage. The pricing for five servers runs about $550/month and we haven\u0026rsquo;t even considered a switch and firewall yet.\nCost is one factor, but I have some other preferences which push me strongly towards colocation:\nCustomized server configuration: I can choose the quantity and type of components I need Customized networking: My servers run an OpenStack private cloud and I need complex networking Physical access: Perhaps I\u0026rsquo;m getting old, but I enjoy getting my hands dirty from time to time If you have similar preferences, the rest of this post should be a good guide for getting started with a colocation environment.\nStep 1: Buy quality parts #Seriously - buy quality parts. That doesn\u0026rsquo;t mean you need to go buy the newest server available from a well-known manufacturer, but you do need to find reliable parts in a new or used server.\nI\u0026rsquo;ve built my environment entirely with Supermicro servers. 
The X9SCi-LN4F servers have been extremely reliable. My first two came brand new from Silicon Mechanics and I\u0026rsquo;m a big fan of their products. Look at their Rackform R.133 for something very similar to what I have.\nMy last three have all come from Mr. Rackables (an eBay seller). They\u0026rsquo;re used X9SCi-LN4F servers, but they\u0026rsquo;re all in good condition. I couldn\u0026rsquo;t beat the price for those servers, either.\nBefore you buy, do some reading on brand reliability and compatibility. If you plan to run Linux, do a Google search for the model of the server and the version of Linux you plan to run. Also, take a moment to poke around some forums to see what other people think of the server. Go to WebHostingTalk and search for hardware there, too.\nBuying quality parts gives you a little extra piece of mind, especially if your colo is a long drive from your home or work.\nStep 2: Ask questions #Your first conversation with most providers will be through a salesperson. That\u0026rsquo;s not necessarily a bad thing since they will give you a feature and pricing overview fairly quickly. However, I like to ask additional questions of the salesperson to learn more about the technical staff.\nHere are some good conversation starters for making it past a salesperson to the technical staff:\nYour website says I get 8 amps of power. How many plugs on the PDU can I use? My applications need IPv6 connectivity. Is it possible to get something larger than a /64? For IPv6 connectivity, will I use DHCP-PD or will you route a larger IPv6 block to me via SLAAC? I\u0026rsquo;d like to delegate reverse DNS to my own nameservers. Do you support that? If I have a problem with a component, can I ship you a part so you can replace it? When you do get a response, look for a few things:\nDid the response come back in a reasonable timeframe? (Keep in mind that you aren\u0026rsquo;t a paying customer yet.) Did the technical person take the time to fully explain their response? Does the response leave a lot of room for creative interpretation? (If so, ask for clarification.) Does the technical invite additional questions? It\u0026rsquo;s much better to find problems now than after you\u0026rsquo;ve signed a contract. I\u0026rsquo;ll talk more about contracts later.\nStep 3: Get specific on networking #Look for cities with very diverse network options. Big cities, or cities with lots of datacenters, will usually have good network provider choices and better prices. In Texas, your best bet is Dallas (which is where I host).\nI\u0026rsquo;ve learned that bandwidth measurement is one of the biggest areas of confusion and \u0026ldquo;creative interpretation\u0026rdquo; in colocation hosting. Every datacenter I\u0026rsquo;ve hosted with has handled things differently.\nThere are four main ways that most colocation providers handle networking:\n95th percentile #This is sometimes called burstable billing since it allows you to have traffic bursts without getting charged a lot for it. In the old days, you had to commit to a rate and stick with it (more on that method next). 95th percentile billing allows you to have some spikes during the measurement period without requiring negotiations for a higher transfer rate.\nLong story short, this billing method measures your bandwidth on regular intervals and throws out the top 5% of your intervals. This means that some unusual spikes (so long as it\u0026rsquo;s less than 36 hours in a month) won\u0026rsquo;t cause you to need a higher committed rate. 
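The 36-hour figure comes straight from the arithmetic: a 30-day month is 720 hours, and 5% of 720 hours is 36 hours. If your provider samples traffic every 5 minutes (a common interval), that works out to 8,640 samples per month, and the 432 highest samples are thrown away before your billable rate is calculated.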
For most people, this measurement method is beneficial since your spikes are thrown out. However, if you have sustained spikes, this billing method can be painful.\nCommitted rate #If you see a datacenter say something like \u0026ldquo;$15 per megabit\u0026rdquo;, they\u0026rsquo;re probably measuring with committed rates. In this model, you choose a rate, like 10 megabits/second, and pay based on that rate. At $15 per megabit, your bandwidth charge would be $150 per month. The actual space in the rack and the power often costs extra.\nSome people have good things to say about this billing method, but it seems awful to me. If you end up using 1 megabit per second, you still get billed for 10. If you have some spikes that creep over 10 megabit, even if you stay well under that rate as an average, the datacenter folks will ask you to do a higher commit. That means more money for something you may or may not use.\nIn my experience, this is is a serious red flag. This suggests that the datacenter network probably doesn\u0026rsquo;t have much extra capacity available within the datacenter, with its providers, or both. You sometimes see this in cities where bandwidth is more expensive. If you find a datacenter that\u0026rsquo;s totally perfect, but they want to do a committed rate, ask if there\u0026rsquo;s an option for 95th percentile. If not, I strongly suggest looking elsewhere.\nTotal bandwidth #Total bandwidth billing is what you normally see from most dedicated server or VPS providers. They will promise a certain transfer allotment and a network port speed. You will often see something like \u0026ldquo;10TB on a 100mbps port\u0026rdquo; and that means you can transfer 10TB in a month at at 100 megabits per second. They often don\u0026rsquo;t care how you consume the bandwidth, but if you pass 10TB, you will likely pay overages per gigabyte. (Be sure to ask about the overage price.)\nThis method is nice because you can watch it like a water meter. You can take your daily transfer, multiply by 30, and see if the transfer allotment is enough. Most datacenters will allow you to upgrade to a gigabit port for a fee and this will allow you to handle spikes a little easier.\nFor personal colocation, this bandwidth billing method is great. It could be a red flag for larger customers because it gives a hint that the datacenter might be oversubscribed on networking. If every customer wanted to burst to 100 megabit at the same time, there probably isn\u0026rsquo;t enough network connectivity to allow everyone to get that guaranteed rate.\nBring your own #Finally, you could always negotiate with bandwidth providers and bring your own bandwidth. This comes with its own challenges and it\u0026rsquo;s probably not worth it for a personal colocation environment. Larger business like this method because they often have rates negotiated with providers for their office connectivity and they can often get a good rate within the colocation datacenter.\nThere are plenty of up-front costs with bringing your own bandwidth provider. Some of those costs may come from the datacenter itself, especially if new cabling is required.\nStep 4: Plan for failure #Ensure that spare parts are available at a moment\u0026rsquo;s notice if something goes wrong. In my case, I keep two extra hard drives in the rack with my servers as well as two sticks of RAM. 
My colocation datacenter is about four hours away by car, and I certainly don\u0026rsquo;t want to make an emergency trip there to replace a broken hard drive.\nBuying servers with out of band management can be a huge help during an emergency. Getting console access remotely can shave plenty of time off of an outage. The majority of Supermicro servers come with an out of band management controller by default. You can use simple IPMI commands to reboot the server or use the iKVM interface to interact with the console. Many of these management controllers allow you to mount USB drives remotely and re-image a server at any time.\nAsk the datacenter if they have a network KVM that you could use or rent during an emergency. Be sure to ask about pricing and the time expectations when you request one to be connected to your servers.\nStep 5: Contracts and payment #Be sure to read the contract carefully. Pay special attention to how the datacenter handles outages and how you can request SLA credits. Take time to review any sections on what rights you have when they don\u0026rsquo;t hold up their end of the deal.\nAs with any contract, find out what happens when the contract ends. Does it auto-renew? Do you keep the same rates? Can you go month to month? Reviewing these sections before signing could save a lot of money later.\nWrapping up #Although cloud hosting has certainly made it easier to serve applications, there are still some people out there that prefer to have more customization than cloud hosting allows. For some applications, cloud hosting can be prohibitively expensive. Some applications don\u0026rsquo;t tolerate a shared platform and the noisy neighbor issues that come with it.\nColocation can be a challenging, but rewarding, experience. As with anything, you must do your homework. I certainly hope this post helps make that homework a little easier.\nSince I know someone will ask me: I host with Corespace and I\u0026rsquo;ve been with them for a little over a year. They have been great so far and their staff has been friendly in person, via tickets, and via telephone.\nPhoto credit: Kev (Flickr)\n","date":"22 April 2016","permalink":"/p/lessons-learned-four-years-colocation-hosting/","section":"Posts","summary":"Back in 2011, I decided to try out a new method for hosting my websites and other applications: colocation.","title":"Lessons learned: Five years of colocation"},{"content":"When I started Thunderbird today, it opened three windows. Each window was identical. I closed two of them and then quit Thunderbird.\nAs soon as I started Thunderbird, I had three windows again.\nI found a Mozilla bug report from 2015 that had some tips for getting the additional windows closed.\nChoose one of the open Thunderbird windows and select Close from the File menu. Do not use ALT-F4 or CTRL-W to close the window. Keep doing that until all of the windows are closed except for one. Then choose Quit from the hamburger menu drop down.\nAt that point, start Thunderbird again and you should have only one open window.\nNote: You may find that one window does not respond to clicking Close - that\u0026rsquo;s your root Thunderbird window and it cannot be closed. 
Be sure to close all of the others.\n","date":"20 April 2016","permalink":"/p/thunderbird-opens-multiple-windows/","section":"Posts","summary":"When I started Thunderbird today, it opened three windows.","title":"Thunderbird opens multiple windows"},{"content":"On most IPv6-enabled networks, network addresses are distributed via stateless address autoconfiguration (SLAAC). That is a fancy way to say that hosts on an IPv6 network will configure their own IP addresses.\nThe process usually works like this:\nThe host sends out a router solicitation request: Hey, who is the router around here? The router replies with a prefix: I am the router and your IPv6 address should start with this prefix. The host uses its MAC address to generate the remaining bits of the IP address. The format of the IPv6 address generated by the host is called EUI-64. The host takes its MAC address, wedges FF:FE in the middle, and adds the prefix from the router on the front. For much more detail on this process, review the IEEE\u0026rsquo;s guidelines for EUI-64. The Arch Linux wiki page on IPv6 has plenty of detail as well.\nTime to talk security #While SLAAC works really well on most networks and provides a highly efficient method for dealing with IP addresses, it can disclose more information about your computer or mobile device than you want to disclose. Websites will see the IPv6 address and they can determine the client\u0026rsquo;s MAC address on networks that are using SLAAC. This could be used for tracking purposes - both legitimate and illegitimate.\nAlso, bear in mind that the first three octets of a MAC address will often identify the hardware vendor that manufactured your ethernet card or wireless chip. Depending on the vendor, this may expose what type of device you are using (computer or mobile device) and in some cases, which type of computer you are using (Mac vs PC).\nIn the worst cases, this information could be used to deliver targeted malware to your device. It could also be used to locate or identify a user of a device in a particular location.\nUsing temporary addresses #Most systems allow for temporary addressing, and some even enable it by default. However, many Linux distributions do not enable temporary addresses by default.\nThere is a kernel tunable that controls temporary addressing on Linux systems:\n# Do not use a temporary address net.ipv6.conf.all.use_tempaddr = 0 # Set a temporary address, but do not make it the default net.ipv6.conf.all.use_tempaddr = 1 # Set a temporary address and make it the default net.ipv6.conf.all.use_tempaddr = 2 NetworkManager can handle this setting as well. Just set the ipv6.ip6-privacy property to 0, 1, or 2. For example, to enable temporary addresses and make them the default:\nnmcli connection modify eth0 ipv6.ip6-privacy 2 NetworkManager will activate this setting immediately and begin using the temporary address as the default.\nCaveats #Temporary addresses are built based on the MAC address and a random time string, so they will change from time to time.
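If you set the sysctl directly instead of going through NetworkManager, drop it into a file under /etc/sysctl.d/ so it survives a reboot, and then verify that a temporary address actually appeared. A minimal sketch, assuming the interface is eth0 and a file name of my own choosing:\n# /etc/sysctl.d/90-ipv6-privacy.conf net.ipv6.conf.all.use_tempaddr = 2 Load it and check the interface:\nsysctl -p /etc/sysctl.d/90-ipv6-privacy.conf ip -6 addr show dev eth0 | grep temporary Addresses flagged as temporary in that output are the randomized ones; the EUI-64 address stays available for inbound connections.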
Avoid using temporary addressing on devices that you regularly access via their IPv6 address, such as servers or other non-mobile systems.\nPhoto Credit: UnknownNet Photography\n","date":"17 April 2016","permalink":"/p/enable-ipv6-privacy-networkmanager/","section":"Posts","summary":"On most IPv6-enabled networks, network addresses are distributed via stateless address autoconfiguration (SLAAC).","title":"Enable IPv6 privacy in NetworkManager"},{"content":"","date":null,"permalink":"/tags/privacy/","section":"Tags","summary":"","title":"Privacy"},{"content":"Let\u0026rsquo;s Encrypt has taken the world by storm by providing free SSL certificates that can be renewed via automated methods. They have issued over 1.4 million certificates since launch in the fall of 2015.\nIf you are not familiar with how Let\u0026rsquo;s Encrypt operates, here is an extremely simple explanation:\nCreate a private key Make a request for a new certificate Complete the challenge process You have a certificate! That is highly simplified, but there is plenty of detail available on how the whole system works.\nOne of the most popular challenge methods is HTTP. That involves getting a challenge string from Let\u0026rsquo;s Encrypt, placing the string at a known URL on your domain, and then waiting for verification of the challenge. The process is quick and Let\u0026rsquo;s Encrypt provides tools that automate much of the process for you.\nA challenger appears #A DNS challenge is available in addition to the HTTP challenge. As you might imagine, this involves creating a DNS record with a string provided by Let\u0026rsquo;s Encrypt. Once the DNS record is in place, it is verified and certificates are issued. The process goes something like this:\nRequest a new certificate Get a challenge string Add a DNS TXT record on your domain with the challenge string as the data Wait for DNS records to appear on your DNS server Let\u0026rsquo;s Encrypt checks for the DNS record Clean up the DNS record Get a certificate Wrapping automation around this method is often easier than using the HTTP method since it does not require any changes on web servers. If someone has 500 web servers but they change their DNS records through a single API with a DNS provider, it quickly becomes apparent that adding a single DNS record is much easier.\nIn addition, the HTTP challenge method creates problems for websites which are not entirely publicly accessible yet. A stealth startup or a pre-release site could acquire a certificate without needing to allow any access into the webserver. This is also helpful for sites which will never be public facing, such as those on intranets.\nAutomating the process #After some research, I stumbled upon a project in GitHub called letsencrypt.sh. The project consists of a bash script that makes all the necessary requests to Let\u0026rsquo;s Encrypt\u0026rsquo;s API for requesting and obtaining SSL certificates. However, DNS records are tricky since they are usually managed via an API or other non-trivial methods.\nThe project provides a hook feature which allows anyone to write a script that receives data and does the necessary DNS adjustments to complete the challenge process. I wrote a hook that interfaces with Rackspace\u0026rsquo;s Cloud DNS API and handles the creation of DNS records:\nGitHub: letsencrypt-rackspace-hook All of the installation and configuration instructions are in the main README file within the repository. 
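If you want to watch the challenge land while the hook runs (or debug a run that stalls), the TXT record for a DNS challenge always lives at the _acme-challenge label under the domain being validated. A quick check with dig, assuming example.com is the domain being verified:\ndig +short TXT _acme-challenge.example.com Once that query returns the challenge string from your DNS provider\u0026rsquo;s nameservers, Let\u0026rsquo;s Encrypt should be able to see it as well.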
You can begin issuing certificates with DNS challenges in a few minutes.\nThe hook works like this:\nletsencrypt.sh hands off the domain name and a challenge string to the hook The hook adds a DNS record to Rackspace\u0026rsquo;s DNS servers via the API The hook keeps checking to see if the DNS record is publicly accessible Once the DNS record appears, control is handed back to letsencrypt.sh letsencrypt.sh tells Let\u0026rsquo;s Encrypt to verify the challenge Let\u0026rsquo;s Encrypt verifies the challenge The hook cleans up the DNS record and displays the paths to the new certificates and keys. From there, you can configure your configuration management software to push out the new certificate and keys to your production servers. Let\u0026rsquo;s Encrypt certificates are currently limited to a 90-day duration, so be sure to configure this automation via a cron job. At the very least, set a calendar reminder for yourself a week or two in advance of the expiration.\nKeep in mind that Let\u0026rsquo;s Encrypt and Rackspace\u0026rsquo;s DNS service are completely free. Free is a good thing.\nLet me know what you think of the script! Feel free to make pull requests or issues if you find bugs. I am still working on some automated testing for the script and I hope to have that available in the next week or two.\nPhoto Credit: Aphernai via Compfight cc\n","date":"31 March 2016","permalink":"/p/automated-lets-encrypt-dns-challenges-with-rackspace-cloud-dns/","section":"Posts","summary":"Let\u0026rsquo;s Encrypt has taken the world by storm by providing free SSL certificates that can be renewed via automated methods.","title":"Automated Let’s Encrypt DNS challenges with Rackspace Cloud DNS"},{"content":"","date":null,"permalink":"/tags/bash/","section":"Tags","summary":"","title":"Bash"},{"content":"","date":null,"permalink":"/tags/web/","section":"Tags","summary":"","title":"Web"},{"content":"","date":null,"permalink":"/tags/desktop/","section":"Tags","summary":"","title":"Desktop"},{"content":"UPDATE: The fixed version of mutter is now in the Fedora updates repository. You should be able to update the package with dnf:\ndnf -y upgrade mutter GNOME 3 has been rock solid for the last few months but something cropped up this week that derailed me for a short while. Whenever I moved my mouse cursor to the top bar (where the clock and status icons reside), the mouse cursor disappeared. The same thing happened if I pressed the Mod/Windows key to hop into the Activities display.\nIf I wiggled the mouse a bit, I could see the highlight move around to different windows and icons. The mouse cursor never appeared.\nLots of Google results led to dead ends. I stumbled onto a GNOME bug for gnome-shell from early 2015 that seemed to cover the same problem. After adding in my comments, I created a Fedora bug to track the problem.\nAround that time, Florian Müllner replied in the GNOME bug about trying mutter-3.18.3-2. My laptop was running mutter-3.18.3-1 at the time. The new version of mutter was still in the pending state in Fedora\u0026rsquo;s packaging infrastructure, so I pulled it down with koji:\nkoji download-build --arch x86_64 mutter-3.18.3-2.fc23 sudo dnf install mutter-3.18.3-2.fc23.x86_64.rpm After a reboot, everything was back to normal! The cursor appears reliably in the top bar, Activities screen, and other overlays. 
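If you try the same koji trick, it\u0026rsquo;s worth confirming which build you ended up on after the reboot:\nrpm -q mutter Seeing mutter-3.18.3-2.fc23 there means the manually downloaded build took; once the fix lands in the stable repositories, the dnf upgrade mentioned at the top of this post takes over from the manual download.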
In addition, some of the transient cursor weirdness I had with some applications seems to be gone.\nUPDATE: Jiří Eischmann tweeted yesterday about this problem:\n@majorhayden I was hit by that too. But it only occurs when 0:0's not visible, when left monitor is below the right, so not so many ppl impt \u0026mdash; Jiří Eischmann (@Sesivany) March 11, 2016 In my particular case, my \u0026ldquo;left\u0026rdquo; monitor is my laptop screen and my \u0026ldquo;right\u0026rdquo; monitor is my external display. I configure the external monitor to be above my laptop monitor physically and logically, which is why the problem appears for me. Thanks for the clarification, Jiří!\nPhoto Credit: Perfectance via Compfight cc\n","date":"11 March 2016","permalink":"/p/mouse-cursor-disappears-gnome-3/","section":"Posts","summary":"UPDATE: The fixed version of mutter is now in the Fedora updates repository.","title":"Mouse cursor disappears in GNOME 3"},{"content":"","date":null,"permalink":"/tags/chrome/","section":"Tags","summary":"","title":"Chrome"},{"content":"After getting a bit overzealous with cleaning up bookmarks in Chrome, I discovered that I deleted a helpful Gerrit filter for OpenStack reviews. I worked hard to create the filter and I definitely needed it back.\nChrome keeps a file called Bookmarks.bak inside its configuration directory. You can find this file here:\n/home/[username]/.config/google-chrome/Default/Bookmarks.bak # If using Chrome stable /home/[username]/.config/google-chrome-beta/Default/Bookmarks.bak # If using Chrome beta The file is stored in JSON format. Open it up in your favorite text editor and search for your deleted bookmark.\n","date":"26 February 2016","permalink":"/p/recovering-deleted-chrome-bookmarks-on-linux/","section":"Posts","summary":"After getting a bit overzealous with cleaning up bookmarks in Chrome, I discovered that I deleted a helpful Gerrit filter for OpenStack reviews.","title":"Recovering deleted Chrome bookmarks on Linux"},{"content":"I\u0026rsquo;m always interested to talk to college students about technology and business in general. They have amazing ideas and they don\u0026rsquo;t place any limits on themselves. In particular, their curiosity is limitless.\nA great question #I joined several other local employers at the University of Texas at San Antonio last week for mock interviews with computer science students. We went through plenty of sample questions and gave feedback to the students on their content and delivery during the mock interviews. At the end, we opened it up for questions.\nOne student asked a question that really made me pause:\nWhat\u0026rsquo;s the one thing that you learned while working that you didn\u0026rsquo;t learn in college? What should we know that we won\u0026rsquo;t learn in the classroom?\nThinking it through #There are plenty of obvious things that came to mind when I thought about it:\nBe a team player Remember the customer Think globally, act locally Don\u0026rsquo;t \u0026ldquo;call the baby ugly\u0026rdquo; Learn something new every day However, many of these are cliche or difficult to teach. It takes some real world experience with real people on real projects to really understand them. There must be some correlation between these things I\u0026rsquo;ve learned since I entered the business world.\nThen it hit me. The biggest key to my own success rests upon a single word: curiosity.\nBeing curious #I\u0026rsquo;ve always been curious about things for as long as I can remember. 
I\u0026rsquo;ve questioned everything at one time or another and demanded to know the real story behind events in my work and personal life.\nBeing curious allows me to approach new things with more wonder and less fear. It has stopped me in my tracks when I\u0026rsquo;ve tried to pass judgement on a person or a situation without asking the right questions first. It has brought me closer to more people at work, in open source communities, and at home.\nIt\u0026rsquo;s not always easy. Being curious is exhausting. There are many times where I\u0026rsquo;ve wanted less change and I\u0026rsquo;ve rejected new ways of doing things. When those situations arise, I take a break and think about something else. I come back to it with more energy and a myriad of questions.\nBeing curious also leads you down those paths less traveled. Robert Frost\u0026rsquo;s poem, The Road Not Taken, ends with a paragraph that has special meaning to me as a curious person:\nI shall be telling this with a sigh\nSomewhere ages and ages hence:\nTwo roads diverged in a wood, and I-\nI took the one less traveled by,\nAnd that has made all the difference.\nBeing curious has taken me down this path less traveled many times. Almost every trip has been worth the trouble.\nCuriosity crushes cynicism #Anyone who works in technology, especially software development or system administration, has found themselves looking over a block of code or a server deployment that someone else has prepared. Many of us have had one of these moments:\nClint Eastwood is disgusted That\u0026rsquo;s totally natural. It\u0026rsquo;s human nature.\nLuckily, that\u0026rsquo;s a habit we can break and curiosity can be the tool that breaks it. Instead of immediately passing judgement, start making a list of questions in your head:\nWhy was this designed in this way? Is this something I can change? What was the original use case? Is there a better way to do this that already exists? Who worked on it originally and what was their charter or goal? This situation appears very frequently in open source software. Quickly passing judgement about a particular piece of software or user community can often to the dangerous cycle of Not Invented Here (NIH). This leads to competing standards, projects, and communities.\nA healthier approach is to look over the software and the community with a curious approach. Start asking questions and sharing your unique use cases with the community. You might find others in the community with a similar need and this can often convince the herd to change direction. Instead of building something on your own, you will belong to a community and stand on the shoulders of the work that is already done.\nMy advice for students #My advice here is the same as what I told the UTSA student last week: always be curious.\nLet curiosity drive your decisions and your growth.\nLet curiosity push you through the challenging or difficult times.\nLet curiosity guide your interactions with other people and encourage them to be curious as well.\nAs you refocus your energy from cynicism to curiosity, the momentum will build. 
Being curious will become one of the most beneficial habits you\u0026rsquo;ll ever make.\nPhoto Credit: jinterwas via Compfight cc\n","date":"17 February 2016","permalink":"/p/fight-cynicism-curiosity/","section":"Posts","summary":"I\u0026rsquo;m always interested to talk to college students about technology and business in general.","title":"Fight cynicism with curiosity"},{"content":"","date":null,"permalink":"/tags/sysadmin/","section":"Tags","summary":"","title":"Sysadmin"},{"content":"I\u0026rsquo;m a big fan of the pyenv project because it makes installing multiple python versions a simple process. However, I kept stumbling into a segmentation fault whenever I tried to build documentation with sphinx in Python 2.7.11:\nwriting output... [100%] unreleased [app] emitting event: \u0026#39;doctree-resolved\u0026#39;(\u0026lt;document: \u0026lt;section \u0026#34;current series release notes\u0026#34;...\u0026gt;\u0026gt;, u\u0026#39;unreleased\u0026#39;) [app] emitting event: \u0026#39;html-page-context\u0026#39;(u\u0026#39;unreleased\u0026#39;, \u0026#39;page.html\u0026#39;, {\u0026#39;file_suffix\u0026#39;: \u0026#39;.html\u0026#39;, \u0026#39;has_source\u0026#39;: True, \u0026#39;show_sphinx\u0026#39;: True, \u0026#39;last generating indices... genindex[app] emitting event: \u0026#39;html-page-context\u0026#39;(\u0026#39;genindex\u0026#39;, \u0026#39;genindex.html\u0026#39;, {\u0026#39;pathto\u0026#39;: \u0026lt;function pathto at 0x7f4279d51230\u0026gt;, \u0026#39;file_suffix\u0026#39;: \u0026#39;.html\u0026#39; Segmentation fault (core dumped) I tried a few different versions of sphinx, but the segmentation fault persisted. I did a quick reinstallation of Python 2.7.11 in the hopes that a system update of gcc/glibc was causing the problem:\npyenv install 2.7.11 The same segmentation fault showed up again. After a ton of Google searching, I found that the --enable-shared option allows pyenv to use shared Python libraries at compile time:\nenv PYTHON_CONFIGURE_OPTS=\u0026#34;--enable-shared CC=clang\u0026#34; pyenv install -vk 2.7.11 That worked! I\u0026rsquo;m now able to run sphinx without segmentation faults.\n","date":"9 February 2016","permalink":"/p/segmentation-faults-with-sphinx-and-pyenv/","section":"Posts","summary":"I\u0026rsquo;m a big fan of the pyenv project because it makes installing multiple python versions a simple process.","title":"Segmentation faults with sphinx and pyenv"},{"content":"Although I use GNOME 3 as my desktop environment, I prefer KDE\u0026rsquo;s kwallet service to gnome-keyring for some functions. The user interface is a little easier to use and it\u0026rsquo;s easier to link up to the keyring module in Python.\nAccidentally disabling kwallet #A few errant mouse clicks caused me to accidentally disable the kwalletd service earlier today and I was struggling to get it running again. The daemon is usually started by dbus and I wasn\u0026rsquo;t entirely sure how to start it properly.\nIf I start kwalletmanager, I see the kwallet icon in the top bar. However, it\u0026rsquo;s unresponsive to clicks. 
Starting kwalletmanager on the command line leads to lots of errors in the console:\nkwalletmanager(20406)/kdeui (Wallet): The kwalletd service has been disabled kwalletmanager(20406)/kdeui (Wallet): The kwalletd service has been disabled kwalletmanager(20406)/kdeui (Wallet): The kwalletd service has been disabled Manually running kwalletd in the console wasn\u0026rsquo;t successful either.\nUsing kcmshell #KDE provides a utility called kcmshell that allows you to start a configuration panel without running the entire KDE environment. If you disable kwallet accidentally like I did, this will bring up the configuration panel and allow you to re-enable it:\nkcmshell4 kwalletconfig You should see kwallet\u0026rsquo;s configuration panel appear:\nKDE wallet control module for kwallet Click on Enable the KDE wallet subsystem and then click OK. Once the window closes, start kwalletmanager and you should be able to access your secrets in kwallet again.\nPhoto Credit: Wei via Compfight cc\n","date":"28 January 2016","permalink":"/p/enabling-kwallet-after-accidentally-disabling-it/","section":"Posts","summary":"Although I use GNOME 3 as my desktop environment, I prefer KDE\u0026rsquo;s kwallet service to gnome-keyring for some functions.","title":"Enabling kwallet after accidentally disabling it"},{"content":"","date":null,"permalink":"/tags/command-line/","section":"Tags","summary":"","title":"Command Line"},{"content":"I\u0026rsquo;ve talked about predictable network names (and seemingly unpredictable ones) on the blog before, but some readers asked me how they could alter the network naming to fit a particular situation. Oddly enough, my Supermicro 5028D-T4NT has a problem with predictable names and it\u0026rsquo;s a great example to use here.\nThe problem #There\u0026rsquo;s plenty of detail in my post about the Supermicro 5028D-T4NT, but the basic gist is that something within the firmware is causing the all of the network cards in the server to show up as onboard. The server has two 1Gb network interfaces which show up as eno1 and eno2, which makes sense. It also has two 10Gb network interfaces that systemd tries to name eno1 and eno2 as well. That\u0026rsquo;s obviously not going to work, so they get renamed to eth0 and eth1.\nYou can see what udev thinks in this output:\nP: /devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/eth0 E: DEVPATH=/devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/eth0 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=Ethernet Connection X552/X557-AT 10GBASE-T E: ID_MODEL_ID=0x15ad E: ID_NET_DRIVER=ixgbe E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME=eno1 E: ID_NET_NAME_MAC=enx0cc47a7591c8 E: ID_NET_NAME_ONBOARD=eno1 E: ID_NET_NAME_PATH=enp3s0f0 E: ID_OUI_FROM_DATABASE=Super Micro Computer, Inc. E: ID_PATH=pci-0000:03:00.0 E: ID_PATH_TAG=pci-0000_03_00_0 E: ID_PCI_CLASS_FROM_DATABASE=Network controller E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller E: ID_VENDOR_FROM_DATABASE=Intel Corporation E: ID_VENDOR_ID=0x8086 E: IFINDEX=4 E: INTERFACE=eth0 E: SUBSYSTEM=net E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/eno1 E: TAGS=:systemd: E: USEC_INITIALIZED=7449982 The ID_NET_NAME_ONBOARD takes precedence, but the eno1 name is already in use at this point since udev has chosen names for the onboard 1Gb network interfaces already. Instead of falling back to ID_NET_NAME_PATH, it falls back to plain old eth0. This is confusing and less than ideal.\nAfter a discussion in a Github issue, it seems that the firmware is to blame. 
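If you want to pull the same properties for your own hardware, udevadm can dump them for any interface (eth0 here is just an example):\nudevadm info -q all -p /sys/class/net/eth0 udevadm test-builtin net_id /sys/class/net/eth0 The first command prints the P:/E: listing shown above, and the second asks the net_id builtin directly which onboard, path, and MAC-based names it calculated (alongside some debug chatter), which makes it easy to spot two devices claiming the same eno name.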
Don\u0026rsquo;t worry - we still have some tricks we can do with systemd-networkd.\nWorkaround #Another handy systemd-networkd feature is a link file. These files allow you to apply some network configurations to various interfaces. You can manage multiple interfaces with a single file with wildcards in the [Match] section.\nIn my case, I want to find any network interfaces that use the ixgbe driver (my 10Gb network interfaces) and apply a configuration change only to those interfaces. My goal is to get the system to name the interfaces using ID_NET_NAME_PATH, which would cause them to appear as enp3s0f0 and enp3s0f1.\nLet\u0026rsquo;s create a link file to handle our quirky hardware:\n# /etc/systemd/network/10gb-quirks.link [Match] Driver=ixgbe [Link] NamePolicy=path This file tells systemd to find any devices using the ixgbe driver and force them to use their PCI device path for the naming. After a reboot, the interfaces look like this:\n# networkctl |grep ether 2 eno1 ether degraded configured 4 eno2 ether off unmanaged 9 enp3s0f0 ether off unmanaged 10 enp3s0f1 ether off unmanaged Awesome! They\u0026rsquo;re now named based on their PCI path and that should remain true even through future upgrades. There are plenty of other tricks that you can do with link files, including completely custom naming for any interface.\nCaveats #As Sylvain noted in the comments below, systemd-networkd provides a default 99-default.link file that specifies how links should be handled. If you make a link file that sorts after that file, such as ixgbe-quirks.link, it won\u0026rsquo;t take effect. Be sure that your link file comes first by starting it off with a number less than 99. This is why my 10gb-quirks.link file works in my example above.\nPhoto Credit: realblades via Compfight cc\n","date":"20 January 2016","permalink":"/p/tinkering-with-systemds-predictable-network-names/","section":"Posts","summary":"I\u0026rsquo;ve talked about predictable network names (and seemingly unpredictable ones) on the blog before, but some readers asked me how they could alter the network naming to fit a particular situation.","title":"Tinkering with systemd’s predictable network names"},{"content":"","date":null,"permalink":"/tags/udev/","section":"Tags","summary":"","title":"Udev"},{"content":"","date":null,"permalink":"/tags/dell/","section":"Tags","summary":"","title":"Dell"},{"content":"Updating Dell PowerEdge firmware from Linux is quite easy, but it isn\u0026rsquo;t documented very well. I ended up with a set of PowerEdge R710\u0026rsquo;s at work for a lab environment and the BIOS versions were different on each server.\nDownloading the latest firmware #Start by heading over to Dell\u0026rsquo;s support site and enter your system\u0026rsquo;s service tag. You can use lshw to find your service tag:\n# lshw | head lab05 description: Rack Mount Chassis product: PowerEdge R710 () vendor: Dell Inc. 
serial: [service tag should be here] width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall32 configuration: boot=normal chassis=rackmount uuid=44454C4C-3700-104A-8052-B2C04F564831 *-core description: Motherboard After entering the service tag, follow these steps:\nClick Drivers \u0026amp; downloads on the left Click Change OS at the top right and choose Red Hat Enterprise Linux 7 Click the BIOS dropdown in the list Click Other file formats available Look for the file ending in BIN and click Download file underneath it Copy that file to your server that needs a BIOS update.\nInstalling firmware update tools #Start by getting the right packages installed. I\u0026rsquo;ll cover the CentOS/RHEL and Ubuntu methods here. At the moment, Fedora doesn\u0026rsquo;t build kernels with the dell_rbu module enabled, but there\u0026rsquo;s a discussion about getting that fixed.\nFor CentOS, you\u0026rsquo;ll need to get the Dell Linux repository configured first:\nwget http://linux.dell.com/repo/hardware/latest/bootstrap.cgi sh bootstrap.cgi yum -y install firmware-addon-dell For Ubuntu, the package is in the upstream repositories already:\napt-get -y install firmware-addon-dell Extract and flash the BIOS header #Dell packages up a BIOS header (the actual firmware blob that needs to be flashed) within the BIN file you downloaded earlier. The latest version of the BIOS for my R710 is 6.4.0, so my file is called R710_BIOS_4HKX2_LN_6.4.0.BIN. Let\u0026rsquo;s start by extracting the header file:\nbash R710_BIOS_4HKX2_LN_6.4.0.BIN --extract bios You should now have a directory in your current directory called bios. The header file is within bios/payload/ and you\u0026rsquo;ll use that to flash the BIOS:\n# modprobe dell_rbu # dellBiosUpdate-compat --hdr bios/payload/R710-060400.hdr --update Supported RBU type for this system: (MONOLITHIC, PACKET) Using RBU v2 driver. Initializing Driver. Setting RBU type in v2 driver to: PACKET writing (4096) to file: /sys/devices/platform/dell_rbu/packet_size Writing RBU data (4096bytes/dot): ........................... Done writing packet data. Activate CMOS bit to notify BIOS that update is ready on next boot. Update staged sucessfully. BIOS update will occur on next reboot. It\u0026rsquo;s now time to reboot! If you watch the console via iDRAC, you\u0026rsquo;ll see a 3-4 minute delay on the next reboot while the staged BIOS image is flashed. When the server boots, use lshw to verify that the BIOS version has been updated.\nPhoto Credit: vaxomatic via Compfight cc\n","date":"18 January 2016","permalink":"/p/updating-dell-poweredge-bios-from-linux/","section":"Posts","summary":"Updating Dell PowerEdge firmware from Linux is quite easy, but it isn\u0026rsquo;t documented very well.","title":"Updating Dell PowerEdge BIOS from Linux"},{"content":"Working with open source software is an amazing experience. The collaborative process around creation, refinement, and even maintenance, drives more developers to work on open source software more often. However, every developer finds themselves writing code that very few people actually use.\nFor some developers, this can be really bothersome. You offer your code up to the world only to find that the world is much less interested than you expected. We see projects that fit the \u0026ldquo;build it, and they will come\u0026rdquo; methodology all the time, but it can hurt when our projects don\u0026rsquo;t have the same impact.\nStart by asking yourself a question:\nDoes it matter? 
#Many of us write software that has a very limited audience. Perhaps we wrote something that worked around a temporary problem or solved an issue that very few poeple would see. Sometimes we write software to work with a project that doesn\u0026rsquo;t have a large user base.\nIn these situations, it often doesn\u0026rsquo;t matter if other contributors don\u0026rsquo;t show up to collaborate.\nHowever, if you\u0026rsquo;re eager to build a community around an open source project, here are some tips that have worked well for me.\nMake it approachable #Sites like StackOverflow became immensely popular over time because they provide simple, approachable solutions that normally come with a small amount of explanation. Not all of the code snippets are examples of high quality software development, but that\u0026rsquo;s not the point here. People can search, review something, and get on their way.\nMaking software more approachable is completely based on your audience. Complicated software, like the cryptography Python library, has an approach towards experienced software developers who want a robust method for handling cryptographic operations. Compare that to the requests Python library. The developers on that project have an audience of Python developers of all skill levels and they lead off with a simple example and very approachable documentation.\nBoth of those approaches are very different but extremely effective to their respective audiences.\nOnce you know your audience, make these changes to make your software more approachable to them:\nDescribe your project\u0026rsquo;s sweet spot. what does it do better than every other project? What does your project not do well? This could clue developers into better projects for their needs or entice them to submit patches for improvements. How do developers get started? This should include simple ways to install the software, test it after installation, and examples of ways to quickly begin using it. How do you want to receive improvements? If someone finds a bug or area for improvement, how should they submit it and what should their expectations be? If you haven\u0026rsquo;t figured it out already, documentation is required. Projects without documentation are quickly skipped over by most developers for a good reason: if you haven\u0026rsquo;t taken the time to help people understand how to use your project, why should they take the time to understand it themselves. Projects without documentation are often assumed to be less mature and not production-ready.\nOnce you make it this far, it\u0026rsquo;s time to charge your extrovert battery and promote it.\nPromoting the project #Some of the best-written software projects with the best documentation often find themselves limited by the fact that nobody knows they exist. This can become a challenging problem to solve because it involves actively reaching out to the audience you\u0026rsquo;ve identified in the previous steps.\nThe first step is to do some writing about your software project and what problems it tries to solve. The type of writing and the medium for sharing it is completely up to you.\nSome people prefer writing blog posts on their own blog. You may be able to get additional readers by publishing it on external sites, such as Medium, or as a guest author on another site. For example, opensource.com invites guest authors to write about various software projects or solutions provided by open source software. 
If your project is closely affiliated with another large software project, you may be able to publish a post as a guest author on their project site.\nSocial media can be helpful if it\u0026rsquo;s used wisely with the right audience. Your followers must be able to get some value from whatever you link them to in your social media posts. Steer clear of clickbait-type posts and be genuine. If you want to build a community, your integrity is your most important asset.\nTechnical talks #The most effective method for sharing a project is to do it in person. Yes, this means giving a technical talk to an audience. That means standing in front of people. It\u0026rsquo;s the kind of thing that make an introvert pause. However, if you care about your project, you can tame that technical talk and make a great connection with your audience.\nThe return on investment in technical talks often takes the form of a decrescendo. Feedback flows in quickly as soon as the talk is over and gradually decreases over time. The rate of decrease largely depends on the impact you make on your audience. A high-impact, emotionally appealing presentation will yield a long tail of feedback that decreases very slowly. Your project might appear in presentations made by other people and you\u0026rsquo;ll often get additional feedback and involvement from those talks as well.\nPhoto Credit: Wanaku via Compfight cc\n","date":"15 January 2016","permalink":"/p/nobody-using-software-project-now/","section":"Posts","summary":"Working with open source software is an amazing experience.","title":"Nobody is using your software project. Now what?"},{"content":"I\u0026rsquo;ve been a big fan of Thunderbird for years, but it lacks features in some critical areas. For example, I need Microsoft Exchange and Google Apps connectivity for my mail and contacts, but Thunderbird needs some extensions to make that connectivity easier. There are some great extensions available, but they lack polish since they\u0026rsquo;re not part of the core product.\nMy muscle memory for keyboard shortcuts in Thunderbird left me fumbling in Evolution. Some of the basics that I used regularly, such as writing a new email or collapsing/expanding threads, were wildly different. For example, there\u0026rsquo;s no keyboard shortcut for expanding threads in Evolution by default.\nThe search #In my quest to adjust some of the default keyboard shortcuts for Evolution, I found lots of documentation about previous versions of GNOME in documentation and countless forum posts. None of the old tricks, like editable menus and easily adjusted dconf settings, work any longer.\nI stumbled onto an email thread from August 2015 on this very topic and I was eager to find out if GNOME 3.18\u0026rsquo;s Evolution would look at the same .config/evolution/accels file as the one mentioned in the thread.\nFirst, I started Evolution with strace so I could review the system calls made during its startup:\nstrace -q -o evolution-trace.out -s 1500 evolution Sure enough, Evolution was looking for the accels file:\n$ grep accels evolution-trace.out open(\u0026#34;/home/user/.config/evolution/accels\u0026#34;, O_RDONLY) = 10 open(\u0026#34;/home/user/.config/evolution/accels\u0026#34;, O_WRONLY|O_CREAT|O_TRUNC, 0644) = 34 Adding custom keyboard shortcuts #Editing the accels file is easy for most changes, but be sure Evolution is stopped prior to editing the file. 
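It also doesn\u0026rsquo;t hurt to stash a copy before editing, since Evolution opens this file for writing on its own (that\u0026rsquo;s the O_WRONLY|O_CREAT|O_TRUNC open in the trace above) and a backup makes a bad edit painless to undo:\ncp ~/.config/evolution/accels ~/.config/evolution/accels.bak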
The file should look something like this:\n; evolution GtkAccelMap rc-file -*- scheme -*- ; this file is an automated accelerator map dump ; ; (gtk_accel_path \u0026#34;\u0026lt;Actions\u0026gt;/new-source/memo-list-new\u0026#34; \u0026#34;\u0026#34;) ; (gtk_accel_path \u0026#34;\u0026lt;Actions\u0026gt;/switcher/switch-to-tasks\u0026#34; \u0026#34;\u0026lt;Primary\u0026gt;4\u0026#34;) ; (gtk_accel_path \u0026#34;\u0026lt;Actions\u0026gt;/mailto/add-to-address-book\u0026#34; \u0026#34;\u0026#34;) ; (gtk_accel_path \u0026#34;\u0026lt;Actions\u0026gt;/mail/mail-next-thread\u0026#34; \u0026#34;\u0026#34;) Editing an existing shortcut is easy. For example, the default shortcut for creating a new email is CTRL-SHIFT-M:\nm\u0026#34;) I prefer Thunderbird\u0026rsquo;s default of CTRL-N for new emails:\nn\u0026#34;) Those edits are quite easy, but things get interesting with other characters. For example, Thunderbird uses the asterisk (*) for expanding threads and backslash (\\) for collapsing them. Those characters are special in the context of the accels file and they can\u0026rsquo;t be used. Here\u0026rsquo;s an example of how to set keyboard shortcuts with those:\n/mail/mail-threads-expand-all\u0026#34; \u0026#34;asterisk\u0026#34;) (gtk_accel_path \u0026#34;\u0026lt;Actions\u0026gt;/mail/mail-threads-collapse-all\u0026#34; \u0026#34;backslash\u0026#34;) To determine the names of those special characters, use xmodmap:\n$ xmodmap -pk | grep backslash 51 0x005c (backslash) 0x007c (bar) 0x005c (backslash) 0x007c (bar) Checking your work #Once you make your adjustments, Evolution should display those new keyboard shortcuts in its menus. For example, here\u0026rsquo;s my new shortcut for writing new emails:\nGo back and adjust as many of the shortcuts as necessary. However, remember to quit Evolution before editing the file.\n","date":"28 November 2015","permalink":"/p/custom-keyboard-shortcuts-for-evolution-in-gnome/","section":"Posts","summary":"I\u0026rsquo;ve been a big fan of Thunderbird for years, but it lacks features in some critical areas.","title":"Custom keyboard shortcuts for Evolution in GNOME"},{"content":"","date":null,"permalink":"/tags/evolution/","section":"Tags","summary":"","title":"Evolution"},{"content":" I was recently asked to talk to Computer Information Systems students at the University of the Incarnate Word here in San Antonio about information security in the business world. The students are learning plenty of the technical parts of information security and the complexity that comes from dealing with complicated computer networks. As we all know, it\u0026rsquo;s the non-technical things that are often the most important in those tough situations.\nMy talk, \u0026ldquo;Five lessons I learned about information security\u0026rdquo;, lasted for about 30 minutes and then I took plenty of technical and non-technical questions from the students. I\u0026rsquo;ve embedded the slides below and I\u0026rsquo;ll go through the lessons within the post here.\nLesson 1: Information security requires lots of communication and relationships #Most of what information security professionals do involves talking about security. There are exceptions to this, however. 
For example, if your role is highly technical in nature and you\u0026rsquo;re expected to monitor a network or disassemble malware, then you might be spending the majority of your time in front of a screen doing highly technical work.\nFor the rest of us, we spend a fair amount of time talking about what needs to be secured, why it needs to be secured, and the best way to do it. Information security professionals shouldn\u0026rsquo;t be alone in this work, though. They must find ways to get the business bought in and involved.\nI talked about three general buckets of mindsets that the students might find in an average organization:\n\u0026ldquo;Security is mission critical for us and it\u0026rsquo;s how we maintain our customers\u0026rsquo; trust.\u0026rdquo;\nThese are your allies in the business and they must be \u0026ldquo;read into\u0026rdquo; what\u0026rsquo;s happening in the business. Share intelligence with them regularly and highlight their accomplishments to your leadership as well as theirs.\n\u0026ldquo;Security is really important, but we have lots of features to release. We will get to it.\u0026rdquo;\nThese people often see security as a bolt-on, value added product feature. Share methods of building in security from the start and make it easier for them to build secure products. Also, this is a great opportunity to create technical standards as opposed to policies (more on that later).\n\u0026ldquo;I opened this weird file from someone I didn\u0026rsquo;t know and now my computer is acting funny.\u0026rdquo;\nThere\u0026rsquo;s no way to sugar-coat this one: this group is your biggest risk. Take steps to prevent them from making mistakes in the first place and regularly send them high-level security communications. Your goal here is to send them information that is easy to read and keeps security front of mind for them without inundating them with details.\nLesson 2: Spend the majority of your time and money on detection and response capabilities #This is something that my coworker Aaron and I talk about in our presentation called \u0026ldquo;The New Normal\u0026rdquo;. Make it easier to detect an intruder and respond to the intrusion. Don\u0026rsquo;t allow them to quietly sneak through your network undetected. Force them to be a bull in a china shop. If they cross a network segment, make sure there\u0026rsquo;s an alert for that. Ensure that they have to make a bunch of noise if they wander around in your network.\nWhen something is detected, you need to do two things well: respond to the incident and communicate with the rest of the organization about it.\nThe response portion requires you to have plenty of information available when it\u0026rsquo;s time to assess a situation. Ensure that logs and alerts are funneled into centralized systems that can aggregate and report on the events in real time (or close to real time). Take that information and understand where the intruders are, what data is at risk, and how severe the situation really is.\nFrom there, find a way to alert the rest of the organization. The United States Department of Defense uses DEFCON for this. They can communicate the severity of a situation very quickly to thousands of people by using a simple number designation. That number tells everyone what to do, no matter where they are. Everyone has an idea of the gravity of the situation without needing to know a ton of details.\nThis is also a good opportunity to share limited intelligence with your allies in the business. 
They may be able to go into battle with you and get additional information that will help the incident response process.\nLesson 3: People, process, and technology must be in sync #Everything revolves around this principle. If your processes and technology are great, but your people never follow the process and work around the technology, you have a problem. Great technology and smart people without process is also a dangerous mix. Just like a three-legged stool, all three legs must be strong to keep it stable. The same goes for any business.\nWhen an incident happens, don\u0026rsquo;t talk about people, what could have been done, or vendors. Why? Because no matter how delicate you are, you will eventually \u0026ldquo;call the baby ugly\u0026rdquo;. Calling the baby ugly means that you insult someone\u0026rsquo;s work or character without intending to, and then that person withdraws from the process or approaches the situation defensively. That won\u0026rsquo;t lead to a good outcome and will usually create plenty of animosity.\nAssume the worst will happen again and make your processes and technologies better over time. This is an iterative process, so keep in mind that a thousand baby steps will always deliver more value than one giant step.\nLesson 4: Set standards, not policies #Policies are inevitable. We get them from our compliance programs, our governments, and other companies. They\u0026rsquo;re required, but they\u0026rsquo;re horribly annoying. Have you ever read through ISO 27002 or NIST 800-53? If you have, you know what I mean. Don\u0026rsquo;t get me started on PCI-DSS 3.1, either.\nWhat\u0026rsquo;s my point? Policies are dry. They\u0026rsquo;re long. They\u0026rsquo;re often chock-full of requirements that are really difficult to translate into technical changes. There\u0026rsquo;s no better way to clear out a room of technical people than to say \u0026ldquo;Let\u0026rsquo;s talk about PCI-DSS.\u0026rdquo; (Seriously, try this at least once. It\u0026rsquo;s amazing.)\nYou need to use the right kind of psychology to get the results you want. Threatening someone with policy is like getting someone to go in for a root canal. They know they need it, but they know how much it will hurt.\nInstead, create technical standards that are actionable and valuable. If you know you need to meet PCI-DSS and ISO 27002 for your business, create a technical standard that allows someone in the business to design systems that meet both compliance programs. Make it actionable and then show them the results of their labor when they\u0026rsquo;re done.\nAlso, give them a method for checking their systems against the standard in an automated way. Nobody wants The Spanish Inquisition showing up at the end of a project to say \u0026ldquo;Hey, you missed something!\u0026rdquo;. They\u0026rsquo;ll be able to check their progress along the way.\nLesson 5: Don\u0026rsquo;t take security incidents personally #This one is still a challenge for me. Security incidents will happen. They certainly won\u0026rsquo;t be fun. However, when the smoke clears, look at the positive aspects of the incident. These situations highlight two critical things:\nRoom for improvement (and perhaps additional spending) What attackers really want from your business Take the time to understand what type of attacker you just dealt with and what their target really was. If a casual script kiddie found a weakness, you obviously need to invest in more security basics, like network segmentation and hardening standards. 
If a nation state or some other type of determined attacker found a weakness, you need to understand what they were trying to get. This can be challenging and sometimes third parties can help give an unbiased view.\nRequired reading #There are three really helpful books I mentioned in the presentation:\nSwitch: How to Change Things When Change is Hard Winning With People The Phoenix Project These three books help you figure out how to make change, build relationships, and work around challenges in IT.\nFinal thoughts #If you haven\u0026rsquo;t been to your local university to meet the next generation of professionals, please take the time to do so. There\u0026rsquo;s nothing more exciting than talking with people who have plenty of knowledge and are ready to embrace something new. In addition, they yearn to talk to people who have more experience in the real world.\nThanks to John Champion from UIW for asking me to do a talk! It was a fun experience and I can\u0026rsquo;t wait to do the next one.\n","date":"10 November 2015","permalink":"/p/talking-to-college-students-about-information-security/","section":"Posts","summary":"I was recently asked to talk to Computer Information Systems students at the University of the Incarnate Word here in San Antonio about information security in the business world.","title":"Talking to college students about information security"},{"content":"","date":null,"permalink":"/tags/macvlan/","section":"Tags","summary":"","title":"Macvlan"},{"content":"I spent some time working with macvlan interfaces on KVM hypervisors last weekend. They\u0026rsquo;re interesting because they\u0026rsquo;re not really a bridge. It allows you to assign multiple MAC addresses to a single interface and then allow the kernel to filter traffic into tap interfaces based on the MAC address in the packet. If you\u0026rsquo;re looking for a highly detailed explanation, head on over to waldner\u0026rsquo;s blog for a deep dive into the technology and the changes that come along with it.\nWhy macvlan? #Bridging can become a pain to work with, especially when you\u0026rsquo;re forced to add in creative filtering rules and keep them updated. The macvlan interfaces can help with that (read up on VEPA mode). There are some interesting email threads showing that macvlan interfaces can improve network performance for various workloads. Low latency workloads can benefit from the simplicity and low overhead of macvlan interfaces.\nsystemd-networkd and macvlan interfaces #Fortunately for us, systemd-networkd makes configuring a macvlan interface really easy. I\u0026rsquo;ve written about configuring bridges with systemd-networkd and the process for macvlan interfaces is similar.\nIn my scenario, I have a 1U server with an ethernet interface called enp4s0 (read up on interface naming with systemd-udevd). I want to make a macvlan interface for virtual machines and I\u0026rsquo;ll be attaching VM\u0026rsquo;s to that interface via macvtap interfaces. It\u0026rsquo;s similar to bridging where you make a bridge and then give everyone a port on the bridge.\nStart by creating a network device for our macvlan interface:\n# /etc/systemd/network/vmbridge.netdev [NetDev] Name=vmbridge Kind=macvlan [MACVLAN] Mode=bridge I\u0026rsquo;ve told systemd-networkd that I want a macvlan interface set up in bridge mode. This will allow hosts and virtual machines to talk to one another on the interface. You could choose vepa for the mode if you want additional security. 
However, this will force traffic out to your upstream switch/router and makes it challenging for hosts and guests to communicate with each other.\nNow that we have a device configured, let\u0026rsquo;s configure the IP address for the macvlan interface (similar to configuring a bridge):\n# /etc/systemd/network/vmbridge.network [Match] Name=vmbridge [Network] IPForward=yes Address=192.168.250.33/24 Gateway=192.168.250.1 DNS=192.168.250.1 Let\u0026rsquo;s tell systemd-networkd that our physical network interface, enp4s0, is part of this interface:\n# /etc/systemd/network/enp4s0.network [Match] Name=enp4s0 [Network] MACVLAN=vmbridge This is very similar to a configuration for a standard Linux bridge. Once you\u0026rsquo;ve reached this step, you\u0026rsquo;ll most likely want to reboot to ensure all of your network devices come up properly.\nAttaching a virtual machine #Attaching a KVM virtual machine to the macvlan interface is quite easy. When you\u0026rsquo;re creating a new VM using virt-manager, look for this setting in the wizard:\nIf you\u0026rsquo;re installing via virt-install just use the following argument for your network configuration:\n--network type=direct,source=vmbridge,source_mode=bridge You\u0026rsquo;ll end up with interfaces like these after creating multiple virtual machines:\nmtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500 link/ether 52:54:00:83:53:f2 brd ff:ff:ff:ff:ff:ff promiscuity 0 macvtap mode bridge addrgenmode eui64 15: macvtap2@enp4s0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500 link/ether 52:54:00:f1:76:0b brd ff:ff:ff:ff:ff:ff promiscuity 0 macvtap mode bridge addrgenmode eui64 17: macvtap3@enp4s0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500 link/ether 52:54:00:cd:53:34 brd ff:ff:ff:ff:ff:ff promiscuity 0 macvtap mode bridge addrgenmode eui64 20: macvtap1@enp4s0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500 link/ether 52:54:00:18:79:d3 brd ff:ff:ff:ff:ff:ff promiscuity 0 macvtap mode bridge addrgenmode eui64 ","date":"26 October 2015","permalink":"/p/systemd-networkd-and-macvlan-interfaces/","section":"Posts","summary":"I spent some time working with macvlan interfaces on KVM hypervisors last weekend.","title":"systemd-networkd and macvlan interfaces"},{"content":"Switching to systemd-networkd for managing your networking interfaces makes things quite a bit simpler over standard networking scripts or NetworkManager. Aside from being easier to configure, it uses fewer resources on your system, which can be handy for smaller virtual machines or containers.\nManaging tunnels between interfaces is also easier with systemd-networkd. This post will show you how to set up a GRE tunnel between two hosts running systemd-networkd.\nGetting started #You\u0026rsquo;ll need two hosts running a recent version of systemd-networkd. I\u0026rsquo;d recommend Fedora 22 since it provides very recent versions of systemd which include enhancements to systemd-networkd.\nFor this example, I\u0026rsquo;ve built one Rackspace Cloud Server in the DFW datacenter and another in IAD. I\u0026rsquo;ll connect them both together with a simple GRE tunnel.\nSwitch to systemd-networkd #I\u0026rsquo;ve detailed out this process before but I\u0026rsquo;ll do it again here.
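Before changing anything, it\u0026rsquo;s worth a quick check of what currently manages the network on each host so you know which services need to be disabled in a moment (a small sketch; the unit names match the ones used below):\nsystemctl is-active NetworkManager network systemd-networkd That prints one state per line, and anything reporting active is something you will be turning off or taking over shortly.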
First off, we will need a directory made on both servers to hold systemd-networkd configuration files:\nmkdir /etc/systemd/network Let\u0026rsquo;s add a very simple network configuration for our eth0 interface on both hosts:\n# cat /etc/systemd/network/eth0.network [Match] Name=eth0 [Network] Address=x.x.x.x/24 Gateway=x.x.x.x DNS=8.8.8.8 DNS=8.8.4.4 Do this on both servers and be sure to fill in the Address and Gateway lines with the correct data for your servers. Also, feel free to use something other than Google\u0026rsquo;s DNS servers if needed.\nIt\u0026rsquo;s time to get our services in order so that systemd-networkd will handle our networking after a reboot:\nsystemctl disable network systemctl disable NetworkManager systemctl enable systemd-networkd systemctl enable systemd-resolved systemctl start systemd-resolved rm -f /etc/resolv.conf ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf Don\u0026rsquo;t start systemd-networkd yet. Having systemd-networkd and NetworkManager fight over your interfaces can lead to a bad day.\nReboot both hosts and wait for them to come back online.\nConfigure the GRE tunnel #A GRE tunnel is a simple way to encapsulate packets between two hosts and send almost any protocol across the tunnel you build. However, it\u0026rsquo;s not encrypted. (If you\u0026rsquo;re planning to use this long-term, consider only using encrypted streams across the link or add IPSec on top of the GRE tunnel.)\nIf we want to route traffic over the GRE tunnel, we will need IP addresses on both sides. I\u0026rsquo;ll use 192.168.254.0/24 for this example, but you\u0026rsquo;re free to use any subnet of any size (except /32!) for this network.\nWe need to tell systemd-networkd about a new network device that it doesn\u0026rsquo;t know about. We do this with .netdev files. Create this file on both hosts:\n# cat /etc/systemd/network/gre-example.netdev [NetDev] Name=gre-example Kind=gre MTUBytes=1480 [Tunnel] Remote=[public ip of remote server] Local=[public ip of local server] We\u0026rsquo;re making a new network device called gre-example here and we\u0026rsquo;re telling systemd-networkd about the servers participating in the link. Add this configuration file to both hosts but be sure that your Remote= line is correct. If you\u0026rsquo;re writing the configuration file for the first host, then the Remote= line should have the IP address of your second host. Do the same thing on the second host, but use the IP address of your first host there.\nNow that we have a network device, we need to tell systemd-networkd how to configure the IP address on these new GRE tunnels. Let\u0026rsquo;s make a .network file for our GRE tunnel.\nOn the first host:\n# cat /etc/systemd/network/gre-example.network [Match] Name=gre-example [Network] Address=192.168.254.1/24 On the second host:\n# cat /etc/systemd/network/gre-example.network [Match] Name=gre-example [Network] Address=192.168.254.2/24 Bringing up the tunnel #Although systemd-networkd knows we have a tunnel configured now, it\u0026rsquo;s not sure which interface should manage the tunnel. In our case, our public interface (eth0) is required to be up for this tunnel to function. 
Go back to your original eth0.network files and add one line under the [Network] section for our tunnel:\n[Network] Tunnel=gre-example Restart systemd-networkd on both hosts and check the network interfaces:\n# systemctl restart systemd-networkd # networkctl IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 eth0 ether routable configured 3 eth1 ether off unmanaged 4 gre0 ipgre off unmanaged 5 gretap0 ether off unmanaged 6 gre-example ipgre routable configured 6 links listed. Hooray! Our GRE tunnel is up! However, we have a firewall in the way.\nFixing the firewall #We need to tell the firewall two things: trust the GRE interface and trust the public IP of the other server. Trusting the GRE interface is easy with firewalld — just add this on both hosts:\nfirewall-cmd --add-interface=gre-example --zone=trusted Now, we need a rich rule to tell firewalld to trust the public IP of each host. I talked about this last year on the blog. Run this command on both hosts:\nfirewall-cmd --zone=public --add-rich-rule=\u0026#39;rule family=\u0026#34;ipv4\u0026#34; source address=\u0026#34;[IP ADDRESS]\u0026#34; accept\u0026#39; If you run this on your first host, use the public IP address of your second host in the firewall-cmd command. Use the first host\u0026rsquo;s public IP address when you run the command on the second host.\nSave your configuration permanently on both hosts:\nfirewall-cmd --runtime-to-permanent Try to ping between your servers using the IP addresses we configured on the GRE tunnel and you should get some replies!\nFinal words #Remember that GRE tunnels are not encrypted. You can add IPSec over the tunnel or you can ensure that you use encrypted streams across the tunnel at all times (SSL, ssh, etc).\n","date":"16 October 2015","permalink":"/p/gre-tunnels-with-systemd-networkd/","section":"Posts","summary":"Switching to systemd-networkd for managing your networking interfaces makes things quite a bit simpler over standard networking scripts or NetworkManager.","title":"GRE tunnels with systemd-networkd"},{"content":"","date":null,"permalink":"/tags/selinux/","section":"Tags","summary":"","title":"Selinux"},{"content":"The blog posts have slowed down a bit lately because I\u0026rsquo;ve been heads down on a security project at work. I\u0026rsquo;m working with people in the OpenStack community to create a new Ansible role called openstack-ansible-security. The role aims to improve host security by using hardening standards to improve the configuration of various parts of the operating system.\nThis means applying security hardening to Ubuntu 14.04 systems since that\u0026rsquo;s the only host operating system supported by openstack-ansible at the moment. I have plenty of experience with securing Red Hat-based systems like Red Hat Enteprise Linux, CentOS and Fedora; but Ubuntu is new territory entirely. The rest of this post is full of lessons learned along the way.\nSearching for hardening standards #Finding a complete hardening standard for Ubuntu 14.04 is challenging. The Center for Internet Security offers Ubuntu security benchmarks with two big caveats:\nThere are very few controls to apply (relative to what\u0026rsquo;s available for RHEL) The terms of use are highly restrictive (no derivative works allowed) With that idea off the table, I examined the other options that meet Requirement 2.2 of PCI-DSS 3.1 [PDF]. Anther choice was ISO 27002, but it\u0026rsquo;s not terribly specific or easy to automate with scripts. 
The same goes for NIST 800-53.\nAfter plenty of searching, the decision was made to go forth with the Security Technical Implementation Guide (STIG) from the Defense Information Systems Agency (DISA) (part of the US Department of Defense). The STIGs aren\u0026rsquo;t licensed and they\u0026rsquo;re in the public domain. The only downside is that the closest STIG for use with Ubuntu 14.04 is the RHEL 6 STIG.\nUsing the RHEL 6 STIG meant that plenty of things will need to be translated for the different tools, configuration files, and package names that come with Ubuntu. It was frustrating to search all over for a hardening standard that applies well to Ubuntu and comes with decent auditing tools, but this was the best we could find.\nAutomatically starting daemons #The standard Ubuntu and Debian practice of automatically starting daemons has perplexed me before and it still continues to do so. Starting a daemon before I\u0026rsquo;ve had a chance to configure it makes little sense. The main argument is that the daemons come up with a highly secure configuration, so starting it automatically shouldn\u0026rsquo;t be a big deal. I\u0026rsquo;d prefer to install a package, have a look at the configuration, alter the configuration, and then start the daemon. Also, it had better not start after a reboot unless I explicitly ask it to do so.\nThere are plenty of examples where automatically starting a daemon with its default configuration is a bad idea. Take the postfix package as an example. If you install the package in non-interactive mode (as Ansible does by default), postfix will come online wth the following configuration option set:\ninet_interfaces = all Since Ubuntu doesn\u0026rsquo;t come with a firewall enabled by default, your postfix server is listening on all interfaces for mail immediately. The mynetworks configuration should prevent relaying, but any potential vulnerabilities in your postfix daemon are exposed to the network without your consent. I would prefer to configure postfix first before I ever allow it to run on my server.\nVerifying packages #Say what you will about RPM packages and the rpm command, but the verification portions of the rpm command are quite helpful. Here\u0026rsquo;s an example of verifying the aide RPM in Fedora:\n# rpm -Vv aide ......... c /etc/aide.conf ......... c /etc/logrotate.d/aide ......... /usr/sbin/aide ......... /usr/share/doc/aide ......... d /usr/share/doc/aide/AUTHORS ......... d /usr/share/doc/aide/COPYING ......... d /usr/share/doc/aide/ChangeLog ......... d /usr/share/doc/aide/NEWS ......... d /usr/share/doc/aide/README ......... d /usr/share/doc/aide/README.quickstart ......... /usr/share/doc/aide/contrib ......... d /usr/share/doc/aide/contrib/aide-attributes.sh ......... d /usr/share/doc/aide/contrib/bzip2.sh ......... d /usr/share/doc/aide/contrib/gpg2_check.sh ......... d /usr/share/doc/aide/contrib/gpg2_update.sh ......... d /usr/share/doc/aide/contrib/gpg_check.sh ......... d /usr/share/doc/aide/contrib/gpg_update.sh ......... d /usr/share/doc/aide/contrib/sshaide.sh ......... d /usr/share/doc/aide/manual.html ......... d /usr/share/man/man1/aide.1.gz ......... d /usr/share/man/man5/aide.conf.5.gz ......... /var/lib/aide ......... /var/log/aide If the verification finds that nothing in the package has changed, it won\u0026rsquo;t print anything. I\u0026rsquo;ve added the -v here to ensure that everything is printed to the console. In the output, you can see that everything is checked. 
That includes configuration files, log directories, libraries, and documentation. If I change the content of the aide.conf by adding a comment, I see that change:\n# echo \u0026#34;# Comment\u0026#34; \u0026gt;\u0026gt; /etc/aide.conf # rpm -V aide S.5....T. c /etc/aide.conf The 5 denotes that the MD5 checksum on the file has changed since the package was installed. What happens if I change the owner, group, and mode of the aide.conf?\n# chown major:major /etc/aide.conf # rpm -V aide S.5..UGT. c /etc/aide.conf Now I have a UG there that denotes a user/group ownership change. Similar messages appear for changes to the permissions on files or directories. The restorecon command even lets you figure out when SELinux contexts have changed. If you set a file to have the wrong ownership or permission, one rpm command gets you back to normal:\n# rpm --setperms --setugids aide On the Ubuntu side, you can use the debsums package to help with some verification:\n# debsums aide /usr/bin/aide OK /usr/share/doc/aide/NEWS.Debian.gz OK /usr/share/doc/aide/changelog.Debian.gz OK ... # debums aide-common /usr/bin/aide-attributes OK /usr/bin/aide.wrapper OK /usr/sbin/aideinit OK ... But wait — where are the configuration files? Where are the log and library directories? If you type these commands on an Ubuntu system, you\u0026rsquo;ll see that the configuration files and directories aren\u0026rsquo;t checked. In addition, there\u0026rsquo;s not a method for querying whether a particular file in a package has changed ownership or has had its mode changed. There\u0026rsquo;s also no option to restore the right permissions and ownership after an errant chown -R or chmod -R.\nManaging AIDE #The AIDE package is critical for secure deployments since it helps administrators monitor for file integrity on a regular basis. However, Ubuntu ships with some interesting configuration files and wrappers for AIDE.\nOne of the unique configuration files is this one:\n# cat /etc/aide/aide.conf.d/99_aide_root / Full This causes AIDE to wander all over the system, indexing all types of files. It\u0026rsquo;s best to limit AIDE to a small number of directories whenever possible so that the AIDE runs complete quickly and the database file remains relatively small. Plenty of disk I/O can be used during AIDE runs, so it\u0026rsquo;s best to limit the scope.\nAlso, trying to initialize the database provides an unhelpful error:\n# aide --init Couldn\u0026#39;t open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db.new for writing That path doesn\u0026rsquo;t exist, and I\u0026rsquo;m confused because I did pass a parameter to aide. Long story short, you must use the aideinit command to initialize the aide database. That\u0026rsquo;s actually a bash script which then calls on aide.wrapper (another bash script) to actually run the aide binary for you. Better yet, aideinit is in /usr/sbin while aide.wrapper is in /usr/bin. This leads to plenty of confusion.\nLinux Security Modules #It\u0026rsquo;s possible to run SELinux on Ubuntu, but the policies aren\u0026rsquo;t as well maintained as they are on other distributions. AppArmor is the recommended LSM on Ubuntu, but it doesn\u0026rsquo;t provide the granularity of SELinux. For example, SELinux confines almost every single process on a minimal Fedora system, but AppArmor confines almost nothing on a minimal Ubuntu-based system. 
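You can see this difference for yourself with a rough check on each platform (assuming the standard ps and aa-status tools are present):

# Fedora/RHEL: print the SELinux domain for every running process
ps -eZ

# Ubuntu: list AppArmor profiles and the processes confined in enforce mode
aa-status

On the Fedora side nearly every daemon lands in its own domain (httpd_t, sshd_t, and so on), while aa-status on a minimal Ubuntu install usually reports only a handful of confined processes.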
AppArmor policies aren\u0026rsquo;t terribly restrictive and it\u0026rsquo;s possible to work around them due to their reliance on path names.\nFortunately, both LSM\u0026rsquo;s provide decent coverage with virtual machines and containers (using libvirt\u0026rsquo;s sVirt capability).\nSummary #The upside is that there is plenty of room for security improvements, especially around usability, in Ubuntu. Ubuntu-centric hardening standards are difficult to find and challenging to apply. Every distribution has its quirks and differences, but it seems like securing Ubuntu comes with more unusual hoops to jump through relative to Red Hat-based distributions, OpenSUSE, and even Arch.\nI plan to open some bugs for some of these smaller issues in the coming days. However, some of the larger philosophical issues (like automatically starting daemons) will be tougher to tackle.\n","date":"14 October 2015","permalink":"/p/what-i-learned-while-securing-ubuntu/","section":"Posts","summary":"The blog posts have slowed down a bit lately because I\u0026rsquo;ve been heads down on a security project at work.","title":"What I learned while securing Ubuntu"},{"content":" Earlier today, I wrote a post about my first thoughts on the Supermicro 5028D-T4NT server. The 10Gb interfaces on the server came up with the names eth0 and eth1. That wasn\u0026rsquo;t what I expected. There\u0026rsquo;s tons of detail on the problem in the blog post as well as the Github issue.\nKay Sievers gave a hint about how to adjust the interfacing naming in a more granular way than simply disabling the predictable network names. The documentation on .link files is quite helpful. Skip to the NamePolicy= section under [Link] and look the options there.\nLooking back to another post I wrote about predictable device naming in systemd, we can see how these names fit. In my case, I\u0026rsquo;d like to have the network device names enp3s0f0 and enp3s0f1 instead of eth0 and eth1.\nHere\u0026rsquo;s the file I created:\n# cat /etc/systemd/network/10gb.link [Match] Driver=ixgbe [Link] NamePolicy=path The interfaces came up with the expected names after a reboot:\n# ip link 6: enp3s0f0: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c:c4:7a:75:91:c8 brd ff:ff:ff:ff:ff:ff 7: enp3s0f1: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c:c4:7a:75:91:c9 brd ff:ff:ff:ff:ff:ff That will be my workaround until something can be fixed in the server\u0026rsquo;s firmware itself or in systemd.\nPhoto credit: Wikimedia Commons\n","date":"29 September 2015","permalink":"/p/customizing-systemds-network-device-names/","section":"Posts","summary":"Earlier today, I wrote a post about my first thoughts on the Supermicro 5028D-T4NT server.","title":"Customizing systemd’s network device names"},{"content":"I\u0026rsquo;ve recently moved over to Rackspace\u0026rsquo;s OpenStack Private Cloud team and the role is full of some great challenges. One of those challenges was figuring out a home lab for testing.\nThe search #My first idea was to pick up some lower-power machines that would give me some infrastructure at a low price with a low power bill as well. I found some Dell Optiplex 3020\u0026rsquo;s on Newegg with Haswell i3\u0026rsquo;s that came in at a good price point. 
In addition, they delivered the virtualization extensions that I needed without a high TDP.\nOnce I started talking about my search on Twitter, someone piped in with a suggestion:\n@majorhayden what are you planning? If you have to replace any server, take a look into this http://t.co/W6dcDqow5l I'd love to own one !! \u0026mdash; Sergio Galvan (@sgmac) September 19, 2015 Supermicro, eh? I\u0026rsquo;ve had great success with two Supermicro boxes from Silicon Mechanics (and I can\u0026rsquo;t say enough good things about both companies) in my colocation environment. I decided to take a closer look at the Supermicro 5028D-TN4T.\nThere\u0026rsquo;s a great review on AnandTech about the Supermicro 5028D-TN4T. It gets plenty of praise for packing a lot of advanced features into a small, energy-efficient server. AnandTech found that the idle power draw was around 30 watts and as low as 27 watts in some cases. I haven\u0026rsquo;t tested it with my Kill A Watt yet, but I intend to do so later this week.\nInitial thoughts #This chassis is small. I snapped a quick photo for some folks who were asking about it on Twitter:\n@claco @sgmac Relatively small. That's an X1 Carbon in the background as a side reference. pic.twitter.com/EJ8ef6wl1d \u0026mdash; Major Hayden (@majorhayden) September 25, 2015 I\u0026rsquo;ll have better pictures soon in a more detailed review. If you\u0026rsquo;re itching for more photos now, head on over to the AnandTech article I mentioned earlier.\nInstalling the RAM was a piece of cake, but I did need to hold a fan shroud out of the way as I installed some of them. There are three spots for installing SSD drives: one for an M.2 SATA drive and two 2.5″ drive spots. Routing the cables to the SSD drives is quite easy, but you will have to clip a zip tie or two (carefully).\nThe IPMI is fantastic, as expected. If you\u0026rsquo;ve ever used other Supermicro servers with built-in IPMI, then you\u0026rsquo;ll recognize the interface. You have full control over power, fans, and serial output. In addition, the standard iKVM interface is there so you can view the graphical console remotely, attach disks over the network, and power cycle the server. The IPMI was configured to use DHCP out of the box.\nThe fan noise is a bit higher than I\u0026rsquo;d like during boot, but it\u0026rsquo;s nothing like your average 1U/2U server. It\u0026rsquo;s louder than my Optiplex 3020 (which is whisper silent) but much quieter than the ASA 5520. The system is very quiet once it finishes booting and it settles down.\nLinux fun #As expected, everything worked fine in Linux - except the 10Gb interfaces. It has a X557 controller for the dual 10Gb interfaces:\n# lspci | grep Eth 03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T 03:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T 05:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 05:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) The downside here is that the X557 PHY ID wasn\u0026rsquo;t added until Linux 4.2. However, I upgraded the server from Fedora 22 to 23 (and picked up Linux 4.2.1 along the way), and everything worked.\nThe onboard 1Gb interfaces showed up as eno1 and eno2, as expected, but the 10Gb cards showed up as eth0 and eth1. If you\u0026rsquo;ve read my post on systemd\u0026rsquo;s predictable interface named, you\u0026rsquo;ll notice this is a little unpredictable. 
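The property dump below comes straight out of udev's device database; you can reproduce it for any interface on your own hardware by grepping udevadm's export output (the eth0 pattern is just an example):

# udevadm info -e | grep -A 10 '^P.*eth0'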
The 10Gb interfaces seem to come up as eno1 and eno2 in udev, but that won\u0026rsquo;t work since the onboard I350 ethernet ports already use those names:\nP: /devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/eth0 E: DEVPATH=/devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/eth0 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=Ethernet Connection X552/X557-AT 10GBASE-T E: ID_MODEL_ID=0x15ad E: ID_NET_DRIVER=ixgbe E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME=eno1 E: ID_NET_NAME_MAC=enx0cc47a7591c8 E: ID_NET_NAME_ONBOARD=eno1 E: ID_NET_NAME_PATH=enp3s0f0 E: ID_OUI_FROM_DATABASE=Super Micro Computer, Inc. E: ID_PATH=pci-0000:03:00.0 E: ID_PATH_TAG=pci-0000_03_00_0 E: ID_PCI_CLASS_FROM_DATABASE=Network controller E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller E: ID_VENDOR_FROM_DATABASE=Intel Corporation E: ID_VENDOR_ID=0x8086 E: IFINDEX=4 E: INTERFACE=eth0 E: SUBSYSTEM=net E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/eno1 E: TAGS=:systemd: E: USEC_INITIALIZED=7449982 I opened up a Github issue for systemd and it\u0026rsquo;s getting some attention. We\u0026rsquo;ll hopefully see it fixed soon.\nMore to come #Keep an eye out for a more detailed review once I start throwing some OpenStack workloads on the Supermicro server. I\u0026rsquo;ll also take some more detailed photos and share the additional parts I added to my server.\n","date":"28 September 2015","permalink":"/p/first-thoughts-linux-on-the-supermicro-5028d-t4nt/","section":"Posts","summary":"I\u0026rsquo;ve recently moved over to Rackspace\u0026rsquo;s OpenStack Private Cloud team and the role is full of some great challenges.","title":"First thoughts: Linux on the Supermicro 5028D-TN4T"},{"content":"If you\u0026rsquo;re running Fedora 22 and you\u0026rsquo;ve recently updated to systemd-219-24.fc22, you might see errors like these:\n# systemctl restart postfix Failed to restart postfix.service: Access denied Your audit logs will have entries like these:\ntype=USER_AVC msg=audit(1442602150.292:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=\u0026#39;avc: denied { start } for auid=n/a uid=0 gid=0 path=\u0026#34;/run/systemd/system/session-4.scope\u0026#34; cmdline=\u0026#34;/usr/lib/systemd/systemd-logind\u0026#34; scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=service exe=\u0026#34;/usr/lib/systemd/systemd\u0026#34; sauid=0 hostname=? addr=? terminal=?\u0026#39; type=USER_AVC msg=audit(1442602150.437:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=\u0026#39;avc: denied { start } for auid=n/a uid=0 gid=0 path=\u0026#34;/usr/lib/systemd/system/user@.service\u0026#34; cmdline=\u0026#34;/usr/lib/systemd/systemd-logind\u0026#34; scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:object_r:systemd_unit_file_t:s0 tclass=service exe=\u0026#34;/usr/lib/systemd/systemd\u0026#34; sauid=0 hostname=? addr=? terminal=?\u0026#39; type=USER_AVC msg=audit(1442602150.440:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=\u0026#39;avc: denied { start } for auid=n/a uid=0 gid=0 path=\u0026#34;/run/systemd/system/session-4.scope\u0026#34; cmdline=\u0026#34;/usr/lib/systemd/systemd-logind\u0026#34; scontext=system_u:system_r:systemd_logind_t:s0 There\u0026rsquo;s a very active bug under review to get it fixed. 
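If you want to confirm you're hitting the same problem before applying the workaround, you can pull the matching denials out of the audit log (assuming auditd is running, which it is by default on Fedora):

# ausearch -m USER_AVC -ts recent

Results that mention systemd_logind_t and systemd_unit_file_t like the entries above point at this same bug.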
As a workaround, you can re-execute systemd with the following command:\nsystemctl daemon-reexec That should allow you to stop, start, and restart services properly again. Also, you\u0026rsquo;ll be able to switch runlevels for reboots and shutdowns.\nKeep an eye on the bug for more details as they develop. Kudos to Kevin Fenzi for the workaround!\n","date":"18 September 2015","permalink":"/p/systemd-in-fedora-22-failed-to-restart-service-access-denied/","section":"Posts","summary":"If you\u0026rsquo;re running Fedora 22 and you\u0026rsquo;ve recently updated to systemd-219-24.","title":"systemd in Fedora 22: Failed to restart service: Access Denied"},{"content":"","date":null,"permalink":"/tags/firewall/","section":"Tags","summary":"","title":"Firewall"},{"content":"","date":null,"permalink":"/tags/router/","section":"Tags","summary":"","title":"Router"},{"content":" Although Time Warner Cable is now Spectrum and wide-dhcpv6 is quite old, this post is still what I\u0026rsquo;m using today (in 2019)!\nI\u0026rsquo;ve written about how to get larger IPv6 subnets from Time Warner Cable\u0026rsquo;s Road Runner service on a Mikrotik router before, but I\u0026rsquo;ve converted to using a Linux server as my router for my home. Getting the larger /56 IPv6 subnet is a little tricky and it\u0026rsquo;s not terribly well documented.\nMy network #My Linux router has two bridges, br0 and br1, that handle WAN and LAN traffic respectively. This is a fairly simple configuration.\n+-------------------+ | | +-----------+ | | +----------+ |Cable modem+---+ br0 br1 +-----+LAN switch| +-----------+ | | +----------+ | Linux router | +-------------------+ Ideally, I\u0026rsquo;d like to have a single address assigned to br0 so that my Linux router can reach IPv6 destinations. I\u0026rsquo;d also like a /64 assigned to the br1 interface so that I can distribute addresses from that subnet to devices on my LAN.\nGetting DHCPv6 working #The wide-dhcpv6 package provides a DHCPv6 client and also takes care of assigning some addresses for you. Installing it is easy with dnf:\ndnf install wide-dhcpv6 We will create a new configuration file at /etc/wide-dhcpv6/dhcp6c.conf:\ninterface br0 { send ia-pd 1; send ia-na 1; }; id-assoc na 1 { }; id-assoc pd 1 { prefix ::/56 infinity; prefix-interface br0 { sla-id 1; sla-len 8; }; prefix-interface br1 { sla-id 2; sla-len 8; }; prefix-interface vlan1 { sla-id 3; sla-len 8; }; }; If this configuration file makes sense to you without explanation, I\u0026rsquo;m impressed. Let\u0026rsquo;s break it up into pieces to understand it.\nThe first section with interface br0 specifies that we want to do our DHCPv6 requests on the br0 interface. The configuration lines inside the curly braces says we want to specify a prefix delegation (the IA_PD DHCPv6 option) and we also want a stateful (SLAAC) address assigned on br0 (the IA_NA DHCPv6 option). These are just simple flags that tell the upstream DHCPv6 server that we want to specify a particular prefix size and that we also want a single address (via SLAAC) for our external interface.\nThe id-assoc na 1 section specifies that we want to accept the default SLAAC address provided by the upstream network device.\nThe id-assoc pd 1 section gives the upstream DHCPv6 server a hint that we really want a /56 block of IPv6 addresses. The next three sections give our DHCPv6 client an idea of how we want addresses configured on our internal network devices. 
The three interfaces in each prefix-interface section will receive a different block (noted by the sla-id increasing by one each time). Also, the block size we intend to assign is a /64 (sla-len is 8, which means we knock 8 bits off a /56 and end up with a /64). Don\u0026rsquo;t change your sla-id after you set it. That will cause the DHCPv6 client to move your /64 address blocks around to a different interface.\nStill with me? This stuff is really confusing and documentation is sparse.\nStart the DHCPv6 client and ensure it comes up at boot:\nsystemctl enable dhcp6c systemctl start dhcp6c Run ip addr and look for IPv6 blocks configured on each interface. In my case, br0 got a single address, and the other interfaces received unique /64\u0026rsquo;s.\nTelling the LAN about IPv6 #The router is working now, but we need to tell our devices on the LAN that we have some IPv6 addresses available. You have different options for this, such as dnsmasq or radvd, but we will use radvd here:\ndnf -y install radvd If you open /etc/radvd.conf, you\u0026rsquo;ll notice a helpful comment block at the top with a great example configuration. I only want to announce IPv6 on my br1 interface, so I\u0026rsquo;ll add this configuration block:\ninterface br1 { AdvSendAdvert on; MaxRtrAdvInterval 30; prefix ::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr off; }; }; You don\u0026rsquo;t actually need to specify the IPv6 prefix since radvd is smart enough to examine your interface and discover the IPv6 subnet assigned to it. This configuration says we will send router advertisements, let systems on the network choose their own addresses, and we will advertise those addresses as soon as the link comes up.\nLet\u0026rsquo;s start radvd and ensure it comes up at boot:\nsystemctl enable radvd systemctl start radvd Connect a machine to your LAN and you should receive an IPv6 address shortly after the link comes up!\nTroubleshooting #If you\u0026rsquo;re having trouble getting an IPv6 address, double-check your iptables rules. You will need to ensure you\u0026rsquo;re allowing UDP 546 into your external interface. Here are some examples you can use:\n# If you\u0026#39;re using firewalld firewall-cmd --add-port=546/udp firewall-cmd --add-port=546/udp --permanent # If you\u0026#39;re using bare ip6tables ip6tables -A INPUT -p udp -m udp --dport 546 -j ACCEPT ","date":"11 September 2015","permalink":"/p/time-warner-road-runner-linux-and-large-ipv6-subnets/","section":"Posts","summary":"Although Time Warner Cable is now Spectrum and wide-dhcpv6 is quite old, this post is still what I\u0026rsquo;m using today (in 2019)!","title":"Time Warner Road Runner, Linux, and large IPv6 subnets"},{"content":" I\u0026rsquo;ve decided to start a series of posts called \u0026ldquo;Chronicles of SELinux\u0026rdquo; where I hope to educate more users on how to handle SELinux denials with finesse rather than simply disabling it entirely. To kick things off, I\u0026rsquo;ll be talking about dealing with web content in the first post.\nFirst steps #If you\u0026rsquo;d like to follow along, simply hop onto a system running Fedora 21 (or later), CentOS 7 or Red Hat Enterprise Linux 7. We need SELinux in enforcing mode on the host, so be sure to check the status with getenforce. Depending on what getenforce returns, you\u0026rsquo;ll need to make adjustments:\nEnforcing: No adjustments needed - you\u0026rsquo;re all set! 
Permissive: Run setenforce 1 and adjust SELinux configuration file (see below) Disabled: Adjust the SELinux configuration file and reboot (see below) To enable enforcing mode in the SELinux configuration file, edit /etc/selinux/config and ensure your SELINUX line has enforcing:\n# This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. SELINUX=enforcing If getenforce returned Disabled earlier, you will need to reboot to get SELinux working. Also be sure that the selinux-policy-targeted package is installed and run fixfiles onboot -B to relabel the system on reboot (thanks to immanetize for the comment).\nLet\u0026rsquo;s install httpd and create a developer user:\n# For Fedora dnf -y install httpd # For CentOS/RHEL yum -y install httpd useradd developer systemctl enable httpd systemctl start httpd On to the guide!\nHosting content in an unique directory #On Red Hat-based systems, httpd expects to find its content in /var/www/html, but some system administrators prefer to have content stored elsewhere on the system. It could be on a SAN or other remote storage, but it could also just be in a different directory to make things easier for the business.\nLet\u0026rsquo;s consider a situation where the web content is hosted from /web/. We can create the directory:\n[root@fedora22 ~]# mkdir -v /web mkdir: created directory \u0026#39;/web\u0026#39; We can edit /etc/httpd/conf/httpd.conf and set our new DocumentRoot:\n# # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot \u0026#34;/web\u0026#34; Let\u0026rsquo;s reload the httpd configuration:\nsystemctl reload httpd And now we can add some amazing web content:\n/web/index.html It\u0026rsquo;s time to test our web server:\n# curl -i localhost/index.html HTTP/1.1 403 Forbidden Date: Thu, 10 Sep 2015 12:54:19 GMT Server: Apache/2.4.16 (Fedora) Content-Length: 219 Content-Type: text/html; charset=iso-8859-1 Oh, come on. What\u0026rsquo;s with this 403 error?\nInvestigating the 403 #The first step for any situation like this is to review some logs. Let\u0026rsquo;s check the logs for httpd:\n[Thu Sep 10 12:55:04.541789 2015] [core:error] [pid 16597] (13)Permission denied: [client ::1:49860] AH00035: access to /index.html denied (filesystem path \u0026#39;/web/index.html\u0026#39;) because search permissions are missing on a component of the path Search permissions are missing? What? Let\u0026rsquo;s check the permissions on our web directory:\n# ls -al /web total 12 drwxr-xr-x. 2 root root 4096 Sep 10 12:53 . dr-xr-xr-x. 19 root root 4096 Sep 10 12:51 .. -rw-r--r--. 1 root root 21 Sep 10 12:54 index.html The httpd user has the ability to get into the directory (o+x is set on /web/) and the httpd user can read the file (o+r is set on /web/index.html). Let\u0026rsquo;s check the system journal just in case:\n# journalctl -n 1 | tail -- Logs begin at Thu 2015-09-10 12:31:37 UTC, end at Thu 2015-09-10 12:55:04 UTC. 
-- Sep 10 12:55:04 fedora22 audit[16597]: \u0026lt;audit-1400\u0026gt; avc: denied { getattr } for pid=16597 comm=\u0026#34;httpd\u0026#34; path=\u0026#34;/web/index.html\u0026#34; dev=\u0026#34;xvda1\u0026#34; ino=524290 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file permissive=0 That\u0026rsquo;s quite a long log line. Let\u0026rsquo;s break it into pieces:\navc: denied { getattr } for pid=16597 comm=\u0026quot;httpd\u0026quot;: httpd tried to do something and was denied path=\u0026quot;/web/index.html\u0026quot; dev=\u0026quot;xvda1\u0026quot; ino=524290: path to the file (index.html) involved in the denial scontext=system_u:system_r:httpd_t:s0: the SELinux context of the httpd process tcontext=unconfined_u:object_r:default_t:s0: the SELinux contect that is actually applied to our index.html tclass=file: the denial came from accessing a file (index.html) permissive=0: we\u0026rsquo;re in enforcing mode, not permissive mode Long story short, when httpd tried to access our /web/index.html file, the httpd process was labeled with httpd_t, but the kernel found that the HTML file was labeled with default_t. The httpd process (labeled with httpd_t) isn\u0026rsquo;t allowed to read files that are labeled as default_t, so the access is denied.\nFixing it the right way #Since we know what SELinux expects for this file (from the log line in the journal), we can apply the right context and re-test. The chcon command has a handy argument that allows you to reference a file or directory, and apply the contexts from there. Since we know that /var/www/html has the right contexts already, we can use it as a reference:\n# chcon -v -R --reference=/var/www/html /web changing security context of \u0026#39;/web/index.html\u0026#39; changing security context of \u0026#39;/web\u0026#39; Now we see some different contexts on /web:\n# ls -alZ /web/ total 12 drwxr-xr-x. 2 root root system_u:object_r:httpd_sys_content_t:s0 4096 Sep 10 13:19 . dr-xr-xr-x. 19 root root system_u:object_r:root_t:s0 4096 Sep 10 13:19 .. -rw-r--r--. 1 root root system_u:object_r:httpd_sys_content_t:s0 21 Sep 10 13:19 index.html Let\u0026rsquo;s test again:\n# curl -I localhost/index.html HTTP/1.1 403 Forbidden Date: Thu, 10 Sep 2015 13:21:22 GMT Server: Apache/2.4.16 (Fedora) Content-Type: text/html; charset=iso-8859-1 Darn! What\u0026rsquo;s in the httpd logs?\n[Thu Sep 10 13:21:22.267719 2015] [authz_core:error] [pid 16593] [client ::1:49861] AH01630: client denied by server configuration: /web/index.html Ah, we cleared the SELinux problem but now httpd is upset. Just below the DocumentRoot line that we edited earlier, look for two Directory blocks. Change /var/www/ and /var/www/html to /web in those blocks. Reload the httpd configuration and try once more:\n# systemctl reload httpd # curl -I localhost/index.html HTTP/1.1 200 OK Date: Thu, 10 Sep 2015 13:25:16 GMT Server: Apache/2.4.16 (Fedora) Last-Modified: Thu, 10 Sep 2015 13:19:47 GMT ETag: \u0026#34;15-51f6474064d50\u0026#34; Accept-Ranges: bytes Content-Length: 21 Content-Type: text/html; charset=UTF-8 Success!\nLong term fix #The chcon method is good for fixing one-off issues and for testing, but we need a good long term fix. SELinux has some file contexts already configured for certain directories, but not for our custom web directory. You can examine the defaults here:\n# semanage fcontext -l | grep ^/var/www/html /var/www/html(/.*)?/sites/default/files(/.*)? 
all files system_u:object_r:httpd_sys_rw_content_t:s0 /var/www/html(/.*)?/sites/default/settings\\.php regular file system_u:object_r:httpd_sys_rw_content_t:s0 /var/www/html(/.*)?/uploads(/.*)? all files system_u:object_r:httpd_sys_rw_content_t:s0 /var/www/html(/.*)?/wp-content(/.*)? all files system_u:object_r:httpd_sys_rw_content_t:s0 /var/www/html/[^/]*/cgi-bin(/.*)? all files system_u:object_r:httpd_sys_script_exec_t:s0 /var/www/html/cgi/munin.* all files system_u:object_r:munin_script_exec_t:s0 /var/www/html/configuration\\.php all files system_u:object_r:httpd_sys_rw_content_t:s0 /var/www/html/munin(/.*)? all files system_u:object_r:munin_content_t:s0 /var/www/html/munin/cgi(/.*)? all files system_u:object_r:munin_script_exec_t:s0 /var/www/html/owncloud/data(/.*)? all files system_u:object_r:httpd_sys_rw_content_t:s0 SELinux\u0026rsquo;s tools have a concept of equivalency. This allows you to say that one directory is equivalent to another one in the long term. We already used chcon to apply contexts with a reference to a directory with valid contexts, but this equivalency concept gives us a longer term fix. Here\u0026rsquo;s the command to use:\nsemanage fcontext --add --equal /var/www /web If we break this down, we\u0026rsquo;re saying we want to add a new file context where /web is equal to /var/www. This means we want the same SELinux contexts applied in the same places and want them treated equally. After running the semanage command, let\u0026rsquo;s make an index2.html file to test:\n/web/index2.html # curl -I localhost/index2.html HTTP/1.1 200 OK Date: Thu, 10 Sep 2015 13:35:24 GMT Server: Apache/2.4.16 (Fedora) Last-Modified: Thu, 10 Sep 2015 13:34:11 GMT ETag: \u0026#34;15-51f64a78266c8\u0026#34; Accept-Ranges: bytes Content-Length: 21 Content-Type: text/html; charset=UTF-8 Great! We didn\u0026rsquo;t have to use chcon this time around because we configured /web as an equivalent directory to /var/www. Let\u0026rsquo;s double check the contexts:\n# ls -alZ /web total 16 drwxr-xr-x. 2 root root unconfined_u:object_r:httpd_sys_content_t:s0 4096 Sep 10 13:34 . dr-xr-xr-x. 19 root root system_u:object_r:root_t:s0 4096 Sep 10 13:33 .. -rw-r--r--. 1 root root unconfined_u:object_r:httpd_sys_content_t:s0 21 Sep 10 13:34 index2.html -rw-r--r--. 1 root root unconfined_u:object_r:httpd_sys_content_t:s0 21 Sep 10 13:33 index.html Perfect! We now have all of the security benefits of SELinux in a completely custom web directory.\n","date":"10 September 2015","permalink":"/p/chronicles-of-selinux-dealing-with-web-content-in-unusual-directories/","section":"Posts","summary":"I\u0026rsquo;ve decided to start a series of posts called \u0026ldquo;Chronicles of SELinux\u0026rdquo; where I hope to educate more users on how to handle SELinux denials with finesse rather than simply disabling it entirely.","title":"Chronicles of SELinux: Dealing with web content in unusual directories"},{"content":"I\u0026rsquo;ve had a great time talking to people about my \u0026ldquo;Be an inspiration, not an impostor\u0026rdquo; talk that I delivered in August. I spoke to audiences at Fedora Flock 2015, Texas Linux Fest, and at Rackspace. The biggest lesson I learned is that delivering talks is exhausting!\nFrequently Asked Questions #Someone asked a good one at Fedora Flock:\nHow do you deal with situations where you are an impostor for a reason you can\u0026rsquo;t change? 
For example, if you\u0026rsquo;re the only woman in a male group or you\u0026rsquo;re the youngest person in a mostly older group?\nI touched on this a bit in the presentation, but it\u0026rsquo;s a great question. This is one of those times where you have to persevere and overcome the things you can\u0026rsquo;t change by improving in all of the areas where you can change.\nFor example, if you\u0026rsquo;re the youngest in the group, find ways to relate to the older group. Find out what they value and what they don\u0026rsquo;t. If they prefer communication in person over electronic methods, change your communication style and medium. However, you shouldn\u0026rsquo;t have to change your complete identity just for the rest of the group. Just make an adjustment so that you get the right response.\nAlso, impostor syndrome isn\u0026rsquo;t restricted to a particular gender or age group. I\u0026rsquo;ve seen it in both men and women in equal amounts, and I\u0026rsquo;ve even seen it in people with 40 years of deep experience. It affects us all from time to time, and we need structured frameworks (like OODA) to fight it.\nHow do I battle impostor syndrome without becoming cocky and overconfident?\nThe opposite of impostor syndrome, often called the Dunning-Kruger Effect, is just as dangerous. Go back the observe and orient steps of the OODA loop (see the slides toward the end of the presentation) to be sure that you\u0026rsquo;re getting good feedback from your peers and leaders. Back up your assertions with facts and solid reasoning to avoid cognitive bias. Bounce those ideas and assertions off the people you trust.\nWhen I make an assertion or try to get someone else to change what they\u0026rsquo;re doing, I\u0026rsquo;ll often end with \u0026ldquo;Am I off-base here?\u0026rdquo; or \u0026ldquo;Let me know if I\u0026rsquo;m on the right track\u0026rdquo; to give others an opportunity to provide criticism. The added benefit is that these phrases could drag someone with impostor syndrome out of the shadows and into the discussion.\nThat leads into another good question I received:\nHow can we reduce impostor syndrome in open source communities as a whole?\nThe key here is to find ways to get people involved, and then get them more involved over time. If someone is interested in participating but they aren\u0026rsquo;t sure how to start, come up with ways they can get involved in less-formal ways. This could be through bug triaging, fixing simple bugs, writing documentation, or simply joining some IRC meetings. I\u0026rsquo;ve seen several communities go through a process of tagging bugs with \u0026ldquo;easy\u0026rdquo; tags so that beginners can try to fix them.\nAnother more direct option is to call upon people to do certain things in the community and assign them a mentor to help them do it. If someone isn\u0026rsquo;t talking during an IRC meeting or piping up on a mailing list, call them out - gently. It could be something as simple as: \u0026ldquo;Hey, [name], we know you\u0026rsquo;re knowledgeable in [topic]. Do you think this is a good idea?\u0026rdquo; Do that a few times and you\u0026rsquo;ll find their confidence to participate will rise quickly.\nFollow-ups #Insides vs. outsides #Someone stopped me outside the talk room at Texas Linux Fest and said a leader at his church summarized impostor syndrome as \u0026ldquo;comparing your insides to someone else\u0026rsquo;s outsides\u0026rdquo;. That led me to do some thinking.\nEach and every one of us has strengths and weaknesses. 
I\u0026rsquo;d wager that we all have at least once vice (I have plenty), and there are things about ourselves that we don\u0026rsquo;t like. Everyone has insecurities about something in their life, whether it\u0026rsquo;s personal or professional. These are things we can\u0026rsquo;t see from looking at someone on the outside. We\u0026rsquo;re taking our laundry list of issues and comparing it to something we think is close to perfection.\nDon\u0026rsquo;t do that. It\u0026rsquo;s on my last slide in the presentation.\nYou know at least one thing someone else wants to know #After doing the talk at Rackspace, I was pulled into quite a few hallway conversations and I received feedback about my presentation. In addition, many people talked about their desire to get up and do a talk, too. What I heard most often was: \u0026ldquo;I want to do a talk, but I don\u0026rsquo;t know what to talk about.\u0026rdquo;\nIt reminds me of a post I wrote about writing technical blogs. There is at least one thing you know that someone else wants to know. You might be surprised that the most hit post on my blog is an old one about deleting an iptables rule. Deleting an iptables rule is an extremely basic step in system administration but it\u0026rsquo;s tough to remember how to do it if you don\u0026rsquo;t use the iptables syntax regularly.\nRackspace holds Tech Talk Tuesdays during lunch at our headquarters in San Antonio each week. It\u0026rsquo;s open to Rackers and escorted guests only for now, but our topic list is wide open. Rackers have talked about highly technical topics and they\u0026rsquo;ve also talked about how to brew beer. I\u0026rsquo;ve encouraged my coworkers to think about something within their domain of expertise and deliver a talk on that topic.\nTalk about your qualifications and experience without bragging #You can be humble and talk about your strengths at the same time. They aren\u0026rsquo;t mutually exclusive. It can be a challenge to bring these things up during social settings, especially job interviews. My strategy is to weave these aspects about myself into a story. Humans love stories.\nAs an example, if you\u0026rsquo;re asked about your experience with Linux, tell a short story about a troubleshooting issue from your past and how you solved it. If you\u0026rsquo;re asked about your python development experience, talk about a project you created or a hard problem you solved in someone else\u0026rsquo;s project. Through the story, talk about your thought process when you were solving the problem. Try your best to keep it brief. These stories will keep the other people in the room interested and it won\u0026rsquo;t come off as bragging.\n","date":"2 September 2015","permalink":"/p/impostor-syndrome-talk-faqs-and-follow-ups/","section":"Posts","summary":"I\u0026rsquo;ve had a great time talking to people about my \u0026ldquo;Be an inspiration, not an impostor\u0026rdquo; talk that I delivered in August.","title":"Impostor syndrome talk: FAQs and follow-ups"},{"content":"This post originally appeared on the Fedora Magazine blog.\nOne of my favorite features of Fedora 22 is systemd-networkd and all of the new features that came with it in recent systemd versions. The configuration files are easy to read, bridging is simple, and tunnels are resilient.\nI\u0026rsquo;ve recently started using a small Linux server at home again as a network router and firewall. However, I used systemd-networkd this time and had some great results. 
Let\u0026rsquo;s get started!\nOverview #Our example router in this example has two network interfaces:\neth0: public internet connectivity eth1: private LAN (192.168.3.1/24) We want machines on the private LAN to route their traffic through the router to the public internet via NAT. Also, we want clients on the LAN to get their IP addresses assigned automatically.\nNetwork configuration #All of the systemd-networkd configuration files live within /etc/systemd/network and we need to create that directory:\nmkdir /etc/systemd/network We need to write a network configuration file for our public interface that systemd-networkd can read. Open up /etc/systemd/network/eth0.network and write these lines:\n[Match] Name=eth0 [Network] Address=PUBLIC_IP_ADDRESS/CIDR Gateway=GATEWAY DNS=8.8.8.8 DNS=8.8.4.4 IPForward=yes If we break this configuration file down, we\u0026rsquo;re telling systemd-networkd to apply this configuration to any devices that are called eth0. Also, we\u0026rsquo;re specifying a public IP address and CIDR mask (like /24 or /22) so that the interface can be configured. The gateway address will be added to the routing table. We\u0026rsquo;ve also provided DNS servers to use with systemd-resolved (more on that later).\nI added IPForward=yes so that systemd-networkd will automatically enable forwarding for the interface via sysctl. (That always seems to be the step I forget when I build a Linux router.)\nLet\u0026rsquo;s do the same for our LAN interface. Create this configuration file and store it as /etc/systemd/network/eth1.network:\n[Match] Name=eth1 [Network] Address=192.168.3.1/24 IPForward=yes We don\u0026rsquo;t need to specify a gateway address here because this interface will be the gateway for the LAN.\nPrepare the services #If we\u0026rsquo;re planning to use systemd-networkd, we need to ensure that it runs instead of traditional network scripts or NetworkManager:\nsystemctl disable network systemctl disable NetworkManager systemctl enable systemd-networkd Also, let\u0026rsquo;s be sure to use systemd-resolved to handle our /etc/resolv.conf:\nsystemctl enable systemd-resolved systemctl start systemd-resolved rm -f /etc/resolv.conf ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf Reboot #We\u0026rsquo;re now set to reboot! It\u0026rsquo;s possible to bring up systemd-networkd without rebooting but I\u0026rsquo;d rather verify with a reboot now than get goosed with a broken network after a reboot later.\nOnce your router is back up, run networkctl and verify that you have routable in the output for both interfaces:\n[root@router ~]# networkctl IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 eth0 ether routable configured 3 eth1 ether routable configured DHCP #Now that both network interfaces are online, we need something to tell our clients about the IP configuration they should be using. There are plenty of good options here, but I prefer dnsmasq. 
It has served me well over the years and it provides some handy features along with DHCP, such as DNS caching, TFTP and IPv6 router announcements.\nLet\u0026rsquo;s install dnsmasq and enable it at boot:\ndnf -y install dnsmasq systemctl enable dnsmasq Open /etc/dnsmasq.conf in your favorite text editor and edit a few lines:\nUncomment dhcp-authoritative This tells dnsmasq that it\u0026rsquo;s the exclusive DHCP server on the network and that it should answer all requests Uncomment interface= and add eth1 on the end (should look like interface=eth1 when you\u0026rsquo;re done) Most ISP\u0026rsquo;s filter DHCP replies on their public networks, but we don\u0026rsquo;t want to take chances here. We need to restrict DHCP to our public interface only. Look for the dhcp-range line and change it to dhcp-range=192.168.3.50,192.168.3.150,12h We\u0026rsquo;re giving clients 12 hour leases on 192.168.3.0/24 Save the file and start dnsmasq:\nsystemctl start dnsmasq Firewall #We\u0026rsquo;re almost done! Now it\u0026rsquo;s time to tell iptables to masquerade any packets from our LAN to the internet. But wait, it\u0026rsquo;s 2015 and we have tools like firewall-cmd to do that for us in Fedora.\nLet\u0026rsquo;s enable masquerading, allow DNS, and allow DHCP traffic. We can then make the state permanent:\nfirewall-cmd --add-masquerade firewall-cmd --add-service=dns --add-service=dhcp firewall-cmd --runtime-to-permanent Testing #Put a client machine on your LAN network and you should be able to ping some public sites from the client:\n[root@client ~]# ping -c 4 icanhazip.com PING icanhazip.com (104.238.141.75) 56(84) bytes of data. 64 bytes from lax.icanhazip.com (104.238.141.75): icmp_seq=1 ttl=52 time=69.8 ms 64 bytes from lax.icanhazip.com (104.238.141.75): icmp_seq=2 ttl=52 time=69.7 ms 64 bytes from lax.icanhazip.com (104.238.141.75): icmp_seq=3 ttl=52 time=69.6 ms 64 bytes from lax.icanhazip.com (104.238.141.75): icmp_seq=4 ttl=52 time=69.7 ms --- icanhazip.com ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3005ms rtt min/avg/max/mdev = 69.659/69.758/69.874/0.203 ms Extras #If you need to adjust your network configuration, just run systemctl restart systemd-networkd afterwards. I\u0026rsquo;ve found that it\u0026rsquo;s quite intelligent about the network devices and it won\u0026rsquo;t reconfigure anything that hasn\u0026rsquo;t changed.\nThe networkctl command is very powerful. Check out the status and lldp functions to get more information about your network devices and the networks they\u0026rsquo;re connected to.\nWhen something goes wrong, look in your systemd journal:\n[root@router ~]# journalctl -u systemd-networkd -- Logs begin at Fri 2015-07-31 01:22:38 UTC, end at Fri 2015-07-31 02:11:24 UTC. -- Jul 31 01:46:14 router systemd[1]: Starting Network Service... Jul 31 01:46:14 router systemd-networkd[286]: Enumeration completed Jul 31 01:46:14 router systemd[1]: Started Network Service. 
Jul 31 01:46:15 router systemd-networkd[286]: eth1 : link configured Jul 31 01:46:15 router systemd-networkd[286]: eth0 : gained carrier Jul 31 01:46:15 router systemd-networkd[286]: eth0 : link configured Jul 31 01:46:16 router systemd-networkd[286]: eth1 : gained carrier ","date":"27 August 2015","permalink":"/p/build-a-network-router-and-firewall-with-fedora-22-and-systemd-networkd/","section":"Posts","summary":"This post originally appeared on the Fedora Magazine blog.","title":"Build a network router and firewall with Fedora 22 and systemd-networkd"},{"content":"Thanks to all of the people who attended my \u0026ldquo;Be an inspiration, not an impostor\u0026rdquo; talk at Texas Linux Fest 2015. Some A/V issues caused my time slot to get squeezed and the audience had to put up with the \u0026ldquo;ludicrous speed\u0026rdquo; version of the presentation.\nThe slides are a little different from the slides at Fedora Flock, but they\u0026rsquo;re mainly the same:\n","date":"22 August 2015","permalink":"/p/slides-from-my-texas-linux-fest-2015-talk/","section":"Posts","summary":"Thanks to all of the people who attended my \u0026ldquo;Be an inspiration, not an impostor\u0026rdquo; talk at Texas Linux Fest 2015.","title":"Slides from my Texas Linux Fest 2015 talk"},{"content":" I talked a bit about systemd\u0026rsquo;s network device name in my earlier post about systemd-networkd and bonding and I received some questions about how systemd rolls through the possible names of network devices to choose the final name. These predictable network device names threw me a curveball last summer when I couldn\u0026rsquo;t figure out how the names were constructed.\nLet\u0026rsquo;s walk through this process.\nWhat\u0026rsquo;s in a name? #Back in the systemd-networkd bonding post, I dug into a dual port Intel network card that showed up in a hotplug slot:\n# udevadm info -e | grep -A 9 ^P.*eth0 P: /devices/pci0000:00/0000:00:03.2/0000:08:00.0/net/eth0 E: DEVPATH=/devices/pci0000:00/0000:00:03.2/0000:08:00.0/net/eth0 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet OCP Server Adapter X520-2) E: ID_MODEL_ID=0x10fb E: ID_NET_DRIVER=ixgbe E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME_MAC=enxa0369f2cec90 E: ID_NET_NAME_PATH=enp8s0f0 E: ID_NET_NAME_SLOT=ens9f0 This udev database dump shows that it came up with a few different names for the network interface:\nID_NET_NAME_MAC=enxa0369f2cec90 ID_NET_NAME_PATH=enp8s0f0 ID_NET_NAME_SLOT=ens9f0 Where do these names come from? 
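Before digging into any source, you can ask udev's net_id builtin to generate the candidate names for a device on a live system (the interface path here is just an example):

# udevadm test-builtin net_id /sys/class/net/eth0

It prints the same ID_NET_NAME_* properties you see in the database dumps, which makes it handy for checking a new machine.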
We can dig into systemd\u0026rsquo;s source code to figure out the origin of the names and which one is selected as the final choice.\nDown the udev rabbit hole #Let\u0026rsquo;s take a look at src/udev/udev-builtin-net_id.c:\n/* * Predictable network interface device names based on: * - firmware/bios-provided index numbers for on-board devices * - firmware-provided pci-express hotplug slot index number * - physical/geographical location of the hardware * - the interface\u0026#39;s MAC address * * http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames * * Two character prefixes based on the type of interface: * en -- ethernet * sl -- serial line IP (slip) * wl -- wlan * ww -- wwan * * Type of names: * b\u0026lt;number\u0026gt; -- BCMA bus core number * ccw\u0026lt;name\u0026gt; -- CCW bus group name * o\u0026lt;index\u0026gt;[d\u0026lt;dev_port\u0026gt;] -- on-board device index number * s\u0026lt;slot\u0026gt;[f\u0026lt;function\u0026gt;][d\u0026lt;dev_port\u0026gt;] -- hotplug slot index number * x\u0026lt;MAC\u0026gt; -- MAC address * [P\u0026lt;domain\u0026gt;]p\u0026lt;bus\u0026gt;s\u0026lt;slot\u0026gt;[f\u0026lt;function\u0026gt;][d\u0026lt;dev_port\u0026gt;] * -- PCI geographical location * [P\u0026lt;domain\u0026gt;]p\u0026lt;bus\u0026gt;s\u0026lt;slot\u0026gt;[f\u0026lt;function\u0026gt;][u\u0026lt;port\u0026gt;][..][c\u0026lt;config\u0026gt;][i\u0026lt;interface\u0026gt;] * -- USB port number chain So here\u0026rsquo;s where our names actually begin. Ethernet cards will always start with en, but they might be followed by a p (for PCI slots), a s (for hotplug PCI-E slots), and o (for onboard cards). Scroll down just a bit more for some examples starting at line 56.\nReal-world examples #We already looked at the hotplug slot naming from Rackspace\u0026rsquo;s OnMetal servers. They show up as ens9f0 and ens9f1. That means they\u0026rsquo;re on a hotplug slot which happens to be slot 9. The function indexes are 0 and 1 (for both ports on the Intel 82599ES).\nLinux firewall with a dual-port PCI card #Here\u0026rsquo;s an example of my Linux firewall at home. It\u0026rsquo;s a Dell Optiplex 3020 with an Intel I350-T2 (dual port):\n# udevadm info -e | grep -A 10 ^P.*enp1s0f1 P: /devices/pci0000:00/0000:00:01.0/0000:01:00.1/net/enp1s0f1 E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.1/net/enp1s0f1 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=I350 Gigabit Network Connection (Ethernet Server Adapter I350-T2) E: ID_MODEL_ID=0x1521 E: ID_NET_DRIVER=igb E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME=enp1s0f1 E: ID_NET_NAME_MAC=enxa0369f6e5227 E: ID_NET_NAME_PATH=enp1s0f1 E: ID_OUI_FROM_DATABASE=Intel Corporate And the output from lspci:\n# lspci -s 01:00 01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) This card happens to sit on PCI bus 1 (enp1), slot 0 (s0). Since it\u0026rsquo;s a dual-port card, it has two function indexes (f0 and f1). That leaves me with two predictable names: enp1s0f1 and enp1s0f0.\n1U server with four ethernet ports #Let\u0026rsquo;s grab another example. 
Here\u0026rsquo;s a SuperMicro 1U X9SCA server with four onboard PCI ethernet cards:\n# udevadm info -e | grep -A 10 ^P.*enp2s0 P: /devices/pci0000:00/0000:00:1c.4/0000:02:00.0/net/enp2s0 E: DEVPATH=/devices/pci0000:00/0000:00:1c.4/0000:02:00.0/net/enp2s0 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=82574L Gigabit Network Connection E: ID_MODEL_ID=0x10d3 E: ID_NET_DRIVER=e1000e E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME=enp2s0 E: ID_NET_NAME_MAC=enx00259025963a E: ID_NET_NAME_PATH=enp2s0 E: ID_OUI_FROM_DATABASE=Super Micro Computer, Inc. And here\u0026rsquo;s all four ports in lspci:\n# for i in `seq 2 5`; do lspci -s 0${i}:; done 02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection 03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection 04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection 05:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection These are interesting because they\u0026rsquo;re not all on the same PCI bus. They sit on buses 2-5 in slot 0. There are no function indexes here, so they\u0026rsquo;re named enp2s0 through enp5s0. These aren\u0026rsquo;t true onboard cards, so they\u0026rsquo;re named based on their locations.\nStorage server with onboard ethernet #Here\u0026rsquo;s an example of a server with a true inboard ethernet card:\n$ udevadm info -e | grep -A 11 ^P.*eno1 P: /devices/pci0000:00/0000:00:19.0/net/eno1 E: DEVPATH=/devices/pci0000:00/0000:00:19.0/net/eno1 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=Ethernet Connection I217-V E: ID_MODEL_ID=0x153b E: ID_NET_DRIVER=e1000e E: ID_NET_LABEL_ONBOARD=en Onboard LAN E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME_MAC=enxe03f49b159c0 E: ID_NET_NAME_ONBOARD=eno1 E: ID_NET_NAME_PATH=enp0s25 E: ID_OUI_FROM_DATABASE=ASUSTek COMPUTER INC. And the lspci output:\n$ lspci -s 00:19.0 00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 05) This card has a new name showing up in udev: ID_NET_NAME_ONBOARD. The systemd udev code has some special handling for onboard cards because they usually sit on the main bus. The naming can get a bit ugly because that 19 would need to be converted into hex for the name.\nIf systemd didn\u0026rsquo;t handle onboard cards differently, this card might be named something ugly like enp0s13 (since 19 in decimal becomes 13 in hex). That\u0026rsquo;s really confusing.\nPicking the final name #As we\u0026rsquo;ve seen above, udev makes a big list of names in the udev database. However, there can only be one name in the OS when you try to use the network card.\nLet\u0026rsquo;s wander back into the code. 
this time we\u0026rsquo;re going to take a look in src/udev/net/link-config.c starting at around line 403:\nname_policy) { NamePolicy *policy; for (policy = config-\u0026gt;name_policy; !new_name \u0026amp;\u0026amp; *policy != _NAMEPOLICY_INVALID; policy++) { switch (*policy) { case NAMEPOLICY_KERNEL: respect_predictable = true; break; case NAMEPOLICY_DATABASE: new_name = udev_device_get_property_value(device, \u0026#34;ID_NET_NAME_FROM_DATABASE\u0026#34;); break; case NAMEPOLICY_ONBOARD: new_name = udev_device_get_property_value(device, \u0026#34;ID_NET_NAME_ONBOARD\u0026#34;); break; case NAMEPOLICY_SLOT: new_name = udev_device_get_property_value(device, \u0026#34;ID_NET_NAME_SLOT\u0026#34;); break; case NAMEPOLICY_PATH: new_name = udev_device_get_property_value(device, \u0026#34;ID_NET_NAME_PATH\u0026#34;); break; case NAMEPOLICY_MAC: new_name = udev_device_get_property_value(device, \u0026#34;ID_NET_NAME_MAC\u0026#34;); break; default: break; } } } If we look at the overall case statement, you can see that the first match is the one that takes precedence. Working from top to bottom, udev takes the first match of:\nID_NET_NAME_FROM_DATABASE ID_NET_NAME_ONBOARD ID_NET_NAME_SLOT ID_NET_NAME_PATH ID_NET_NAME_MAC If we go back to our OnMetal example way at the top of the post, we can follow the logic. The udev database contained the following:\nE: ID_NET_NAME_MAC=enxa0369f2cec90 E: ID_NET_NAME_PATH=enp8s0f0 E: ID_NET_NAME_SLOT=ens9f0 The udev daemon would start with ID_NET_NAME_FROM_DATABASE, but that doesn\u0026rsquo;t exist for this card. Next, it would move to ID_NET_NAME_ONBOARD, but that\u0026rsquo;s not present. Next comes ID_NET_NAME_SLOT, and we have a match! The ID_NET_NAME_SLOT entry has ens9f0 and that\u0026rsquo;s the final name for the network device.\nThis loop also handles some special cases. The first check is to see if someone requested for udev to not use predictable naming. We saw this in the systemd-networkd bonding post when the bootloader configuration contained net.ifnames=0. If that kernel command line parameter is present, predictable naming logic is skipped.\nAnother special case is ID_NET_NAME_FROM_DATABASE. Those ports come from udev\u0026rsquo;s internal hardware database. That file only has one item at the moment and it\u0026rsquo;s for a particular Dell iDRAC network interface.\nPerplexed by hex #If the PCI slot numbers don\u0026rsquo;t seem to line up, be sure to read my post from last summer. I ran into a peculiar Dell server with a dual port Intel card on PCI bus 42. The interface ended up with a name of enp66s0f0 and I was stumped.\nThe name enp66s0f0 seems to say that we have a card on PCI bus 66, in slot 0, with multiple function index numbers (for multiple ports). However, systemd does a conversion of PCI slot numbers into hex. That means that decimal 66 becomes 42 in hex.\nMost servers won\u0026rsquo;t be this complicated, but it\u0026rsquo;s key to remember the hex conversion.\nFeedback #Are these systemd-related posts interesting? Let me know. 
I\u0026rsquo;m a huge fan of systemd and I enjoy writing about it.\nPhoto credit: University of Michigan Library\n","date":"21 August 2015","permalink":"/p/understanding-systemds-predictable-network-device-names/","section":"Posts","summary":"I talked a bit about systemd\u0026rsquo;s network device name in my earlier post about systemd-networkd and bonding and I received some questions about how systemd rolls through the possible names of network devices to choose the final name.","title":"Understanding systemd’s predictable network device names"},{"content":"","date":null,"permalink":"/tags/bonding/","section":"Tags","summary":"","title":"Bonding"},{"content":"","date":null,"permalink":"/tags/grub2/","section":"Tags","summary":"","title":"Grub2"},{"content":"","date":null,"permalink":"/tags/json/","section":"Tags","summary":"","title":"Json"},{"content":"","date":null,"permalink":"/tags/onmetal/","section":"Tags","summary":"","title":"Onmetal"},{"content":" I\u0026rsquo;ve written about systemd-networkd in the past and how easy it can be to set up new network devices and tunnels. However, the documentation on systemd-networkd with bonding is a bit lacking (but I have a pull request pending for that).\nRackspace\u0026rsquo;s OnMetal Servers are a good place to test since they have bonded networks configured by default. They\u0026rsquo;re also quite fast and always fun for experiments.\nTo get started, head on over to the Rackspace Cloud control panel and build a compute-1 OnMetal server and choose Fedora 22 as your operating system. Once it starts pinging and you\u0026rsquo;re able to log in, start following the guide below.\nNetwork device naming #By default, most images come with systemd\u0026rsquo;s predictable network naming disabled. You can see the kernel command line adjustments here:\n# cat /boot/extlinux.conf TIMEOUT 1 default linux LABEL Fedora (4.1.5-200.fc22.x86_64) 22 (Twenty Two) KERNEL /boot/vmlinuz-4.1.5-200.fc22.x86_64 APPEND root=/dev/sda1 console=ttyS4,115200n8 8250.nr_uarts=5 modprobe.blacklist=mei_me net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8 initrd /boot/initramfs-4.1.5-200.fc22.x86_64.img This ensures that both network devices show up as eth0 and eth1. Although it isn\u0026rsquo;t my favorite way to configure a server, it does make it easier for most customers to get up an running quickly with some device names that they are familiar with from virtualized products.\nWe need to figure out what systemd plans to call these interfaces when we allow udev to name them predictably. The easiest method for figuring out what udev wants to call these devices is to dump the udev database and use grep:\n# udevadm info -e | grep -A 9 ^P.*eth0 P: /devices/pci0000:00/0000:00:03.2/0000:08:00.0/net/eth0 E: DEVPATH=/devices/pci0000:00/0000:00:03.2/0000:08:00.0/net/eth0 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet OCP Server Adapter X520-2) E: ID_MODEL_ID=0x10fb E: ID_NET_DRIVER=ixgbe E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME_MAC=enxa0369f2cec90 E: ID_NET_NAME_PATH=enp8s0f0 E: ID_NET_NAME_SLOT=ens9f0 Look for those lines that contain ID_NET_NAME_*. Those tell us what udev prefers to call these network devices. The last name you see in the list is what the interface will be called. Here\u0026rsquo;s what you need to look for in that output:\nE: ID_NET_NAME_MAC=enxa0369f2cec90 E: ID_NET_NAME_PATH=enp8s0f0 E: ID_NET_NAME_SLOT=ens9f0 We can see that this device is in slot 0 of PCI bus 8. 
However, since udev is able to dig in a bit further, it decides to name the device ens9f0, which means:\nHotplug slot 9 Function index number 0 Udev rolls through a list of possible network names and uses the very last one as the name of the network device. Gentoo\u0026rsquo;s documentation has a nice explanation. In our case, ID_NET_NAME_SLOT took precedence over the others since this particular device sits in a hotplug PCI-Express slot.\nWe can find the slot number here:\n# lspci -v -s 08:00.00 | head -n 3 08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) Subsystem: Intel Corporation Ethernet OCP Server Adapter X520-2 Physical Slot: 9 Although this is a bit confusing, it can be helpful in servers when parts are added, removed, or replaced. You\u0026rsquo;ll always be assured that the same device in the same slot will never be renamed.\nOur first ethernet device is called ens9f0, but what is the second device called?\n# udevadm info -e | grep -A 9 ^P.*eth1 P: /devices/pci0000:00/0000:00:03.2/0000:08:00.1/net/eth1 E: DEVPATH=/devices/pci0000:00/0000:00:03.2/0000:08:00.1/net/eth1 E: ID_BUS=pci E: ID_MODEL_FROM_DATABASE=82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet OCP Server Adapter X520-2) E: ID_MODEL_ID=0x10fb E: ID_NET_DRIVER=ixgbe E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link E: ID_NET_NAME_MAC=enxa0369f2cec91 E: ID_NET_NAME_PATH=enp8s0f1 E: ID_NET_NAME_SLOT=ens9f1 Now we know our ethernet devices are called ens9f0 and ens9f1. It\u0026rsquo;s time to configure systemd-networkd.\nBond interface creation #Ensure that you have a /etc/systemd/network/ directory on your server and create the network device file:\n# /etc/systemd/network/bond1.netdev [NetDev] Name=bond1 Kind=bond [Bond] Mode=802.3ad TransmitHashPolicy=layer3+4 MIIMonitorSec=1s LACPTransmitRate=fast We\u0026rsquo;re telling systemd-networkd that we want a new bond interface called bond1 configured using 802.3ad mode. (Want to geek out on 802.3ad? Check out IEEE\u0026rsquo;s PDF.) In addition, we specify a transmit hash policy, a monitoring frequency, and a requested rate for LACP updates.\nNow that we have a device defined, we need to provide some network configuration:\n# /etc/systemd/network/bond1.network [Match] Name=bond1 [Network] VLAN=public VLAN=servicenet BindCarrier=ens9f0 ens9f1 This tells systemd-networkd that we have an interface called bond1 and it has two VLANs configured on it (more on that later). Also, we specify the interfaces participating in the bond. This ensures that the bond comes up and down cleanly as interfaces change state.\nAs one last step, we need to configure the physical interfaces themselves:\n# /etc/systemd/network/ens9f0.network [Match] Name=ens9f0 [Network] Bond=bond1 # /etc/systemd/network/ens9f1.network [Match] Name=ens9f1 [Network] Bond=bond1 These files help systemd-networkd understand which interfaces are participating in the bond. You can get fancy here with your [Match] sections and use only one interface file with ens9f*, but I prefer to be more explicit. Check the documentation for systemd-networkd for that.\nPublic network VLAN #The public network for your OnMetal server is delivered via a VLAN. Packets are tagged as VLAN 101 and you need to configure your network interface to handle that traffic. 
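If you ever want to see that tagging on the wire, tcpdump can decode the 802.1Q headers on the bond once everything is up at the end of this guide (a quick sketch, assuming tcpdump is installed on the server):
# tcpdump -e -n -i bond1 vlan 101
Each frame should show a vlan 101 tag in the link-level header.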
We already told systemd-networkd about our VLANs within the bond1.network file, but now we need to explain the configuration for the public network VLAN.\nStart by creating a network device file:\n# /etc/systemd/network/public.netdev [NetDev] Name=public Kind=vlan MACAddress=xx:xx:xx:xx:xx:xx [VLAN] Id=101 You can get the correct MAC address from the information in your server\u0026rsquo;s config drive:\nmkdir /mnt/configdrive mount /dev/sda2 /mnt/configdrive/ python -m json.tool /mnt/configdrive/openstack/latest/vendor_data.json Look inside the network_info section for vlan0. It will look something like this:\n{ \u0026#34;ethernet_mac_address\u0026#34;: \u0026#34;xx:xx:xx:xx:xx:xx\u0026#34;, \u0026#34;id\u0026#34;: \u0026#34;vlan0\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;vlan\u0026#34;, \u0026#34;vlan_id\u0026#34;: 101, \u0026#34;vlan_link\u0026#34;: \u0026#34;bond0\u0026#34; }, Take what you see in ethernet_mac_address and use that MAC address on the MACAddress line in your public.netdev file above. If you skip this part, your packets won\u0026rsquo;t make it onto the network. For security reasons, the switch strictly checks to ensure that the right VLAN/IP/MAC combination is use when you communicate on the network.\nNow that we have a network device, let\u0026rsquo;s actually configure the network on it:\n# /etc/systemd/network/public.network [Match] Name=public [Network] DNS=8.8.8.8 DNS=8.8.4.4 [Address] Address=xxx.xxx.xxx.xxx/24 [Route] Destination=0.0.0.0/0 Gateway=xxx.xxx.xxx.1 To get your IP address and gateway, you can use ip addr and ip route. Or, you can look in your config drive within the networks section for the same data. Ensure that your IP address and gateway are configured correctly. I\u0026rsquo;ve used Google\u0026rsquo;s default DNS servers here but you can use your own if you prefer.\nServiceNet VLAN #Rackspace\u0026rsquo;s ServiceNet is the backend network that connects you to other servers as well as other Rackspace products, like Cloud Databases and Cloud Files. We will configure this network in the same fashion, starting with the network device file:\n# /etc/systemd/network/servicenet.netdev [NetDev] Name=servicenet Kind=vlan MACAddress=xx:xx:xx:xx:xx:xx [VLAN] Id=401 As we did before, go look in your config drive for the right MAC address to use. You\u0026rsquo;ll look in the network_info section again but this time you\u0026rsquo;ll look for vlan1\nNow we\u0026rsquo;re ready to create the network file:\n# /etc/systemd/network/servicenet.network [Match] Name=servicenet [Network] Address=xxx.xxx.xxx.xxx/20 [Route] Destination=10.176.0.0/12 Gateway=10.184.0.1 [Route] Destination=10.208.0.0/12 Gateway=10.184.0.1 Review your config drive json for the correct IP address and routes. Your routes will likely be the same as mine, but that can change over time.\nEnable systemd-networkd #All of our configuration files are in place, but now we need to enable systemd-networkd at boot time:\nsystemctl disable network systemctl disable NetworkManager systemctl enable systemd-networkd systemctl enable systemd-resolved We also need to let systemd-resolved handle our DNS resolution:\nsystemctl start systemd-resolved rm /etc/resolv.conf ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf Finally, there\u0026rsquo;s one last gotcha that is only on OnMetal that needs to be removed. 
Comment out the second and third line in /etc/rc.d/rc.local:\n#!/usr/bin/sh #sleep 20 #/etc/init.d/network restart exit 0 That\u0026rsquo;s there as a workaround for some network issues that sometimes appear during first boot. We won\u0026rsquo;t need it with systemd-networkd.\nReboot #We\u0026rsquo;re ready to test our new configuration! First, let\u0026rsquo;s disable the forced old interface names on the kernel command line. Open /boot/extlinux.conf and ensure that the following two items are not in the kernel command line:\nnet.ifnames=0 biosdevname=0 Remove them from any kernel command lines you see and save the file. Reboot and cross your fingers.\nChecking our work #If you get pings after a reboot, you did well! If you didn\u0026rsquo;t. you can use OnMetal\u0026rsquo;s rescue mode to hop into a temporary OS and mount your root volume. Be sure to look inside /var/log/messages for signs of typos or other errors.\nWe can use some simple tools to review our network status:\n# networkctl IDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 bond0 ether off unmanaged 3 bond1 ether degraded configured 4 public ether routable configured 5 servicenet ether routable configured 6 ens9f0 ether carrier configured 7 ens9f1 ether carrier configured Don\u0026rsquo;t be afraid of the degraded status for bond1. That\u0026rsquo;s there because systemd doesn\u0026rsquo;t have networking configuration for the interface since we do that with our VLANs. Also, both physical network interfaces are listed as carrier because they don\u0026rsquo;t have network configuration, either. They\u0026rsquo;re just participating in the bond.\nFeel free to ignore bond0, too. The bonding module in the Linux kernel automatically creates the interface when it\u0026rsquo;s loaded.\nExtra credit: Switch to grub2 #Sure, extlinux is fine for most use cases, but I prefer something a little more powerful. Luckily, switching to grub2 is quite painless:\ndnf -y remove syslinux-extlinux rm -f /boot/extlinux.conf dnf -y install grubby grub2 grub2-mkconfig -o /boot/grub2/grub.cfg grub2-install /dev/sda Simply reboot and you\u0026rsquo;ll be booting with grub2!\n","date":"21 August 2015","permalink":"/p/using-systemd-networkd-with-bonding-on-rackspaces-onmetal-servers/","section":"Posts","summary":"I\u0026rsquo;ve written about systemd-networkd in the past and how easy it can be to set up new network devices and tunnels.","title":"Using systemd-networkd with bonding on Rackspace’s OnMetal servers"},{"content":"","date":null,"permalink":"/tags/giac/","section":"Tags","summary":"","title":"Giac"},{"content":"","date":null,"permalink":"/tags/lxc/","section":"Tags","summary":"","title":"Lxc"},{"content":"It seems like there\u0026rsquo;s a new way to run containers every week. The advantages and drawbacks of each approach are argued about on mailing lists, in IRC channels, and in person, around the world. However, the largest amount of confusion seems to be around security.\nLaunching secure containers #I\u0026rsquo;ve written about launching secure containers on this blog many times before:\nLaunch secure LXC containers on Fedora 20 using SELinux and sVirt Improving LXC template security Try out LXC with an Ansible playbook However, my goal this time around was to do something more comprehensive and slightly more formal. After getting my GSEC and GCUX certifications from SANS/GIAC, there was an option to enhance the certification to a gold status by writing a peer-reviewed research paper on a topic related to the exam. 
It was a great experience to go through the review process and get feedback on the technical material as well as the structure of the paper itself.\nThe paper #Without further ado, here are links to the Securing Linux Containers paper:\nPDF version without watermarks PDF version from SANS (has some watermarks and SANS/GIAC extra pages) The paper is written for readers who have some level of familiarity with Linux and some virtualization technologies. It\u0026rsquo;s a useful paper even for people who haven\u0026rsquo;t worked with containers.\nIt starts with an overview of Linux containers and how they differ from other types of virtualization, such as KVM or Xen. From there, it covers how to secure the host system underneath the containers and how to provide security within the containers themselves. There\u0026rsquo;s also a section on how to start a simple container on CentOS 7 and inspect the security controls inside and outside the container.\nLicensing #I\u0026rsquo;m also very proud to announce that the paper is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA). You are free to quote it as much as you like (even for commercial purposes), but I\u0026rsquo;d ask that you maintain the same license and attribute me as the author.\nThank you #This paper wouldn\u0026rsquo;t have been possible without some serious help from these awesome people:\nRichard Carbone was my advisor from SANS and he helped tremendously Dan Walsh reviewed the content and gave me several pointers on topics to add and adjust Paul Voccio, Antony Messerli, and Brad McConnell from Rackspace also provided feedback My mother, Neta Greene, is the best educator I know and she fueled my interest in writing and sharing with others Feedback #Please let me know if you spot any errors or areas that need clarification. This is one of my favorite topics and I enjoy talking about it. Find me on Freenode IRC as mhayden and I\u0026rsquo;ll be glad to talk more there.\n","date":"14 August 2015","permalink":"/p/research-paper-securing-linux-containers/","section":"Posts","summary":"It seems like there\u0026rsquo;s a new way to run containers every week.","title":"Research Paper: Securing Linux Containers"},{"content":"","date":null,"permalink":"/tags/sans/","section":"Tags","summary":"","title":"Sans"},{"content":"Fedora Flock 2015 is still going here in Rochester, New York, and I kicked off our second day with a keynote talk about overcoming impostor syndrome.\nIf you\u0026rsquo;d like to review the slides, they\u0026rsquo;re on SlideShare:\nQuite a few people came up after the talk and throughout the day to share some of their stories and challenges. It was extremely rewarding to have those conversations and share solutions.\nI\u0026rsquo;ll be doing the talk once more at Texas Linux Fest in San Marcos on August 22.\n","date":"14 August 2015","permalink":"/p/fedora-flock-2015-keynote-slides/","section":"Posts","summary":"Fedora Flock 2015 is still going here in Rochester, New York, and I kicked off our second day with a keynote talk about overcoming impostor syndrome.","title":"Fedora Flock 2015: Keynote slides"},{"content":"I started working on the Ansible CIS playbook for CentOS and RHEL 6 back in 2014 and I\u0026rsquo;ve made a few changes to increase quality and make it easier to use.\nFirst off, the role itself is no longer a submodule. You can now just clone the repository and get rolling. 
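Getting rolling is just a clone and a playbook run. Something along these lines should do it (the repository URL, inventory, and playbook names below are placeholders, so check the README for the real ones):
git clone https://github.com/major/cis-rhel-ansible.git   # placeholder URL
cd cis-rhel-ansible
ansible-playbook -i hosts playbook.yml --check   # dry run first; file names are examples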
This should reduce the time it takes to get started.\nAlso, all pull requests to the repository now go through integration testing at Rackspace. Each pull request goes through the gauntlet:\nSyntax check on Travis-CI Travis-CI builds a server at Rackspace The entire Ansible playbook runs on the Rackspace Cloud Server Results are sent back to GitHub The testing process usually takes under five minutes.\nStay tuned: Updates are coming for RHEL and CentOS 7. ;)\n","date":"5 August 2015","permalink":"/p/automated-testing-for-ansible-cis-playbook-on-rhelcentos-6/","section":"Posts","summary":"I started working on the Ansible CIS playbook for CentOS and RHEL 6 back in 2014 and I\u0026rsquo;ve made a few changes to increase quality and make it easier to use.","title":"Automated testing for Ansible CIS playbook on RHEL/CentOS 6"},{"content":"","date":null,"permalink":"/tags/libvirt/","section":"Tags","summary":"","title":"Libvirt"},{"content":"I decided to change some of my infrastructure back to KVM again, and the overall experience has been quite good in Fedora 22. Using libvirt with KVM is a breeze and the virt-manager tools make it even easier. However, I ran into some problems while trying to migrate virtual machines from one server to another.\nThe error ## virsh migrate --live --copy-storage-all bastion qemu+ssh://root@192.168.250.33/system error: internal error: unable to execute QEMU command \u0026#39;drive-mirror\u0026#39;: Failed to connect socket: Connection timed out That error message wasn\u0026rsquo;t terribly helpful. I started running through my usual list of checks:\nCan the hypervisors talk to each other? Yes, iptables is disabled. Are ssh keys configured? Yes, verified. What about ssh host keys being accepted on each side? Both sides can ssh without interaction. SELinux? No AVC\u0026rsquo;s logged. Libvirt logs? Nothing relevant in libvirt\u0026rsquo;s qemu logs. Filesystem permissions for libvirt\u0026rsquo;s directories? Identical on both sides. Libvirt daemon running on both sides? Yes. I was pretty confused at this point. A quick Google search didn\u0026rsquo;t reveal too many relevant issues, but I did find a Red Hat Bug from 2013 that affected RHEL 7. The issue in the bug was that libvirt wasn\u0026rsquo;t using the right ports to talk between servers and those packets were being dropped by iptables. My iptables rules were empty.\nDebug time #I ran the same command with LIBVIRT_DEBUG=1 at the front:\ndebug.log After scouring the pages and pages of output, I couldn\u0026rsquo;t find anything useful.\nEureka! #I spotted an error message briefly in virt-manager or the debug logs that jogged my brain to think about a potential problem: hostnames. Both hosts had a fairly bare /etc/hosts file without IP/hostname pairs for each hypervisor. After editing both servers\u0026rsquo; /etc/hosts file to include the short and full hostnames for each hypervisor, I tested the live migration one more time.\nSuccess!\nThe migration went off without a hitch in virt-manager and via the virsh client. 
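For anyone who hits the same error: the /etc/hosts entries don't need to be fancy. A couple of lines like these on both hypervisors, with the full and short hostnames together, are enough (the names and addresses here are made up):
192.168.250.32   hyper01.example.com   hyper01
192.168.250.33   hyper02.example.com   hyper02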
I migrated several VM\u0026rsquo;s, including the one running this site, with no noticeable interruption.\n","date":"3 August 2015","permalink":"/p/live-migration-failures-with-kvm-and-libvirt/","section":"Posts","summary":"I decided to change some of my infrastructure back to KVM again, and the overall experience has been quite good in Fedora 22.","title":"Live migration failures with KVM and libvirt"},{"content":"","date":null,"permalink":"/tags/qemu/","section":"Tags","summary":"","title":"Qemu"},{"content":"I\u0026rsquo;ve recently set up a Fedora 22 firewall/router at home (more on that later) and I noticed that remote ssh logins were extremely slow. In addition, sudo commands seemed to stall out for the same amount of time (about 25-30 seconds).\nI\u0026rsquo;ve done all the basic troubleshooting already:\nSwitch to UseDNS no in /etc/ssh/sshd_config Set GSSAPIAuthentication no in /etc/ssh/sshd_config Tested DNS resolution These lines kept cropping up in my system journal when I tried to access the server using ssh:\ndbus[4865]: [system] Failed to activate service \u0026#39;org.freedesktop.login1\u0026#39;: timed out sshd[7391]: pam_systemd(sshd:session): Failed to create session: Activation of org.freedesktop.login1 timed out sshd[7388]: pam_systemd(sshd:session): Failed to create session: Activation of org.freedesktop.login1 timed out The process list on the server looked fine. I could see dbus-daemon and systemd-logind processes and they were in good states. However, it looked like dbus-daemon had restarted at some point and systemd-logind had not been restarted since then. I crossed my fingers and bounced systemd-logind:\nsystemctl restart systemd-logind Success! Logins via ssh and escalations with sudo worked instantly.\n","date":"27 July 2015","permalink":"/p/very-slow-ssh-logins-on-fedora-22/","section":"Posts","summary":"I\u0026rsquo;ve recently set up a Fedora 22 firewall/router at home (more on that later) and I noticed that remote ssh logins were extremely slow.","title":"Very slow ssh logins on Fedora 22"},{"content":"","date":null,"permalink":"/tags/networkmanager/","section":"Tags","summary":"","title":"Networkmanager"},{"content":" My upgrade to Fedora 22 on the ThinkPad X1 Carbon was fairly uneventful and the hiccups were minor. One of the more annoying items that I\u0026rsquo;ve been struggling with for quite some time is how to boot up with the wireless LAN and Bluetooth disabled by default. Restoring wireless and Bluetooth state between reboots is normally handled quite well in Fedora.\nIn Fedora 21, NetworkManager saved my settings between reboots. For example, if I shut down with wifi off and Bluetooth on, the laptop would boot up later with wifi off and Bluetooth on. This wasn\u0026rsquo;t working well in Fedora 22: both the wifi and Bluetooth were always enabled by default.\nDigging into rfkill #I remembered rfkill and began testing out some commands. It detected that I had disabled both devices via NetworkManager (soft):\n$ rfkill list 0: tpacpi_bluetooth_sw: Bluetooth Soft blocked: yes Hard blocked: no 2: phy0: Wireless LAN Soft blocked: yes Hard blocked: no It looked like systemd has some hooks already configured to manage rfkill via the systemd-rfkill service. However, something strange happened when I tried to start the service:\n# systemctl start systemd-rfkill@0 Failed to start systemd-rfkill@0.service: Unit systemd-rfkill@0.service is masked. Well, that\u0026rsquo;s certainly weird. 
While looking into why it\u0026rsquo;s masked, I found an empty file in /etc/systemd:\n# ls -al /etc/systemd/system/systemd-rfkill@.service -rwxr-xr-x. 1 root root 0 May 11 16:36 /etc/systemd/system/systemd-rfkill@.service I don\u0026rsquo;t remember making that file. Did something else put it there?\n# rpm -qf /etc/systemd/system/systemd-rfkill@.service tlp-0.7-4.fc22.noarch Ah, tlp!\nConfiguring tlp #I looked in tlp\u0026rsquo;s configuration file in /etc/default/tlp and found a few helpful configuration items:\n# Restore radio device state (Bluetooth, WiFi, WWAN) from previous shutdown # on system startup: 0=disable, 1=enable. # Hint: the parameters DEVICES_TO_DISABLE/ENABLE_ON_STARTUP/SHUTDOWN below # are ignored when this is enabled! RESTORE_DEVICE_STATE_ON_STARTUP=0 # Radio devices to disable on startup: bluetooth, wifi, wwan. # Separate multiple devices with spaces. #DEVICES_TO_DISABLE_ON_STARTUP=\u0026#34;bluetooth wifi wwan\u0026#34; # Radio devices to enable on startup: bluetooth, wifi, wwan. # Separate multiple devices with spaces. #DEVICES_TO_ENABLE_ON_STARTUP=\u0026#34;wifi\u0026#34; # Radio devices to disable on shutdown: bluetooth, wifi, wwan # (workaround for devices that are blocking shutdown). #DEVICES_TO_DISABLE_ON_SHUTDOWN=\u0026#34;bluetooth wifi wwan\u0026#34; # Radio devices to enable on shutdown: bluetooth, wifi, wwan # (to prevent other operating systems from missing radios). #DEVICES_TO_ENABLE_ON_SHUTDOWN=\u0026#34;wwan\u0026#34; # Radio devices to enable on AC: bluetooth, wifi, wwan #DEVICES_TO_ENABLE_ON_AC=\u0026#34;bluetooth wifi wwan\u0026#34; # Radio devices to disable on battery: bluetooth, wifi, wwan #DEVICES_TO_DISABLE_ON_BAT=\u0026#34;bluetooth wifi wwan\u0026#34; # Radio devices to disable on battery when not in use (not connected): # bluetooth, wifi, wwan #DEVICES_TO_DISABLE_ON_BAT_NOT_IN_USE=\u0026#34;bluetooth wifi wwan\u0026#34; So tlp\u0026rsquo;s default configuration doesn\u0026rsquo;t restore device state and it masked systemd\u0026rsquo;s rfkill service. I adjusted one line in tlp\u0026rsquo;s configuration and rebooted:\nDEVICES_TO_DISABLE_ON_STARTUP=\u0026#34;bluetooth wifi wwan\u0026#34; After the reboot, both the wifi and Bluetooth functionality were shut off! That\u0026rsquo;s exactly what I needed.\nExtra credit #Thanks to a coworker, I was able to make a NetworkManager script to automatically shut off the wireless LAN whenever I connected to a network via ethernet. This is typically what I do when coming back from an in-person meeting to my desk (where I have ethernet connectivity).\nIf you want the same automation, just drop this script into /etc/NetworkManager/dispatcher.d/70-wifi-wired-exclusive.sh and make it executable:\n#!/bin/bash export LC_ALL=C enable_disable_wifi () { result=$(nmcli dev | grep \u0026#34;ethernet\u0026#34; | grep -w \u0026#34;connected\u0026#34;) if [ -n \u0026#34;$result\u0026#34; ]; then nmcli radio wifi off fi } if [ \u0026#34;$2\u0026#34; = \u0026#34;up\u0026#34; ]; then enable_disable_wifi fi Unplug the ethernet connection, start wifi, and then plug the ethernet connection back in. 
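If you want to watch it happen, keep an eye on the radio switches from another terminal while you test (nmcli ships with NetworkManager, so there's nothing extra to install):
$ watch -n1 nmcli radio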
Once NetworkManager fully connects (DHCP lease obtained, connectivity check passes), the wireless LAN should shut off automatically.\n","date":"19 July 2015","permalink":"/p/restoring-wireless-and-bluetooth-state-after-reboot-in-fedora-22/","section":"Posts","summary":"My upgrade to Fedora 22 on the ThinkPad X1 Carbon was fairly uneventful and the hiccups were minor.","title":"Restoring wireless and Bluetooth state after reboot in Fedora 22"},{"content":"","date":null,"permalink":"/tags/wireless/","section":"Tags","summary":"","title":"Wireless"},{"content":"I stumbled upon a strange bug at work one day and found I couldn\u0026rsquo;t connect to our wireless access points any longer. After some investigation in the systemd journal, I found that my card associated with the access point but never went any further past that. It looked as if the authentication wasn\u0026rsquo;t ever taking place.\nA quick dig through my recent dnf update history didn\u0026rsquo;t reveal much but then I found a tip from a coworker on an internal wiki that wpa_supplicant 2.4 has problems with certain Aruba wireless access points.\nThere\u0026rsquo;s an open ticket on the Red Hat Bugzilla about the issues in wpa_supplicant 2.4. The changelog for 2.4 is lengthy and it has plenty of mentions of EAP; Aruba\u0026rsquo;s preferred protocol on certain networks. One of those changes could be related. A formal support case1 is open with Aruba as well.\nIf this bug affects you, you can return to wpa_supplicant-2.3-3.fc22.x86_64 easily by running:\ndnf downgrade wpa_supplicant This isn\u0026rsquo;t a good long-term solution, but it fixes the bug and gets you back online.\nThe support case is no longer accessible as of May 2021.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"17 July 2015","permalink":"/p/aruba-access-points-eap-and-wpa_supplicant-2-4-bugs/","section":"Posts","summary":"I stumbled upon a strange bug at work one day and found I couldn\u0026rsquo;t connect to our wireless access points any longer.","title":"Aruba access points, EAP, and wpa_supplicant 2.4 bugs"},{"content":"GNOME 3 generally works well for me but it has some quirks. One of those quirks is that new windows don\u0026rsquo;t actually pop up on the screen with focus as they do in Windows and OS X. When opening a new window, you get a “[Windowname] is ready” notification:\nMy preference is for new windows to pop in front and steal focus. I can see why that\u0026rsquo;s not the default since it might cause you to type something in another window where you weren\u0026rsquo;t expecting to. Fortunately, you can enable what GNOME calls strict window focus with a quick trip to dconf-editor.\nInstalling dconf-editor is easy:\n# RHEL/CentOS 7 and Fedora 21 yum -y install dconf-editor # Fedora 22 dnf -y install dconf-editor Open dconf-editor and navigate to org -\u0026gt; gnome -\u0026gt; desktop -\u0026gt; wm -\u0026gt; preferences.\nOnce you\u0026rsquo;re there, look for focus-new-windows. The default setting is smart which will keep new windows in the background and alert you via a notification. If you click on smart, a drop down will appear and you can select strict. 
That will enable functionality similar to OS X and Windows where new windows will pop up in the front and steal your focus.\nThe new setting takes effect immediately and there\u0026rsquo;s no need to logout or close and reopen windows.\nUPDATE: If you\u0026rsquo;d like to avoid installing dconf-editor, use Alexander\u0026rsquo;s suggestion below and simply run:\ngsettings set org.gnome.desktop.wm.preferences focus-new-windows \u0026#39;strict\u0026#39; ","date":"6 July 2015","permalink":"/p/allow-new-windows-to-steal-focus-in-gnome-3/","section":"Posts","summary":"GNOME 3 generally works well for me but it has some quirks.","title":"Allow new windows to steal focus in GNOME 3"},{"content":" Woot suckered me into buying a 4K display at a fairly decent price and now I have a Samsung U28D590D sitting on my desk at home. I ordered a mini-DisplayPort to DisplayPort from Amazon and it arrived just before the monitor hit my doorstep. It\u0026rsquo;s time to enter the world of 4K displays.\nThe unboxing of the monitor was fairly uneventful and it powered up after small amount of assembly. I plugged my mini-DP to DP cable into the monitor and then into my X1 Carbon 3rd gen. After a bunch of flickering, the display sprang to life but the image looked fuzzy. After some hunting, I found that the resolution wasn\u0026rsquo;t at the monitor\u0026rsquo;s maximum:\n$ xrandr -q DP1 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 607mm x 345mm 2560x1440 59.95*+ 1920x1080 60.00 59.94 1680x1050 59.95 1600x900 59.98 I bought this thing because it does 3840×2160. How confusing. After searching through the monitor settings, I found an option for \u0026ldquo;DisplayPort version\u0026rdquo;. It was set to version 1.1 but version 1.2 was available. I selected version 1.2 (which appears to come with something called HBR2) and then the display flickered for 5-10 seconds. There was no image on the display.\nI adjusted GNOME\u0026rsquo;s Display settings back down to 2560×1440. The display sprang back to life, but it was fuzzy again. I pushed the settings back up to 3840×2160. The flickering came back and the monitor went to sleep.\nMy laptop has an HDMI port and I gave that a try. I had a 3840×2160 display up immediately! Hooray! But wait - that resolution runs at 30Hz over HDMI 1.4. HDMI 2.0 promises faster refresh rates but neither my laptop or the display support it. After trying to use the display at max resolution with a 30Hz refresh rate, I realized that it wasn\u0026rsquo;t going to work.\nThe adventure went on and I joined #intel-gfx on Freenode. This is apparently a common problem with many onboard graphics chips as many of them cannot support a 4K display at 60Hz. It turns out that the i5-5300U (that\u0026rsquo;s a Broadwell) can do it.\nOne of the knowledgeable folks in the channel suggested a new modeline. That had no effect. The monitor flickered and went back to sleep as it did before.\nI picked up some education on the difference between SST and MST displays. MST displays essentially have two chips handling half of the display within the monitor. Both of those do the work to drive the entire display. SST monitors (the newer variety, like the one I bought) take a single stream and one single chip in the monitor figures out how to display the content.\nAt this point, I\u0026rsquo;m stuck with a non-working display at 4K resolution over DisplayPort. I can get lower resolutions working via DisplayPort, but that\u0026rsquo;s not ideal. 4K works over HDMI, but only at 30Hz. 
Again, not ideal. I\u0026rsquo;ll do my best to update this post as I come up with some other ideas.\nUPDATE 2015-07-01: Thanks to Sandro Mathys for spotting a potential fix:\n@majorhayden Uh, did you update your BIOS? \"Supported the 60Hz refresh rate of 4K (3840 x 2160) resolution monitor.\" http://t.co/NbnktzZMgj \u0026mdash; Sandro Mathys (@red_trela) July 1, 2015 I found BIOS 1.08 waiting for me on Lenovo\u0026rsquo;s site. One of the last items fixed in the release notes was:\n(New) Supported the 60Hz refresh rate of 4K (3840 x 2160) resolution monitor.\nAfter a quick flash of a USB stick and a reboot to update the BIOS, the monitor sprang to life after logging into GNOME. It looks amazing! The graphics performance is still not amazing (but hey, this is Broadwell graphics we\u0026rsquo;re talking about) but it does 3840×2160 at 60Hz without a hiccup. I tried unplugging and replugging the DisplayPort cable several times and it never flickered.\n","date":"1 July 2015","permalink":"/p/stumbling-into-the-world-of-4k-displays/","section":"Posts","summary":"Woot suckered me into buying a 4K display at a fairly decent price and now I have a Samsung U28D590D sitting on my desk at home.","title":"Stumbling into the world of 4K displays [UPDATED]"},{"content":"","date":null,"permalink":"/tags/dbus/","section":"Tags","summary":"","title":"Dbus"},{"content":"My older post about rotating GNOME\u0026rsquo;s wallpaper with systemd timers doesn\u0026rsquo;t seem to work in Fedora 22. The DISPLAY=:0 environment variable isn\u0026rsquo;t sufficient to allow systemd to use gsettings.\nInstead, the script run by the systemd timer must know a little bit more about dbus. More specifically, the script needs to know the address of the dbus session so it can communicate on the bus. That\u0026rsquo;s normally kept within the DBUS_SESSION_BUS_ADDRESS environment variable.\nOpen a shell and you can verify that yours is set:\n$ env | grep ^DBUS_SESSION DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-xxxxxxxxxx,guid=fa6ff8ded93c1df77eba3fxxxxxxxxxx That is actually set when gnome-session starts as your user on your machine. for the script to work, we need to add a few lines at the top:\n#!/bin/bash # These three lines are new USER=$(whoami) PID=$(pgrep -u $USER gnome-session) export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/$PID/environ|cut -d= -f2-) # These three lines are unchanged from the original script walls_dir=$HOME/Pictures/Wallpapers selection=$(find $walls_dir -type f -name \u0026#34;*.jpg\u0026#34; -o -name \u0026#34;*.png\u0026#34; | shuf -n1) gsettings set org.gnome.desktop.background picture-uri \u0026#34;file://$selection\u0026#34; Let\u0026rsquo;s look at what the script is doing:\nFirst, we get the username of the user running the script We look for the gnome-session process that is running as that user We pull out the dbus environment variable from gnome-session\u0026rsquo;s environment variables when it was first started Go ahead and adjust your script. 
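If you want to sanity-check that detection before relying on it, run the same two lines by hand in a terminal and compare the result with the DBUS_SESSION_BUS_ADDRESS value you saw earlier:
$ PID=$(pgrep -u $(whoami) gnome-session)
$ grep -z DBUS_SESSION_BUS_ADDRESS /proc/$PID/environ | cut -d= -f2-
The two values should match exactly.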
Once you\u0026rsquo;re done, test it by simply running the script manually and then using systemd to run it:\n$ bash ~/bin/rotate_bg.sh $ systemctl --user start gnome-background-change Both of those commands should now rotate your GNOME wallpaper in Fedora 22.\n","date":"23 June 2015","permalink":"/p/fedora-22-and-rotating-gnome-wallpaper-with-systemd-timers/","section":"Posts","summary":"My older post about rotating GNOME\u0026rsquo;s wallpaper with systemd timers doesn\u0026rsquo;t seem to work in Fedora 22.","title":"Fedora 22 and rotating GNOME wallpaper with systemd timers"},{"content":"","date":null,"permalink":"/tags/book/","section":"Tags","summary":"","title":"Book"},{"content":"I picked up a copy of Robert Love\u0026rsquo;s book, Linux Kernel Development, earlier this year and I\u0026rsquo;ve worked my way through it over the past several weeks. A few people recommended the book to me on Twitter and I\u0026rsquo;m so glad they did. This book totally changed how I look at a system running Linux.\nYou must be this tall to ride this ride #I\u0026rsquo;ve never had formal education in computer science or software development in the past. After all, my degree was in Biology and I was on the path to becoming a phyisician when this other extremely rewarding career came into play. (That\u0026rsquo;s a whole separate blog post in itself.)\nJust to level-set: I can read C and make small patches when I spot problems. However, I\u0026rsquo;ve never set out and started a project in C on my own and I haven\u0026rsquo;t really made any large contributions to projects written in C. However, I\u0026rsquo;m well-versed in Perl, Ruby, and Python mainly from job experience and leaning on some much more skilled colleagues.\nThe book recommends that you have a basic grasp of C and some knowledge around memory management and process handling. I found that I was able to fully understand about 70% of the book immediately, another 20% or so required some additional research and practice, while about 10% was mind-blowing. Obviously, that leaves me with plenty of room to grow.\nHonestly, if you understand how most kernel tunables work and you know at least one language that runs on your average Linux box, you should be able to understand the majority of the material. Some sections might require some re-reading and you might need to go back and read a section when a later chapter sheds more light on the subject.\nMoving through the content #I won\u0026rsquo;t go into a lot of detail around the content itself other than to say it\u0026rsquo;s extremely comprehensive. After all, you wouldn\u0026rsquo;t be reading a book about something as complex as the Linux kernel if you weren\u0026rsquo;t ready for an onslaught of information.\nThe information is organized in an effective way. Initial concepts are familiar to someone who has worked in user space for quite some time. If you\u0026rsquo;ve dealt with oom-killer, loaded kernel modules, or written some horrible code that later needed to be optimized, you\u0026rsquo;ll find the beginning of the book to be very useful. Robert draws plenty of distinctions around kernel space, user space, and how they interact. He take special care to cover SMP-safe code and how to take non-SMP-safe code and improve it.\nI found a ton of value in the memory management, locking, and the I/O chapters. 
I didn\u0026rsquo;t fully understand the blocks of C code within the text but there was a ton of value in the deep explanations of how data flows (and doesn\u0026rsquo;t flow) from memory to disk and back again.\nThe best part #If I had to pick one thing to entice more people to read the book, it would be the way Robert explains every concept in the book. He has a good formula that helps you understand the how, the what, and the why. So many books forget the why.\nHe takes the time to explain what frustrated the kernel developers that made them write a feature in the first place and then goes into detail about how they fixed it. He also talks about differences between other operating systems (like Unix, Windows, and others) and other hardware types (like ARM and Alpha). So many books leave this part out but it\u0026rsquo;s often critical for understanding difficult topics. I learned this the hard way in my biology classes when I tried to memorize concepts rather than trying to understand the evolutionary or chemical reasons for why it occurs.\nRobert also rounds out the book with plenty of debugging tips that allow readers to trudge through bug hunts with better chances of success. He helps open the doors to the Linux kernel community and gives tips on how to get the best interactions from the community.\nWrap-up #This book is worth it for anyone who wants to learn more about how their Linux systems operate or who want to actually write code for the kernel. Much of the deep workings of the kernel was a mystery to me before and I really only knew how to interact with a few interfaces.\nReading this book was like watching a cover being taken off of a big machine and listening to an expert explain how it works. It\u0026rsquo;s definitely worth reading.\n","date":"21 June 2015","permalink":"/p/book-review-linux-kernel-development/","section":"Posts","summary":"I picked up a copy of Robert Love\u0026rsquo;s book, Linux Kernel Development, earlier this year and I\u0026rsquo;ve worked my way through it over the past several weeks.","title":"Book Review: Linux Kernel Development"},{"content":"I\u0026rsquo;ve been getting involved with the Fedora Security Team lately and we\u0026rsquo;re working as a group to crush security bugs that affect Fedora, CentOS (via EPEL) and Red Hat Enterprise Linux (via EPEL). During some of this work, I stumbled upon a group of Red Hat Bugzilla tickets talking about LXC template security.\nThe gist of the problem is that there\u0026rsquo;s a wide variance in how users and user credentials are handled by the different LXC templates. An inventory of the current situation revealed some horrifying problems with many OS templates.\nMany of the templates set an awful default root password, like rooter, toor, or root. Some of the others create a regular user with sudo privileges and give it a default, predictable password unless the user specifies otherwise.\nThere are some bright spots, though. Fedora and CentOS templates will accept a root password from the user during the build and set a randomized password for the root user if a password isn\u0026rsquo;t specified. Ubuntu Cloud takes another approach by locking out the root user and requiring cloud-init configuration data to configure the root account.\nI kicked off a mailing list thread and wrote a terrible pull request to get things underway. Stéphane Graber requested that all templates use a shared script to handle users and credentials via standardized environment variables and command line arguments. 
In addition, all passwords for users (regular or root) should be empty with password-less logins disabled. Those are some reasonable requests and I\u0026rsquo;m working on a shell script that\u0026rsquo;s easy to import into LXC templates.\nThere\u0026rsquo;s also a push to remove sshd from all LXC templates by default, but I\u0026rsquo;m hoping to keep that one tabled until the credentials issue is solved.\nIf you\u0026rsquo;d like to help out with the effort, let me know! I\u0026rsquo;ll probably get some code up onto Github soon and as for comments.\n","date":"18 June 2015","permalink":"/p/improving-lxc-template-security/","section":"Posts","summary":"I\u0026rsquo;ve been getting involved with the Fedora Security Team lately and we\u0026rsquo;re working as a group to crush security bugs that affect Fedora, CentOS (via EPEL) and Red Hat Enterprise Linux (via EPEL).","title":"Improving LXC template security"},{"content":" After an unfortunate death of my Yubikey NEO and a huge mistake on backups, I\u0026rsquo;ve come to realize that it\u0026rsquo;s time for a new GPG key. My new one is already up on Keybase and there\u0026rsquo;s a plain text copy on my resume site.\nAction required #If you\u0026rsquo;re using a key for me with a fingerprint of 6DC99178, that one is no longer valid. My new one is C1011FB1.\nFor the impatient, here\u0026rsquo;s the easiest way to retrieve my new key:\ngpg2 --keyserver pgp.mit.edu --recv-key C1011FB1 Lessons learned #Always ensure that you have complete backups of all of your keys. I made a mistake and forgot to back up my original signing subkey before I moved that key to my Yubikey. When the NEO died, so did the last copy of the most important subkey. It goes without saying but I don\u0026rsquo;t plan on making that mistake again.\nAlways make a full backup of all keys and make a revocation certificate that also gets backed up. There\u0026rsquo;s a good guide on this topic if you\u0026rsquo;re new to the process.\nWait. A Yubikey stopped working? #This is the first Yubikey failure that I\u0026rsquo;ve ever experienced. I\u0026rsquo;ve had two regular Yubikeys that are still working but this is my first NEO.\nI emailed Yubico support earlier today about the problem and received an email back within 10-15 minutes. They offered me a replacement NEO with free shipping. It\u0026rsquo;s still a bummer about the failure but at least they worked quickly to get me a free replacement.\n","date":"11 June 2015","permalink":"/p/time-for-a-new-gpg-key/","section":"Posts","summary":"After an unfortunate death of my Yubikey NEO and a huge mistake on backups, I\u0026rsquo;ve come to realize that it\u0026rsquo;s time for a new GPG key.","title":"Time for a new GPG key"},{"content":"","date":null,"permalink":"/tags/yubikey/","section":"Tags","summary":"","title":"Yubikey"},{"content":"I ran some package updates last night and ended up with a new version of Google Chrome from the stable branch. After restarting Chrome, everything in the interface was huge. The icons in the bookmark bar, the text, the padding - all of it looked enormous.\nAfter a little searching, I found a helpful line in the ArchLinux HiDPI documentation:\nFull HiDPI support in Chrome is now available in the main branch google-chromeAUR as of version 43.0.2357.2-1 and works out of the box as tested with Gnome and Cinnamon.\nIt looks like there was a flag available for quite some time to test the feature but it disappeared sometime in March. 
I scoured my list of flags as well as my Chrome configuration directories and couldn\u0026rsquo;t find any trace of it.\nTemporary Workaround #While I search for a fix, my current workaround is to manually edit the .desktop file that comes with the Chrome RPM. On my Fedora system, that file is /usr/share/applications/google-chrome.desktop. If you open that file, look for a line that starts with Exec:\nExec=/usr/bin/google-chrome-stable %U Change that line so that it includes --force-device-scale-factor=1 to disable HiDPI support:\nExec=/usr/bin/google-chrome-stable --force-device-scale-factor=1 %U Depending on your display manager, you might need to do something to refresh the .desktop files. If you\u0026rsquo;re using GNOME 3, just press Alt-F2, type r, and press enter. Your screen will flicker a lot and GNOME will restart in place. Try starting Chrome once more and you should be back to normal.\nStill having problems? #If you don\u0026rsquo;t see a change after doing all of that, ensure that you fully exited Chrome. Depending on your configuration, Chrome might still be running in your taskbar even if you close all of the browser windows. If that\u0026rsquo;s the case, completely exit Chrome using the taskbar menu or pkill -f google-chrome. Start Chrome again and you should be all set.\n","date":"10 June 2015","permalink":"/p/chrome-43-stuck-in-hidpi-mode/","section":"Posts","summary":"I ran some package updates last night and ended up with a new version of Google Chrome from the stable branch.","title":"Chrome 43 stuck in HiDPI mode"},{"content":"Applications on my Fedora 22 system kept stalling when I attempted to print. My system journal was full of these log messages:\nsystemd[1]: cups.service start operation timed out. Terminating. systemd[1]: Failed to start CUPS Scheduler. systemd[1]: Unit cups.service entered failed state. systemd[1]: cups.service failed. audit[1]: \u0026lt;audit-1130\u0026gt; pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=\u0026#39;unit=cups comm=\u0026#34;systemd\u0026#34; exe=\u0026#34;/usr/lib/systemd/systemd\u0026#34; hostname=? addr=? terminal=? res=failed\u0026#39; If I tried to run systemctl start cups, the command would hang for quite a while and then fail. I broke out strace and tried to figure out what was going wrong.\nThe strace output showed that cups was talking to my local DNS servers and was asking constantly for the IP address of my laptop\u0026rsquo;s hostname.\nOh, I felt pretty stupid at this point.\nI added my laptop\u0026rsquo;s hostname onto the line starting with 127.0.0.1 in my /etc/hosts and tried to start cups once more. It started up in less than a second and is now working well.\n","date":"9 June 2015","permalink":"/p/cups-service-start-operation-timed-out-in-fedora-22/","section":"Posts","summary":"Applications on my Fedora 22 system kept stalling when I attempted to print.","title":"cups.service start operation timed out in Fedora 22"},{"content":"","date":null,"permalink":"/tags/printing/","section":"Tags","summary":"","title":"Printing"},{"content":"","date":null,"permalink":"/tags/pulseaudio/","section":"Tags","summary":"","title":"Pulseaudio"},{"content":" My transition from Fedora 21 to 22 on the ThinkPad X1 Carbon was fairly uneventful even with over 2,400 packages involved in the upgrade. The only problem I dealt with on reboot was that my icons on the GNOME 3 desktop were way too large. That\u0026rsquo;s a pretty easy problem to fix.\nHowever, something else cropped up after a while. 
I started listening to music in Chrome and a Pidgin notification sound came through. There was a quiet pop before the Pidgin sound and a loud pop on the end. Thunderbird\u0026rsquo;s notifications sounded the same. The pops at the end of the sound were sometimes very loud and hurt my ears.\nI started running PulseAudio in debug mode within a terminal:\npulseaudio -k pulseaudio --start There were some messages about buffer underruns and latency issues but they were all very minimal. I loaded up pavucontrol and couldn\u0026rsquo;t find anything unusual when multiple sounds played. I gave pavumeter a try and found something very interesting.\nWhen Chrome was playing audio, the meters in pavumeter were at 80-90%. That seems to make sense because I keep Chrome as one of the loudest applications on my laptop. My logic there is that I don\u0026rsquo;t want to get blasted by a notification tone that is drastically louder than my music.\nHowever, if I received a Pidgin or Thunderbird notification while Chrome was playing music, the pavumeter showed the volume levels dropping to 30% or less. As soon as the sound was over, the meters snapped back to 80-90% and there was a big popping sound. I lowered Chrome\u0026rsquo;s volume so that it showed up at the 30% level in pavumeter and forced a new Pidgin notification sound - the pops were still there.\nI started searching in Google and stumbled upon ArchLinux\u0026rsquo;s PulseAudio documentation. (Their documentation is really awesome.) There\u0026rsquo;s a mention of the flat-volumes PulseAudio configuration option. If it\u0026rsquo;s set to no, you get the older ALSA functionality where volumes can be set independently per application. The default is yes and that default comes with a warning in the documentation:\nWarning: The default behavior can sometimes be confusing and some applications, unaware of this feature, can set their volume to 100% at startup, potentially blowing your speakers or your ears. To restore the classic (ALSA) behavior set this to no.\nAs a test, I switched flat-volumes to no in /etc/pulse/daemon.conf. I restarted PulseAudio with the new setting:\npulseaudio -k pulseaudio --start I started music in Chrome and sent myself an IM in Pidgin. No pops! An email came through and Thunderbird and a notification sound played. No pops there, either!\nGNOME 3 was a bit unhappy at my PulseAudio tinkering and the volume control disappeared from the menu at the top right. I logged out of my GNOME session and logged back in to find the volume control working again.\nPhoto Credit: Our Thrift Apt. via Compfight cc\n","date":"8 June 2015","permalink":"/p/pulseaudio-popping-with-multiple-sounds-in-fedora-22/","section":"Posts","summary":"My transition from Fedora 21 to 22 on the ThinkPad X1 Carbon was fairly uneventful even with over 2,400 packages involved in the upgrade.","title":"PulseAudio popping with multiple sounds in Fedora 22"},{"content":" I recently picked up a RB850GX2 from my favorite Mikrotik retailer, r0c-n0c. It\u0026rsquo;s a dual-core PowerPC board with five ethernet ports and some decent performance for the price.\nI still have the RB493G in a colocation and I usually connect my home and the colo via OpenVPN or IPSec. Networking is not one of my best skills and I\u0026rsquo;m always looking to learn more about it when I can. I decided to try out a GRE tunnel on top of IPSec this time around. 
Combining GRE and IPSec allows you to simplify connectivity between two network segments through an encrypted tunnel.\nThe Setup #The LAN in my colo and at home is fairly simple: a /24 of RFC1918 space behind a Mikrotik doing NAT. My goal was to get a tunnel up between both environments so that I could reach devices behind my colo firewall from home and vice versa. I do plenty of ssh back and forth along with backups from time to time.\nIn this example, here\u0026rsquo;s the current network configuration:\nHome: 192.168.50.0/24 on the LAN, 1.1.1.1 as the public IP Colo: 192.168.150.0/24 on the LAN, 2.2.2.2 as the public IP I want devices on 192.168.50.0/24 to talk to 192.168.150.0/24 and vice versa. Let\u0026rsquo;s get the GRE tunnel up first.\nGRE #Plain GRE tunnels aren\u0026rsquo;t encrypted, but I prefer to set them up first to test connectivity prior to adding IPSec into the mix. IPSec can be a challenge to configure the first time around.\nI\u0026rsquo;ll first create a GRE interface at home:\n/interface gre add !keepalive local-address=1.1.1.1 name=home-to-colo remote-address=2.2.2.2 We\u0026rsquo;ll do the same on the colo router:\n/interface gre add !keepalive local-address=2.2.2.2 name=colo-to-home remote-address=1.1.1.1 You can check to see if the GRE tunnel is running from either router:\n/interface gre print Look for the R in the flags column.\nIf you\u0026rsquo;ve made it this far, you now have a GRE tunnel configured but we can\u0026rsquo;t pass any traffic across it yet. We need to add some IP\u0026rsquo;s to both sides and configure some routes.\nIP\u0026rsquo;s and Routes #You have some freedom here to choose the IP addresses for both ends of your tunnel but don\u0026rsquo;t choose anything that interferes with your current LAN IP addresses. In my case, I\u0026rsquo;ll choose 10.10.10.1/30 and 10.10.10.2/30 for both ends of the tunnel.\nI\u0026rsquo;ll give the 10.10.10.2 address to the home firewall:\n/ip address add address=10.10.10.2/30 interface=home-to-colo network=10.10.10.0 And I\u0026rsquo;ll give the 10.10.10.1 address to the colo firewall:\n/ip address add address=10.10.10.1/30 interface=colo-to-home network=10.10.10.0 At this point, systems at home can ping 10.10.10.1 (the colo router\u0026rsquo;s GRE tunnel endpoint) and systems at the colo can ping 10.10.10.2 (the home router\u0026rsquo;s GRE tunnel endpoint). That\u0026rsquo;s great because we will use these IP\u0026rsquo;s to route our LAN traffic across the tunnel.\nWe need to tell the home router how to get traffic from its LAN over to the colo LAN and vice versa. We can do that with the tunnel endpoints we just configured.\nLet\u0026rsquo;s tell the home router to use the colo router\u0026rsquo;s GRE tunnel endpoint to reach the colo LAN:\n/ip route add distance=1 dst-address=192.168.150.0/24 gateway=home-to-colo And tell the colo router to use the home router\u0026rsquo;s GRE endpoint to reach the home LAN:\n/ip route add distance=1 dst-address=192.168.50.0/24 gateway=colo-to-home We don\u0026rsquo;t have to tell the router about the tunnel\u0026rsquo;s IP address since those routes are generated automatically when we added the IP addresses to each side of the GRE tunnel.\nIf you\u0026rsquo;ve made it this far, systems in your home LAN should be able to ping the colo LAN and vice versa. If not, go back and double-check your IP addresses on both sides of the tunnel and your routes.\nAdding IPSec #BEFORE YOU GO ANY FURTHER, ensure you have some sort of out-of-band access to both routers. 
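A quick way to prove that backup path really works before touching IPSec (a rough sketch: it assumes the IPv6 package is enabled and the router already has an address, and 2001:db8::1 is only a placeholder):\n/ipv6 address print\nssh -6 admin@2001:db8::1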
If you make a big mistake like I did (more on that later), you\u0026rsquo;re going to be glad you set up another way to reach your devices!\nWe have an GRE tunnel without encryption already and that\u0026rsquo;s allowing us to pass traffic. That\u0026rsquo;s fine, but it\u0026rsquo;s not terribly secure to send our packets in that tunnel across a hostile internet. IPSec will allow us to tell both routers that we want packets between the public IP addresses of both routers to be encrypted. The GRE tunnel will take care of actually delivering the packets, however. IPSec isn\u0026rsquo;t an interface and it can\u0026rsquo;t be a conduit for networking all by itself.\nHave you configured another way to access both routers yet? Seriously, stop now and do that. I mean it.\nIf you have native IPv6 access (not a IPv6 over IPv4 tunnel!) into each device, that can be a viable backup plan. Another option might be serial cables or a dedicated console connection. You\u0026rsquo;ll thank me later.\nConfiguring IPSec is done in three chunks:\nMake a proposal: both routers must agree on how to authenticate each other and encrypt traffic Configure a peer list: both routers need to know how to reach each other and have some shared secrets Set a policy: both routers need to agree on which packets must be encrypted We will start with the proposal. The defaults are good for both routers. Add this configuration on both devices:\n/ip ipsec proposal set [ find default=yes ] auth-algorithms=md5 enc-algorithms=aes-128-cbc,twofish Now our routers agree on what methods they\u0026rsquo;ll use to encrypt traffic. Feel free to adjust these algorithms later if needed. Let\u0026rsquo;s tell each router about its peer.\nAt home:\n/ip ipsec peer add address=2.2.2.2/32 nat-traversal=no secret=letshavefunwithipsec At the colo:\n/ip ipsec peer add address=1.1.1.1/32 nat-traversal=no secret=letshavefunwithipsec Both routers now know about each other and they both have the same shared secret (please use a better shared secret in production). All we have left is configuring a policy.\nAt this point, ensure you\u0026rsquo;re accessing both routers via an out-of-band method (native IPv6, console, serial, etc). YOU ARE ABOUT TO LOSE CONNECTIVITY TO YOUR REMOTE DEVICE.\nAt home, we set up a policy that says all traffic between the public addresses of both firewalls must be encrypted (GRE will carry the traffic for us). Ensure that the CIDR portion of the IP address for dst-address/src-address is present!\n/ip ipsec policy add dst-address=2.2.2.2/32 sa-dst-address=2.2.2.2 sa-src-address=1.1.1.1 src-address=1.1.1.1/32 tunnel=yes We will do something similar on the colo side. Again, ensure that the CIDR portion of the IP address for dst-address/src-address is present!\n/ip ipsec policy add dst-address=1.1.1.1/32 sa-dst-address=1.1.1.1 sa-src-address=2.2.2.2 src-address=2.2.2.2/32 tunnel=yes You should now be able to ping across your GRE tunnel but it\u0026rsquo;s encrypted this time! If you find that one of your devices is inaccessible, don\u0026rsquo;t panic. Disable the policy you just added (set disabled=yes number=[number of your policy]) and review your configuration.\nIn the policy step, we told both routers that if traffic moves between the src-address and dst-address, we want it encrypted. 
Also, the sa-src-address and sa-dst-address gives the router a hint to figure out the identity of the peer and what their shared secret is.\nChecking our work #You can check your work with something like this on the home router:\n[major@Home] \u0026gt; /ip ipsec remote-peers print 0 local-address=1.1.1.1 remote-address=2.2.2.2 state=established side=initiator established=7h17m10s If you have a line like that, your IPSec peers can communicate properly. To test the encryption, you have two options. One option is to put a device outside your firewall and dump traffic via a tap or hub.\nAnother option (albeit less accurate) is to use the profile tool built into RouterOS. Run the following:\n/tool profile You\u0026rsquo;ll see some output showing where the majority of your CPU is consumed. Now, transfer some large files between systems behind both routers. You can use iperf for this as well if you really want to stress out the network link. When you do that, you should see encrypting in the profile output as a very large consumer of the CPU. If you only see something like gre or ethernet as your top CPU consumers, you may have missed something on your IPSec policy and your traffic is likely not being encrypted. This isn\u0026rsquo;t true for all routers — it depends on your normal workloads.\nHow I made a huge mistake #When I was going through this process, I made it through the GRE portion without a hitch. Everything worked well. Once I added IPSec to the mix, I used the GRE tunnel endpoints (10.10.10.1 and 10.10.10.2) as my src-address and dst-address in my IPSec policy. Nothing was getting encrypted and I was getting really frustrated.\nI kept reading tutorials on various sites and came to realize that I didn\u0026rsquo;t need an encryption policy between the tunnel endpoints, I needed a policy between the actual public addresses of the routers. I wasn\u0026rsquo;t aware that the GRE tunnel would happily keep working between the two public IP addresses even with the IPSec policy in place between the IP addresses.\nFirst mistake: I didn\u0026rsquo;t access my colo router via an out-of-band path. Second mistake: I applied my IPSec policy on the home router first and was shocked that I lost connectivity to the colo router. That was a quick fix — I just disabled the IPSec policy on the home router and I could access the colo router again.\nJust after adjusting the IPSec policy on the colo router to use the public IP addresses, I noticed that connectivity dropped. At this point, I expected that — I set up a policy there but I hadn\u0026rsquo;t done it on the home router yet. I enabled the policy on the home router and then started pinging. Nothing.\nThen came the Pingdom and UptimeRobot alerts for my sites in the colo. Oh crap.\nOnce I was able to reach the colo router via IPv6 through some other VM\u0026rsquo;s, I realized what happened. I left the CIDR mask off the src-address and dst-address in the IPSec policy.\nGuess what RouterOS chose as a CIDR mask for me? /0. Ouch.\nI quickly adjusted those to be /32\u0026rsquo;s. Within seconds, everything was up again and the GRE tunnel began working. As the Pingdom alerts cleared and my heart rate returned to normal, I figured the best thing I should do is share my story so that others don\u0026rsquo;t make the same mistake. 
;)\n","date":"27 May 2015","permalink":"/p/adventures-with-gre-and-ipsec-on-mikrotik-routers/","section":"Posts","summary":"I recently picked up a RB850GX2 from my favorite Mikrotik retailer, r0c-n0c.","title":"Adventures with GRE and IPSec on Mikrotik routers"},{"content":"","date":null,"permalink":"/tags/ipsec/","section":"Tags","summary":"","title":"Ipsec"},{"content":"","date":null,"permalink":"/tags/gcc/","section":"Tags","summary":"","title":"Gcc"},{"content":"","date":null,"permalink":"/tags/xen/","section":"Tags","summary":"","title":"Xen"},{"content":"If you\u0026rsquo;re currently running a Xen hypervisor on a Fedora release before 22, stay put for now.\nThere\u0026rsquo;s a bug in Xen when you compile it with GCC 5 that will cause your system to get an error during bootup. In my case, I\u0026rsquo;m sometimes getting the crash shortly after the hypervisor to dom0 kernel handoff and sometimes it\u0026rsquo;s happening later in the boot process closer to when I\u0026rsquo;d expect a login screen to appear.\nHere are some helpful links to follow the progress of the fix:\nCrash logs from the kernel panic Bug 1219197 – Xen BUG at page_alloc.c:1738 [Red Hat Bugzilla] Bug 1908 – Xen BUG at page_alloc.c:1738 xen-devel mailing list thread Michael Young found that Xen 4.5.1-rc1 (which has code very similar to 4.5) will compile and boot if compiled with GCC 4.x in Fedora 21. It\u0026rsquo;s a decent workaround but it\u0026rsquo;s certainly not a long term fix.\nI\u0026rsquo;m still doing some additional testing and I\u0026rsquo;ll update this post as soon as there\u0026rsquo;s more information available.\n","date":"27 May 2015","permalink":"/p/xen-4-5-crashes-during-boot-on-fedora-22/","section":"Posts","summary":"If you\u0026rsquo;re currently running a Xen hypervisor on a Fedora release before 22, stay put for now.","title":"Xen 4.5 crashes during boot on Fedora 22"},{"content":" I really enjoy operating icanhazip.com and the other domains. It\u0026rsquo;s fun to run some really busy services and find ways to reduce resource consumption and the overall cost of hosting.\nMy brain has a knack for optimization and improving the site is quite fun for me. So much so that I\u0026rsquo;ve decided to host all of icanhazip.com out of my own pocket starting today.\nHowever, something seriously needs to change.\nA complaint came in yesterday from someone who noticed that their machines were making quite a few requests to icanhazip.com. It turns out there was a problem with malware and the complaint implicated my site as part of the problem. One of my nodes was taken down as a precaution while I furiously worked to refute the claims within the complaint. Although the site stayed up on other nodes, it was an annoyance for some and I received a few tweets and emails about it.\nLong story short, if you\u0026rsquo;re sending me or my ISP a complaint about icanhazip.com, there\u0026rsquo;s one thing you need to know: the problem is on your end, not mine. Either you have users making legitimate requests to my site or you have malware actively operating on your network.\nNo, it\u0026rsquo;s not time to panic.\nYou can actually use icanhazip.com as a tool to identify problems on your network.\nFor example, add rules to your intrusion detection systems (IDS) to detect requests to the site in environments where you don\u0026rsquo;t expect those requests to take place. 
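If your network runs Snort or a similar IDS, such a rule can be tiny. This is only a sketch (the SID, message, and variables are placeholders, and the syntax assumes Snort 2.x with HTTP inspection enabled), but it shows the idea:\nalert tcp $HOME_NET any -\u0026gt; $EXTERNAL_NET $HTTP_PORTS (msg:\u0026#34;Unexpected request to icanhazip.com\u0026#34;; content:\u0026#34;icanhazip.com\u0026#34;; http_header; classtype:policy-violation; sid:1000001; rev:1;)\nTune the source networks so that hosts which legitimately use the site do not trip the alert.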
Members of your support team might use the site regularly to test things but your Active Directory server shouldn\u0026rsquo;t start spontaneously talking to my site overnight. That\u0026rsquo;s a red flag and you can detect it easily.\nAlso, don\u0026rsquo;t report the site as malicious or hosting malware when it\u0026rsquo;s not. I\u0026rsquo;ve been accused of distributing malware and participating in attacks but then, after further investigation, it was discovered that I was only returning an IPv4 address to a valid request. That hardly warrants the blind accusations that I often receive.\nI\u0026rsquo;ve taken some steps to ensure that there\u0026rsquo;s a way to contact me with any questions or concerns you might have. For example:\nYou can email abuse, postmaster, and security at icanhazip.com anytime There\u0026rsquo;s a HTTP header with a link to the FAQ (which has been there for years) I monitor any tweets or blog posts that are written about the site As always, if you have questions or concerns, please reach out to me and read the FAQ. Thanks to everyone for all the support!\nPhoto Credit: Amir Kamran via Compfight cc\n","date":"20 May 2015","permalink":"/p/you-have-a-problem-and-icanhazip-com-isnt-one-of-them/","section":"Posts","summary":"I really enjoy operating icanhazip.","title":"You have a problem and icanhazip.com isn’t one of them"},{"content":"When you upgrade packages on Red Hat, CentOS and Fedora systems, the newer package replaces the older package. That means that files managed by RPM from the old package are removed and replaced with files from the newer package.\nThere\u0026rsquo;s one exception here: kernel packages.\nUpgrading a kernel package with yum and dnf leaves the older kernel package on the system just in case you need it again. This is handy if the new kernel introduces a bug on your system or if you need to work through a compile of a custom kernel module.\nHowever, yum and dnf will clean up older kernels once you have more than three. The oldest kernel will be removed from the system and the newest three will remain. In some situations, you may want more than three to stay on your system.\nTo change the setting, simply open up /etc/yum.conf or /etc/dnf/dnf.conf in your favorite text editor. Look for this line:\ninstallonly_limit=3 To keep five kernels, simply replace the 3 with a 5. If you\u0026rsquo;d like to keep every old kernel on the system forever, just change the 3 to a 0. A zero means you never want \u0026ldquo;installonly\u0026rdquo; packages (like kernels) to ever be removed from your system.\n","date":"18 May 2015","permalink":"/p/keep-old-kernels-with-yum-and-dnf/","section":"Posts","summary":"When you upgrade packages on Red Hat, CentOS and Fedora systems, the newer package replaces the older package.","title":"Keep old kernels with yum and dnf"},{"content":" With Fedora 22\u0026rsquo;s release date quickly approaching, it\u0026rsquo;s time to familiarize yourself with dnf. It\u0026rsquo;s especially important since clean installs of Fedora 22 won\u0026rsquo;t have yum.\nAlmost all of the command line arguments are the same but automated updates are a little different. If you\u0026rsquo;re used to yum-updatesd, then you\u0026rsquo;ll want to look into dnf-automatic.\nInstallation #Getting the python code and systemd unit files for automated dnf updates is a quick process:\ndnf -y install dnf-automatic Configuration #There\u0026rsquo;s only one configuration file to review and most of the defaults are quite sensible. 
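For reference, the settings that matter here look roughly like this sketch (section and option names as dnf-automatic shipped them at the time; exact defaults may differ):\n[emitters]\nemit_via = email\n[email]\nemail_to = root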
Open up /etc/dnf/automatic.conf with your favorite text editor and review the available options. The only adjustment I made was to change the emit_via option to email as opposed to the stdio.\nYou may want to change the email_to option if you want to redirect email elsewhere. In my case, I already have an email forward for the root user.\ndnf Automation #If you look at the contents of the dnf-automatic package, you\u0026rsquo;ll find some python code, configuration files, and two important systemd files:\nFor Fedora 25 and earlier:\n# rpm -ql dnf-automatic | grep systemd /usr/lib/systemd/system/dnf-automatic.service /usr/lib/systemd/system/dnf-automatic.timer For Fedora 26 and later:\n# rpm -ql dnf-automatic | grep systemd /usr/lib/systemd/system/dnf-automatic-download.service /usr/lib/systemd/system/dnf-automatic-download.timer /usr/lib/systemd/system/dnf-automatic-install.service /usr/lib/systemd/system/dnf-automatic-install.timer /usr/lib/systemd/system/dnf-automatic-notifyonly.service /usr/lib/systemd/system/dnf-automatic-notifyonly.timer These systemd files are what makes dnf-automatic run. The service file contains the instructions so that systemd knows what to run. The timer file contains the frequency of the update checks (defaults to one day). We need to enable the timer and then start it.\nFor Fedora 25 and earlier:\nsystemctl enable dnf-automatic.timer For Fedora 26 and later:\nsystemctl enable dnf-automatic-install.timer Check your work:\n# systemctl list-timers *dnf* NEXT LEFT LAST PASSED UNIT ACTIVATES Tue 2015-05-12 19:57:30 CDT 23h left Mon 2015-05-11 19:57:29 CDT 14min ago dnf-automatic.timer dnf-automatic.service The output here shows that the dnf-automatic job last ran at 19:57 on May 11th and it\u0026rsquo;s set to run at the same time tomorrow, May 12th. Be sure to disable and stop your yum-updatesd service if you still have it running on your system from a previous version of Fedora.\nPhoto Credit: Outer Rim Emperor via Compfight cc\n","date":"12 May 2015","permalink":"/p/automatic-package-updates-with-dnf/","section":"Posts","summary":"With Fedora 22\u0026rsquo;s release date quickly approaching, it\u0026rsquo;s time to familiarize yourself with dnf.","title":"Automatic package updates with dnf"},{"content":"With the last few weeks, I noticed that Tweetdeck\u0026rsquo;s notifications weren\u0026rsquo;t showing up in Chrome any longer. I double-checked all of the Tweetdeck settings and notifications were indeed enabled. However, I found that Tweetdeck wasn\u0026rsquo;t allowed to send notifications when I checked in my Chrome settings.\nCheck your settings #To check these for yourself, hop into Chrome\u0026rsquo;s content settings. Scroll down to Notifications and click Manage Exceptions. In my case, https://tweetdeck.twitter.com was missing from the list entirely.\nFrom here, you have two options: enable notifications for all sites (not ideal) or add an exception.\nThe big hammer approach #To enable notifications for all sites (good for testing, not ideal in the long term), click Allow all sites to show notifications in the Notifications session.\nThe right way #To enable notifications just for Tweetdeck, you may be able to add a new exception right there in the Chrome settings interface. Many users are reporting that newer versions of Chrome don\u0026rsquo;t allow for that. 
In that case, your fix involves editing your Chrome configuration on the command line.\nChrome preferences are in different locations depending on your OS:\nWindows: C:\\Users\u0026lt;username\u0026gt;\\AppData\\Local\\Google\\Chrome\\User Data\\ Mac: ~/Library/Application Support/Google/Chrome/ Linux: ~/.config/google-chrome/ BEFORE EDITING ANYTHING, be sure you\u0026rsquo;ve quit Chrome and ensured that nothing Chrome-related is running in the background. Seriously. Don\u0026rsquo;t skip this step.\nI\u0026rsquo;m on Linux, so I\u0026rsquo;ll open up .config/google-chrome/Default/Preferences in vim and make some edits. You\u0026rsquo;re looking for some lines that look like this:\n\u0026#34;https://tweetdeck.twitter.com:443,https://tweetdeck.twitter.com:443\u0026#34;: { \u0026#34;last_used\u0026#34;: { \u0026#34;notifications\u0026#34;: 1431092689.014171 } }, Replace those lines with this:\n\u0026#34;https://tweetdeck.twitter.com,*\u0026#34;: { \u0026#34;last_used\u0026#34;: { \u0026#34;notifications\u0026#34;: 1414673538.301078 }, \u0026#34;notifications\u0026#34;: 1 }, \u0026#34;https://tweetdeck.twitter.com:443,https://tweetdeck.twitter.com:443\u0026#34;: { \u0026#34;last_used\u0026#34;: { \u0026#34;notifications\u0026#34;: 1431094902.014302 } }, Save the file and start up Chrome once more. Head on over to Tweetdeck and you should now see the familiar Chrome toast notifications for Twitter updates!\n","date":"8 May 2015","permalink":"/p/tweetdecks-chrome-notifications-stopped-working/","section":"Posts","summary":"With the last few weeks, I noticed that Tweetdeck\u0026rsquo;s notifications weren\u0026rsquo;t showing up in Chrome any longer.","title":"Tweetdeck’s Chrome notifications stopped working"},{"content":"Mikrotik firewalls have been good to me over the years and they work well for multiple purposes. Creating an OpenVPN server on the device can allow you to connect into your local network when you\u0026rsquo;re on the road or protect your traffic when you\u0026rsquo;re using untrusted networks.\nAlthough Miktrotik\u0026rsquo;s implementation isn\u0026rsquo;t terribly robust (TCP only, client cert auth is wonky), it works quite well for most users. I\u0026rsquo;ll walk you through the process from importing certificates through testing it out with a client.\nImport certificates #Creating a CA and signing a certificate and key is outside the scope of this post and there are plenty of sites that cover the basics of creating a self-signed certificate. You could also create a certificate signing request (CSR) on the Mikrotik and have that signed by a trusted CA. In my case, I have a simple CA already and I signed a certificate for myself.\nUpload your certificate, key, and CA certificate (if applicable) to the Mikrotik. After that, import those files into the Mikrotik\u0026rsquo;s certificate storage:\nimport file-name=firewall.example.com.crt passphrase: certificates-imported: 1 private-keys-imported: 0 files-imported: 1 decryption-failures: 0 keys-with-no-certificate: 0 [major@home] /certificate\u0026gt; import file-name=firewall.example.com.pem passphrase: certificates-imported: 0 private-keys-imported: 1 files-imported: 1 decryption-failures: 0 keys-with-no-certificate: 0 [major@home] /certificate\u0026gt; import file-name=My_Personal_CA.crt passphrase: certificates-imported: 1 private-keys-imported: 0 files-imported: 1 decryption-failures: 0 keys-with-no-certificate: 0 Always import the certificate first, then the key. 
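If an import ever looks wrong, it can help to confirm on your workstation that the certificate and key really belong together before uploading them again. A quick check with standard openssl commands (assuming an RSA key; the two digests should match):\nopenssl x509 -noout -modulus -in firewall.example.com.crt | openssl md5\nopenssl rsa -noout -modulus -in firewall.example.com.pem | openssl md5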
You should be able to do a /certificate print and see the entries for the files you imported. In the print output, look at the flags column and verify that the line with your certificate has a T and a K. If the K is missing, import the key one more time. If that still doesn\u0026rsquo;t work, ensure that your certificate and key match.\nThe default naming conventions used for certificates is a little confusing. You can rename a certificate by running set name=firewall.example.com number=0 (run a /certificate print to get the right number).\nOpenVPN server configuration #We\u0026rsquo;re now ready to do the first steps of the OpenVPN setup on the Mikrotik. You can do this configuration via the Winbox GUI or via the web interface, but I prefer to use the command line. Let\u0026rsquo;s start:\n/interface ovpn-server server set certificate=firewall.example.com cipher=blowfish128,aes128,aes192,aes256 default-profile=default-encryption enabled=yes This tells the device that we want to use the certificate we imported earlier along with all of the available ciphers. We\u0026rsquo;re also selecting the default-encryption profile that we will configure in more detail later. Feel free to adjust your cipher list later on but I recommend allowing all of them until you\u0026rsquo;re sure that the VPN configuration works.\nWe\u0026rsquo;re now ready to add an OpenVPN interface. In Mikrotik terms, you can have multiple OpenVPN server profiles running under the same server. They will all share the same certificate, but each may have different authentication methods or network configurations. Let\u0026rsquo;s define our first profile:\n/interface ovpn-server add name=openvpn-inbound user=openvpn There\u0026rsquo;s now a profile with a username of openvpn. That will be the username that we use to connect to this VPN server.\nSecrets #The router needs a way to identify the user we just created. We can define a secret easily:\n/ppp secret add name=openvpn password=vpnsarefun profile=default-encryption We\u0026rsquo;ve set a password secret and defined a connection profile that corresponds to the secret.\nProfiles #We\u0026rsquo;ve been referring to this default-encryption profile several times and now it\u0026rsquo;s time to configure it. This is one of the things I prefer to configure using the Winbox GUI or the web interface since there are plenty of options to review.\nThe most important part is how you connect the VPN connection into your internal network. You have a few options here. You can configure an IP address that will always be assigned to this connection no matter what. There are upsides and downsides with that choice. You\u0026rsquo;ll always get the same IP on the inside network but you won\u0026rsquo;t be able to connect to the same profile with multiple clients.\nI prefer to set the bridge option to my internal network bridge (which I call lanbridge). That allows me to use my existing bridge configuration and filtering rules on my OpenVPN tunnels. My configuration looks something like this:\n/ppp profile set 1 bridge=lanbridge local-address=default-dhcp only-one=no remote-address=default-dhcp I\u0026rsquo;ve told the router that I want VPN connections to be hooked up to my main bridge and it should get local and remote IP addresses from my default DHCP server. In addition, I\u0026rsquo;ve also allowed more than one simultaneous connection to this profile.\nThe other defaults are fairly decent to get started. 
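To see exactly what the profile ended up with, printing it is a quick sanity check (a standard RouterOS command):\n/ppp profile print detail\nLook for the bridge, local-address, and remote-address values set above.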
You can go back and adjust them later if needed.\nOpenVPN client #Every client has things configured a bit differently but I\u0026rsquo;ll be working with a basic OpenVPN configuration file here that should work on most systems (or at least show you what to click in your client GUI).\nHere\u0026rsquo;s my OpenVPN client configuration file:\nremote firewall.example.com 1194 tcp-client persist-key auth-user-pass /etc/openvpn/firewall-creds.txt tls-client pull ca /home/major/.cert/ca.crt redirect-gateway def1 dev tun persist-tun cert /home/major/.cert/cert.crt nobind key /home/major/.cert/key.key In my configuration, I refer to a /etc/openvpn/firewall-creds.txt file to hold my credentials. You can store the file anywhere (or this might be configurable in a GUI) but it should look like this:\nusername password That\u0026rsquo;s it - just a two line file with the username, a line feed, and a password.\nAt this point, you should be able to test your client.\nTroubleshooting #Firewall - Ensure that you have a firewall rule set to allow traffic into your OpenVPN port. This could be something as simple as:\n/ip firewall filter add chain=input dst-port=1194 protocol=tcp Certificates - Check that your certificate and key were imported properly and that your client is configured to trust the self-signed certificate or the CA you used.\nCompression - For some reason, I have lots of problems if compression is enabled on the client. They range from connection failures to being unable to pass traffic through the tunnel after getting connected. Be sure that anything that mentions compression or LZO is disabled.\nSecurity #There are some security improvements that can be made after configuring everything:\nLimit access to your OpenVPN port in your firewall to certain source IP\u0026rsquo;s Configure better passwords for your OpenVPN secret Consider making a separate bridge or network segment for VPN users when they connect and apply filters to it Adjust the list of ciphers in the default-encryption profile so that only the strongest can be used (may cause some clients to be unable to connect) ","date":"1 May 2015","permalink":"/p/howto-mikrotik-openvpn-server/","section":"Posts","summary":"\u003cp\u003e\u003ca href=\"/wp-content/uploads/2015/05/rb850_picture.jpg\"\u003e\u003cimg src=\"/wp-content/uploads/2015/05/rb850_picture-300x300.jpg\" alt=\"RB850Gx2 mikrotik\" width=\"300\" height=\"300\" class=\"alignright size-medium wp-image-5543\" srcset=\"/wp-content/uploads/2015/05/rb850_picture-300x300.jpg 300w, /wp-content/uploads/2015/05/rb850_picture-150x150.jpg 150w, /wp-content/uploads/2015/05/rb850_picture.jpg 800w\" sizes=\"(max-width: 300px) 100vw, 300px\" /\u003e\u003c/a\u003eMikrotik firewalls have been good to me over the years and they work well for multiple purposes. Creating an OpenVPN server on the device can allow you to connect into your local network when you\u0026rsquo;re on the road or protect your traffic when you\u0026rsquo;re using untrusted networks.\u003c/p\u003e\n\u003cp\u003eAlthough Miktrotik\u0026rsquo;s implementation isn\u0026rsquo;t terribly robust (TCP only, client cert auth is wonky), it works quite well for most users. 
I\u0026rsquo;ll walk you through the process from importing certificates through testing it out with a client.\u003c/p\u003e","title":"HOWTO: Mikrotik OpenVPN server"},{"content":"","date":null,"permalink":"/tags/openvpn/","section":"Tags","summary":"","title":"Openvpn"},{"content":"This post originally appeared on the Rackspace Blog and I\u0026rsquo;ve posted it here for readers of this blog. Feel free to send over any comments you have!\nMost IT professionals would agree that 2014 was a long year. Heartbleed, Shellshock, Sandworm and POODLE were just a subset of the vulnerabilities that caused many of us to stay up late and reach for more coffee. As these vulnerabilities became public, I found myself fielding questions from non-technical family members after they watched the CBS Evening News and wondered what was happening. Security is now part of the popular discussion.\nAaron Hackney and I delivered a presentation at Rackspace::Solve Atlanta called \u0026ldquo;The New Normal\u0026rdquo; where we armed the audience with security strategies that channel spending to the most effective security improvements. Our approach at Rackspace is simple and balanced: use common sense prevention strategies, invest heavily in detection, and be sure you\u0026rsquo;re ready to respond when (not if) disaster strikes. We try to help companies prioritize by focusing on a few key areas. Know when there\u0026rsquo;s a breach. Know what they touched. Know who\u0026rsquo;s responsible. Below, I\u0026rsquo;ve included five ways to put this approach into practice.\nFirst, common sense prevention includes using industry best practices like system and network hardening standards. Almost every device provides some kind of logging but we rarely review the logs and we often don\u0026rsquo;t know which types of events should trigger suspicion. Monitoring logs, securely configuring devices, and segmenting networks will lead to a great prevention strategy without significant costs (in time or money).\nSecond, many businesses will overspend on more focused prevention strategies before they know what they\u0026rsquo;re up against. This is where detection becomes key. Intrusion detection systems, log management systems, and NetFlow analysis can give you an idea of where an intruder might be within your systems and what they may have accessed. Combining these systems allows you to thwart the more advanced attackers that might use encrypted tunnels or move data via unusual protocols (like exfiltration via DNS or ICMP).\nThird, when an incident does happen, everyone needs to know their place: including employees, partners, and customers. Every business needs a way to communicate incident severity without talking about the incident in great detail. If you\u0026rsquo;ve seen the movie WarGames, you probably remember them changing DEFCON levels at NORAD. Everyone knew their place and their duties whenever the DECFON level changed even if they didn\u0026rsquo;t know the specific nature of the incident. Think about how you will communicate when you really can\u0026rsquo;t - this is critical.\nFourth, the data gathered by the layers of detection combined with the root cause analysis (RCA) from the incident response will show you where to spend on additional prevention. RCA will also give you the metrics you need for conversation with executives around security changes.\nOne last tip - when you think about changes, opt for a larger number of smaller changes. 
The implementation will be less expensive and the probability of employee and customer backlash is greatly reduced.\nFor more tips on making changes within a company, I highly recommend reading Switch: How to Change When Change Is Hard.\nWe\u0026rsquo;d like to thank all of the Solve attendees who joined us for our talk. The questions after the talk were great and they led to plenty of hallway conversations afterwards. We hope to see you at a future Solve event!\nThe New Normal: Managing the constant stream of new vulnerabilities from Major Hayden ","date":"15 April 2015","permalink":"/p/rackspacesolve-atlanta-session-recap-the-new-normal/","section":"Posts","summary":"\u003cp\u003e\u003cem\u003eThis post originally appeared on the \u003ca href=\"http://www.rackspace.com/blog/rackspacesolve-atlanta-session-recap-the-new-normal/\" target=\"_blank\" rel=\"noreferrer\"\u003eRackspace Blog\u003c/a\u003e and I\u0026rsquo;ve posted it here for readers of this blog. Feel free to send over any comments you have!\u003c/em\u003e\u003c/p\u003e\n\u003chr\u003e\n\u003cp\u003e\u003ca href=\"/wp-content/uploads/2015/04/solve-logo-1.png\"\u003e\u003cimg src=\"/wp-content/uploads/2015/04/solve-logo-1-300x300.png\" alt=\"solve-logo-1\" width=\"300\" height=\"300\" class=\"alignright size-medium wp-image-5519\" srcset=\"/wp-content/uploads/2015/04/solve-logo-1-300x300.png 300w, /wp-content/uploads/2015/04/solve-logo-1-150x150.png 150w, /wp-content/uploads/2015/04/solve-logo-1.png 640w\" sizes=\"(max-width: 300px) 100vw, 300px\" /\u003e\u003c/a\u003eMost IT professionals would agree that 2014 was a long year. Heartbleed, Shellshock, Sandworm and POODLE were just a subset of the vulnerabilities that caused many of us to stay up late and reach for more coffee. As these vulnerabilities became public, I found myself fielding questions from non-technical family members after they watched the CBS Evening News and wondered what was happening. Security is now part of the popular discussion.\u003c/p\u003e\n\u003cp\u003eAaron Hackney and I delivered a presentation at Rackspace::Solve Atlanta called \u0026ldquo;The New Normal\u0026rdquo; where we armed the audience with security strategies that channel spending to the most effective security improvements. Our approach at Rackspace is simple and balanced: use common sense prevention strategies, invest heavily in detection, and be sure you\u0026rsquo;re ready to respond when (not if) disaster strikes. We try to help companies prioritize by focusing on a few key areas. Know when there\u0026rsquo;s a breach. Know what they touched. Know who\u0026rsquo;s responsible. Below, I\u0026rsquo;ve included five ways to put this approach into practice.\u003c/p\u003e","title":"Rackspace::Solve Atlanta Session Recap: “The New Normal”"},{"content":"Libvirt is a handy way to manage containers and virtual machines on various systems. On most distributions, you can only access the libvirt daemon via the root user by default. I\u0026rsquo;d rather use a regular non-root user to access libvirt and limit that access via groups.\nModern Linux distributions use Polkit to limit access to the libvirt daemon. You can add an extra rule to the existing set of Polkit rules to allow regular users to access libvirtd. 
Here\u0026rsquo;s an example rule (in Javascript) from the ArchWiki:\n/* Allow users in kvm group to manage the libvirt daemon without authentication */ polkit.addRule(function(action, subject) { if (action.id == \u0026#34;org.libvirt.unix.manage\u0026#34; \u0026amp;\u0026amp; subject.isInGroup(\u0026#34;wheel\u0026#34;)) { return polkit.Result.YES; } }); As shown on the ArchWiki, I saved this file as /etc/polkit-1/rules.d/49-org.libvirt.unix.manager.rules. I\u0026rsquo;m using the wheel group to govern access to the libvirt daemon but you could use any group you choose. Just update the subject.isInGroup line in the rules file. You shouldn\u0026rsquo;t have to restart any daemons after adding the new rule file.\nI\u0026rsquo;m now able to run virsh as my regular user:\n[major@host ~]$ id uid=1000(major) gid=1000(major) groups=1000(major),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 [major@host ~]$ virsh list --all Id Name State ---------------------------------------------------- ","date":"11 April 2015","permalink":"/p/run-virsh-and-access-libvirt-as-a-regular-user/","section":"Posts","summary":"\u003cp\u003e\u003ca href=\"/wp-content/uploads/2015/04/libvirtLogo.png\"\u003e\u003cimg src=\"/wp-content/uploads/2015/04/libvirtLogo-300x241.png\" alt=\"libvirt logo\" width=\"300\" height=\"241\" class=\"alignright size-medium wp-image-5474\" srcset=\"/wp-content/uploads/2015/04/libvirtLogo-300x241.png 300w, /wp-content/uploads/2015/04/libvirtLogo.png 344w\" sizes=\"(max-width: 300px) 100vw, 300px\" /\u003e\u003c/a\u003e\u003ca href=\"http://libvirt.org/\" target=\"_blank\" rel=\"noreferrer\"\u003eLibvirt\u003c/a\u003e is a handy way to manage containers and virtual machines on various systems. On most distributions, you can only access the libvirt daemon via the root user by default. I\u0026rsquo;d rather use a regular non-root user to access libvirt and limit that access via groups.\u003c/p\u003e","title":"Run virsh and access libvirt as a regular user"},{"content":" After a boatload of challenges with what I thought would be my favorite Linux laptop, the Dell XPS 13 9343, I decided to take the plunge on a new Lenovo X1 Carbon (3rd gen). My late-2013 MacBook Pro Retina (MacbookPro11,1) had plenty of quirks when running Linux and I was eager to find a better platform.\nDisplay \u0026amp; Screen #I opted for the model with the i5-5300U, 8GB RAM, 256GB SSD, and the 2560×1440 display. The high resolution display comes in two flavors: touch (glossy) and non-touch (matte). I went with the matte and I\u0026rsquo;ve been very pleased with it so far. It comes up a bit short on pixels when you compare it with the XPS 13\u0026rsquo;s 3200×1800 display but it\u0026rsquo;s still very good. I run GNOME 3 with HIDPI disabled and having a few less pixels makes it much easier to read while still being very detailed.\nThe display is plenty bright and also very readable when set to very low brightness. Reducing the brightness also extends the battery life by quite a bit (more on that later). Gaming performance isn\u0026rsquo;t good but you wouldn\u0026rsquo;t want this laptop as a gaming rig, anyway.\nStorage #You can get the PCI-e storage option with 512GB but the high price tag hurts. The 256GB m2 SATA drive in my X1 is plenty fast. 
The drive in my laptop is a Samsung and it\u0026rsquo;s a big improvement over some of the Sandisk drives I\u0026rsquo;ve had in other Lenovo laptops.\nNetwork #The wireless card is an Intel 7265 and is supported out of the box with the iwlwifi module in the upstream kernel. It also provides Bluetooth and it works like a charm. I\u0026rsquo;ve paired up with many devices easily and transferred data as I\u0026rsquo;d expect.\nThere\u0026rsquo;s no full size ethernet port on this laptop (obviously). However, you can use the Lenovo proprietary ethernet dongle provided with the laptop and use the built-in Intel I218-LM ethernet card. It uses the e1000 driver and works out of the box.\nInput devices #The island-style keyboard takes a little getting used to when you\u0026rsquo;re coming from the chiclets on the MacBook Pro. It feels great and the key travel is quite nice when compared with other laptops. The Dell XPS 13\u0026rsquo;s key travel is poor in comparison.\nThe touchpad at the front of the laptop works quite well and the little trough right in front of the bottom of the pad is handy for click and drag gestures. The synaptics driver for X works right out of the box and libinput works, too.\nThe trackpoint (also called “keyboard nipple”) is fine but I can\u0026rsquo;t use it worth a darn. I\u0026rsquo;m downright horrible at it. That\u0026rsquo;s not Lenovo\u0026rsquo;s fault — my brain is probably dysfunctional. The trackpoint buttons (below the space bar) are hooked up to the touchpad and this has caused some problems. There\u0026rsquo;s a fix to get the left and middle buttons working in Linux 4.0 and you\u0026rsquo;ll find that patch backported in some other distributions, like Arch and Fedora. I don\u0026rsquo;t use those buttons much but I could see how some people might want to do some two-handed click and drag gestures with them.\nAll of the keys on the keyboard work as expected, but you\u0026rsquo;ll need to load up the thinkpad_acpi module to get the brightness buttons working. In my case, I had to force the module to load since the module didn\u0026rsquo;t recognize my embedded controller:\nmodprobe thinkpad_acpi force_load=1 Another nice benefit of the module is that you can control some of the LED\u0026rsquo;s on the laptop programmatically. For example, you could blink the power button to signify your own custom alerts. You could also disable it entirely.\nBattery life\nBroadwell was supposed to bring some good power benefits and it\u0026rsquo;s obvious that the X1 Carbon benefits from that CPU. I\u0026rsquo;ve been off battery for about two hours while writing this post, handling email and updating some packages. GNOME says there is 84% of my battery left and it\u0026rsquo;s estimating about 7 hours and 45 minutes remaining. I\u0026rsquo;ve yet to see this laptop actually empty out entirely. I\u0026rsquo;ve gone for 10 hour stretches with it and it still has one or two hours left.\nI\u0026rsquo;m not using any powertop tweaks, but I did install tlp and I\u0026rsquo;m using it on startup. Some folks have tweaked a few additional things from powertop and they\u0026rsquo;ve messed with the i915 module\u0026rsquo;s refresh rate. 
That might give you another 5-10% on the battery but I\u0026rsquo;m already very pleased with my current battery life.\nLinux compatibility\nThere are two main issues:\nTrackpoint left/middle buttons don\u0026rsquo;t work (fixed in 4.0 and backported in many distros) Brightness and display switch keys don\u0026rsquo;t work (load the thinkpad_acpi module for that) Considering that the fix for the first issue is widely available in most distributions and the second one is only a modprobe away, I\u0026rsquo;d say this laptop is pretty darned Linux compatible. I\u0026rsquo;m currently running Fedora 21 without any problems.\nWrap up #Thanks for reading this far! Let me know if I\u0026rsquo;ve missed anything and I\u0026rsquo;ll be glad to update the post.\n","date":"30 March 2015","permalink":"/p/review-lenovo-x1-carbon-3rd-generation-and-linux/","section":"Posts","summary":"\u003cp\u003e\n\n\n\n\n\n\n \n \n\u003cfigure\u003e\u003cimg src=\"https://major.io/wp-content/uploads/2015/03/ThinkPad-Carbon-X1.jpg\" alt=\"1\" class=\"mx-auto my-0 rounded-md\" /\u003e\n\u003c/figure\u003e\n\u003c/p\u003e\n\u003cp\u003eAfter a \u003ca href=\"/2015/02/03/linux-support-dell-xps-13-9343-2015-model/\"\u003eboatload of challenges\u003c/a\u003e with what I thought would be my favorite Linux laptop, the \u003ca href=\"http://www.dell.com/us/p/xps-13-9343-laptop/pd\" target=\"_blank\" rel=\"noreferrer\"\u003eDell XPS 13 9343\u003c/a\u003e, I decided to take the plunge on a new \u003ca href=\"http://shop.lenovo.com/us/en/laptops/thinkpad/x-series/x1-carbon/\" target=\"_blank\" rel=\"noreferrer\"\u003eLenovo X1 Carbon (3rd gen)\u003c/a\u003e. My late-2013 MacBook Pro Retina (MacbookPro11,1) had plenty of quirks when running Linux and I was eager to find a better platform.\u003c/p\u003e","title":"Review: Lenovo X1 Carbon 3rd generation and Linux"},{"content":"There are some situations where you want to do the opposite of creating a wireless hotspot and you want to share a wireless connection to an ethernet connection. For example, if you\u0026rsquo;re at a hotel that offers only WiFi internet access, you could share that connection to an ethernet switch and plug in more devices. Also, you could get online with your wireless connection and create a small NAT network to test a network device without mangling your home network.\nDoing this in older versions of GNOME and NetworkManager was fairly easy. Newer versions can be a bit more challenging. To get started, I generally like to name my ethernet connections with something I can remember. In this example, I have a USB ethernet adapter that I want to use for sharing a wireless connection. Opening the Network panel in GNOME 3 gives me this:\nClick the cog wheel at the bottom right and then choose the Identity tab on the next window. Use a name for the interface that is easy to remember. I chose Home USB Ethernet for mine:\nPress Apply and then go to a terminal. Type nm-connection-editor and you should get a window like this:\nWe can add a shared network connection by pressing the Add button. Do the following afterwards:\nChoose Ethernet from the list and press Create… click IPv4 Settings Choose Shared to other computers in the Method drop-down menu Enter Share via ethernet as the Connection name at the top (or choose a name you prefer) When that\u0026rsquo;s all done, you can close the Network Connections menu we opened via the terminal. Now open the Network control panel once more. 
It should have two profiles for your ethernet connection now (mine is a USB ethernet device):\nIf it\u0026rsquo;s not already selected, just click on the Share via ethernet text. NetworkManager will automatically configure NAT, DHCP and firewall rules for you. When you\u0026rsquo;re ready to go back to normal ethernet operation and you want to stop sharing, simply click on the other profile (mine is called Home USB Ethernet). NetworkManager will put the ethernet device back into the original way you had it configured (default is DHCP with automatic IPv6 via SLAAC).\n","date":"30 March 2015","permalink":"/p/share-a-wireless-connection-via-ethernet-in-gnome-3-14/","section":"Posts","summary":"\u003cp\u003eThere are some situations where you want to do the opposite of creating a wireless hotspot and you want to share a wireless connection to an ethernet connection. For example, if you\u0026rsquo;re at a hotel that offers only WiFi internet access, you could share that connection to an ethernet switch and plug in more devices. Also, you could get online with your wireless connection and create a small NAT network to test a network device without mangling your home network.\u003c/p\u003e","title":"Share a wireless connection via ethernet in GNOME 3.14"},{"content":"There are plenty of guides out there for making ethernet bridges in Linux to support virtual machines using built-in network scripts or NetworkManager. I decided to try my hand with creating a bridge using only systemd-networkd and it was surprisingly easy.\nFirst off, you\u0026rsquo;ll need a version of systemd with networkd support. Fedora 20 and 21 will work just fine. RHEL/CentOS 7 and Arch Linux should also work. Much of the networkd support has been in systemd for quite a while, but if you\u0026rsquo;re looking for fancier network settings, like bonding, you\u0026rsquo;ll want at least systemd 216.\nGetting our daemons in order #Before we get started, ensure that systemd-networkd will run on a reboot and NetworkManager is disabled. We also need to make a config file director for systemd-networkd if it doesn\u0026rsquo;t exist already. In addition, let\u0026rsquo;s enable the caching resolver and make a symlink to systemd\u0026rsquo;s resolv.conf:\nsystemctl enable systemd-networkd systemctl disable NetworkManager systemctl enable systemd-resolved ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf mkdir /etc/systemd/network Configure the physical network adapter #In my case, the network adapter connected to my external network is enp4s0 but yours will vary. Run ip addr to get a list of your network cards. Let\u0026rsquo;s create /etc/systemd/network/uplink.network and put the following in it:\n[Match] Name=enp4s0 [Network] Bridge=br0 I\u0026rsquo;m telling systemd to look for a device called enp4s0 and then add it to a bridge called br0 that we haven\u0026rsquo;t configured yet. Be sure to change enp4s0 to match your ethernet card.\nMake the bridge #We need to tell systemd about our new bridge network device and we also need to specify the IP configuration for it. We start by creating /etc/systemd/network/br0.netdev to specify the device:\n[NetDev] Name=br0 Kind=bridge This file is fairly self-explanatory. We\u0026rsquo;re telling systemd that we want a device called br0 that functions as an ethernet bridge. 
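If you need to tune bridge behavior, newer systemd releases also accept a [Bridge] section in this same br0.netdev file. Treat this as an optional sketch rather than something the bridge requires, and check networkd.netdev(5) for the options your systemd version actually supports:\n[Bridge]\nSTP=false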
Now create /etc/systemd/network/br0.network to specify the IP configuration for the br0 interface:\n[Match] Name=br0 [Network] DNS=192.168.250.1 Address=192.168.250.33/24 Gateway=192.168.250.1 This file tells systemd that we want to apply a simple static network configuration to br0 with a single IPv4 address. If you want to add additional DNS servers or IPv4/IPv6 addresses, just add more DNS= and Address lines right below the ones you see above. Yes, it\u0026rsquo;s just that easy.\nLet\u0026rsquo;s do this #Some folks are brave enough to stop NetworkManager and start all of the systemd services here but I prefer to reboot so that everything comes up cleanly. That will also allow you to verify that future reboots will cause the server to come back online with the right configuration. After the reboot, run networkctl and you\u0026rsquo;ll get something like this (with color):\nHere\u0026rsquo;s what\u0026rsquo;s in the screenshot:\nIDX LINK TYPE OPERATIONAL SETUP 1 lo loopback carrier unmanaged 2 enp2s0 ether off unmanaged 3 enp3s0 ether off unmanaged 4 enp4s0 ether degraded configured 5 enp5s0 ether off unmanaged 6 br0 ether routable configured 7 virbr0 ether no-carrier unmanaged 7 links listed. My ethernet card has four ports and only enp4s0 is in use. It has a degraded status because there is no IP address assigned to enp4s0. You can ignore that for now but it would be nice to see this made more clear in a future systemd release.\nLook at br0 and you\u0026rsquo;ll notice that it\u0026rsquo;s configured and routable. That\u0026rsquo;s the best status you can get for an interface. You\u0026rsquo;ll also see that my other ethernet devices are in the unmanaged state. I could easily add more .network files to /etc/systemd/network to configure those interfaces later.\nFurther reading #As usual, the Arch Linux wiki page on systemd-networkd is a phenomenal resource. There\u0026rsquo;s a detailed overview of all of the available systemd-networkd configuration file options over at systemd\u0026rsquo;s documentation site.\n","date":"26 March 2015","permalink":"/p/creating-a-bridge-for-virtual-machines-using-systemd-networkd/","section":"Posts","summary":"There are plenty of guides out there for making ethernet bridges in Linux to support virtual machines using built-in network scripts or NetworkManager.","title":"Creating a bridge for virtual machines using systemd-networkd"},{"content":"Fedora 22 will be arriving soon and it\u0026rsquo;s easy to test on Rackspace\u0026rsquo;s cloud with my Ansible playbook:\nhttps://github.com/major/ansible-rax-fedora22 As with the previous playbook I created for Fedora 21, this playbook will ensure your Fedora 21 instance is fully up to date and then perform the upgrade to Fedora 22.\nWARNING: It\u0026rsquo;s best to use this playbook against a non-production system. Fedora 22 is an alpha release at the time of this post\u0026rsquo;s writing.\nThis playbook should work well against other servers and virtual machines from other providers but there are a few things that are Rackspace-specific cleanups that might not apply to other servers.\n","date":"24 March 2015","permalink":"/p/test-fedora-22-at-rackspace-with-ansible/","section":"Posts","summary":"Fedora 22 will be arriving soon and it\u0026rsquo;s easy to test on Rackspace\u0026rsquo;s cloud with my Ansible playbook:","title":"Test Fedora 22 at Rackspace with Ansible"},{"content":"I do a bunch of Linux-related tasks daily. Some are difficult and others are easy. 
Printing has always been my nemesis.\nSome printers offer up highly standardized methods for printing. For example, many HP printers simply work with JetDirect and PCL 5. However, the quirkier ones that require plenty of transformations before paper starts rolling can be tricky.\nWe have some Xerox ColorQube printers at the office and they require some proprietary software to get them printing under Linux. To get started, you\u0026rsquo;ll need a Linux printer driver for the Xerox ColorQube 9200 series.\nOnce you\u0026rsquo;ve downloaded the RPM (or DEB), install it:\nsudo rpm -Uvh Xeroxv5Pkg-Linuxx86_64-5.15.551.3277.rpm Start the Xerox Printer Manager:\nsudo xeroxprtmgr You should have a screen like this:\nPress the double down arrow button at the top (it\u0026rsquo;s the one on the left), and then press the button at the top right of the next window that looks like rectangles stacked on top of one another. Choose Manual Install from the menu that appears.\nIn the next menu, enter a nickname for the printer, the printer\u0026rsquo;s IP address, and select the correct printer model from the list. The printer should be properly configured in your CUPS system afterwards:\nAny new print jobs set to the printer will cause the Xerox printer manager to pop up. This gives you the opportunity to customize your job (collating, stapling, etc) and you can also use secure print (which I highly recommend).\n","date":"16 March 2015","permalink":"/p/xerox-colorqube-9302-and-linux/","section":"Posts","summary":"I do a bunch of Linux-related tasks daily.","title":"Xerox ColorQube 9302 and Linux"},{"content":"I wrote a post last summer about preventing Chrome from stealing the media buttons (like play, pause, previous track and next track) from OS X. Now that I\u0026rsquo;m using Linux regularly and I fell in love with Google Play Music All Access, I found that GNOME was stealing the media keys from Chrome.\nThe fix is quite simple. Press the SUPER key (Windows key or Mac Command key), type settings, and press enter. Click on Keyboard and then on the Shortcuts tab. You should now see something like this:\nClick on each entry that shows Disabled above. After clicking on the entry, press your backspace key to clear the shortcut. If you\u0026rsquo;re using a Mac keyboard, that\u0026rsquo;s your oddly-named delete key that sits right above the pipe/backslash key.\nYou should be set to go once they\u0026rsquo;re all cleared out. If you disabled the media keys in Chrome, go to this post and do all of the steps in reverse. ;)\n","date":"20 February 2015","permalink":"/p/using-playpause-buttons-in-chrome-with-gnome-3/","section":"Posts","summary":"I wrote a post last summer about preventing Chrome from stealing the media buttons (like play, pause, previous track and next track) from OS X.","title":"Using play/pause buttons in Chrome with GNOME 3"},{"content":"NOTE: This works in Fedora 21, but not in Fedora 22. Review this post for the fixes.\nGNOME 3 has improved by leaps and bounds since its original release and it\u0026rsquo;s my daily driver window manager on my Linux laptop. Even with all of these improvements, there\u0026rsquo;s still no built-in way to rotate wallpaper (that I\u0026rsquo;ve found).\nThere are some extensions, like BackSlide, that enable background rotation on a time interval. Fedora 21 uses GNOME 3.14 and the current BackSlide version is incompatible. 
BackSlide\u0026rsquo;s interface is fairly useful but I wanted something different.\nOne of systemd\u0026rsquo;s handy features is the ability to set up systemd unit files on a per-user basis. Every user can create unit files in their home directory and tell systemd to begin using those.\nGetting started #We first need a script that can rotate the background based on files in a particular directory. All of my wallpaper images are in ~/Pictures/wallpapers. I adjusted this script that I found on GitHub so that it searches through files in my wallpaper directory and picks one at random to use:\n#!/bin/bash walls_dir=$HOME/Pictures/Wallpapers selection=$(find $walls_dir -type f -name \u0026#34;*.jpg\u0026#34; -o -name \u0026#34;*.png\u0026#34; | shuf -n1) gsettings set org.gnome.desktop.background picture-uri \u0026#34;file://$selection\u0026#34; I tossed this script into ~/bin/rotate_bg.sh and made it executable with chmod +x ~/bin/rotate_bg.sh. Before you go any further, run the script manually in a terminal to verify that your background rotates to another image.\nPreparing the systemd service unit file #You\u0026rsquo;ll need to create a user-level systemd service file directory if it doesn\u0026rsquo;t exist already:\nmkdir ~/.config/systemd/user/ Drop this file into ~/.config/systemd/user/gnome-background-change.service:\n[Unit] Description=Rotate GNOME background [Service] Type=oneshot Environment=DISPLAY=:0 ExecStart=/usr/bin/bash /home/[USERNAME]/bin/rotate_bg.sh [Install] WantedBy=basic.target This unit file tells systemd that we have a oneshot script that will exit when it\u0026rsquo;s finished. In addition, we also give the environment details to systemd so that it\u0026rsquo;s aware of our existing X session.\nDon\u0026rsquo;t enable or start the service file yet. We will let our timer handle that part.\nSetting a timer #Systemd\u0026rsquo;s concept of timers is pretty detailed. You have plenty of control over how and when you want a particular service to run. We need a simple calendar-based timer (much like cron) that will start up our service from the previous step.\nDrop this into ~/.config/systemd/user/gnome-background-change.timer:\n[Unit] Description=Rotate GNOME wallpaper timer [Timer] OnCalendar=*:0/5 Persistent=true Unit=gnome-background-change.service [Install] WantedBy=gnome-background-change.service We\u0026rsquo;re telling systemd that we want this timer to run every five minutes and we want to start our service unit file from the previous step. The Persistent line tells systemd that we want this unit file run if the last run was missed. For example, if you log in at 7:02AM, we don\u0026rsquo;t want to wait until 7:05AM to rotate the background. We can rotate it immediately after login.\nIf you\u0026rsquo;d like a different interval, be sure to review systemd\u0026rsquo;s time syntax for the OnCalendar line. It\u0026rsquo;s a little quirky if you\u0026rsquo;re used to working with crontabs but it\u0026rsquo;s very powerful once you understand it.\nNow we can enable and start the timer:\nsystemctl --user enable gnome-background-change.timer systemctl --user start gnome-background-change.timer Checking our work #You can use systemctl to query the timer we just activated:\n$ systemctl --user list-timers NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2015-02-11 08:15:00 CST 3min 53s left Wed 2015-02-11 08:10:49 CST 16s ago gnome-background-change.timer gnome-background-change.service In my case, this shows that the background rotation service last ran 16 seconds ago. 
It will run again in just under four minutes. If you find that the service runs but your wallpaper doesn\u0026rsquo;t change, try running journalctl -xe to see if your service is throwing any errors.\nAdditional reading #This is just the tip of the iceberg of what systemd can do with user unit files and timers. the Arch Linux wiki has some awesome documentation about user unit files and timers. Check out the other timers that already exist on your system for more ideas.\n","date":"11 February 2015","permalink":"/p/rotate-gnome-3s-wallpaper-systemd-user-units-timers/","section":"Posts","summary":"NOTE: This works in Fedora 21, but not in Fedora 22.","title":"Rotate GNOME 3’s wallpaper with systemd user units and timers"},{"content":"","date":null,"permalink":"/tags/script/","section":"Tags","summary":"","title":"Script"},{"content":"","date":null,"permalink":"/tags/git/","section":"Tags","summary":"","title":"Git"},{"content":"I\u0026rsquo;m far from being a kernel developer, but I found myself staring down a [peculiar touchpad problem][2] with my new Dell XPS 13. Before kernel 3.17, the touchpad showed up as a standard PS/2 mouse, which certainly wasn\u0026rsquo;t ideal. That robbed the pad of its multi-touch capabilities. Kernel 3.17 added the right support for the pad but freezes began to occur somewhere between 3.17 and 3.19.\nBisecting #It became apparent that bisecting the kernel would be required. If you\u0026rsquo;re not familiar with [bisection][3], it\u0026rsquo;s a process than can help you narrow down where a particular piece of software picked up a bug. You tell git which revision you know is good and you also tell it which revision has a problem. Git will pick a revision right in the middle and let you re-test. If the test is good, you mark the revision as good and git scoots to the middle between the two known good revisions. The same thing happens if you mark the revision as a bad one.\nYou\u0026rsquo;ll eventually find yourself staring down fewer and fewer commits until you isolate the commit that is causing problems. From there, you\u0026rsquo;ll need to write a new patch to fix the bug or consider reverting the problematic patch entirely.\nLessons learned #Making mistakes during a kernel bisection are quite painful since the build times are fairly extensive. Kernel builds on my laptop took about a half hour and a 32-core Rackspace Cloud Server still took about 10 minutes to compile and package the kernel.\nCome up with a solid test plan #Before you get started, define a good test plan so that you know what a good or bad revision should look like. In my case, the touchpad froze when I applied more than one finger to the touchpad or tried to do multi-finger taps and clicks. It\u0026rsquo;s even better if you can figure out a way to run a script to test the revision. If you can do that, git can automated the bisection for you and you\u0026rsquo;ll be done really quickly.\nBuild the project consistently #Ensure that you build the software project the same way each time. In my case, I was careful to use the same exact kernel config file and use the same script to build the kernel for each round of bisection. Introducing changes in the build routine could sway your results and cause you to mislabel a good or bad revision.\nWrite the upcoming revisions to a file #You can protect yourself from many mistakes by writing the list of revisions in your bisection to a file. That would allow you to come back to the bisection after a mistake and pick up where you left off. 
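Git can keep that list for you. Here is a minimal sketch using git's built-in bisect log (the file name is only an example, and this assumes the bisection is being driven by git bisect itself):
# Record every good/bad mark made so far
git bisect log > ~/bisect-revisions.txt
# Replay the saved run after a reset or a fresh clone
git bisect replay ~/bisect-revisions.txt
git bisect replay re-applies each mark in order, so you land right back where the mistake happened.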
You could use something like this:\nThat file will help in case you accidentally run a `git bisect reset` or delete the repository. I cannot confirm or deny that anything like that happened during my work. :) [2]: /2015/02/03/linux-support-dell-xps-13-9343-2015-model/ [3]: http://git-scm.com/docs/git-bisect ","date":"9 February 2015","permalink":"/p/lessons-learned-kernel-bisection/","section":"Posts","summary":"I\u0026rsquo;m far from being a kernel developer, but I found myself staring down a [peculiar touchpad problem][2] with my new Dell XPS 13.","title":"Lessons learned from a kernel bisection"},{"content":"For those of you in the market for a cheap webcam for videoconferencing or home surveillance, the Logitech C270 is hard to beat at about $20-25 USD. It can record video at 1280×960 and it\u0026rsquo;s fairly good at low light levels. The white balance gets a bit off when it\u0026rsquo;s bright in the room but hey — this camera is cheap.\nZoneMinder can monitor multiple cameras connected via USB or network. Setting up the C270 with ZoneMinder is relatively straightforward. (Getting ZoneMinder installed and running is well outside the scope of this post.)\nAdjust groups #If a user wants to access the webcam in Linux, they must be in the video group. On my system, ZoneMinder runs as the apache user. I needed to add the apache user to the video group:\nusermod -G video apache Configuring the C270 #After clicking Add New Monitor, here\u0026rsquo;s the data for each tab:\nGeneral Tab:\nSource Type: Local Function: Modect Source:\nDevice Path: /dev/video0 Capture Method: Video For Linux version2 Device Format: PAL Capture Palette: YUYV Capture Width: 1280 Capture Height: 960 The width and height settings are suggestions. You can crank them down to something like 640×480 if you\u0026rsquo;d like to save disk space or get a higher frame rate.\nOnce you save the configuration and the window disappears, you should see /dev/video0 (0) turn green in the ZoneMinder web interface. If it\u0026rsquo;s red, there may be a different permissions issue to solve or your ZoneMinder instance might be running as a different user than you expected. If the text is yellow/orange, go back and check your camera configuration settings in the ZoneMinder interface.\n","date":"8 February 2015","permalink":"/p/using-zoneminder-logitech-c270-webcam/","section":"Posts","summary":"For those of you in the market for a cheap webcam for videoconferencing or home surveillance, the Logitech C270 is hard to beat at about $20-25 USD.","title":"Using ZoneMinder with a Logitech C270 webcam"},{"content":"","date":null,"permalink":"/tags/video/","section":"Tags","summary":"","title":"Video"},{"content":"I\u0026rsquo;M ALL DONE: I\u0026rsquo;m not working on Linux compatibility for the XPS 13 any longer. I\u0026rsquo;ve purchased a Lenovo X1 Carbon (3rd gen) and that\u0026rsquo;s my preferred laptop. More on this change later.\nI\u0026rsquo;ve been looking for a good laptop to run Linux for a while now and my new Dell XPS 13 9343 has arrived. It was released at CES in 2015 and it received quite a lot of attention for packing a large amount of pixels into a very small laptop frame with excellent battery life. Ars Technica has a great overall review of the device.\nLinux support has been historically good on the previous generation XPS 13\u0026rsquo;s and a blog post from Dell suggests that the latest revision will have good support as well. 
For a deep dive on the hardware inside the laptop, review this GitHub Gist.\nAfter wiping Windows 8.1 off the laptop, I started with the Fedora 21 installation. If you want to run Linux on one of these laptops, here\u0026rsquo;s what you need to know:\nThe good #All of the most basic devices work just fine. The display, storage, and peripheral connections (USB, SD card slot, mini DisplayPort) all work out of the box in Linux 3.18.5 with Fedora 21. The display looks great with GNOME 3\u0026rsquo;s default HiDPI settings and it\u0026rsquo;s very readable with the default font sizes without HiDPI (although this is a bit subjective).\nThe webcam works without any additional configuration the video quality is excellent.\nThe wireless card in the laptop I received is a BCM4352:\n02:00.0 Network controller: Broadcom Corporation BCM4352 802.11ac Wireless Network Adapter (rev 03) It\u0026rsquo;s possible to get this card working with the b43 kernel modules but I\u0026rsquo;ve had better luck with the binary blob STA drivers from Broadcom. There are plenty of guides out there to help you install the kernel module for your Fedora kernel. I\u0026rsquo;ve had great network performance with the binary driver.\nSome users are seeing Intel wireless cards in their Dell XPS 13\u0026rsquo;s, especially in Europe. Opening the laptop for service isn\u0026rsquo;t terribly difficult and you could replace the bluetooth/wireless card with a different one.\nPRO TIP: If you\u0026rsquo;re seeing errors in your journald logs about NetworkManager being unable to scan for access points, be sure to hit the wireless switch key on your keyboard (Fn-F12) to enable the card. This had me stumped for about 45 minutes. There\u0026rsquo;s an option in the BIOS to disable the switch and let the OS control the wireless card.\nThe special keyboard buttons (volume up/down, brightness up/down) all work flawlessly.\nThe bad #The touchpad and keyboard are on the I2C bus and this creates some problems. Many users have reported that keys on the keyboard seem to repeat themselves while you\u0026rsquo;re typing and the touchpad has an issue where X stops receiving input from it. However, when the touchpad seems to freeze, the kernel still sees data coming from the device (verified with evtest and evemu-record).\nThere are some open bugs and discussion about the touchpad issues:\nLinux Support is Terrible on the New Dell XPS 13 (2015) [Reddit] touchpad does not respond to tap-to-clicks in I2C mode in Ubuntu 15.04 on 2015 XPS 13 (9343) [Launchpad Bug] Dell XPS 13 9343 (2015) touchpad freeze [Red Hat Bug] You can connect up a mouse and keyboard to the laptop and work around those issues. However, dragging around some big peripherals with such a small laptop isn\u0026rsquo;t a great long-term solution. Some users suggested blacklisting the i2c_hid module so that the touchpad shows up as a plain PS/2 touchpad but I\u0026rsquo;m still seeing freezes even after making that change.\nIf you\u0026rsquo;re having one of those “touchpad on the I2C bus?” moments like I had, read Synaptics\u0026rsquo; brief page about Intertouch. Using the I2C bus saves power, reduces USB port consumption, and allows for more powerful multi-touch gestures.\nOddly enough, the touchscreen is an ELAN Touchscreen and it runs over USB. It suffers from the same freezes that the touchpad does.\nThe ugly #Sound is a big problem. The microphone, speakers and headphone port don\u0026rsquo;t work under 3.18.5 and 3.19.0-rc7. 
The audio device is a ALC3263 from RealTek and it should use the same module as the RT286. However, the probing still fails and the module can\u0026rsquo;t be used. The module code seems to be correct but the probing still fails.\nThere\u0026rsquo;s an open bug on Launchpad about the problem:\nAudio broken on 2015 XPS 13 (9343) in I2S mode in Ubuntu 14.10/15.04 [Launchpad bug] No sound on Dell XPS 13 9343 (2015 model) [Red Hat bug] broadwell-audio: rt286 device appears, no sound (Dell XPS 13 9343) [Linux kernel bug] I connected up an old Syba USB audio device to the USB port and was able to get sound immediately. This is also a horrible workaround.\nWhat now? #From what I gather, Dell is extremely eager to make Linux work on the new XPS 13 and we should see some movement on these bugs soon. I\u0026rsquo;m still doing a bunch of testing on my own with kernel 3.19 and I\u0026rsquo;ll be keeping this page updated as news becomes available.\nIf you know much about the I2C bus or about the sound devices in this laptop and you have some time available to help, just let me know where to send the beer. ;)\nLatest updates #2015-02-03 #Added Red Hat bug link for sound issues.\n2015-02-05 #The touchpad bug has been reduced to a kernel issue. Recordings from evemu-record look fine when they\u0026rsquo;re played back in X. Users reported in Launchpad and in the Red Hat bug that kernel 3.16 works perfectly but 3.17 doesn\u0026rsquo;t. A kernel bisection will most likely be required to find the patch that broke the touchpad.\nMany users find that adding acpi.os=\u0026quot;!Windows 2013\u0026quot; to the kernel boot line will bring the audio card online after 1-3 reboots. Apparently there is some level of state information stored in memory that requires a few reboots to clear it. I haven\u0026rsquo;t verified this yet.\n2015-02-06 #Kernel bisect for the touchpad issue is underway. Every 3.16.x kernel I built would keep the trackpad in PS/2 mode and that\u0026rsquo;s not helpful at all. There\u0026rsquo;s no multi-finger taps/clicks/gestures. 3.17.0 works perfectly, however. My gut says something broke down between 3.17.0 and 3.18.0 but it might actually be closer to 3.17.4 since Fedora 21\u0026rsquo;s initial kernel is 3.17.4 (and the touchpad doesn\u0026rsquo;t work well with it).\nA post was made on Barton\u0026rsquo;s Blog yesterday about Dell being aware of the Linux issues. (Thanks to Chris\u0026rsquo; comment below!)\nAfter about 35 kernel builds during the most frustrating git bisect of my life, I found the problematic patch. The Red Hat bug is updated now and I\u0026rsquo;m hoping that someone with a detailed knowledge of this part of the kernel can make sense of it:\nFrom d1c7e29e8d276c669e8790bb8be9f505ddc48888 Mon Sep 17 00:00:00 2001 From: Gwendal Grignou \u0026lt;gwendal@chromium.org\u0026gt; Date: Thu, 11 Dec 2014 16:02:45 -0800 Subject: HID: i2c-hid: prevent buffer overflow in early IRQ Before -\u0026gt;start() is called, bufsize size is set to HID_MIN_BUFFER_SIZE, 64 bytes. While processing the IRQ, we were asking to receive up to wMaxInputLength bytes, which can be bigger than 64 bytes. Later, when -\u0026gt;start is run, a proper bufsize will be calculated. Given wMaxInputLength is said to be unreliable in other part of the code, set to receive only what we can even if it results in truncated reports. 
Signed-off-by: Gwendal Grignou \u0026lt;gwendal@chromium.org\u0026gt; Reviewed-by: Benjamin Tissoires \u0026lt;benjamin.tissoires@redhat.com\u0026gt; Cc: stable@vger.kernel.org Signed-off-by: Jiri Kosina \u0026lt;jkosina@suse.cz\u0026gt; diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c index 747d544..9c014803b4 100644 --- a/drivers/hid/i2c-hid/i2c-hid.c +++ b/drivers/hid/i2c-hid/i2c-hid.c @@ -369,7 +369,7 @@ static int i2c_hid_hwreset(struct i2c_client *client) static void i2c_hid_get_input(struct i2c_hid *ihid) { int ret, ret_size; -\tint size = le16_to_cpu(ihid-\u0026gt;hdesc.wMaxInputLength); +\tint size = ihid-\u0026gt;bufsize; ret = i2c_master_recv(ihid-\u0026gt;client, ihid-\u0026gt;inbuf, size); if (ret != size) { I reverted the patch in Linux 3.19-rc7 and built the kernel. The touchpad works flawlessly. However, simply reverting the patch probably isn\u0026rsquo;t the best idea long term. ;)\n2015-02-07 #The audio patch mentioned in the Launchpad bug report didn\u0026rsquo;t work for me on Linux 3.19-rc7.\n2015-02-10 #Progress is still being made on the touchpad in the Red Hat bug ticket. If you can live with the pad working as PS/2, you can get sound by adding acpi_osi=\u0026quot;!Windows 2013\u0026quot; to your kernel command line. Once you do that, you\u0026rsquo;ll need to:\nDo a warm reboot Wait for the OS to boot, then do a full poweroff Boot the laptop, then do a full poweroff Sound should now be working If sound still isn\u0026rsquo;t working, you may need to install pavucontrol, the PulseAudio volume controller, and disable the HDMI sound output that is built into the Broadwell chip.\nThis obviously isn\u0026rsquo;t a long-term solution, but it\u0026rsquo;s a fair workaround.\n2015-02-11 #There is now a patch that you can apply to 3.18 or 3.19 kernels that eliminates the trackpad freeze:\nFrom 2a2aa272447d0ad4340c73db91bd8e995f6a0c3f Mon Sep 17 00:00:00 2001 From: Benjamin Tissoires \u0026lt;benjamin.tissoires@redhat.com\u0026gt; Date: Tue, 10 Feb 2015 12:40:13 -0500 Subject: [PATCH] HID: multitouch: force release of touches when i2c communication is not reliable The Dell XPS 13 9343 (2015) shows that from time to time, i2c_hid misses some reports from the touchpad. This can lead to a freeze of the cursor in user space when the missing report contains a touch release information. Win 8 devices should have a contact count reliable, to we can safely release all touches that has not been seen in the current report. Signed-off-by: Benjamin Tissoires \u0026lt;benjamin.tissoires@redhat.com\u0026gt; --- drivers/hid/hid-multitouch.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index f65e78b..48b051e 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -1021,6 +1021,14 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id) if (id-\u0026gt;vendor == HID_ANY_ID \u0026amp;\u0026amp; id-\u0026gt;product == HID_ANY_ID) td-\u0026gt;serial_maybe = true; +\tif ((id-\u0026gt;group == HID_GROUP_MULTITOUCH_WIN_8) \u0026amp;\u0026amp; (hdev-\u0026gt;bus == BUS_I2C)) +\t/* +\t* Some i2c sensors are not completely reliable with the i2c +\t* communication. Force release of unseen touches in a report +\t* to prevent bad behavior from user space. +\t*/ +\ttd-\u0026gt;mtclass.quirks |= MT_QUIRK_NOT_SEEN_MEANS_UP; + ret = hid_parse(hdev); if (ret != 0) return ret; I\u0026rsquo;ve tested it against 3.19-rc7 as well as Fedora\u0026rsquo;s 3.18.5. 
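If you have never applied an out-of-tree patch to a kernel tree before, the usual sequence looks roughly like this; the patch file name and source directory are only placeholders:
cd ~/src/linux-3.19-rc7
patch -p1 < ~/hid-multitouch-not-seen-means-up.patch
make oldconfig
make -j$(nproc) bzImage modules
sudo make modules_install install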
However, tapping still doesn\u0026rsquo;t work yet with more than one finger. The touchpad jumps around a bit when you apply two fingers to it.\n2015-02-12 #Rene commented below that he found a post in alsa devel with a patch for the “Dell Dino” that looks like it might help with the i2c audio issues. Another kernel maintainer replied and asked for some of the code to be rewritten to make it easier to handle audio quirks. UPDATE: Audio patch didn\u0026rsquo;t work.\nWe\u0026rsquo;ve created an IRC channel on Freenode: #xps13.\nThere\u0026rsquo;s an interesting kernel patch mentioning “Dell Dino” that is line for inclusion in 3.20-rc1. Someone in IRC found “Dell Dino” mentioned on a Dell business purchase page. The board name from dmidecode in the patch is 0144P8 but that doesn\u0026rsquo;t match other known board names. My i5-5200U with touch is 0TM99H while a user with a non-touch i5 has a board name of OTRX4F. Other i5 touch models have the same board name as mine. All BIOS revisions found so far are A00 (the latest on Dell\u0026rsquo;s site).\nA probe for the rt286 module looks like it starts to happen and then it fails (skip to line 795):\n[ 4.141189] rt286 i2c-INT343A:00: probe [ 4.141245] i2c i2c-8: master_xfer[0] W, addr=0x1c, len=4 [ 4.141246] i2c i2c-8: master_xfer[1] R, addr=0x1c, len=4 [ 4.141249] i2c_designware INT3432:00: i2c_dw_xfer: msgs: 2 [ 4.141389] i2c_designware INT3432:00: Standard-mode HCNT:LCNT = 432:507 [ 4.141391] i2c_designware INT3432:00: Fast-mode HCNT:LCNT = 72:160 [ 4.141662] i2c_designware INT3432:00: i2c_dw_isr: Synopsys DesignWare I2C adapter enabled= 0x1 stat=0x10 [ 4.141670] i2c_designware INT3433:00: i2c_dw_isr: Synopsys DesignWare I2C adapter enabled= 0x1 stat=0x0 [ 4.141695] i2c_designware INT3432:00: i2c_dw_isr: Synopsys DesignWare I2C adapter enabled= 0x1 stat=0x750 [ 4.141703] i2c_designware INT3433:00: i2c_dw_isr: Synopsys DesignWare I2C adapter enabled= 0x1 stat=0x0 [ 4.141965] i2c_designware INT3432:00: i2c_dw_handle_tx_abort: slave address not acknowledged (7bit mode) [ 4.141968] rt286 i2c-INT343A:00: Device with ID register 0 is not rt286 [ 4.160506] i2c-core: driver [rt286] registered 2015-02-16\nI received an email from a Realtek developer about the sound card in the XPS:\nI see “rt286 i2c-INT343A:00: Device with ID register 0 is not rt286” in the log. It means there are something wrong when the driver is trying to read the device id of codec. I believe that is due to I2C read/write issue. ALC3263 is a dual mode (I2S and HDA) codec. And BIOS will decide which mode according to OS type. So, if you want to use i2s mode, you need to configure your BIOS to set ALC3263 to I2S mode.\nAfter poring through the DSDT and other ACPI tables over the weekend (and building way too many kernels with overriden DSDT\u0026rsquo;s), it sounds like a BIOS update may be required for the sound card to function properly. The sound devices specified in the DSDT that are on the i2c bus are only activated after a BUNCH of checks succeed. One of them is the check of OSYS, the system\u0026rsquo;s operating system. Setting acpi_osi=\u0026quot;Windows 2013\u0026quot; does flip OSYS to 0x07DD, but that\u0026rsquo;s only part of the fix. There are other variables checked, like CODS (that shows up very often) that are instantiated early in the DSDT but I can\u0026rsquo;t find them ever being set to a value anywhere in the DSDT code. 
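For anyone who wants to go hunting through the tables for OSYS and CODS themselves, acpica-tools makes the dump-and-decompile step quick (package name as found in Fedora):
sudo yum install acpica-tools
sudo acpidump -b    # writes dsdt.dat and friends into the current directory
iasl -d dsdt.dat    # decompiles to dsdt.dsl for reading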
These variables equal zero by default and that disables critical parts of the sound device.\nMy take: This laptop is going to need a BIOS update of some sort before we can get sound working properly in Linux with an i2c touchpad. If someone is more skilled with DSDT\u0026rsquo;s than I am, I\u0026rsquo;ll be glad to share all of my work that I\u0026rsquo;ve tried so far. As for now, I\u0026rsquo;m going to be waiting eagerly for some type of firmware update from Dell.\n2015-02-17\nThere\u0026rsquo;s some progress on the sound card in Linux! After building the latest commits from linux.git\u0026rsquo;s master branch, my XPS started showing a device called “broadwell-rt286” in pavucontrol. It showed up as a normal audio device but it had no output support, only input. I tried to enable the microphone but I couldn\u0026rsquo;t record any sound.\nI found a kernel bug from a ThinkPad Helix 2 user with a very similar hardware setup. Their rt286 device is on the I2S bus with a Haswell SoC. Their fix was to copy over the latest firmware binaries from linux-firmware.git and reboot. I did the same and an output device suddenly showed up in pavucontrol after a reboot.\nWhen I played sounds via aplay, canberra-gtk-play, and rhythmbox, I could see the signal level fluctuating in pavucontrol on the broadwell-rt286 device. However, I couldn\u0026rsquo;t hear the sound through the speakers. I connected headphones and I couldn\u0026rsquo;t hear any sound there either.\nThere\u0026rsquo;s now a kernel bug ticket open for the sound issue.\nStay tuned for a BIOS update with a potential keyboard repeat fix. It\u0026rsquo;s already been talked about in IRC as a potential A01 release sssssssssssoon:\nsomeone asked about the fix for the repeating keypresses. yes, it was traced back to the source and will be fixed on all affected Dell platforms soon\nI just saw that the one for 9343 was promoted to our factories so should be up on support.dell.com any day now as BIOS A01\nYou can get notifications about driver releases for the XPS on Dell\u0026rsquo;s site.\n2015-03-04\nSound on the I2S bus is working in Linux 4.0-rc2! See note from 2015-03-08 below. I was too exhausted last night for a full write-up, but here\u0026rsquo;s the gist of what I did:\nFirst off, build 4.0-rc2 with all of the available I2C and ALSA SoC modules. I haven\u0026rsquo;t narrowed down which modules are critical quite yet. Once you\u0026rsquo;ve built the kernel and rebooted into it, run alsamixer and choose the broadwell-rt286 card. Hold the right arrow key until you go all the way to the right of the alsamixer display and press M to unmute the last control there. You should now be able to turn up the volume and play some test sounds.\nLuckily, no update for linux-firmware is required. Also, there\u0026rsquo;s no need for any ALSA UCM files as I had originally thought.\nStay tuned for a more in-depth write-up soon.\n2015-03-08\nAfter a few more reboots, I can\u0026rsquo;t get sound working again. I\u0026rsquo;m wondering if I had an errant acpi_osi setting somewhere during my testing that brought sound up on the HDA bus. 
:/\n","date":"3 February 2015","permalink":"/p/linux-support-dell-xps-13-9343-2015-model/","section":"Posts","summary":"I\u0026rsquo;M ALL DONE: I\u0026rsquo;m not working on Linux compatibility for the XPS 13 any longer.","title":"Linux support for the Dell XPS 13 9343 (2015 model)"},{"content":"![1]\nKeeping current with the latest trends and technologies in the realm of information security is critical and there are many options to choose from. However, as with any content on the internet, it takes some effort to find sites with a good signal-to-noise ratio. Information security is a heavily FUD-laden industry and I\u0026rsquo;ve taken some time to compile a list of helpful sites.\nGeneral sites # Cryptanalys.is: https://cryptanalys.is/ Linux Weekly News (subscription optional but highly recommended): http://lwn.net/ Reddit\u0026rsquo;s r/netsec: https://www.reddit.com/r/netsec SANS Internet Storm Center: https://isc.sans.edu/ Wired: Threat Level: http://www.wired.com/category/threatlevel/ Blogs # Andy Ellis\u0026rsquo; Blog (CSO @ Akamai): https://blogs.akamai.com/author/andy-ellis/ Bruce Schneier\u0026rsquo;s Blog: https://www.schneier.com/ Imperial Violet (good SSL/crypto knowledge): https://www.imperialviolet.org/ Network Security Blog: http://www.mckeay.net/ Red Hat\u0026rsquo;s Security Blog: https://securityblog.redhat.com/ TaoSecurity: http://taosecurity.blogspot.com/ Mailing Lists # Apple Product Security ML: https://lists.apple.com/mailman/listinfo/security-announce/ Full Disclosure List: http://seclists.org/fulldisclosure/ Humor (come on, we need it) # Security Reactions (occasionally NSFW): http://securityreactions.tumblr.com/ DNS Reactions (also occasionally NSFW): http://dnsreactions.tumblr.com/ Many thanks to my coworkers for helping to compile the list. If you have any others that you really enjoy, let me know! I\u0026rsquo;ll be glad to add them to the post.\nPhoto Credit: stevehuang7 via Compfight cc\n","date":"8 January 2015","permalink":"/p/helpful-low-fud-information-security-sites-mailing-lists-blogs/","section":"Posts","summary":"![1]","title":"Helpful, low-FUD information security sites, mailing lists, and blogs"},{"content":"The world of containers is constantly evolving lately. The latest turn of events involves the CoreOS developers when they announced Rocket as an alternative to Docker. However, LXC still lingers as a very simple path to begin using containers.\nWhen I talk to people about LXC, I often hear people talk about how difficult it is to get started with LXC. After all, Docker provides an easy-to-use image downloading function that allows you to spin up multiple different operating systems in Docker containers within a few minutes. It also comes with a daemon to help you manage your images and your containers.\nManaging LXC containers using the basic LXC tools isn\u0026rsquo;t terribly easy - I\u0026rsquo;ll give you that. However, managing LXC through libvirt makes the process much easier. I wrote a little about this earlier in the year.\nI decided to turn the LXC container deployment process into an Ansible playbook that you can use to automatically spawn an LXC container on any server or virtual machine. At the moment, only Fedora 20 and 21 are supported. 
I plan to add CentOS 7 and Debian support soon.\nClone the repository to get started:\ngit clone https://github.com/major/ansible-lxc.git cd ansible-lxc ansible-playbook -i hosts playbook.yml If you\u0026rsquo;re running the playbook on the actual server or virtual machine where you want to run LXC, there\u0026rsquo;s no need to alter the hosts file. You will need to adjust it if you\u0026rsquo;re running your playbook from a remote machine.\nAs the playbook runs, it will install all of the necessary packages and begin assembling a Fedora 21 chroot. It will register the container with libvirt and do some basic configuration of the chroot so that it will work as a container. You\u0026rsquo;ll end up with a running Fedora 21 LXC container that is using the built-in default NAT network created by libvirt. The playbook will print out the IP address of the container at the end. The default password for root is fedora. I wouldn\u0026rsquo;t recommend leaving that for a production use container. ;)\nAll of the normal virsh commands should work on the container. For example:\n# Stop the container gracefully virsh shutdown fedora21 # Start the container virsh start fedora21 Feel free to install the virt-manager tool and manage everything via a GUI locally or via X forwarding:\nyum -y install virt-manager dejavu* xorg-x11-xauth # OPTIONAL: For a better looking virt-manager interface, install these, too yum -y install gnome-icon-theme gnome-themes-standard ","date":"17 December 2014","permalink":"/p/try-lxc-ansible-playbook/","section":"Posts","summary":"The world of containers is constantly evolving lately.","title":"Try out LXC with an Ansible playbook"},{"content":"One of the first tools I learned about after working with Red Hat was sysstat. It can write down historical records about your server at regular intervals. This can help you diagnose CPU usage, RAM usage, or network usage problems. In addition, sysstat also provides some handy command line utilities like vmstat, iostat, and pidstat that give you a live view of what your system is doing.\nOn Debian-based systems (including Ubuntu), you install the sysstat package and enable it with a quick edit to /etc/default/sysstat and the cron job takes it from there. CentOS and Fedora systems call the collector process using a cron job in /etc/cron.d and it\u0026rsquo;s enabled by default.\nFedora 21 comes with sysstat 11 and there are now systemd unit files to control the collection and management of stats. You can find the unit files by listing the files in the sysstat RPM:\n$ rpm -ql sysstat | grep systemd /usr/lib/systemd/system/sysstat-collect.service /usr/lib/systemd/system/sysstat-collect.timer /usr/lib/systemd/system/sysstat-summary.service /usr/lib/systemd/system/sysstat-summary.timer /usr/lib/systemd/system/sysstat.service These services and timers aren\u0026rsquo;t enabled by default in Fedora 21. If you run sar after installing sysstat, you\u0026rsquo;ll see something like this:\n# sar Cannot open /var/log/sa/sa12: No such file or directory Please check if data collecting is enabled All you need to do is enable and start the main sysstat service:\nsystemctl enable sysstat systemctl start sysstat From there, systemd will automatically call for collection and management of the statistics using its internal timers. 
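If you want to confirm that those timers really are active after starting the service, systemctl can list them; the unit names come straight from the package listing above:
systemctl list-timers 'sysstat-*'
Both sysstat-collect.timer and sysstat-summary.timer should show a NEXT time once the main service is running.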
Opening up /usr/lib/systemd/system/sysstat-collect.timer reveals the following:\n# /usr/lib/systemd/system/sysstat-collect.timer # (C) 2014 Tomasz Torcz \u0026lt;tomek@pipebreaker.pl\u0026gt; # # sysstat-11.0.0 systemd unit file: # Activates activity collector every 10 minutes [Unit] Description=Run system activity accounting tool every 10 minutes [Timer] OnCalendar=*:00/10 [Install] WantedBy=sysstat.service The timer unit file ensures that the sysstat-collect.service is called every 10 minutes based on the real time provided by the system clock. (There are other options to set timers based on relative time of when the server booted or when a user logged into the system). The familiar sa1 command appears in /usr/lib/systemd/system/sysstat-collect.service:\n# /usr/lib/systemd/system/sysstat-collect.service # (C) 2014 Tomasz Torcz \u0026lt;tomek@pipebreaker.pl\u0026gt; # # sysstat-11.0.0 systemd unit file: # Collects system activity data # Activated by sysstat-collect.timer unit [Unit] Description=system activity accounting tool Documentation=man:sa1(8) [Service] Type=oneshot User=root ExecStart=/usr/lib64/sa/sa1 1 1 ","date":"12 December 2014","permalink":"/p/install-sysstat-fedora-21/","section":"Posts","summary":"One of the first tools I learned about after working with Red Hat was sysstat.","title":"Install sysstat on Fedora 21"},{"content":"IRC is my main communication mechanism and I\u0026rsquo;ve gradually moved from graphical clients, to irssi and then to weechat. Text-based IRC removes quite a few distractions for me and it allows me to get access to my IRC communications from anything that can act as an ssh client.\nI wanted a better way to get notifications when people send me messages and I\u0026rsquo;m away from my desk. Pushover is a great service that will take notification data via an API and blast it out to various devices. Once you configure your account, just install the mobile application on your device and you\u0026rsquo;re ready to go.\nConnecting weechat to Pushover is quite easy thanks to the pushover.pl script. Go into your main weechat console (usually by pressing META/ALT/OPTION-1 on your keyboard) and install it:\n/script install pushover.pl There are quite a few variables to configure. You can get a list of them by typing:\n/set plugins.var.perl.push* You\u0026rsquo;ll need two pieces of information to configure the plugin:\nUser key: The user key is displayed on your main account page when you login at Pushover. Application key: Click on Register an Application towards the bottom or use this direct link. Now you\u0026rsquo;re ready to configure the plugin:\n/set plugins.var.perl.pushover.token [YOUR PUSHOVER APPLICATION TOKEN] /set plugins.var.perl.pushover.user [YOUR USER KEY] /set plugins.var.perl.pushover.enabled on You can test it out quickly by using Freenode\u0026rsquo;s web chat to send yourself a private message from another account.\n","date":"5 December 2014","permalink":"/p/send-weechat-notifications-via-pushover/","section":"Posts","summary":"IRC is my main communication mechanism and I\u0026rsquo;ve gradually moved from graphical clients, to irssi and then to weechat.","title":"Send weechat notifications via Pushover"},{"content":"Managing firewall rules with iptables can be tricky at times. The rule syntax itself isn\u0026rsquo;t terribly difficult but you can quickly run into problems if you don\u0026rsquo;t save your rules to persistent storage after you get your firewall configured. 
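On RHEL-style systems that persistence step is entirely manual and easy to forget, something along these lines:
iptables-save > /etc/sysconfig/iptables
ip6tables-save > /etc/sysconfig/ip6tables
Skip it once and your carefully crafted rules vanish on the next reboot.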
Things can also get out of hand quickly if you run a lot of different tables with jumps scattered through each.\nWhy FirewallD? #FirewallD\u0026rsquo;s goal is to make this process a bit easier by adding a daemon to the mix. You can send firewall adjustment requests to the daemon and it handles the iptables syntax for you. It can also write firewall configurations to disk. It\u0026rsquo;s especially useful on laptops since you can quickly jump between different firewall configurations based on the network you\u0026rsquo;re using. You might run a different set of firewall rules at a coffee shop than you would run at home.\nAdding a trusted IP address to a device running firewalld requires the use of rich rules.\nAn example #Consider a situation where you have a server and you want to allow unrestricted connectivity to that server from a bastion or from your home internet connection. First off, determine your default zone (which is most likely \u0026ldquo;public\u0026rdquo; unless you\u0026rsquo;ve changed it to something else):\n# firewall-cmd --get-default-zone public We will use 11.22.33.44 as our example IP address. Let\u0026rsquo;s add the rich rule:\nfirewall-cmd --zone=public --add-rich-rule=\u0026#39;rule family=\u0026#34;ipv4\u0026#34; source address=\u0026#34;11.22.33.44\u0026#34; accept\u0026#39; Let\u0026rsquo;s break down what we\u0026rsquo;re asking firewalld to do. We\u0026rsquo;re asking to allow IPv4 connectivity from 11.22.33.44 to all ports on the server and we\u0026rsquo;re asking for that rule to be added to the public (default) zone. If you list the contents of your public zone, it should look like this:\n# firewall-cmd --list-all --zone=public public (default, active) interfaces: eth0 sources: services: dhcpv6-client mdns ssh ports: masquerade: no forward-ports: icmp-blocks: rich rules: rule family=\u0026#34;ipv4\u0026#34; source address=\u0026#34;11.22.33.44\u0026#34; accept ","date":"24 November 2014","permalink":"/p/trust-ip-address-firewallds-rich-rules/","section":"Posts","summary":"Managing firewall rules with iptables can be tricky at times.","title":"Trust an IP address with firewalld’s rich rules"},{"content":"Fedora 21 reached Alpha status last month and will reach beta status at the end of October. There are plenty of new features planned for the final release.\nYou can test Fedora 21 now using Rackspace\u0026rsquo;s Cloud Servers. I\u0026rsquo;ve assembled a small Ansible playbook that will automate the upgrade process from Fedora 20 to 21 and it takes around 7-9 minutes to complete.\nThe Ansible playbook is on GitHub along with instructions: ansible-rax-fedora21\n","date":"3 October 2014","permalink":"/p/easily-test-fedora-21-rackspace-cloud/","section":"Posts","summary":"Fedora 21 reached Alpha status last month and will reach beta status at the end of October.","title":"Test Fedora 21 at Rackspace with Ansible"},{"content":"","date":null,"permalink":"/tags/apache/","section":"Tags","summary":"","title":"Apache"},{"content":" BitTorrent Sync allows you to keep files synchronized between multiple computers or mobile devices. It\u0026rsquo;s a handy way to do backups, share files with friends, or automate the movement of data from device to device. It comes with a web frontend, called the Web UI, that allows for connections over HTTP or HTTPS.\nUsing HTTP across the internet to administer Sync seems totally absurd, so I decided to enable HTTPS. 
I quickly realized two things:\nMy SSL certificates were now specified in Apache and Sync Sync\u0026rsquo;s Web UI is relatively slow with SSL enabled (especially over higher latency links) I really wanted to keep things simple by wedging Sync into my existing Apache configuration using mod_proxy. That was easier said than done since the Web UI has some hard-coded paths for certain assets, like CSS and Javascript. After quite a bit of trial end error, this configuration works well:\nProxyPass /btsync http://127.0.0.1:8888 ProxyPassReverse /btsync http://127.0.0.1:8888 ProxyHTMLURLMap http://127.0.0.1:8888 /btsync Redirect permanent /gui /btsync/gui The ProxyPass and ProxyPassReverse lines tell Apache where to proxy the requests and it also tells Apache to make requests on behalf of the browser making the request. The ProxyHTMLURLMap directive tells Apache that any requests to /btsync from a client browser should be translated as a request to the root directory (/) of the Web UI. The last line redirects hard-coded requests to /gui up to /btsync/gui instead.\nWhen your configuration is in place, be sure to run a configuration check (httpd -S) and reload the Apache daemon. If you\u0026rsquo;d like to access your application at a different URI, just replace /btsync in the example configuration with that URI.\nOnce all this is done, I\u0026rsquo;m able to access Sync at https://example.com/btsync and Apache handles all of the backend requests properly. On some distributions, you may find that mod_proxy_html isn\u0026rsquo;t installed by default. You\u0026rsquo;ll need to install it if you want to use ProxyHTMLURLMap in your configuration. For Fedora users, just install it via yum:\nyum install mod_proxy_html Photo: Old Vintage Railway by Viktor Hanacek\n","date":"28 September 2014","permalink":"/p/apaches-mod_proxy-mod_ssl-bittorrent-sync/","section":"Posts","summary":"BitTorrent Sync allows you to keep files synchronized between multiple computers or mobile devices.","title":"Apache’s mod_proxy, mod_ssl, and BitTorrent Sync"},{"content":"","date":null,"permalink":"/tags/sync/","section":"Tags","summary":"","title":"Sync"},{"content":"\nTime Warner has gradually rolled out IPv6 connectivity to their Road Runner customers over the past couple of years and it started appearing on my home network earlier this year. I had some issues getting the leases to renew properly after they expired (TWC\u0026rsquo;s default lease length appears to be seven days) and there were some routing problems that cropped up occasionally. However, over the past month, things seem to have settled down on TWC\u0026rsquo;s San Antonio network.\nDo you have IPv6 yet? #Before you make any adjustments to your network, I\u0026rsquo;d recommend connecting your computer directly to the cable modem briefly to see if you can get an IPv6 address via stateless autoconfiguration (SLAAC). You\u0026rsquo;ll only get one IPv6 address via SLAAC, but we can get a bigger network block later on (keep reading). Check your computer\u0026rsquo;s network status to see if you received an IPv6 address. If you have one, try accessing ipv6.google.com. You can always check ipv6.icanhazip.com or ipv6.icanhaztraceroute.com as well.\nThere\u0026rsquo;s a chance your computer didn\u0026rsquo;t get an IPv6 address while directly connected to the cable modem. Here are some possible solutions:\nPower off the cable modem for 30 seconds, then plug it back in and see if your computer gets an address Ensure you have one of TWC\u0026rsquo;s approved modems. 
(Bear in mind that not all of these modems support IPv6.) Verify that your computer has IPv6 enabled. (Instructions for Windows, Mac and Linux are available.) But I want more addresses #If you were able to get an IPv6 address, it\u0026rsquo;s now time to allocate a network block for yourself and begin using it! We will request an allocation via DHCPv6. Every router is a little different, but the overall concept is the same. Your router will request an allocation on the network and receive that allocation from Time Warner\u0026rsquo;s network. From there, your router will assign that block to an interface (most likely your LAN, more on that in a moment) and begin handing our IPv6 addresses to devices in your home.\nBy default, TWC hands out /64 allocations regardless of what you request via DHCPv6. I had some success in late 2013 when I requested a /56 but it appears that allocations of that size aren\u0026rsquo;t available any longer. Sure, a /64 allocation is gigantic (bigger than the entire IPv4 address space), but getting a /56 would allow you to assign multiple /64 allocations to different interfaces. See the last section of this post on how to get a /56 allocation. Splitting /64\u0026rsquo;s into smaller subnets is a bad idea.\nLet\u0026rsquo;s talk security #IPv6 eliminates the need for network address translation (NAT). This means that by the time you finish this howto, each device in your network with have a publicly accessible internet address. Also, bear in mind that with almost all network devices, firewall rules and ACL\u0026rsquo;s that are configured with IPv4 will have no effect on IPv6. This means that you\u0026rsquo;ll end up with devices on your network with all of their ports exposed to the internet.\nIn Linux, be sure to use ip6tables (via firewalld, if applicable). For other network devices, review their firewall configuration settings to see how you can filter IPv6 traffic. This is a critical step. Please don\u0026rsquo;t skip it.\nOn my Mikrotik device, I have a separate IPv6 firewall interface that I can configure. Here is my default ruleset:\n/ipv6 firewall filter /ipv6 firewall filter add chain=input connection-state=related add chain=input connection-state=established add chain=forward connection-state=established add chain=input in-interface=lanbridge add chain=forward connection-state=related add chain=input dst-port=546 protocol=udp add chain=input protocol=icmpv6 add chain=forward protocol=icmpv6 add chain=forward out-interface=ether1-gateway add action=drop chain=input add action=drop chain=forward The first five rules ensure that only related or established connections can make it to my internal LAN. I allow UDP 546 for DHCPv6 connectivity and I\u0026rsquo;m allowing all ICMPv6 traffic to the router and internal devices. Finally, I allow all of my devices inside the network to talk to the internet and block the remainder of the unmatched traffic.\nConfiguring the router #It\u0026rsquo;s no secret that I\u0026rsquo;m a big fan of Mikrotik devices and I\u0026rsquo;ll guide you through the setup of IPv6 on the Mikrotik in this post. Before starting this step, ensure that your firewall is configured (see previous section).\nOn the Mikrotik, just add a simple DHCPv6 configuration. I\u0026rsquo;ll call mine \u0026rsquo;twc':\n/ipv6 dhcp-client add add-default-route=yes interface=ether1-gateway pool-name=twc After that, you should see an allocation pop up within a few seconds (run ipv6 dhcp-client print):\n# INTERFACE STATUS PREFIX EXPIRES-AFTER 0 ether1-gat... 
bound 2605:xxxx:xxxx:xxxx::/64 6d9h15m45s Check that a new address pool was allocated by running ipv6 pool print:\n# NAME PREFIX PREFIX-LENGTH EXPIRES-AFTER 0 D twc 2605:xxxx:xxxx:xxxx::/64 64 6d9h13m33s You can now assign that address pool to an interface. Be sure to assign the block to your LAN interface. In my case, that\u0026rsquo;s called lanbridge:\n/ipv6 address add address=2605:xxxx:xxxx:xxxx:: from-pool=twc interface=lanbridge By default, the Mikrotik device will now begin announcing that network allocation on your internal network. Some of your devices may already be picking up IPv6 addresses via SLAAC! Try accessing the Google or icanhazip IPv6 addresses from earlier in the post.\nChecking a Linux machine for IPv6 connectivity is easy. Here\u0026rsquo;s an example from a Fedora 20 server I have at home:\n$ ip -6 addr 2: em1: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qlen 1000 inet6 2605:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/64 scope global mngtmpaddr dynamic valid_lft 2591998sec preferred_lft 604798sec inet6 2605:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/64 scope global deprecated mngtmpaddr dynamic valid_lft 1871064sec preferred_lft 0sec If you only see an address that starts with fe80, that\u0026rsquo;s your link local address. It\u0026rsquo;s not an address that can be accessed from the internet.\nTroubleshooting #If you run into some problems or your router can\u0026rsquo;t pull an allocation via DHCPv6, try the troubleshooting steps from the first section of this post.\nGetting assistance from Time Warner is a real challenge. Everyone I\u0026rsquo;ve contacted via phone or Twitter has not been able to help and many of them don\u0026rsquo;t even know what IPv6 is. I was even told \u0026ldquo;we have plenty of regular IPv4 addresses left, don\u0026rsquo;t worry\u0026rdquo; when I asked for help. Even my unusual methods haven\u0026rsquo;t worked:\n@TWC_Help I'll buy one of your engineers a six pack of beer if they can enable IPv6 for my internet connection. ;) \u0026mdash; Major Hayden (@majorhayden) August 9, 2014 My old SBG6580 that was issued by Time Warner wouldn\u0026rsquo;t ever do IPv6 reliably. I ended up buying a SB6121 and I was able to get IPv6 connectivity fairly easily. The SB6121 only does 172mb/sec down - I\u0026rsquo;ll be upgrading it if TWC MAXX shows up in San Antonio.\nGet a /56 #You can get a /56 block of IP addresses from Time Warner by adding prefix-hint=::/56 onto your IPv6 dhcp client configuration. You\u0026rsquo;ll need to carve out some /64 subnets on your own for your internal network and that\u0026rsquo;s outside the scope of this post. The prefix hint configuration isn\u0026rsquo;t available in the graphical interface or on the web (at the time of this post\u0026rsquo;s writing).\n","date":"11 September 2014","permalink":"/p/howto-time-warner-cable-ipv6/","section":"Posts","summary":"","title":"HOWTO: Time Warner Cable and IPv6"},{"content":"","date":null,"permalink":"/tags/asus/","section":"Tags","summary":"","title":"Asus"},{"content":"It\u0026rsquo;s been quite a while since I built a computer but I decided to give it a try for a new hypervisor/NAS box at home. I picked up an Asus Maximus VI Gene motherboard since it had some good parts installed and it seems to work well with Linux. This was my first time doing water cooling for the CPU and I picked up a Seidon 240M after getting some recommendations.\nRubber hits the road #Once everything was in the box and the power was applied, I was stuck with an error code. 
There\u0026rsquo;s a two-digit LCD display on the motherboard that rapidly flips between different codes during boot-up. If it stays on a code for a while and you don\u0026rsquo;t get any display output, you have a problem. For me, this Asus Q code was 55.\nThe manual says it means that RAM isn\u0026rsquo;t installed. I pulled out my four sticks of RAM and reseated all of them. I still got the same error. After reading a bunch of forum posts, I ran through a lot of troubleshooting steps:\nReseat the RAM Try one stick of RAM and add more until the error comes back Reseat the CPU cooler (at least three times) Reseat the CPU (at least three times) Upgrade the BIOS Clear the CMOS Curse loudly, drink a beer, and come back I still had error 55 and wasn\u0026rsquo;t going anywhere fast. After some further testing, I found that if I left the two RAM slots next to the CPU empty, the system would boot. If I put any RAM in the two left RAM slots (A1 and A2), the system wouldn\u0026rsquo;t boot. Here\u0026rsquo;s an excerpt from the manual:\nCPU is on the left. RAM slots are A1, A2, B1, B2, left to right.\nFine-tuning the Google search #I adjusted my Google terms to include \u0026ldquo;A1 A2 slots\u0026rdquo; and found more posts talking about CPU coolers being installed incorrectly. Mine had to be correct - I installed it four times! I decided to try re-installing it one last time.\nWhen I removed the CPU cooler from the CPU, I noticed something strange. There are four standoffs around the CPU that the cooler would attach to with screws. Those standoffs screwed into posts that connected to a bracket on the back of the motherboard.\nThe lower two standoffs are highlighted.\nI removed the two standoffs that were closest to the A1/A2 RAM slots and noticed something peculiar. One side of the standoff had a black coating that seemed a bit tacky while the other side of the standoff was bare metal. Three of the standoffs had the black side down (against the board) while one had the black side up. I unscrewed that standoff and found that the bare metal side was wedged firmly onto some connections that run from the CPU to the A1/A2 RAM slots. Could this be the issue?\nEureka #After double-checking all of the CPU cooler standoffs and attaching the cooler to the board, I crossed my fingers and hit the power button. The machine shot through POST and I was staring down a Fedora logo that quickly led to a GDM login.\nThe culprit\nI don\u0026rsquo;t talk about hardware too often on the blog, but I certainly hopes this helps someone else who is desperately trying to find a solution.\n","date":"22 August 2014","permalink":"/p/asus-maximus-vi-gene-error-55/","section":"Posts","summary":"It\u0026rsquo;s been quite a while since I built a computer but I decided to give it a try for a new hypervisor/NAS box at home.","title":"Asus Maximus VI Gene – Error 55"},{"content":"","date":null,"permalink":"/tags/jenkins/","section":"Tags","summary":"","title":"Jenkins"},{"content":"Installing Jenkins on Fedora 20 is quite easy thanks to the available Red Hat packages, but I ran into problems when I tried to start Jenkins. Here are the installation steps I followed:\nwget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key yum -y install jenkins systemctl enable jenkins systemctl start jenkins Your first error will show up if Java isn\u0026rsquo;t installed. 
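A quick check makes that failure obvious before you go digging through logs (these are generic commands, not anything Jenkins provides):
java -version
rpm -q java-1.7.0-openjdk-headless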
You can fix that by installing Java:\nyum -y install java-1.7.0-openjdk-headless After installing Java, Jenkins still refused to start. Nothing showed up in the command line or via journalctl -xn, so I jumped into the Jenkins log file (found at /var/log/jenkins/jenkins.log):\nAug 13, 2014 2:21:44 PM org.eclipse.jetty.util.log.JavaUtilLog info INFO: jetty-8.y.z-SNAPSHOT Aug 13, 2014 2:21:46 PM org.eclipse.jetty.util.log.JavaUtilLog info INFO: NO JSP Support for , did not find org.apache.jasper.servlet.JspServlet My Java knowledge is relatively limited, so I tossed the JSP error message into Google. A stackoverflow thread was the first result and it talked about a possible misconfiguration with Jetty. I tried their trick of using the OPTIONS environment variable, but that didn\u0026rsquo;t work.\nThen I realized that there wasn\u0026rsquo;t a Jetty package installed on my server. Ouch. The installation continues:\nyum -y install jetty-jsp Jenkins could now get off the ground and I saw the familiar log messages that I\u0026rsquo;m more accustomed to seeing:\nAug 13, 2014 2:24:26 PM hudson.WebAppMain$3 run INFO: Jenkins is fully up and running Much of these problems could stem from the fact that Jenkins RPM\u0026rsquo;s are built to suit a wide array of system versions and the dependencies aren\u0026rsquo;t configured correctly. My hope is that the Jenkins project for Fedora 21 will alleviate some of these problems and give the user a better experience.\n","date":"13 August 2014","permalink":"/p/get-jenkins-start-fedora-20/","section":"Posts","summary":"Installing Jenkins on Fedora 20 is quite easy thanks to the available Red Hat packages, but I ran into problems when I tried to start Jenkins.","title":"Start Jenkins on Fedora 20"},{"content":"","date":null,"permalink":"/tags/httpry/","section":"Tags","summary":"","title":"Httpry"},{"content":"Red Hat Enterprise Linux and CentOS 7 users can now install httpry 0.1.8 in EPEL 7 Beta. The new httpry version is also available for RHEL/CentOS 6 and supported Fedora versions (19, 20, 21 branched, and rawhide).\nConfiguring EPEL on a RHEL/CentOS server is easy. Follow the instructions on EPEL\u0026rsquo;s site and install the epel-release RPM that matches your OS release version.\nIf you haven\u0026rsquo;t used httpry before, check the output on Jason Bittel\u0026rsquo;s site. It\u0026rsquo;s a handy way to watch almost any type of HTTP server and see the traffic in an easier to read (and easier to grep) format.\n","date":"13 August 2014","permalink":"/p/httpry-rhel-centos-7/","section":"Posts","summary":"Red Hat Enterprise Linux and CentOS 7 users can now install httpry 0.","title":"httpry 0.1.8 available for RHEL and CentOS 7"},{"content":"\nThe gist gem from GitHub allows you to quickly post text into a GitHub gist. You can use it with the public github.com site but you can also configure it to work with a GitHub Enterprise installation.\nTo get started, add two aliases to your ~/.bashrc:\nalias gist=\u0026#34;gist -c\u0026#34; alias workgist=\u0026#34;GITHUB_URL=https://github.mycompany.com gist -c\u0026#34; The -c will copy the link to the gist to your keyboard whenever you use the gist tool on the command line. 
Now, go through the login process with each command after sourcing your updated ~/.bashrc:\nsource ~/.bashrc gist --login (follow the prompts to auth and get an oauth token from github.com) workgist --login (follow the prompts to auth and get an oauth token from GitHub Enterprise) You\u0026rsquo;ll now be able to use both aliases quickly from the command line:\ncat boring_public_data.txt | gist cat super_sensitive_company_script.sh | workgist ","date":"8 August 2014","permalink":"/p/use-gist-gem-github-enterprise-github-com/","section":"Posts","summary":"","title":"Quickly post gists to GitHub Enterprise and github.com"},{"content":"","date":null,"permalink":"/tags/ruby/","section":"Tags","summary":"","title":"Ruby"},{"content":" While using a Dell R720 at work today, we stumbled upon a problem where the predictable network device naming with systemd gave us some unpredictable results. The server has four onboard network ports (two 10GbE and two 1GbE) and an add-on 10GbE card with two additional ports.\nRunning lspci gives this output:\n# lspci | grep Eth 01:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) 01:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) 08:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 08:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 42:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) 42:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) If you\u0026rsquo;re not familiar with that output, it says:\nTwo 10GbE ports on PCI bus 1 (ports 0 and 1) Two 1GbE ports on PCI bus 8 (ports 0 and 1) Two 10GbE ports on PCI bus 42 (ports 0 and 1) When the system boots up, the devices are named based on systemd-udevd\u0026rsquo;s criteria. Our devices looked like this after boot:\n# ip addr | egrep ^[0-9] 1: lo: \u0026lt;LOOPBACK,UP,LOWER_UP\u0026gt; mtu 65536 qdisc noqueue state UNKNOWN group default 2: enp8s0f0: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN group default qlen 1000 3: enp8s0f1: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN group default qlen 1000 4: enp1s0f0: \u0026lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026gt; mtu 1500 qdisc mq state UP group default qlen 1000 5: enp1s0f1: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN group default qlen 1000 6: enp66s0f0: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN group default qlen 1000 7: enp66s0f1: \u0026lt;BROADCAST,MULTICAST\u0026gt; mtu 1500 qdisc noop state DOWN group default qlen 1000 Devices 2-5 make sense since they\u0026rsquo;re on PCI buses 1 and 8. However, our two-port NIC on PCI bus 42 has suddenly been named 66. We rebooted the server with the rd.udev.debug kernel command line to display debug messages from systemd-udevd during boot. 
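If you want to reproduce that on your own hardware, adding the parameter for the next boot is a one-liner with grubby. This is a sketch, so adjust it for your bootloader and remove the argument once you have the logs you need:
# Append the udev debug flag to every installed kernel's command line
grubby --update-kernel=ALL --args="rd.udev.debug"
# Take it back out after the next boot
grubby --update-kernel=ALL --remove-args="rd.udev.debug"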
That gave us this:\n# journalctl | grep enp66s0f0 systemd-udevd[471]: renamed network interface eth0 to enp66s0f0 systemd-udevd[471]: NAME \u0026#39;enp66s0f0\u0026#39; /usr/lib/udev/rules.d/80-net-setup-link.rules:13 systemd-udevd[471]: changing net interface name from \u0026#39;eth0\u0026#39; to \u0026#39;enp66s0f0\u0026#39; systemd-udevd[471]: renamed netif to \u0026#39;enp66s0f0\u0026#39; systemd-udevd[471]: changed devpath to \u0026#39;/devices/pci0000:40/0000:40:02.0/0000:42:00.0/net/enp66s0f0\u0026#39; So the system sees that the enp66s0f0 device is actually on PCI bus 42. What gives? A quick trip to #systemd on Freenode caused a facepalm:\nmhayden | weird, udev shows it on pci bus 42 but yet names it 66 jwl | 0x42 = 66 I didn\u0026rsquo;t expect to see hex. Sure enough, converting 42 in hex to decimal yields 66:\n$ printf \u0026#34;%d\\n\u0026#34; 0x42 66 That also helps to explain why the devices on buses 1 and 8 were unaffected. Converting 1 and 8 in hex to decimal gives 1 and 8. If you\u0026rsquo;re new to hex, this conversion table may help.\nPhoto Credit: mindfieldz via Compfight cc\n","date":"6 August 2014","permalink":"/p/unexpected-predictable-network-naming-systemd/","section":"Posts","summary":"While using a Dell R720 at work today, we stumbled upon a problem where the predictable network device naming with systemd gave us some unpredictable results.","title":"Unexpected predictable network naming with systemd"},{"content":"","date":null,"permalink":"/tags/mac/","section":"Tags","summary":"","title":"Mac"},{"content":"My play/pause button mysteriously stopped working in iTunes and VLC mysteriously this week on my laptop. It affected the previous track and next track buttons as well. It turns out that my Google Music extension in Chrome stole the keyboard bindings after the extension updated this week.\nIf your buttons stopped working as well, follow these steps to check your keyboard shortcuts in Chrome:\nChoose Preferences in the Chrome menu in the menu bar Click Extensions in the left sidebar Scroll all the way to the bottom of the page Click Keyboard Shortcuts Look at the key bindings in the Google Play Music section Your shortcuts might look like the ones shown here in an Apple support forum. Click each box with the X to clear each key binding or click on the key binding box itself to bind it to another key combination. If you do that, it should end up like this:\nYou also have the options of switching the shortcuts to only work within Chrome by using the drop down menus to the right of the key binding boxes.\n","date":"30 July 2014","permalink":"/p/playpause-button-stopped-working-in-os-x-mavericks/","section":"Posts","summary":"My play/pause button mysteriously stopped working in iTunes and VLC mysteriously this week on my laptop.","title":"Play/pause button stopped working in OS X Mavericks"},{"content":"We\u0026rsquo;re all familiar with live booting Linux distributions. Almost every Linux distribution under the sun has a method for making live CD\u0026rsquo;s, writing live USB sticks, or booting live images over the network. The primary use case for some distributions is on a live medium (like KNOPPIX).\nHowever, I embarked on an adventure to look at live booting Linux for a different use case. Sure, many live environments are used for demonstrations or installations - temporary activities for a desktop or a laptop. My goal was to find a way to boot a large fleet of servers with live images. 
These would need to be long-running, stable, feature-rich, and highly configurable live environments.\nFinding off the shelf solutions wasn\u0026rsquo;t easy. Finding cross-platform off the shelf solutions for live booting servers was even harder. I worked on a solution with a coworker to create a cross-platform live image builder that we hope to open source soon. (I\u0026rsquo;d do it sooner but the code is horrific.) ;)\nDebian jessie (testing) #First off, we took a look at Debian\u0026rsquo;s Live Systems project. It consists of two main parts: something to build live environments, and something to help live environments boot well off the network. At the time of this writing, the live build process leaves a lot to be desired. There\u0026rsquo;s a peculiar tree of directories that are required to get started and the documentation isn\u0026rsquo;t terribly straightforward. Although there\u0026rsquo;s a bunch of documentation available, it\u0026rsquo;s difficult to follow and it seems to skip some critical details. (In all fairness, I\u0026rsquo;m an experienced Debian user but I haven\u0026rsquo;t gotten into the innards of Debian package/system development yet. My shortcomings there could be the cause of my problems.)\nThe second half of the Live Systems project consist of multiple packages that help with the initial boot and configuration of a live instance. These tools work extremely well. Version 4 (currently in alpha) has tools for doing all kinds of system preparation very early in the boot process and it\u0026rsquo;s compatible with SysVinit or systemd. The live images boot up with a simple SquashFS (mounted read only) and they use AUFS to add on a writeable filesystem that stays in RAM. Reads and writes to the RAM-backed filesystem are extremely quick and you don\u0026rsquo;t run into a brick wall when the filesystem fills up (more on that later with Fedora).\nUbuntu 14.04 #Ubuntu uses casper which seems to precede Debian\u0026rsquo;s Live Systems project or it could be a fork (please correct me if I\u0026rsquo;m incorrect). Either way, it seemed a bit less mature than Debian\u0026rsquo;s project and left a lot to be desired.\nFedora and CentOS #Fedora 20 and CentOS 7 are very close in software versions and they use the same mechanisms to boot live images. They use dracut to create the initramfs and there are a set of dmsquash modules that handle the setup of the live image. The livenet module allows the live images to be pulled over the network during the early part of the boot process.\nBuilding the live images is a little tricky. You\u0026rsquo;ll find good documentation and tools for standard live bootable CD\u0026rsquo;s and USB sticks, but booting a server isn\u0026rsquo;t as straightforward. Dracut expects to find a squashfs which contains a filesystem image. When the live image boots, that filesystem image is connected to a loopback device and mounted read-only. A snapshot is made via device mapper that gives you a small overlay for adding data to the live image.\nThis overlay comes with some caveats. Keeping tabs on how quickly the overlay is filling up can be tricky. Using tools like df is insufficient since device mapper snapshots are concerned with blocks. As you write 4k blocks in the overlay, you\u0026rsquo;ll begin to fill the snapshot, just as you would with an LVM snapshot. When the snapshot fills up and there are no blocks left, the filesystem in RAM becomes corrupt and unusable. 
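If you want to keep an eye on the overlay before it gets to that point, dmsetup reports allocated versus total sectors for snapshot targets. Here's a rough sketch; the live-rw device name is an assumption based on typical Fedora live images, so check dmsetup ls for the real name on your system:
# The status line for a snapshot target ends with <allocated>/<total> plus metadata sectors
dmsetup status live-rw
# Or poll every device-mapper target on the box
watch -n 5 'dmsetup status'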
There are some tricks to force it back online but I didn\u0026rsquo;t have much luck when I tried to recover. The only solution I could find was to hard reboot.\nArch #The ArchLinux live boot environments seem very similar to the ones I saw in Fedora and CentOS. All of them use dracut and systemd, so this makes sense. Arch once used a project called Larch to create live environments but it\u0026rsquo;s fallen out of support due to AUFS2 being removed (according to the wiki page).\nAlthough I didn\u0026rsquo;t build a live environment with Arch, I booted one of their live ISO\u0026rsquo;s and found their live environment to be much like Fedora and CentOS. There was a device mapper snapshot available as an overlay and once it\u0026rsquo;s full, you\u0026rsquo;re in trouble.\nOpenSUSE #The path to live booting an OpenSUSE image seems quite different. The live squashfs is mounted read only onto /read-only. An ext3 filesystem is created in RAM and is mounted on /read-write. From there, overlayfs is used to lay the writeable filesystem on top of the read-only squashfs. You can still fill up the overlay filesystem and cause some temporary problems, but you can back out those errant files and still have a useable live environment.\nHere\u0026rsquo;s the problem: overlayfs was given the green light for consideration in the Linux kernel by Linus in 2013. It\u0026rsquo;s been proposed for several kernel releases and it didn\u0026rsquo;t make it into 3.16 (which will be released soon). OpenSUSE has wedged overlayfs into their kernel tree just as Debian and Ubuntu have wedged AUFS into theirs.\nWrap-up #Building highly customized live images isn\u0026rsquo;t easy and running them in production makes it more challenging. Once the upstream kernel has a stable, solid, stackable filesystem, it should be much easier to operate a live environment for extended periods. There has been a parade of stackable filesystems over the years (remember funion-fs?) but I\u0026rsquo;ve been told that overlayfs seems to be a solid contender. I\u0026rsquo;ll keep an eye out for those kernel patches to land upstream but I\u0026rsquo;m not going to hold my breath quite yet.\n","date":"29 July 2014","permalink":"/p/adventures-in-live-booting-linux-distributions/","section":"Posts","summary":"We\u0026rsquo;re all familiar with live booting Linux distributions.","title":"Adventures in live booting Linux distributions"},{"content":"","date":null,"permalink":"/tags/arch/","section":"Tags","summary":"","title":"Arch"},{"content":"","date":null,"permalink":"/tags/filesystem/","section":"Tags","summary":"","title":"Filesystem"},{"content":"","date":null,"permalink":"/tags/live/","section":"Tags","summary":"","title":"Live"},{"content":"","date":null,"permalink":"/tags/squashfs/","section":"Tags","summary":"","title":"Squashfs"},{"content":"","date":null,"permalink":"/tags/x11/","section":"Tags","summary":"","title":"X11"},{"content":"Forwarding X over ssh is normally fairly straightforward when you have the correct packages installed. I have another post about the errors that appear when you\u0026rsquo;re missing the xorg-x11-xauth (CentOS, Fedora, RHEL) or xauth (Debian, Ubuntu) packages.\nToday\u0026rsquo;s error was a bit different. Each time I accessed a particular Debian server via ssh with X forwarding requested, I saw this:\n$ ssh -YC myserver.example.com X11 forwarding request failed on channel 0 The xauth package was installed and I found a .Xauthority file in root\u0026rsquo;s home directory. 
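Checks like these make quick work of ruling out the usual suspects (a rough sketch of the checks rather than output from that session):
# Confirm the xauth package is installed on the Debian server
dpkg -l xauth
# Confirm sshd is configured to allow X11 forwarding
grep -i '^X11Forwarding' /etc/ssh/sshd_config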
Removing the .Xauthority file and reconnecting via ssh didn\u0026rsquo;t help. After some searching, I stumbled upon a GitHub gist that had some suggestions for fixes.\nOn this particular server, IPv6 was disabled. That caused the error. The quickest fix was to restrict sshd to IPv4 only by adding this line to /etc/ssh/sshd_config:\nAddressFamily inet I restarted the ssh daemon and I was able to forward X applications over ssh once again.\n","date":"24 July 2014","permalink":"/p/x11-forwarding-request-failed-on-channel-0/","section":"Posts","summary":"Forwarding X over ssh is normally fairly straightforward when you have the correct packages installed.","title":"X11 forwarding request failed on channel 0"},{"content":"I\u0026rsquo;m always impressed with the content published by folks at Etsy and Ben Hughes\u0026rsquo; presentation from DevOpsDays Minneapolis 2014 is no exception.\nBen adds some levity to the topic of information security with some hilarious (but relevant) images and reminds us that security is an active process that everyone must practice. Everyone plays a part - not just traditional corporate security employees.\nI\u0026rsquo;ve embedded the presentation here for your convenience:\nHere\u0026rsquo;s a link to the original presentation on SpeakerDeck:\nHandmade security at Etsy ","date":"22 July 2014","permalink":"/p/etsy-reminds-us-that-information-security-is-an-active-process/","section":"Posts","summary":"I\u0026rsquo;m always impressed with the content published by folks at Etsy and Ben Hughes\u0026rsquo; presentation from DevOpsDays Minneapolis 2014 is no exception.","title":"Etsy reminds us that information security is an active process"},{"content":"I\u0026rsquo;ve been working with some Fedora environments in chroots and I ran into a peculiar SELinux AVC denial a short while ago:\navc: denied { dyntransition } for pid=809 comm=\u0026#34;sshd\u0026#34; scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process The ssh daemon is running on a non-standard port but I verified that the port is allowed with semanage port -l. The target context of sshd_net_t from the AVC seems sensible for the ssh daemon. I started to wonder if a context wasn\u0026rsquo;t applied correctly to the sshd excutable itself, so I checked within the chroot:\n# ls -alZ /usr/sbin/sshd -rwxr-xr-x. 1 root root system_u:object_r:sshd_exec_t:SystemLow 652816 May 15 03:56 /usr/sbin/sshd That\u0026rsquo;s what it should be. I double-checked my running server (which booted a squashfs containing the chroot) and saw something wrong:\n# ls -alZ /usr/sbin/sshd -rwxr-xr-x. root root system_u:object_r:file_t:s0 /usr/sbin/sshd How did file_t get there? It turns out that I was using rsync to drag data out of the chroot and I forgot to use the --xattrs argument with rsync.\n","date":"3 July 2014","permalink":"/p/avc-denied-dyntransition-from-sshd/","section":"Posts","summary":"I\u0026rsquo;ve been working with some Fedora environments in chroots and I ran into a peculiar SELinux AVC denial a short while ago:","title":"AVC: denied dyntransition from sshd"},{"content":"My work at Rackspace has involved working with a bunch of Debian chroots lately. One problem I had was that daemons tried to start in the chroot as soon as I installed them. 
That created errors and made my ansible output look terrible.\nIf you\u0026rsquo;d like to prevent daemons from starting after installing a package, just toss a few lines into /usr/sbin/policy-rc.d:\ncat \u0026gt; /usr/sbin/policy-rc.d \u0026lt;\u0026lt; EOF #!/bin/sh echo \u0026#34;All runlevel operations denied by policy\u0026#34; \u0026gt;\u0026amp;2 exit 101 EOF Now, install any packages that you need and the daemons will remain stopped until you start them (or reboot the server). Be sure to remove the policy file you added once you\u0026rsquo;re done installing your packages.\nThis seems like a good opportunity to get on a soapbox about automatically starting daemons. ;)\nI still have a very difficult time understanding why Debian-based distributions start daemons as soon as the package is installed. Having an option to enable this might be useful for some situations, but this shouldn\u0026rsquo;t be the default.\nYou end up with situations like the one in this puppet bug report. The daemon shouldn\u0026rsquo;t start until you\u0026rsquo;re ready to configure it and use it. However, the logic is that the daemon is so horribly un-configured that it shouldn\u0026rsquo;t hurt anything if starts immediately. So why start the daemon at all?\nWhen I run the command apt-get install or yum install, I expect that packages will be installed to disk and nothing more. Even the definition of the English word \u0026ldquo;install\u0026rdquo; talks about “preparing” something for use, not actually using it:\nTo connect, set up or prepare something for use\nIf I install an electrical switch at home, I don\u0026rsquo;t install it in the ON position with my circuit breaker in the ON position. I install it with everything off, verify my work, ensure that it fits in place, and then I apply power. The installation and actual use of the new switch are two completely separate activities with additional work required in between.\nI strongly urge the Debian community to consider switching to a mechanism where daemons don\u0026rsquo;t start until the users configure them properly and are ready to use them. This makes configuration management much easier, improves security, and provides consistency with almost every other Linux distribution.\n","date":"26 June 2014","permalink":"/p/install-debian-packages-without-starting-daemons/","section":"Posts","summary":"My work at Rackspace has involved working with a bunch of Debian chroots lately.","title":"Install Debian packages without starting daemons"},{"content":"Working with ansible is enjoyable, but it\u0026rsquo;s a little bland when you use it with Jenkins. Jenkins doesn\u0026rsquo;t spawn a TTY and that causes ansible to skip over the code that outputs status lines with colors. The fix is relatively straightforward.\nFirst, install the AnsiColor Plugin on your Jenkins node.\nOnce that\u0026rsquo;s done, edit your Jenkins job so that you export ANSIBLE_FORCE_COLOR=true before running ansible:\nexport ANSIBLE_FORCE_COLOR=true ansible-playbook -i hosts site.yml If your ansible playbook requires sudo to run properly on your local host, be sure to use the -E option with sudo so that your environment variables are preserved when your job runs. 
For example:\nexport ANSIBLE_FORCE_COLOR=true sudo -E ansible-playbook -i hosts site.yml HOLD UP: As Sam Sharpe reminded me, the better way to handle environment variables with sudo is to add them to env_keep in your sudoers file (use visudo to edit it):\nDefaults env_reset Defaults env_keep += \u0026#34;ANSIBLE_FORCE_COLOR\u0026#34; Adding it to env_keep is a more secure method and you won\u0026rsquo;t need the -E any longer on the command line.\nWhile you\u0026rsquo;re on the configuration page for your Jenkins job, look for Color ANSI Console Output under the Build Environment section. Enable it and ensure xterm is selected in the drop-down box.\nSave your new configuration and run your job again. You should have some awesome colors in your console output when your ansible job runs.\n","date":"25 June 2014","permalink":"/p/get-colorful-ansible-output-in-jenkins/","section":"Posts","summary":"Working with ansible is enjoyable, but it\u0026rsquo;s a little bland when you use it with Jenkins.","title":"Get colorful ansible output in Jenkins"},{"content":"","date":null,"permalink":"/tags/sudo/","section":"Tags","summary":"","title":"Sudo"},{"content":"Dell provides the racadm software on Linux that allows you to manage Dell hardware from a Linux system. Getting it installed on a very modern distribution like Fedora 20 isn\u0026rsquo;t supported, but here are some steps that might help you along the way:\nFirst off, go to Dell\u0026rsquo;s site and review the racadm download instructions. I\u0026rsquo;d recommend following the Remote RACADM instructions so that you can manage multiple systems from your Fedora installation. You\u0026rsquo;ll be looking for a download with the text Linux Remote Access Utilities in the name. At the time of this post\u0026rsquo;s writing, the filename is OM-MgmtStat-Dell-Web-LX-7.4.0-866_A00.tar.gz.\nUn-tar the file and you\u0026rsquo;ll get two directories dumped out into your working directory: docs and linux:\ntar xvzf OM-MgmtStat-Dell-Web-LX-7.4.0-866_A00.tar.gz cd linux/rac/RHEL6/x86_64/ yum localinstall *.rpm That should install all of the software you need. There weren\u0026rsquo;t any dependencies to install on my Fedora workstation but yum should take care of these for you if you have a more minimal installation.\nOnce that\u0026rsquo;s done, close your shell and re-open it. You should be able to run racadm from your terminal. You\u0026rsquo;ll probably get an error like this if you run it:\nERROR: Failed to initialize transport Running strace reveals that racadm is looking for libssl.so but can\u0026rsquo;t find it. Fix that by installing openssl-devel:\nyum -y install openssl-devel Now you should be able to run racadm and configure your servers!\n","date":"20 June 2014","permalink":"/p/getting-dells-racadm-working-in-fedora-20/","section":"Posts","summary":"Dell provides the racadm software on Linux that allows you to manage Dell hardware from a Linux system.","title":"Getting Dell’s racadm working in Fedora 20"},{"content":"I talked about the joys of running my own mail server last week only to find that my mail server was broken yesterday. Spamassassin stopped doing DNS lookups for RBL and SPF checks.\nI had one of these moments:\nMy logs looked like this:\nplugin: eval failed: available_nameservers: No DNS servers available! plugin: eval failed: available_nameservers: No DNS servers available! rules: failed to run NO_DNS_FOR_FROM RBL test, skipping: (available_nameservers: [...] No DNS servers available!) (available_nameservers: [...] 
No DNS servers available! My /etc/resolv.conf was correct and had two valid DNS servers listed. Also, the permissions set on /etc/resolv.conf were reasonable (0644) and the SELinux context applied to the file was appropriate (net_conf_t). Everything else on the system was able to resolve DNS records properly. Even an strace on the spamd process showed it reading /etc/resolv.conf successfully!\nIt was Google time. I put some snippets of my error output into the search bar and found a spamassassin bug report. Mark Martinec found the root cause of the bug:\nNet::DNS version 0.76 changed the field name holding a set of nameservers in a Net::DNS::Resolver object: it used to be \u0026rsquo;nameservers\u0026rsquo;, but is now split into two fields: \u0026rsquo;nameserver4\u0026rsquo; and \u0026rsquo;nameserver6'.\nMail/SpamAssassin/DnsResolver.pm relied on the internal field name of a Net::DNS::Resolver object to obtain a default list of recursive name servers, so the change in Net::DNS broke that.\nThe patch from the bug report worked just fine on my Fedora 20 mail server. Be sure to restart spamd after making the change.\nThere\u0026rsquo;s a Fedora bug report as well.\nIf anyone is interested, I plan to write up my email configuration on Fedora soon for other folks to use. I might even make some ansible playbooks for it. ;)\nFedora update: Fedora\u0026rsquo;s spamassassin package has been updated to 3.4.0-7 and it fixes two bugs. You\u0026rsquo;ll find it in the stable repositories in a few days.\n","date":"20 June 2014","permalink":"/p/fixing-broken-dns-lookups-in-spamassassin/","section":"Posts","summary":"I talked about the joys of running my own mail server last week only to find that my mail server was broken yesterday.","title":"Fixing broken DNS lookups in spamassassin"},{"content":"","date":null,"permalink":"/tags/perl/","section":"Tags","summary":"","title":"Perl"},{"content":"","date":null,"permalink":"/tags/postfix/","section":"Tags","summary":"","title":"Postfix"},{"content":"","date":null,"permalink":"/tags/spamassassin/","section":"Tags","summary":"","title":"Spamassassin"},{"content":"Citrix has some helpful documentation online about configuring remote syslog support for XenServer using the XenCenter GUI. 
However, if you need to do this via configuration management or scripts, using a GUI isn\u0026rsquo;t an option.\nGetting it done via the command line is relatively easy:\nHOSTUUID=`xe host-list --minimal` SYSLOGHOST=syslog.example.com xe host-param-set uuid=${HOSTUUID} logging:syslog_destination=${SYSLOGHOST} xe host-syslog-reconfigure host-uuid=${HOSTUUID} Removing the configuration and going back to only local logging is easy as well:\nHOSTUUID=`xe host-list --minimal` xe host-param-clear uuid=${HOSTUUID} param-name=logging xe host-syslog-reconfigure host-uuid=${HOSTUUID} ","date":"3 June 2014","permalink":"/p/configure-remote-syslog-for-xenserver-via-the-command-line/","section":"Posts","summary":"Citrix has some helpful documentation online about configuring remote syslog support for XenServer using the XenCenter GUI.","title":"Configure remote syslog for XenServer via the command line"},{"content":"","date":null,"permalink":"/tags/syslog/","section":"Tags","summary":"","title":"Syslog"},{"content":"","date":null,"permalink":"/tags/xenserver/","section":"Tags","summary":"","title":"Xenserver"},{"content":"This post appeared on the Rackspace Blog last week and I copied it here so that readers of this blog will see it.\nYou\u0026rsquo;ve heard it before: information security isn\u0026rsquo;t easy. There\u0026rsquo;s no perfect security policy or piece of technology that will protect your business from all attacks. However, security is a process and processes can always be improved.\nLast month, the great folks at Accruent invited me to talk about this topic at the annual Accruent Insights 2014 conference held in Austin, Texas. Their users wanted to know more about the Target breach and the Heartbleed attack, as well as strategies for strengthening their security safeguards against unknown threats.\nTo understand these threats, it\u0026rsquo;s important to have a good grasp of the basic concepts around information security. Businesses don\u0026rsquo;t exist to be secure; they exist to build innovative products, create relationships with customers and provide a great work environment for their employees. Security must be woven into the processes that drive a business forward. There\u0026rsquo;s no finish line for security and it\u0026rsquo;s rarely successful when it\u0026rsquo;s bolted on as an afterthought.\nDonald Rumsfeld delivered an unexpectedly cohesive summary of modern information security back in 2002 when reporters asked him about the lack of evidence surrounding Iraq and weapons of mass destruction:\nReports that say there\u0026rsquo;s - that something hasn\u0026rsquo;t happened are always interesting to me, because as we know, there are known knowns; there are things that we know that we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don\u0026rsquo;t know we don\u0026rsquo;t know.\n-Donald Rumsfeld, United States Secretary of Defense\nRumsfeld probably didn\u0026rsquo;t know it at the time, but he summarized the challenges of information security in a few sentences. There are things we know will be problematic (a known known) and we must fix them or prepare ourselves for the damage they may cause. There are other things that we don\u0026rsquo;t know enough about (a known unknown) and we must learn more about them. The last group, the unknown unknowns, is the most challenging. 
If you\u0026rsquo;re looking for a good example of these, just examine the Heartbleed attack.\nDealing with all of these attacks requires a multi-layer approach: preventative, detective and corrective.\nThe preventative layer reduces your chances of being breached. If you lock your doors or close your blinds when you leave your home, then you already understand the value of the preventative layer. Making the attacker\u0026rsquo;s job more difficult reduces the chance that they will target you. Let\u0026rsquo;s face it: most attackers are looking for an easy target. Going after a hard target means there\u0026rsquo;s a greater risk of getting caught.\nHowever, there are situations where someone has targeted your business individually, and they will do whatever it takes to get what they want. It\u0026rsquo;s critical to detect that activity as soon as it occurs. At home, we set our security alarms and join neighborhood watch programs. These measures will alert us to attacks that make it through our preventative layers. Businesses might use intrusion detection systems or log monitoring solutions in their defensive layer.\nWhen all else fails, the corrective layer is the last line of defense. This layer consists of the things you must do to remove a threat and return everything back to normal. For property owners, examples of the corrective layer include calling the police, purchasing homeowner\u0026rsquo;s insurance or acquiring firearms. These mechanisms are much more costly, and they require thought before they\u0026rsquo;re used.\nEach layer gives you a feedback loop for the previous layers. For example, if someone breaks in through a window and takes your TV, you may invest in better detective layers (like an alarm system with a glass break sensor) or preventative layers (like thorny bushes in front of your windows).\nIf these layers make sense, then you understand defense in depth and risk management. Defense in depth requires you to assume the worst and build more layers of defense (think about castles). Risk management involves identifying and avoiding risk. If you have heirloom jewelry at home, you might place it in fire safe. You\u0026rsquo;ve just practiced defense in depth (the jewelry is in a locked safe in a locked house) and risk management (there\u0026rsquo;s a high impact to you if the jewelry is stolen and you reduced the risk).\nIn summary, good security practice stems from exactly that: practice security each day and make it part of your normal business processes. Security improvements must be made with changes to people, process and technology. The businesses that truly excel in information security are those that insulate themselves from risk-internal and external-with effective preventative, detective and corrective layers.\nIf you\u0026rsquo;d like to review the presentation slides from the Accruent Insights conference, you can find them on SlideShare.\nI\u0026rsquo;m always trying to get better at presenting so please feel free to send me some constructive criticism. ;)\n","date":"24 May 2014","permalink":"/p/evade-the-breach/","section":"Posts","summary":"This post appeared on the Rackspace Blog last week and I copied it here so that readers of this blog will see it.","title":"Evade the Breach"},{"content":"It seems like everyone is embracing systemd these days. It\u0026rsquo;s been in Fedora since 2011 and it\u0026rsquo;s already in the RHEL 7 release candidate. Arch Linux and Gentoo have it as well. 
Debian got on board with the jessie release (which is currently in testing).\nSwitching from old SysVinit to systemd in Debian jessie is quite simple. For the extremely cautious system administrators, you can follow Debian\u0026rsquo;s guide and test systemd before you make the full cutover.\nHowever, I\u0026rsquo;ve had great results with making the jump in one pass:\napt-get update apt-get install systemd systemd-sysv reboot After you reboot, you might notice /sbin/init still hanging out in your process list:\n# ps aufx | grep init root 1 0.0 0.1 45808 3820 ? Ss 08:16 0:00 /sbin/init That\u0026rsquo;s actually a symlink to systemd:\n# ls -al /sbin/init lrwxrwxrwx 1 root root 20 Mar 19 13:15 /sbin/init -\u0026gt; /lib/systemd/systemd You also have journald for quick access to logs:\n# journalctl -u cron -- Logs begin at Tue 2014-05-20 08:16:21 CDT, end at Tue 2014-05-20 08:31:20 CDT. -- May 20 08:16:24 jessie-auditd-2 /usr/sbin/cron[837]: (CRON) INFO (pidfile fd = 3) May 20 08:16:24 jessie-auditd-2 cron[774]: Starting periodic command scheduler: cron. May 20 08:16:24 jessie-auditd-2 systemd[1]: Started LSB: Regular background program processing daemon. May 20 08:16:24 jessie-auditd-2 /usr/sbin/cron[842]: (CRON) STARTUP (fork ok) May 20 08:16:24 jessie-auditd-2 /usr/sbin/cron[842]: (CRON) INFO (Running @reboot jobs) May 20 08:17:01 jessie-auditd-2 CRON[990]: pam_unix(cron:session): session opened for user root by (uid=0) May 20 08:17:01 jessie-auditd-2 /USR/SBIN/CRON[991]: (root) CMD ( cd / \u0026amp;\u0026amp; run-parts --report /etc/cron.hourly) ","date":"20 May 2014","permalink":"/p/switching-to-systemd-on-debian-jessie/","section":"Posts","summary":"It seems like everyone is embracing systemd these days.","title":"Switching to systemd on Debian jessie"},{"content":"I\u0026rsquo;m in the process of trying Fedora 20 on my retina MacBook and I ran into a peculiar issue with Chrome. Some sites would load up normally and I could read everything on the page. Other sites would load up and only some of the text would be displayed. Images were totally unaffected.\nIt wasn\u0026rsquo;t this way on the initial installation of Fedora but it cropped up somewhere along the way as I installed software. Changing the configuration within Chrome wasn\u0026rsquo;t an option - I couldn\u0026rsquo;t even see any text on the configuration pages!\nThe only commonality I could find is that all pages that specified their own web fonts (like the pages on this site) loaded up perfectly. Everything was visible. However, on sites that tend to use whatever font is available in the browser (sites that specify a font family), the text was missing. A good example was The Aviation Herald.\nI remembered installing some Microsoft core fonts via Fedy and I added in some patched powerline fonts to work with tmux. A quick check of the SELinux troubleshooter alerted me to the problem: the new fonts had the wrong SELinux labels applied and Chrome wasn\u0026rsquo;t allowed to access them.\nI decided to relabel the whole filesystem:\nrestorecon -Rv / The restorecon output was line after line of fonts that I had installed earlier in the evening. 
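A full relabel does the trick, but if the only new files are fonts under your home directory, a narrower pass over the usual user font locations would likely be enough. The paths below are the common defaults rather than anything specific to my setup:
# Relabel just the per-user font directories instead of the whole filesystem
restorecon -Rv ~/.fonts ~/.local/share/fonts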
Once it finished running, I started Chrome and it was working just as I had expected.\n","date":"18 May 2014","permalink":"/p/text-missing-in-chrome-on-linux/","section":"Posts","summary":"I\u0026rsquo;m in the process of trying Fedora 20 on my retina MacBook and I ran into a peculiar issue with Chrome.","title":"Text missing in chrome on Linux"},{"content":"During one of my regular trips to reddit, I stumbled upon an amazingly helpful Linux I/O stack diagram:\nIt\u0026rsquo;s quite comprehensive and it can really help if you\u0026rsquo;re digging through a bottleneck and you\u0026rsquo;re not quite sure where to look. The original diagram is available in multiple formats from Thomas Krenn\u0026rsquo;s website.\nIf you combine that with this slide from Brendan Gregg\u0026rsquo;s Linux Performance Analysis and Tools presentation from Scale 11x, you can attack performance problems with precision:\n","date":"30 April 2014","permalink":"/p/helpful-linux-io-stack-diagram/","section":"Posts","summary":"During one of my regular trips to reddit, I stumbled upon an amazingly helpful Linux I/O stack diagram:","title":"Helpful Linux I/O stack diagram"},{"content":"Amid all of the Docker buzz at the Red Hat Summit, Project Atomic was launched. It\u0026rsquo;s a minimalistic Fedora 20 image with a few tweaks, including rpm-ostree and geard.\nThere are great instructions on the site for firing up a test instance under KVM but my test server doesn\u0026rsquo;t have a DHCP server on its network. You can use Project Atomic with static IP addresses fairly easily:\nCreate a one-line /etc/sysconfig/network:\nNETWORKING=yes Drop in a basic network configuration into /etc/sysconfig/network-scripts/ifcfg-eth0:\nDEVICE=eth0 IPADDR=10.127.92.32 NETMASK=255.255.255.0 GATEWAY=10.127.92.1 ONBOOT=yes All that\u0026rsquo;s left is to set DNS servers and a hostname:\necho \u0026#34;nameserver 8.8.8.8\u0026#34; \u0026gt; /etc/resolv.conf hostnamectl set-hostname myatomichost.example.com Bring up the network interface:\nifup eth0 Of course, you could do all of this via the nmcli tool if you prefer to go that route.\n","date":"23 April 2014","permalink":"/p/configure-static-ip-addresses-for-project-atomics-kvm-image/","section":"Posts","summary":"Amid all of the Docker buzz at the Red Hat Summit, Project Atomic was launched.","title":"Configure static IP addresses for Project Atomic’s KVM image"},{"content":" Getting started with LXC is a bit awkward and I\u0026rsquo;ve assembled this guide for anyone who wants to begin experimenting with LXC containers in Fedora 20. As an added benefit, you can follow almost every step shown here when creating LXC containers on Red Hat Enterprise Linux 7 Beta (which is based on Fedora 19).\nYou\u0026rsquo;ll need a physical machine or a VM running Fedora 20 to get started. (You could put a container in a container, but things get a little dicey with that setup. Let\u0026rsquo;s just avoid talking about nested containers for now. No, really, I shouldn\u0026rsquo;t have even brought it up. Sorry about that.)\nPrep Work #Start by updating all packages to the latest versions available:\nyum -y upgrade Verify that SELinux is in enforcing mode by running getenforce. If you see Disabled or Permissive, get SELinux into enforcing mode with a quick configuration change:\nsed -i \u0026#39;s/^SELINUX=.*/SELINUX=enforcing/\u0026#39; /etc/selinux/config I recommend installing setroubleshoot-server to make it easier to find the root cause of AVC denials:\nReboot now. 
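The setroubleshoot-server install itself is just another yum one-liner (spelling it out here since it pairs with the recommendation above):
# Install setroubleshoot so AVC denials are translated into readable suggestions
yum -y install setroubleshoot-server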
This will ensure that SELinux comes up in enforcing mode (verify that with getenforce after reboot) and it ensures that auditd starts up sedispatch (for setroubleshoot).\nInstall management libraries and utilities #Let\u0026rsquo;s grab libvirt along with LXC support and a basic NAT networking configuration.\nyum -y install libvirt-daemon-lxc libvirt-daemon-config-network Launch libvirtd via systemd and ensure that it always comes up on boot. This step will also adjust firewalld for your containers and ensure that dnsmasq is serving up IP addresses via DHCP on your default NAT network.\nsystemctl start libvirtd.service systemctl enable libvirtd.service Bootstrap our container #Installing packages into the container\u0026rsquo;s filesystem will take some time.\nyum -y --installroot=/var/lib/libvirt/filesystems/fedora20 --releasever=20 --nogpg install systemd passwd yum fedora-release vim-minimal openssh-server procps-ng iproute net-tools dhclient This step fills in the filesystem with the necessary packages to run a Fedora 20 container. We now need to tell libvirt about the container we\u0026rsquo;ve just created.\nvirt-install --connect lxc:// --name fedora20 --ram 512 --filesystem /var/lib/libvirt/filesystems/fedora20/,/ At this point, libvirt will know enough about the container to start it and you\u0026rsquo;ll be connected to the console of the container! We need to adjust some configuration files within the container to use it properly. Detach from the console with CTRL-].\nLet\u0026rsquo;s stop the container so we can make some adjustments.\nvirsh -c lxc:// shutdown fedora20 Get the container ready for production #Hop into your container and set a root password.\nchroot /var/lib/libvirt/filesystems/fedora20 /bin/passwd root We will be logging in as root via the console occasionally and we need to allow that access.\necho \u0026#34;pts/0\u0026#34; \u0026gt;\u0026gt; /var/lib/libvirt/filesystems/fedora20/etc/securetty Since we will be using our NAT network with our auto-configured dnsmasq server (thanks to libvirt), we can configure a simple DHCP setup for eth0:\ncat \u0026lt; \u0026lt; EOF \u0026gt; /var/lib/libvirt/filesystems/fedora20/etc/sysconfig/network NETWORKING=yes EOF cat \u0026lt; \u0026lt; EOF \u0026gt; /var/lib/libvirt/filesystems/fedora20/etc/sysconfig/network-scripts/ifcfg-eth0 BOOTPROTO=dhcp ONBOOT=yes DEVICE=eth0 EOF Using ssh makes the container a lot easier to manage, so let\u0026rsquo;s ensure that it starts when the container boots. (You could do this via systemctl after logging in at the console, but I\u0026rsquo;m lazy.)\nchroot /var/lib/libvirt/filesystems/fedora20/ ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/multi-user.target.wants/ exit Launch! #Cross your fingers and launch the container.\nvirsh -c lxc:// start --console fedora20 You\u0026rsquo;ll be attached to the console during boot but don\u0026rsquo;t worry, hold down CTRL-] to get back to your host prompt. Check the dnsmasq leases to find your container\u0026rsquo;s IP address and you can login as root over ssh.\ncat /var/lib/libvirt/dnsmasq/default.leases Security #After logging into your container via ssh, check the process labels within the container:\n# ps aufxZ LABEL USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 1 0.0 1.3 47444 3444 ? Ss 03:18 0:00 /sbin/init system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 18 0.0 2.0 43016 5368 ? 
Ss 03:18 0:00 /usr/lib/systemd/systemd-journald system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 38 0.4 7.8 223456 20680 ? Ssl 03:18 0:00 /usr/bin/python -Es /usr/sbin/firewalld - system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 40 0.0 0.7 26504 2084 ? Ss 03:18 0:00 /usr/sbin/smartd -n -q never system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 41 0.0 0.4 19268 1252 ? Ss 03:18 0:00 /usr/sbin/irqbalance --foreground system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 44 0.0 0.6 34696 1636 ? Ss 03:18 0:00 /usr/lib/systemd/systemd-logind system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 46 0.0 1.8 267500 4832 ? Ssl 03:18 0:00 /sbin/rsyslogd -n system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 dbus 47 0.0 0.6 26708 1680 ? Ss 03:18 0:00 /bin/dbus-daemon --system --address=syste system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 rpc 54 0.0 0.5 41992 1344 ? Ss 03:18 0:00 /sbin/rpcbind -w system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 55 0.0 0.3 25936 924 ? Ss 03:18 0:00 /usr/sbin/atd -f system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 56 0.0 0.5 22728 1488 ? Ss 03:18 0:00 /usr/sbin/crond -n system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 60 0.0 0.2 6412 784 pts/0 Ss+ 03:18 0:00 /sbin/agetty --noclear -s console 115200 system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 74 0.0 3.2 339808 8456 ? Ssl 03:18 0:00 /usr/sbin/NetworkManager --no-daemon system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 394 0.0 5.9 102356 15708 ? S 03:18 0:00 \\_ /sbin/dhclient -d -sf /usr/libexec/nm system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 polkitd 83 0.0 4.4 514792 11548 ? Ssl 03:18 0:00 /usr/lib/polkit-1/polkitd --no-debug system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 rpcuser 110 0.0 0.6 46564 1824 ? Ss 03:18 0:00 /sbin/rpc.statd system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 111 0.0 1.3 82980 3620 ? Ss 03:18 0:00 /usr/sbin/sshd -D system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 409 0.0 1.9 131576 5084 ? Ss 03:18 0:00 \\_ sshd: root@pts/1 system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 413 0.0 0.9 115872 2592 pts/1 Ss 03:18 0:00 \\_ -bash system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 438 0.0 0.5 123352 1344 pts/1 R+ 03:19 0:00 \\_ ps aufxZ system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 411 0.0 0.8 44376 2252 ? Ss 03:18 0:00 /usr/lib/systemd/systemd --user system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 412 0.0 0.5 66828 1328 ? S 03:18 0:00 \\_ (sd-pam) system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 436 0.0 0.4 21980 1144 ? Ss 03:19 0:00 /usr/lib/systemd/systemd-hostnamed You\u0026rsquo;ll notice something interesting if you run getenforce now within the container — SELinux is disabled. Actually, it\u0026rsquo;s not really disabled. The processing of SELinux policy is done on the host. The container isn\u0026rsquo;t able to see what\u0026rsquo;s going on outside of its own files and processes. The libvirt documentation for LXC hints at the importance of this isolation:\nA suitably configured UID/GID mapping is a pre-requisite to making containers secure, in the absence of sVirt confinement.\nIn the absence of the “user” namespace being used, containers cannot be considered secure against exploits of the host OS. The sVirt SELinux driver provides a way to secure containers even when the “user” namespace is not used. The cost is that writing a policy to allow execution of arbitrary OS is not practical. 
The SELinux sVirt policy is typically tailored to work with an simpler application confinement use case, as provided by the “libvirt-sandbox” project.\nThis leads to something really critical to understand:\nContainers don\u0026rsquo;t contain #Dan Walsh has a great post that goes into the need for sVirt and the protections it can provide when you need to be insulated from potentially dangerous virtual machines or containers. If a user is root inside a container, they\u0026rsquo;re root on the host as well. (There\u0026rsquo;s an exception: UID namespaces. But let\u0026rsquo;s not talk about that now. Oh great, first it was nested containers and now I brought up UID namespaces. Sorry again.)\nDan\u0026rsquo;s talk about securing containers hasn\u0026rsquo;t popped up on the Red Hat Summit presentations page quite yet but here are some notes that I took and then highlighted:\nContainers don\u0026rsquo;t contain. The kernel doesn\u0026rsquo;t know about containers. Containers simply use kernel subsystems to carve up namespaces for applications. Containers on Linux aren\u0026rsquo;t complete. Don\u0026rsquo;t compare directly to Solaris zones yet. Running containers without Mandatory Access Control (MAC) systems like SELinux or AppArmor opens the door for full system compromise via untrusted applications and users within containers. Using MAC gives you one extra barrier to keep a malicious container from getting higher levels of access to the underlying host. There\u0026rsquo;s always a chance that a kernel exploit could bypass MAC but it certainly raises the level of difficulty for an attacker and allows server operators extra time to react to alerts.\n","date":"22 April 2014","permalink":"/p/launch-secure-lxc-containers-on-fedora-20-using-selinux-and-svirt/","section":"Posts","summary":"Getting started with LXC is a bit awkward and I\u0026rsquo;ve assembled this guide for anyone who wants to begin experimenting with LXC containers in Fedora 20.","title":"Launch secure LXC containers on Fedora 20 using SELinux and sVirt"},{"content":"As I wait in the airport to fly back home from this year\u0026rsquo;s Red Hat Summit, I\u0026rsquo;m thinking back over the many conversations I had over breakfast, over lunch, and during the events. One common theme that kept cropping up was around bringing DevOps to the enterprise. I stumbled upon Mathias Meyer\u0026rsquo;s post, The Developer is Dead, Long Live the Developer, and I was inspired to write my own.\nBefore I go any further, here\u0026rsquo;s my definition of DevOps: it\u0026rsquo;s a mindset shift where everyone is responsible for the success of the customer experience. The success (and failure) of the project rests on everyone involved. If it goes well, everyone celebrates and looks for ways to highlight what worked well. If it fails, everyone gets involved to bring it back on track. Doing this correctly means that your usage of \u0026ldquo;us\u0026rdquo; and \u0026ldquo;them\u0026rdquo; should decrease sharply.\nThe issue at hand #One of the conference attendees told me that he and his technical colleagues are curious about trying DevOps but their organization isn\u0026rsquo;t set up in a way to make it work. On top of that, very few members of the teams knew about the concept of continuous delivery and only one or two people knew about tools that are commonly used to practice it.\nI dug deeper and discovered that they have outages just like any other company and they treat outages as an operations problem primarily. 
Operations teams don\u0026rsquo;t get much sleep and they get frustrated with poorly written code that is difficult to deploy, upgrade, and maintain. Feedback loops with the development teams are relatively non-existent since the development teams report into a different portion of the business. His manager knows that something needs to change but his manager wasn\u0026rsquo;t sure how to change it.\nHis company certainly isn\u0026rsquo;t unique. My advice for him was to start a three step process:\nStep 1: Start a conversation around responsibility. #Leaders need to understand that the customer experience is key and that experience depends on much more than just uptime. This applies to products and systems that support internal users within your company and those that support your external customers.\nImagine if you called for pizza delivery and received a pizza without any cheese. You drive back to the pizza place to show the manager the partial pizza you received. The manager turns to the employees and they point to the person assigned to putting toppings on the pizza. They might say: \u0026ldquo;It\u0026rsquo;s his fault, I did my part and put it in the oven.\u0026rdquo; The delivery driver might say: \u0026ldquo;Hey, I did what I was supposed to and I delivered the pizza. It\u0026rsquo;s not my fault.\u0026rdquo;\nAll this time, you, the customer, are stuck holding a half made pizza. Your experience is awful.\nLooking back, the person who put the pizza in the oven should have asked why it was only partially made. The delivery driver should have asked about it when it was going into the box. Most important of all, the manager should have turned to the employees and put the responsibility on all of them to make it right.\nStep 2: Foster collaboration via cross-training. #Once responsibility is shared, everyone within the group needs some knowledge of what other members of the group do. This is most obvious with developers and operations teams. Operations teams need to understand what the applications do and where their weak points are. Developers need to understand resource constraints and how to deploy their software. They don\u0026rsquo;t need to become experts but they need to know enough overlapping knowledge to build a strong, healthy feedback loop.\nThis cross-training must include product managers, project managers, and leaders. Feedback loops between these groups will only be successful if they can speak some of the language of the other groups.\nStep 3: Don\u0026rsquo;t force tooling. #Use the tools that make the most sense to the groups that need to use them. Just because a particular software tool helps another company collaborate or deploy software more reliably doesn\u0026rsquo;t mean it will have a positive impact on your company.\nWatch out for the \u0026ldquo;sunk cost\u0026rdquo; fallacy as well. Neal Ford talked about this during a talk at the Red Hat Summit and how it can really stunt the growth of a high performing team.\nSummary #The big takeaway from this post is that making the mindset shift is the first and most critical step if you want to use the DevOps model in a large organization. The first results you\u0026rsquo;ll see will be in morale and camaraderie. 
That builds momentum faster than anything else and will carry teams into the idea of shared responsibility and ownership.\n","date":"17 April 2014","permalink":"/p/devops-and-enterprise-inertia/","section":"Posts","summary":"As I wait in the airport to fly back home from this year\u0026rsquo;s Red Hat Summit, I\u0026rsquo;m thinking back over the many conversations I had over breakfast, over lunch, and during the events.","title":"DevOps and enterprise inertia"},{"content":"The openssl heartbleed bug has made the rounds today and there are two new testing builds or openssl out for Fedora 19 and 20:\nFedora 19 Fedora 20 Both builds are making their way over into the updates-testing stable repository thanks to some quick testing and karma from the Fedora community.\nIf the stable updates haven\u0026rsquo;t made it into your favorite mirror yet, you can live on the edge and grab the koji builds:\nFor Fedora 19 x86_64: #yum -y install koji koji download-build --arch=x86_64 openssl-1.0.1e-37.fc19.1 yum localinstall openssl-1.0.1e-37.fc19.1.x86_64.rpm For Fedora 20 x86_64: #yum -y install koji koji download-build --arch=x86_64 openssl-1.0.1e-37.fc20.1 yum localinstall openssl-1.0.1e-37.fc20.1.x86_64.rpm Be sure to replace x86_64 with i686 for 32-bit systems or armv7hl for ARM systems (Fedora 20 only). If your system has openssl-libs or other package installed, be sure to install those with yum as well.\nKudos to Dennis Gilmore for the hard work and to the Fedora community for the quick tests.\n","date":"8 April 2014","permalink":"/p/openssl-heartbleed-updates-for-fedora-19-and-20/","section":"Posts","summary":"The openssl heartbleed bug has made the rounds today and there are two new testing builds or openssl out for Fedora 19 and 20:","title":"openssl heartbleed updates for Fedora 19 and 20"},{"content":"Docker is a hot topic in the Linux world at the moment and I decided to try out the new trusted build process. Long story short, you put your Dockerfile along with any additional content into your GitHub repository, link your GitHub account with Docker, and then fire off a build. The Docker index labels it as \u0026ldquo;trusted\u0026rdquo; since it was build from source files in your repository.\nI set off to build a Dockerfile to provision a container that would run all of the icanhazip services. Getting httpd running was a little tricky, but I soon had a working Dockerfile that built and ran successfully on Fedora 20.\nThe trusted build process kicked off without much fuss and I found myself waiting for a couple of hours for my job to start. I was sad to see an error after waiting so long:\nInstalling : httpd-2.4.7-3.fc20.x86_64 error: unpacking of archive failed on file /usr/sbin/suexec: cpio: cap_set_file Well, that\u0026rsquo;s weird. It turns out that cap_set_file is part of libcap that sets filesystem capabilities based on the POSIX.1e standards. You can read up on capabilities in the Linux kernel capabilities FAQ. (Special thanks to Andrew Clayton getting me pointed in the right direction there.)\nMarek Goldmann ran into this problem back in September 2013 and opened a bug report. Marek proposed a change to the Docker codebase that would remove setfcap from the list of banned capabilities in the LXC template used by docker. Another workaround would be to use the -privileged option to perform a build in privileged mode (available in docker 0.6+).\nBoth of those workarounds are unavailable when doing trusted builds with docker\u0026rsquo;s index. 
Sigh.\nI fired off an email to Docker\u0026rsquo;s support staff and received a quick reply:\nMajor,\nWe are aware of this issue, and we are currently working on a fix, and we hope to have something we can start testing this week. I\u0026rsquo;m not sure when we will be able to roll out the fix, but we are hoping soon. Until then, there isn\u0026rsquo;t anything you can do to work around it. Sorry for the inconvenience.\nIf anything changes, we will be sure to let you know.\nKen\nIt wasn\u0026rsquo;t the answer I wanted but it\u0026rsquo;s good to know that the issue is being worked. In the meantime, I\u0026rsquo;ll push an untrusted build of the icanhazip Docker container up to the index for everyone to enjoy.\nStay tuned for updates.\nUPDATED 2014-08-08: Per Thomas\u0026rsquo; comment below, this has been fixed upstream.\n","date":"26 March 2014","permalink":"/p/docker-trusted-builds-and-fedora-20/","section":"Posts","summary":"Docker is a hot topic in the Linux world at the moment and I decided to try out the new trusted build process.","title":"Docker, trusted builds, and Fedora 20"},{"content":"","date":null,"permalink":"/tags/apple/","section":"Tags","summary":"","title":"Apple"},{"content":"I\u0026rsquo;ve received some very sophisticated phishing emails lately and I was showing some of them to my coworkers. One of my coworkers noticed that my Apple Mail client displays the X-Originating-IP header for all of the emails I receive.\nYou can enter that IP into a whois search and get a better idea of who sent you the message without diving into the headers. If someone that regularly exchanges email with me suddenly has an originating IP in another country that would be unusual for them to travel to, I can approach the message with more caution.\nEnabling this feature in Mail is a quick process:\nClick on the Mail menu, then Preferences Go to the Viewing tab Click the drop down menu next to Show header detail and choose Custom Click the plus (+) and type X-Originating-IP Click OK and close the Preferences window This should work in Apple Mail from OS X 10.6 through 10.9. You can also search your email for messages from certain IP addresses. Just start typing X-Originating-IP: 123.234... into the search field and watch the results appear.\n","date":"18 March 2014","permalink":"/p/show-originating-ip-address-in-apple-mail/","section":"Posts","summary":"I\u0026rsquo;ve received some very sophisticated phishing emails lately and I was showing some of them to my coworkers.","title":"Show originating IP address in Apple Mail"},{"content":"I stumbled upon this video earlier today via Tripwire\u0026rsquo;s Twitter feed:\nSome of the requests are hilarious, obviously, but many of them highlight a critical problem. In organizations where security is one department, silos develop and the \u0026ldquo;us versus them\u0026rdquo; mentality sets in quickly.\nFor organizations to grow and maintain security, the ownership of security and process maturity must be spread throughout the organization. Traditional corporate security teams simply cannot carry this burden alone. Security teams should be looked to as subject matter experts and consultants for critical projects. 
The business should be as eager to engage security experts as the security experts should be to engage the rest of the business.\nLopsided security ownership quickly leads to comments like the ones in the video.\n","date":"10 March 2014","permalink":"/p/annoying-security-requests-highlight-company-silos/","section":"Posts","summary":"I stumbled upon this video earlier today via Tripwire\u0026rsquo;s Twitter feed:","title":"Annoying security requests highlight company silos"},{"content":"","date":null,"permalink":"/tags/virt-manager/","section":"Tags","summary":"","title":"Virt-Manager"},{"content":"After upgrading my Fedora 20 Xen hypervisor to virt-manager 1.0.0, I noticed that I couldn\u0026rsquo;t open the console or VM details for any of my guests. Running virt-manager --debug gave me the following traceback:\nTraceback (most recent call last): File \u0026#34;/usr/share/virt-manager/virtManager/engine.py\u0026#34;, line 803, in _show_vm_helper details = self._get_details_dialog(uri, uuid) File \u0026#34;/usr/share/virt-manager/virtManager/engine.py\u0026#34;, line 760, in _get_details_dialog obj = vmmDetails(con.get_vm(uuid)) File \u0026#34;/usr/share/virt-manager/virtManager/details.py\u0026#34;, line 530, in __init__ self.init_details() File \u0026#34;/usr/share/virt-manager/virtManager/details.py\u0026#34;, line 990, in init_details for name in [c.model for c in cpu_values.cpus]: AttributeError: \u0026#39;NoneType\u0026#39; object has no attribute \u0026#39;cpus\u0026#39; [Tue, 04 Mar 2014 22:13:31 virt-manager 21019] DEBUG (error:84) error dialog message: summary=Error launching details: \u0026#39;NoneType\u0026#39; object has no attribute \u0026#39;cpus\u0026#39; details=Error launching details: \u0026#39;NoneType\u0026#39; object has no attribute \u0026#39;cpus\u0026#39; I opened a bug report and the fix was committed upstream today. If you want to make these updates to your Fedora 20 server before the update package is available, just snag the three RPM\u0026rsquo;s from koji and install them:\nmkdir /tmp/virt-manager cd /tmp/virt-manager wget http://kojipkgs.fedoraproject.org/packages/virt-manager/1.0.0/4.fc20/noarch/virt-install-1.0.0-4.fc20.noarch.rpm wget http://kojipkgs.fedoraproject.org/packages/virt-manager/1.0.0/4.fc20/noarch/virt-manager-1.0.0-4.fc20.noarch.rpm wget http://kojipkgs.fedoraproject.org/packages/virt-manager/1.0.0/4.fc20/noarch/virt-manager-common-1.0.0-4.fc20.noarch.rpm yum localinstall *.rpm UPDATE: Thanks to Cole\u0026rsquo;s comment below, you can actually pull in the RPM\u0026rsquo;s using koji directly:\nkoji download-build virt-manager-1.0.0-4.fc20 ","date":"6 March 2014","permalink":"/p/virt-manager-nonetype-object-has-no-attribute-cpus/","section":"Posts","summary":"After upgrading my Fedora 20 Xen hypervisor to virt-manager 1.","title":"virt-manager: ‘NoneType’ object has no attribute ‘cpus’"},{"content":"I\u0026rsquo;ve written about installing Xen on Fedora 19 and earlier versions on this blog before. 
Let\u0026rsquo;s tackle it on Fedora 20.\nStart with the Xen hypervisor and the basic toolset first:\nyum -y install xen xen-hypervisor xen-libs xen-runtime systemctl enable xend.service systemctl enable xendomains.service Get GRUB2 in order:\n# grep ^menuentry /boot/grub2/grub.cfg | cut -d \u0026#34;\u0026#39;\u0026#34; -f2 Fedora, with Linux 3.13.4-200.fc20.x86_64 Fedora, with Linux 0-rescue-c9dcecb251df472fbc8b4e620a749f6d Fedora, with Xen hypervisor # grub2-set-default \u0026#39;Fedora, with Xen hypervisor\u0026#39; # grub2-editenv list saved_entry=Fedora, with Xen hypervisor # grub2-mkconfig -o /boot/grub2/grub.cfg Now reboot. When the server restarts, verify that Xen is running:\n# xm dmesg | head __ __ _ _ _____ _ ___ __ ____ ___ \\ \\/ /___ _ __ | || | |___ / / | / _ \\ / _| ___|___ \\ / _ \\ \\ // _ \\ \u0026#39;_ \\ | || |_ |_ \\ | |_| (_) || |_ / __| __) | | | | / \\ __/ | | | |__ _| ___) || |__\\__, || _| (__ / __/| |_| | /_/\\_\\___|_| |_| |_|(_)____(_)_| /_(_)_| \\___|_____|\\___/ (XEN) Xen version 4.3.1 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=n Thu Feb 6 16:52:58 UTC 2014 (XEN) Latest ChangeSet: (XEN) Bootloader: GRUB 2.00 (XEN) Command line: placeholder As I\u0026rsquo;ve mentioned before, I enjoy using virt-manager to manage my VM\u0026rsquo;s. Let\u0026rsquo;s get started:\nyum -y install virt-manager dejavu* xorg-x11-xauth yum -y install libvirt-daemon-driver-network libvirt-daemon-driver-storage libvirt-daemon-xen systemctl enable libvirtd.service systemctl start libvirtd.service By this point, you have the Xen hypervisor running and you have VM management tools available from virt-manager and libvirt. Enjoy!\n","date":"28 February 2014","permalink":"/p/installing-xen-on-fedora-20/","section":"Posts","summary":"I\u0026rsquo;ve written about installing Xen on Fedora 19 and earlier versions on this blog before.","title":"Installing Xen on Fedora 20"},{"content":"","date":null,"permalink":"/tags/puppy-linux/","section":"Tags","summary":"","title":"Puppy Linux"},{"content":"I figured that the Puppy Linux and icanhazip.com fiasco was over, but I was wrong:\n@majorhayden you're in Puppy Linux controversy again http://t.co/B21JPIx7Ob\u0026#10;#Heat \u0026mdash; Michael Amadio (@01micko) January 14, 2014 After a quick visit to the forums, I found the debate stirred up again. Various users were wondering if their internet connections were somehow compromised or if a remote American network was somehow spying on their internet traffic. Others wondered if some secretive software was added to the Puppy Linux distribution that was calling out to the site.\nFortunately, quite a few users on the forum showed up to explain that Puppy Linux has a built-in feature to figure out a user\u0026rsquo;s external IP address to help them get started with their system after it boots. Another user was kind enough to dig up the Lifehacker post about icanhazip from 2011.\nMany users on the forum were still dissatisfied. Many of them turned their questions to maintainers of the distribution (which is where those questions should go), but many others felt that icanhazip was the source of the problem. Some of them felt so strongly that they called my hosting provider via telephone to curse at them. Here\u0026rsquo;s a snippet of an email I received from my colocation provider:\nI had an interesting call from someone today said that 216.69.252.101 was showing up on his computer. 
Sounded kind of [omitted] and called me a *\\* ******…\nLet\u0026rsquo;s get three things straight:\nI\u0026rsquo;m a huge supporter of everything Linux, including Puppy Linux. I don\u0026rsquo;t hold a grudge against the project for what a minority of their users do. I don\u0026rsquo;t collect data when users visit icanhazip.com other than standard Apache logs. No cookies are used. I run these applications on my own time, with my own money, and my own resources. Before I forget, thanks to all of the folks who came forward in the forums to explain what was actually happening and defend the work I\u0026rsquo;ve done. I\u0026rsquo;m tremendously flattered to receive that kind of support.\n","date":"10 February 2014","permalink":"/p/puppy-linux-icanhazip-and-tin-foil-hats/","section":"Posts","summary":"I figured that the Puppy Linux and icanhazip.","title":"Puppy Linux, icanhazip, and tin foil hats"},{"content":"Many of the non-technical posts on the blog are inspired by the comments of others. I stumbled upon this tweet after it was retweeted by someone I follow:\nImpostor syndrome is holding a lot of us back. Let's stop that. (yeah, easier said than done, but we should try): http://t.co/aqx9G1GJ52 \u0026mdash; erika owens (@erika_owens) December 16, 2013 The link in the tweet takes you to a blog post from Erika Owens about impostor syndrome. Erika touches on that uncomfortable feeling that some of us feel when we\u0026rsquo;re surrounded by other people from our field of study or work. These three sentences hit home for me:\nAt first, I thought people were just being modest. But it soon became clear that people were reluctant to recognize in themselves the same traits that awed them in other people. This dynamic holds people back while also overtaxing the limited number of anointed experts.\nAnne Gentle gave a presentation at this year\u0026rsquo;s offsite for leaders at Rackspace and talked about the challenges of defeating impostor syndrome while attending male-dominated technical conferences. She talked about a portion of these challenges in her Women of OpenStack post.\nI\u0026rsquo;ve struggled with this from time to time with various groups. Sure, I have some deep technical knowledge and experience in some areas, but I don\u0026rsquo;t always feel like the expert in those areas. One thing I\u0026rsquo;ve come to realize is that when you\u0026rsquo;re invited to talk to a group or asked to write an article, you\u0026rsquo;re being asked because the community has identified you as an expert.\n\u0026ldquo;Expert\u0026rdquo; is always a relative term. Toss me in a room with Windows system administrators and I can provide an expert level of guidance around the Linux kernel. If Linus or Greg Kroah-Hartman walk in the door, I\u0026rsquo;d certainly defer to them. I\u0026rsquo;d definitely offer up an opinion if asked or if I disagreed with something that was being said (even if an \u0026ldquo;expert\u0026rdquo; said it). With that said, I\u0026rsquo;ve spoken with Linus and Greg in person and they seem to understand this well. They leave gaps in conversation and defer to their peers to ensure that the experts around them get time in the spotlight.\nHere\u0026rsquo;s where the rubber meets the road: when you embrace your expertise and share it with others, you inspire them.\nWhat happens when you inspire others? They\u0026rsquo;re more eager to talk. They\u0026rsquo;re more eager to listen. 
They\u0026rsquo;re more eager to learn more and embrace their inner expert.\nThis process isn\u0026rsquo;t easy. Read through my post on why technical people should blog, but don\u0026rsquo;t. You\u0026rsquo;ll need to understand that you\u0026rsquo;ll be wrong from time to time and that you\u0026rsquo;ll need to do some homework when you\u0026rsquo;re asked for an expert opinion. You\u0026rsquo;ll also need to learn when and how to say \u0026ldquo;I don\u0026rsquo;t know, but I\u0026rsquo;ll find out the answer.\u0026rdquo;\nThe next time you feel like you know less than the other people in the room, speak up. You\u0026rsquo;ll probably be an inspiration to many in the room who feel like an impostor and they\u0026rsquo;ll want to follow your lead.\n","date":"5 February 2014","permalink":"/p/be-an-inspiration-not-an-impostor/","section":"Posts","summary":"Many of the non-technical posts on the blog are inspired by the comments of others.","title":"Be an inspiration, not an impostor"},{"content":"I\u0026rsquo;ve made posts about the DevOps Weekly mailing list before. If you haven\u0026rsquo;t signed up yet, do so now. You\u0026rsquo;ll thank me later.\nAaron Suggs\u0026rsquo; Hierarchy of DevOps Needs gives a great summary of the building blocks of a solid development culture and how they relate to one another. I laughed a bit when I saw his pyramid because I\u0026rsquo;ve seen many development groups build the pyramid upside down in the past.\nAt the core of DevOps is that everyone owns the result. Developers, operations folks, product managers, and leaders are all responsible for the success or failure of a development project. If the developers are happy and writing code that keeps the operations engineers up all night, that\u0026rsquo;s not DevOps. On the other hand, if the operations team is slowing down deployments or building inconsistent environments, then they\u0026rsquo;re not taking ownership of the results.\n","date":"3 February 2014","permalink":"/p/hierarchy-of-devops-needs-from-devops-weekly/","section":"Posts","summary":"I\u0026rsquo;ve made posts about the DevOps Weekly mailing list before.","title":"Hierarchy of DevOps Needs from DevOps Weekly"},{"content":"I was doing some testing with apachebench and received some peculiar results:\n[608487.317284] nf_conntrack: table full, dropping packet [608487.708916] nf_conntrack: table full, dropping packet [608488.010236] nf_conntrack: table full, dropping packet I\u0026rsquo;ve seen this problem before and I tried to fix it by adjusting /proc/sys/net/ipv4/ip_conntrack_max as I did back in 2008. However, Fedora 20 doesn\u0026rsquo;t have the same structure in /proc under kernel 3.12.\nThe fix is to adjust /proc/sys/net/netfilter/nf_conntrack_max instead:\necho 256000 \u0026gt; /proc/sys/net/netfilter/nf_conntrack_max After a quick test, apachebench was back to normal. You can make the change permanent and test it with:\necho \u0026#34;net.netfilter.nf_conntrack_max = 256000\u0026#34; \u0026gt;\u0026gt; /etc/sysctl.conf sysctl -p There are some handy connection tracking tools available in the conntrack-tools package. 
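The conntrack utility from that package is handy for a quick sanity check before you dig into the documentation. Something like this should work (I\u0026rsquo;ve left the output out since it varies from system to system):\n# conntrack -C\n# conntrack -L | head\n# sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max\nThe first command prints how many connections are currently being tracked, the second lists a few entries from the table, and the sysctl call compares the live count against the limit we just raised. There\u0026rsquo;s also conntrack -F to flush the table entirely, but be careful with that one on a busy server. 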
Take a look at the man page for conntrack and you\u0026rsquo;ll find ways to review and flush the connection tracking table.\n","date":"7 January 2014","permalink":"/p/nf-conntrack-table-full-dropping-packet/","section":"Posts","summary":"I was doing some testing with apachebench and received some peculiar results:","title":"nf_conntrack: table full, dropping packet"},{"content":"My SANS classmates were learning how to set and recognize file permissions on a Linux server and we realized it would be helpful to display the octal value of the permissions next to the normal rwx display. Fortunately, a quick search revealed that stat could deliver this information:\n# stat -c \u0026#34;%a %A %n\u0026#34; /usr/sbin/* | head 755 -rwxr-xr-x /usr/sbin/accessdb 755 -rwxr-xr-x /usr/sbin/acpid 755 -rwxr-xr-x /usr/sbin/addgnupghome 755 -rwxr-xr-x /usr/sbin/addpart 777 lrwxrwxrwx /usr/sbin/adduser 755 -rwxr-xr-x /usr/sbin/agetty 755 -rwxr-xr-x /usr/sbin/alternatives 755 -rwxr-xr-x /usr/sbin/anacron 755 -rwxr-xr-x /usr/sbin/apachectl 755 -rwxr-xr-x /usr/sbin/applygnupgdefaults The first octal digit (for setuid, setgid, and sticky) is left off for any files without those bits set.\n","date":"10 December 2013","permalink":"/p/learn-octal-file-permissions-easily-with-stat/","section":"Posts","summary":"My SANS classmates were learning how to set and recognize file permissions on a Linux server and we realized it would be helpful to display the octal value of the permissions next to the normal rwx display.","title":"Learn octal file permissions easily with stat"},{"content":"Keeping an eye out for the DevOps Weekly email is something I\u0026rsquo;ve enjoyed since it started at the end of 2010. It\u0026rsquo;s usually chock full of tips for systems engineers, developers, managers, or anyone who is focused on environments that utilize continuous integration and deployment strategies. Quite a few of the tips are totally relevant for information security professionals who are looking for an edge at work.\nThis week, there are four links worth reviewing if you work in information security:\nsitespeed.io Burnout, Recovery and Honesty Audits of High Deployment Environments [PDF] A tcpdump Primer The idea behind sitespeed.io is to monitor an application\u0026rsquo;s performance through deployments. Availability is critical to security (although it\u0026rsquo;s often de-prioritized until you feel the pain) and it can signal an attack in progress. Performance degradation over time could allow the application to be knocked offline from smaller attacks.\nBurnout, Recovery and Honesty is an anecdote from an IT worker about how their job changed their personal and home life. It\u0026rsquo;s worth a read so that you can catch the warning signs of burnout within yourself and your coworkers.\nBringing continuous deployments to large companies is challenging due to the number of compliance and regulatory programs. A great slide deck called Audits of High Deployment Environments covers some of the basic strategies for how to deal with these challenges.\nFinally, my favorite nugget from this week\u0026rsquo;s newsletter is the tcpdump primer. It\u0026rsquo;s a great resource for people who have never used tcpdump or for those of us who have only used some of the basic functionality. 
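Just to give you a taste, here\u0026rsquo;s the sort of one-liner the primer helps you build (the interface name is only an example):\n# tcpdump -nn -i eth0 -c 20 \u0026#39;tcp port 443\u0026#39;\nThat captures the next 20 packets to or from port 443 without resolving any hostnames or port names, which keeps the output quick and readable. 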
You\u0026rsquo;ll be able to get more data out of tcpdump with less fuss after reading the post.\n","date":"17 November 2013","permalink":"/p/information-security-nuggets-from-devops-weekly-150/","section":"Posts","summary":"Keeping an eye out for the DevOps Weekly email is something I\u0026rsquo;ve enjoyed since it started at the end of 2010.","title":"Information security nuggets from DevOps Weekly #150"},{"content":"Going to the dark side. Those were my first thoughts about taking an information security role one year ago. One year later, the situation seems much brighter than I expected.\nThis role has taught me more about how our business operates, how we set priorities, and how to respond to a setback. I\u0026rsquo;ve been fortunate enough to meet some extremely intelligent people along the way. Some of them frighten me with their descriptions of past experiences or their adversaries. Other people spin a different tale about mature, consistent information security programs that deliver value to the business.\nI was asked by a coworker last week to talk about three things I\u0026rsquo;ve learned over the past year as I transitioned from the world of managing Linux servers, wrangling software deployments, and writing python to a heavy focus on information security at a macro level. This post is a response to that request.\nWithout further ado, here are the three biggest lessons I\u0026rsquo;ve learned over the past year:\nDouble down on what motivates people\nIt\u0026rsquo;s easy to focus on the critics when you work in a corporate security department. They wrestle with you on anything that causes any changes in their day-to-day work. My immediate reaction was an angry one: \u0026ldquo;Why don\u0026rsquo;t they get it?\u0026rdquo; In most cases, they did get it; but the demands placed on them to complete a task or launch a product was the top priority. It didn\u0026rsquo;t take long before I was frustrated.\nLuckily, I found a copy of Switch on a bookshelf and took it home to read it. One of the big takeaways from the book is to look for your partners in the business when you\u0026rsquo;re focused on the critics. Find the people who are motivated to take security seriously and take them out to lunch. Learn about their background and their previous experiences. Discover why security is a priority for them and why it motivates them to change. Once you find out what motivates them, double down on that motivation when you spot a critic. It won\u0026rsquo;t work every time (you certainly can\u0026rsquo;t please everyone), but it has paid dividends for me.\nKeep in mind that what motivates one person might not motivate the next person. You may find someone who embraces security because they\u0026rsquo;ve worked through a serious breach in the past. Retelling that story to another person might not have the same impact, but it may lead to a higher-level discussion around the value of the change you\u0026rsquo;re trying to drive.\nIf you work in an environment with highly technical people, always remember to talk about the \u0026ldquo;why\u0026rdquo; behind the change. In most of my communication, I generally start with the \u0026ldquo;why\u0026rdquo; or \u0026ldquo;what\u0026rsquo;s broken\u0026rdquo; first. Get them to agree that something is broken and you can lead them to your desired solution. That initial agreement builds trust and it allows you to revert back to a common ground in case the conversation goes astray. 
A former manager taught me this method and it works extremely well.\nI prefer to talk about things you should do rather than the things you shouldn\u0026rsquo;t, but there are two critical things I have to mention. Don\u0026rsquo;t spread fear, uncertainty and doubt (FUD). Also, certifications don\u0026rsquo;t make you an expert. When people feel that you\u0026rsquo;re constantly throwing out doomsday scenarios or you\u0026rsquo;re grandstanding with alphabet soup after your name, they\u0026rsquo;ll become desensitized to your message. The difficulty involved with changing their minds is then ratcheted up another level.\nEvade analysis paralysis by painting with broad strokes\nIt\u0026rsquo;s easy to sweat the small stuff in information security. If you don\u0026rsquo;t believe me, just look at your average vulnerability scan report. Once you filter out all of the false positives and irrelevant vulnerabilities, you\u0026rsquo;re left with a final few items that are worth an additional review. Scan reports of multiple systems (or subnets) can get out of hand quickly. Sure, those vulnerabilities should be fixed, but think about how you can take a bigger picture approach to the problem.\nAnother favorite book of mine is The Phoenix Project. It\u0026rsquo;s an adaptation of The Goal that is specific to IT workers. The main character is suddenly promoted after his superiors are relieved and he is overwhelmed with tons of IT problems big and small. After a good dose of firefighting and tactical work, he discovers that the problems plaguing his department are very broad. Change management, documentation, and resource contention were completely out of control. He comes to terms with the problems and realizes that he won\u0026rsquo;t succeed unless he steps back and looks at the big picture. There\u0026rsquo;s also an amazing CISO character in the book and he goes through the same transformation.\nInstead of focusing on the small battles, focus on the war. Look for ways to drive consistency first. If nobody has set the bar for security within your organization, set it. Start out with something simple and partner with your supporters in the business to gain buy-in. Setting a standard does something interesting to humans: we don\u0026rsquo;t want anything we maintain to be called \u0026ldquo;substandard.\u0026rdquo;\nFind ways to weave your standards in with the business in helpful ways. Write scripts. Do demos. Figure out which configuration management software they use and try to build your standards into the existing frameworks. Talk to them on their turf and in their terms. When they hear you speaking their language and integrating with their tools, they will be much more eager to collaborate with you. That\u0026rsquo;s a great time to deliver your message and weave security into the fabric of their project.\nBuilding consistency will take time depending on the maturity of the organization. As it builds, raise the bar with the help of your supporters. Do it gradually and closely monitor the effects. Businesses constantly do this with software development cycles and uptime improvements. Implementing security is no different.\nDrive self-reliance by making them part of the process\nOne of my peers said it best: \u0026ldquo;Everyone is part of the security team. 
We all play a part.\u0026rdquo; Getting people to feel that they\u0026rsquo;re responsible for security isn\u0026rsquo;t easy, but you can make an impact by explaining the \u0026ldquo;why\u0026rdquo; behind your changes, partnering with standards, and keeping an open door policy.\nSecurity teams need to maintain a feedback loop with the business. I feel like I say this constantly: \u0026ldquo;We won\u0026rsquo;t have a security team if we never launch a product.\u0026rdquo; There\u0026rsquo;s always going to be a situation where something launches with vulnerabilities (whether known or unknown) and the business accepts the risk. Don\u0026rsquo;t dwell on that; you\u0026rsquo;re sweating the small stuff. Instead, think about helping the business avoid that risk in the future. Should we develop a standard? Is our testing process rigorous enough? Do we need more detailed training for developers or engineers?\nThat feedback loop must include open and frank discussions about failures without a rush to blame. My favorite example of this thought process is a post from John Allspaw titled Learning from Failure at Etsy. A business can drive accountability without needing to place blame. If you\u0026rsquo;ve ever gone through a fishbone diagram or you\u0026rsquo;ve answered the Five Why\u0026rsquo;s, you know what I\u0026rsquo;m talking about. Trace it back to the original failure and you\u0026rsquo;ll most often find a process or a technology problem and not a people problem.\nIf an IT team can\u0026rsquo;t be honest with a security team because they fear punishment or shaming, then they won\u0026rsquo;t share the real problems. This could be disastrous for a security team since they\u0026rsquo;re operating with only a portion of the real story. The opposite is also true: security teams must feel comfortable sharing their failures. Healthy feedback loops like these build trust and engagement. That leads to more process improvements and fewer failures. If there\u0026rsquo;s anything I\u0026rsquo;ve learned about security teams, it\u0026rsquo;s that we don\u0026rsquo;t want to fail.\nConclusion\nThis post might read a bit more pedantic than I intended, but I hope you find it useful. Much of it applies to more than just information security. Think about where you work in your company and which groups you find yourself at odds with daily. Learn what motivates your supporters, paint with broad strokes, and make everyone part of the process.\nYou might find more in common with them than you ever expected.\n","date":"13 November 2013","permalink":"/p/one-year-in-information-security/","section":"Posts","summary":"Going to the dark side.","title":"One year in information security"},{"content":"In my previous post about installing Fedora via PXE, I forgot to mention a big time saver for the installation. A Fedora PXE installation requires a few different things:\ninitial ramdisk (initrd.img) kernel (vmlinuz) installation repository If you only specify an installation repository, then Anaconda tries to drag down a 214MB squashfs.img file in each installation. You can host this file locally by recreating a portion of a Fedora repo\u0026rsquo;s structure and dropping two files into it.\nDo the following in a directory that can be served up via HTTP:\nmkdir -p fedora/releases/19/Fedora/x86_64/os/LiveOS/ cd fedora/releases/19/Fedora/x86_64/os/LiveOS/ wget http://mirror.rackspace.com/fedora/releases/19/Fedora/x86_64/os/LiveOS/squashfs.img cd .. 
wget http://mirror.rackspace.com/fedora/releases/19/Fedora/x86_64/os/.treeinfo Your files are now ready. Go back to your tftp server and adjust your pxelinux.0/default file:\nlabel linux menu label Install Fedora 19 guest kernel vmlinuz append initrd=initrd.img inst.stage2=http://localwebserver.example.com/fedora/releases/19/Fedora/x86_64/os/ inst.repo=http://mirror.rackspace.com/fedora/releases/19/Fedora/x86_64/os/ ks=http://example.com/kickstart.ks ip=eth0:dhcp This should speed up your installations by a large amount (unless your internet connection is much faster than mine).\n","date":"3 November 2013","permalink":"/p/speed-up-your-fedora-pxe-installations-by-hosting-the-stage2-installer-locally/","section":"Posts","summary":"In my previous post about installing Fedora via PXE, I forgot to mention a big time saver for the installation.","title":"Speed up your Fedora PXE installations by hosting the stage2 installer locally"},{"content":"I stumbled upon a helpful guide to securing an apache server via Reddit\u0026rsquo;s /r/netsec subreddit. Without further ado, here\u0026rsquo;s a link to the guide:\nApache web server hardening \u0026amp; security guide The guide covers the simplest changes, like reducing ServerTokens output and eliminating indexes, all the way up through configuring mod_security and using the SpiderLabs GitHub repository to add additional rules.\nIf you\u0026rsquo;d like a more in-depth post about installing mod_security, I\u0026rsquo;d recommend this one from Tecmint.\nOh, and as always, don\u0026rsquo;t forget about SELinux. :)\nUPDATE: Thanks to @matrixtek for mentioning Mozilla\u0026rsquo;s recommendations specific to TLS.\n","date":"22 October 2013","permalink":"/p/guide-to-securing-apache/","section":"Posts","summary":"I stumbled upon a helpful guide to securing an apache server via Reddit\u0026rsquo;s /r/netsec subreddit.","title":"Guide to securing apache"},{"content":"This post has been a bit delayed, but I want to follow up on the post I wrote last month about moving from OS X to Linux at work. I started out with a Lenovo Thinkpad X1 Carbon along with Fedora 19 and KDE. Although most things went really well, there were a few deal-breakers that sent me back to the Mac.\nJust to give you an idea of my daily workflow, much of my day revolved around my calendar and email. As much as I don\u0026rsquo;t like to have my life revolve around a calendar, that\u0026rsquo;s the way it can be at times. This means I need quick access to handle and generate invitations but I also need speedy access to entire email threads and email searches. On top of all that, I review and edit many documents. The majority of the documents I handle are fairly simple but there are some very complex ones as well. Outside of those tasks, I log into remote servers via ssh/RDP, manage social connections (IM, twitter, and IRC), and surf the web.\nWithout further ado, here are the top three things that (regrettably) pushed me back to OS X at work:\nEmail management\nConnecting to Exchange at work gives me quite a few options:\nThunderbird + davmail Thunderbird + IMAP/POP Thunderbird + Exquilla Evolution + EWS Evolution + IMAP/POP Claws Mail + IMAP/POP The best method I found was Thunderbird plus Exquilla. The performance was quite good and the GAL search worked decently. Thunderbird\u0026rsquo;s keyboard shortcuts were intuitive and easy to begin using regularly with a few days\u0026rsquo; use. 
Even with the global indexer enabled, Thunderbird\u0026rsquo;s overall performance was just fine on the X1.\nMy main gripes showed up when following large email threads on mailing lists or trying to find replies to a message I\u0026rsquo;d send previously. The Thunderbird Conversations extension helped to an extent, but it really mangled up the UI. Searching the global index was unpredictable. I knew an email was sitting in my inbox but the search function didn\u0026rsquo;t return the message. Even in situations where I knew I\u0026rsquo;d received hundreds of emails from the same sender, the global indexer sometimes couldn\u0026rsquo;t find any of them.\nThe Claws UI was too minimalistic and Evolution, although feature packed, really chewed up the CPU on the X1 and drained the battery.\nCalendar management\nAfter trying Thunderbird with davmail, Thunderbird with 1st setup\u0026rsquo;s extension, and Evolution with EWS, I was horribly frustrated. My calendar was a mess and some of the applications started marking meetings I\u0026rsquo;d previously accepted as tentative. It confused the meeting organizers and even confused some of the attendees of meetings that I\u0026rsquo;d scheduled.\nInviting other coworkers to meetings led to unpredictable results. Sometimes I could see their free/busy times but most times I couldn\u0026rsquo;t. Getting contacts from the GAL into the invitations sometimes worked and sometimes didn\u0026rsquo;t. In situations where an emergency get-together was required, this became extremely annoying.\nMy last resort was to keep OWA open in Chrome all day and use it for all of my calendaring. That worked quite well but it meant flipping between Thunderbird and OWA to handle invitations. I\u0026rsquo;d considered using OWA for email as well, but it lacked the functionality I needed for GPG signing among other things.\nMicrosoft Office compatibility\nLibreOffice\u0026rsquo;s work on compatibility was impressive, but it still falls well short of the native Microsoft Office applications. Excel documents with pivot tables were often mangled and Word documents with any complex formatting adjustments were left unreadable. I\u0026rsquo;m not a fan of PowerPoint, but I handle those documents regularly and LibreOffice did an acceptable job.\nIf you don\u0026rsquo;t have to worry with Office documents at your job, then you might think this is a silly requirement. However, I need quick access to review and edit these documents as I don\u0026rsquo;t like these tasks to occupy my day. I like to get in, get out, and get back to what I\u0026rsquo;m good at doing.\nSummary\nAll in all, I could use Linux as my daily laptop OS if I wasn\u0026rsquo;t so dependent on my calendar, email, and Office documents. It would definitely be a good choice for me if I was still doing heavy development and system administration. Linux has indeed come a long way (I\u0026rsquo;ve said it before) and the stability is impressive. Even during heavy usage periods, I never had a crash in X and hardly ever had a screen flicker. Adding monitors via DVI and USB (DisplayPort) was extremely easy in KDE and I was able to connect to projectors almost as easily as I can in OS X.\nI still have the X1 and I\u0026rsquo;m using it for other projects at home. 
The laptop itself is fantastic and I\u0026rsquo;m eager to see when Lenovo starts adding Haswell chips to the remainder of the Thinkpad line.\n","date":"23 September 2013","permalink":"/p/one-month-using-a-linux-laptop-at-work-back-to-the-mac/","section":"Posts","summary":"This post has been a bit delayed, but I want to follow up on the post I wrote last month about moving from OS X to Linux at work.","title":"One month using a Linux laptop at work: Back to the Mac"},{"content":"If you run bwm-ng and you\u0026rsquo;ve run a yum upgrade lately on Fedora 19, you have probably seen this:\n---\u0026gt; Package libstatgrab.x86_64 0:0.17-4.fc19 will be updated --\u0026gt; Processing Dependency: libstatgrab.so.6()(64bit) for package: bwm-ng-0.6-10.fc19.x86_64 --\u0026gt; Finished Dependency Resolution Error: Package: bwm-ng-0.6-10.fc19.x86_64 (@fedora) Requires: libstatgrab.so.6()(64bit) Removing: libstatgrab-0.17-4.fc19.x86_64 (@fedora) libstatgrab.so.6()(64bit) Updated By: libstatgrab-0.90-1.fc19.x86_64 (updates) ~libstatgrab.so.9()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest The error message mentions that libstatgrab needs to be updated to version 0.90 (released in August) but bwm-ng requires version 0.17 of libstatgrab. I\u0026rsquo;ve emailed the author of bwm-ng to ask if he plans to update it to use the newer libstatgrab version but I haven\u0026rsquo;t heard back yet. Two Fedora bugs are open for the package in Red Hat\u0026rsquo;s Bugzilla.\nThere are two available workarounds:\nSkip the libstatgrab update just this one time\nYou can skip the libstatgrab update for one run of yum by doing the following:\nyum upgrade --skip-broken However, this error will pop up again the next time you run an upgrade with yum. It will also derail your automatic updates with yum-updatesd (if you use it).\nExclude the libstatgrab package from updates\nIn your /etc/yum.conf, add this line:\nexclude=libstatgrab That will prevent libstatgrab from receiving any updates until you remove it from the exclude line. Of course, when Fedora 20 rolls around, this line could cause problems.\n","date":"20 September 2013","permalink":"/p/keeping-bwm-ng-0-6-functional-on-fedora-19/","section":"Posts","summary":"If you run bwm-ng and you\u0026rsquo;ve run a yum upgrade lately on Fedora 19, you have probably seen this:","title":"Keeping bwm-ng 0.6 functional on Fedora 19"},{"content":" I spent two days last week in a class called \u0026ldquo;Accounting and Finance for Non-Financial Managers\u0026rdquo; at UT Austin\u0026rsquo;s Texas Executive Education program. The assigned reading (a book of the same name as the class) was informative but I still felt like it was too advanced for me right off the bat.\nMy main goal for the class was to learn how my role can have a financial impact as well as an information security impact. It\u0026rsquo;s fairly common for people who work in information security to provide additional evidence that their recommendations are sound. After all, we may be recommending something that impacts productivity, communication, or the financial bottom line.\nThe class itself was superb. We started with accounting on the first day and we were surprised to see how much we all actually knew about accounting already. Dr. 
Hirst explained how accounting is an art more than a science and that learning the vocabulary would allow us to understand more of what\u0026rsquo;s happening within our own company.\nHe took the time to pull some balance sheets from 10K\u0026rsquo;s of various companies represented in the room by their employees. We were able to dissect end of year balance sheets, income statements, and cash flow statements from companies like Solvay Chemicals, Omnicell, Apple, and Rackspace. Dr. Hirst took us through several accounting failures and this helped to not only make it more real, but it drove home the idea that proper accounting is integral to the success of the firm. I\u0026rsquo;d never realized how Worldcom fell apart, but he was able to summarize it in accounting terms in a few sentences.\nThe second day was centered around finance and Dr. Nolen led the class. He gave us a model (the DuPont Formula) for understanding a firm\u0026rsquo;s return on equity that made sense. As he broke the model apart, he showed us what all of our C-level executives care about:\nCEO: return on equity COO: asset efficiency (net income over assets) CFO: leverage (assets over shareholder equity) In short, the CEO is looking to bring more profitability from less investment, the COO is looking to increase sales with fewer assets, and the CFO is looking for borrowing leverage to increase assets without increasing shareholder equity.\nWe also learned about the right and wrong times to raise capital and how to manage the cost of capital. The most head-scratching part of the course for me was around net present value. Long story short, the whole idea behind NPV is that a dollar gained a year from now is worth less than one gained today (think about inflation and what you could do with that dollar today before next year).\nDr. Nolen reminded us that although creative finance people often get promoted, creative accountants usually find themselves in jail. Also, the banks always get paid back first before shareholders.\nSo how does this all tie back into information security?\nYou have the potential to improve your firm\u0026rsquo;s finances through information security improvements. As you reduce risk to the firm, you might find that you need to purchase less insurance, or the potential for fines for losing data might decrease. That reduces your liabilities and increases your return on equity.\nOn the flip side, if you\u0026rsquo;re able to talk to your customers about the advances in information security that your company has taken, you might end up increasing sales. Offering additional security products or security enhancements to existing products could also increase revenue.\nIf you get the opportunity to spend some of your training budget next February, try to get in on this class at UT Austin. The pace is fast and the knowledge is extremely useful. Knowing what\u0026rsquo;s going on behind the scenes in your company\u0026rsquo;s finance and accounting departments may give you the edge to push your next project to completion.\n","date":"17 September 2013","permalink":"/p/need-an-edge-at-work-learn-accounting-and-finance/","section":"Posts","summary":"I spent two days last week in a class called \u0026ldquo;Accounting and Finance for Non-Financial Managers\u0026rdquo; at UT Austin\u0026rsquo;s Texas Executive Education program.","title":"Need an edge at work? 
Learn accounting and finance."},{"content":"The thought of using Linux as a manager in a highly Windows- and Mac-centric corporate environment isn\u0026rsquo;t something to be taken lightly. Integrating with Active Directory, wrangling email with Microsoft Exchange, and taming quirky Microsoft office documents can be a challenge even with a well-equipped Mac. I decided to make a change after using a Mac at Rackspace for six years.\nLet\u0026rsquo;s get one thing straight: I\u0026rsquo;m not a Windows or Mac basher. Windows 7 has been a solid performer for me and OS X has an amazing UI (and a vibrant community around it). I can\u0026rsquo;t make any sense out of Windows 8, but I\u0026rsquo;ve heard some positive things about it on tablets.\nMy main goal for switching to Linux is to reduce clutter. I moved away from the iPhone to Android last year because the Android gave me finer-grained controls over my phone and allowed me to troubleshoot my own problems. The Mac was working well for me, but as each release passed, it seems like more things were out of my control and I was constantly notified of something that my computer wanted me to do.\nWhile at this year\u0026rsquo;s Red Hat Summit, I saw someone using Linux on a laptop and I asked: \u0026ldquo;How do you survive on Linux at your office?\u0026rdquo; He confided that his office is extremely Windows-centric and that it was tough to overcome in the beginning. When I asked why he stuck with Linux, he smiled and responded quickly: \u0026ldquo;When I use Linux, I feel like I can do my work without being bothered. Reducing clutter has saved me a ton of time.\u0026rdquo;\nIn an effort to free up my time at work for the important stuff, I\u0026rsquo;m moving to Linux. I\u0026rsquo;m hoping that the move is permanent, but time will tell. If you\u0026rsquo;re eager to make the same change, here\u0026rsquo;s the workflow I\u0026rsquo;m using:\nHardware\nThinkpad X1 Carbon. It has a decent screen, a fantastic keyboard, good battery life, and it\u0026rsquo;s very light. Extra displays are connected with mini-DisplayPort and that allows me to use the Mac DisplayPort dongles that I find laying around all over the place. There\u0026rsquo;s no ethernet adapter, but you can pick up a USB 2.0 Gigabit adapter for $25 or less.\nOne nice benefit is that almost every piece of hardware is recognized within Linux. The only hangup is the fingerprint reader (due to proprietary firmware). That can be fixed but I\u0026rsquo;m too lazy to go down that road at the moment.\nOne of my favorite parts of the Thinkpad is the mouse buttons above the trackpad. As a Mac user, I sometimes find myself highlighting the wrong piece of text or rolling backwards and forwards to get the right selection. I\u0026rsquo;m able to hold the left mouse button with my left hand while using the touchpad with my right. It feels awkward at first but it\u0026rsquo;s extremely quick and accurate once you get it right.\nDistribution and Desktop Environment.\nI chose Fedora 19 with KDE. Some folks prefer Kubuntu (Ubuntu\u0026rsquo;s KDE release) or Linux Mint\u0026rsquo;s KDE release, but I\u0026rsquo;m a bit biased towards Fedora as I enjoy RPM/yum and I\u0026rsquo;m involved in the Fedora community.\nKDE makes sense for me because it\u0026rsquo;s feature-rich and the Qt-based applications are well-designed. GNOME 3 has an interface that just doesn\u0026rsquo;t make sense to me, but GNOME 3\u0026rsquo;s new classic mode shows a lot of potential. 
Cinnamon is a good alternative if you really enjoy GNOME applications. XFCE is good if you\u0026rsquo;re on older hardware or if you prefer something very lightweight.\nMicrosoft Exchange email\nExchange can even be a challenge on Windows, so don\u0026rsquo;t expect a cakewalk in Linux. My preferred method is to use Thunderbird and Davmail. Davmail is a translation layer that handles the Exchange connectivity (via OWA/EWS) and it serves up POP, IMAP, SMTP, LDAP, and CalDav to applications on your machine. Point Davmail to your OWA server and then configure Thunderbird to talk to Davmail. One downside is that Davmail can become a bit CPU-hungry at times and may drag down a battery on a laptop.\nThe latest release of Evolution for GNOME has an exchange-ews connector that works relatively well with newer versions of Exchange. There are still some bugs and missing features, especially around starring/flagging emails. The performance could be better, but it seems to perform slightly better than using Davmail. Evolution\u0026rsquo;s UI was too clunky for me to use and it seemed to have significant lags when fetching email.\nIf you\u0026rsquo;re not eager to mess with a fat client, just use Outlook Web Access in your favorite browser. Beware that OWA detects Chrome on Linux and presents you with the awful \u0026ldquo;light\u0026rdquo; interface for OWA. Add a user agent spoofing extension to Chrome and masquerade as Chrome on Windows or Mac. You\u0026rsquo;ll get the rich OWA interface that makes things much easier.\nMicrosoft Exchange calendar\nGetting calendaring right with Exchange seems to be more difficult than email. My preferred method is to use OWA to manage calendaring. As long as you set your user agent correctly (see previous paragraph), it works flawlessly.\nFat client users should look at Evolution\u0026rsquo;s calendaring capabilities. I found it to still be pretty buggy and complex recurring invitations were often botched in the interface. Coworkers reported not seeing confirmation responses for me on certain invitations while others reported receiving multiple acceptances for the same invitation.\nAnother option is to use Thunderbird with Davmail via CalDav. This was as buggy as Evolution and it was excruciatingly slow.\nMicrosoft Office\nLibreOffice copes well with the majority of the documents I need to edit. I took some time to bring over some of the most commonly used fonts from my Mac and I picked up the Windows fonts via fedorautils. The Calligra office suite in KDE fulfills a lot of the additional needs (like a Visio and Project replacement).\nHowever, there are those times when you need a little more from your Office applications. I have a Windows 7 VM running in VirtualBox when I need it for some Office heavy lifting. Another option is to use Office365\u0026rsquo;s web interface for the common Office applications. If your organization has SharePoint, some of the licenses allow you to have SkyDrive access within your organization and that includes the web-based Office applications as well.\nRSS feeds\nEver since Google Reader\u0026rsquo;s demise, I\u0026rsquo;ve switched to Tiny Tiny RSS running on a cheap VM. I can access the RSS feeds via any browser or via applications on my Nexus 4.\nIM and IRC\nPidgin has been my go-to choice for instant messaging ever since I used GAIM. I\u0026rsquo;ve heard good things about telepathy/empathy but the UI didn\u0026rsquo;t make much sense to me. 
For IRC, Konversation is a clear GUI winner with irssi being my favorite in the terminal.\nTwitter\nAs you probably know, I like to use Twitter, so this was critical to my workflow. I use TweetDeck\u0026rsquo;s Chrome application because it uses the streaming API and gives me plenty of one-click functionality.\nMusic\niTunes was hard to live without, but Clementine filled my needs well. It has built-in internet music services that are easy to use. I\u0026rsquo;m a Digitally Imported subscriber and I was able to log in via Clementine and access the premium streams. The podcast management isn\u0026rsquo;t perfect but it\u0026rsquo;s certainly a decent replacement for iTunes. It can monitor certain directories for new music and automatically populate itself with a playlist based on the music it finds.\nNetworking\nAll of my required VPN capabilities worked right out of the box, including OpenVPN and Cisco VPN\u0026rsquo;s via VPNC. I can join 802.1x-protected wireless and wired networks with ease. Every USB to ethernet adapter I\u0026rsquo;ve tried has worked right out of the box without any additional configuration needed. IPv6 connectivity works just fine (as expected).\nSummary\nWith one day on Linux under my belt, I\u0026rsquo;m glad I made the change. I\u0026rsquo;m able to sit down with my work laptop and use it for what I want to do with it: work. Sure, there are still notification popups from time to time, but they\u0026rsquo;re either notifications that I\u0026rsquo;ve configured intentionally or my laptop is trying to tell me something that I really need to know. So far, the switch has caused me to think about my software in a more minimalistic way. I regularly have my browser, IM client, and IRC client open - that\u0026rsquo;s all. I\u0026rsquo;m hoping that less clutter and fewer applications lead to better focus and increased productivity.\n","date":"27 August 2013","permalink":"/p/moving-from-os-x-to-linux-day-one/","section":"Posts","summary":"The thought of using Linux as a manager in a highly Windows- and Mac-centric corporate environment isn\u0026rsquo;t something to be taken lightly.","title":"Moving from OS X to Linux: Day One"},{"content":"The X1 Carbon\u0026rsquo;s touchpad has been my nemesis in Linux for quite some time because of its high sensitivity. I\u0026rsquo;d often find the cursor jumping over a few pixels each time I tried to tap to click. This was aggravating at first, but then I found myself closing windows when I wanted them minimized or confirming something in a dialog that I didn\u0026rsquo;t want to confirm.\nLast December, I wrote a post about some fixes. However, as I force myself to migrate to Linux (no turning back this time) again, my fixes didn\u0026rsquo;t work well enough. 
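Before we get to the configuration file, it\u0026rsquo;s worth knowing that you can poke at the synaptics driver while X is running. Something along these lines should work (the values on your laptop will differ):\n$ synclient -l | grep -i finger\n$ synclient FingerHigh=55\nThe first command lists the current finger pressure thresholds and the second changes one on the fly, which makes it easy to experiment before you commit anything to xorg.conf.d. 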
I stumbled upon a post about the X1\u0026rsquo;s touchpad and how an Ubuntu user found a configuration file that seemed to work well.\nJust as a timesaver, I\u0026rsquo;ve reposted his configuration here:\n# softlink this file into: # /usr/share/X11/xorg.conf.d # and prevent the settings app from overwriting our settings: # gsettings set org.gnome.settings-daemon.plugins.mouse active false Section \u0026#34;InputClass\u0026#34; Identifier \u0026#34;nathan touchpad catchall\u0026#34; MatchIsTouchpad \u0026#34;on\u0026#34; MatchDevicePath \u0026#34;/dev/input/event*\u0026#34; Driver \u0026#34;synaptics\u0026#34; # three fingers for the middle button Option \u0026#34;TapButton3\u0026#34; \u0026#34;2\u0026#34; # drag lock Option \u0026#34;LockedDrags\u0026#34; \u0026#34;1\u0026#34; # accurate tap-to-click! Option \u0026#34;FingerLow\u0026#34; \u0026#34;50\u0026#34; Option \u0026#34;FingerHigh\u0026#34; \u0026#34;55\u0026#34; # prevents too many intentional clicks Option \u0026#34;PalmDetect\u0026#34; \u0026#34;0\u0026#34; # \u0026#34;natural\u0026#34; vertical and horizontal scrolling Option \u0026#34;VertTwoFingerScroll\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;VertScrollDelta\u0026#34; \u0026#34;-75\u0026#34; Option \u0026#34;HorizTwoFingerScroll\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;HorizScrollDelta\u0026#34; \u0026#34;-75\u0026#34; Option \u0026#34;MinSpeed\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;MaxSpeed\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;AccelerationProfile\u0026#34; \u0026#34;2\u0026#34; Option \u0026#34;ConstantDeceleration\u0026#34; \u0026#34;4\u0026#34; EndSection Many many thanks to Nathan Hamblen for assembling this configuration and offering it out to the masses on his blog.\n","date":"24 August 2013","permalink":"/p/get-a-rock-solid-linux-touchpad-configuration-for-the-lenovo-x1-carbon/","section":"Posts","summary":"The X1 Carbon\u0026rsquo;s touchpad has been my nemesis in Linux for quite some time because of its high sensitivity.","title":"Get a rock-solid Linux touchpad configuration for the Lenovo X1 Carbon"},{"content":"","date":null,"permalink":"/tags/mikro/","section":"Tags","summary":"","title":"Mikro"},{"content":"Outside of the RHCA exams, I haven\u0026rsquo;t configured a PXE system for my personal needs. A colleague demoed his PXE setup for me and I was hooked. Once I realized how much time I could save when I\u0026rsquo;m building and tearing down virtual machines, it made complete sense. This post will show you how to configure PXE and tftpd in Mikrotik\u0026rsquo;s RouterOS to boot and install Fedora 19 (as well as provide rescue environments).\nThe first thing you\u0026rsquo;ll need are a few files from a working Fedora installation. Install the syslinux-tftpboot package and grab the following files:\n/tftpboot/pxelinux.0 /tftpboot/vesamenu.c32 You\u0026rsquo;ll also need a vmlinuz and initrd.img file from your favorite Fedora mirror (use the linked text here for F19 x86_64 or look in the os/images/pxeboot directory on the mirror for your architecture).\nWhen you have your four files, create a directory on the Mikrotik via FTP called tftp, and upload those to your Mikrotik. Your directory should look something like this:\nls tftp/ -rw-rw---- 1 root root 155792 Jul 23 00:01 vesamenu.c32 -rw-rw---- 1 root root 5055896 Jul 22 23:41 vmlinuz -rw-rw---- 1 root root 32829968 Jul 22 23:42 initrd.img -rw-rw---- 1 root root 26460 Jul 22 23:37 pxelinux.0 Within the tftp directory, make a directory called pxelinux.cfg. 
Add a file called default inside the pxelinux.cfg directory with these contents:\ndefault vesamenu.c32 prompt 0 timeout 600 display boot.msg label linux menu label ^Install or upgrade an existing system kernel vmlinuz append initrd=initrd.img repo=http://mirrors.kernel.org/fedora/releases/19/Fedora/x86_64/os/ ks=http://example.com/kickstart.ks ip=eth0:dhcp label vesa menu label Install system with ^basic video driver kernel vmlinuz append initrd=initrd.img xdriver=vesa nomodeset label rescue menu label ^Rescue installed system menu default kernel vmlinuz append initrd=initrd.img repo=http://mirrors.kernel.org/fedora/releases/19/Fedora/x86_64/os/ rescue ip=eth0:dhcp label local menu label Boot from ^local drive localboot 0xffff Be sure to adjust the ip= and repo= arguments to fit your server. Keep in mind that from Fedora 17 on, you\u0026rsquo;ll need to use the dracut syntax for anaconda boot options. Once that\u0026rsquo;s done, you\u0026rsquo;re ready to configure the Mikrotik firewall, so get logged into the firewall over ssh.\nWe need to set some network options for our Mikrotik\u0026rsquo;s DHCP server:\n/ip dhcp-server network set 0 boot-file-name=pxelinux.0 next-server=192.168.25.1 The value for next-server= should be the gateway address for your internal network (the Mikrotik\u0026rsquo;s internal IP).\nNext, we need to configure the tftp server so that it serves up files to our internal network:\n/ip tftp add ip-addresses=192.168.25.0/24 real-filename=tftp/pxelinux.0 req-filename=pxelinux.0 add ip-addresses=192.168.25.0/24 real-filename=tftp/pxelinux.cfg/default req-filename=pxelinux.cfg/default add ip-addresses=192.168.25.0/24 real-filename=tftp/vmlinuz req-filename=vmlinuz add ip-addresses=192.168.25.0/24 real-filename=tftp/vesamenu.c32 req-filename=vesamenu.c32 add ip-addresses=192.168.25.0/24 real-filename=tftp/initrd.img req-filename=initrd.img Now it\u0026rsquo;s time to test it! If you\u0026rsquo;re using a physical machine, double check your BIOS to verify that PXE boot is enabled for your ethernet interface. Most modern chipsets have support for it, but be sure to check that it\u0026rsquo;s enabled. You may have to reboot after enabling it in the BIOS for the ethernet BIOS to be included.\nIf you\u0026rsquo;re using a virtual machine, just start up virt-manager and choose Network Boot (PXE) from the installation options:\nOnce the VM boots, you\u0026rsquo;ll be sent straight to the PXE boot screen:\nTAKE NOTE! In the pxelinux.cfg/default file, I set rescue mode to boot as the default option. This will prevent a situation where you forget to remove PXE from a system\u0026rsquo;s boot order and accidentally re-kickstart over the live system.\nThe installer should now boot up normally and you can install your Fedora system via kickstart or via the anaconda interface.\n","date":"23 July 2013","permalink":"/p/pxe-boot-fedora-19-using-a-mikrotik-firewall/","section":"Posts","summary":"Outside of the RHCA exams, I haven\u0026rsquo;t configured a PXE system for my personal needs.","title":"PXE boot Fedora 19 using a Mikrotik firewall"},{"content":"I was shocked to see Robyn Bergeron\u0026rsquo;s email today about Seth Vidal\u0026rsquo;s passing. He was the victim of a hit and run accident while he was cycling last night. The suspect has turned himself in as of tonight.\nI first met Seth at FUDCon Tempe back in 2011. We had talked off and on via email and IRC about cloud-related topics. 
He was interested in how we assembled our cloud offering at Rackspace and I was eager to talk to him about building cloud images and handling mirrors. I gave him a compliment about yum and how handy it was. He thanked me, shrugged it off humbly, and then wanted to talk more about my work at Rackspace on our Cloud Servers offering.\nHe had some criticisms for our product and he delivered them in such a way that they were open ended. It wasn\u0026rsquo;t like \u0026ldquo;Rackspace\u0026rsquo;s cloud is terrible, I can\u0026rsquo;t use it\u0026rdquo;, but instead, his point was \u0026ldquo;I like your stuff - just not in its current state - so how can you set things up so my stuff will work?\u0026rdquo; Although I tried to do the same when talking about Fedora and Xen (which wasn\u0026rsquo;t a match made in heaven at the time), I don\u0026rsquo;t think I was nearly as effective as Seth was.\nWe talked every so often after that, mostly on IRC, about changes in the cloud environment. He\u0026rsquo;d elbow me about using Xen over KVM and I\u0026rsquo;d elbow him about using Eucalyptus over OpenStack. We\u0026rsquo;d have a good volley and eventually we\u0026rsquo;d make fun of each other\u0026rsquo;s stance in the conversation. I learned a lot from Seth on how to handle disagreements in the open source world and he was always a good sounding board when I had a good idea. Well, sometimes I thought it was a good idea but he quickly reminded me that I could do better. ;)\nSeth: I bid you farewell and safe passage to wherever you go from here. I\u0026rsquo;m not a religious guy, and we never talked religion together, but if there\u0026rsquo;s a good place a guy like you could go, I\u0026rsquo;m sure you\u0026rsquo;re on the way there now. If I\u0026rsquo;m able to be half the technologist you were on your worst day, I\u0026rsquo;d say I\u0026rsquo;ve accomplished something pretty amazing. I\u0026rsquo;ll cut this short here, Seth, because I need to use yum to get some servers updated. Thanks again.\n","date":"10 July 2013","permalink":"/p/a-humble-farewell-to-seth-vidal/","section":"Posts","summary":"I was shocked to see Robyn Bergeron\u0026rsquo;s email today about Seth Vidal\u0026rsquo;s passing.","title":"A humble farewell to Seth Vidal"},{"content":" Pairing virt-manager with KVM makes booting new VM\u0026rsquo;s pretty darned easy. I have a QNAP NAS at home with a bunch of ISO\u0026rsquo;s stored in a share available to guests and I wanted to use that with libvirt to boot new VM\u0026rsquo;s. (By the way, if you\u0026rsquo;re looking for an off-the-shelf NAS that is built with solid hardware and pretty reliable software, try one of the QNAP devices. You still get access to many of the usual commands that you would normally find on a Linux box for emergencies. More on that in a later post.)\nThe first step was creating a mountpoint and configuring the mount in /etc/fstab:\n# mkdir /mnt/iso # grep qemu /etc/passwd qemu:x:107:107:qemu user:/:/sbin/nologin # echo \u0026#34;//qnap/ISO /mnt/iso cifs _netdev,guest,uid=107,gid=107,defaults 0 0\u0026#34; \u0026gt;\u0026gt; /etc/fstab # mount /mnt/iso My QNAP is already in /etc/hosts so I didn\u0026rsquo;t need to specify the IP in the file. Adding _netdev ensures that the network will be up before the mount is made. The guest option ensures that I won\u0026rsquo;t be prompted for credentials and the uid=107,gid=107 mounts the share as the qemu user. 
If you forget this, virt-manager will throw some ugly permissions errors from libvirt.\nFrom there, I had another permissions error and I suspected that SELinux was preventing libvirt from accessing the files in the share. A quick check of /var/log/messages revealed that I was right:\nJul 6 16:12:51 nuc1 setroubleshoot: SELinux is preventing /usr/bin/qemu-system-x86_64 from open access on the file /mnt/iso/livecd.iso. For complete SELinux messages. run sealert -l c1c80b2c-b5df-4114-86c7-ffee98274552 Here\u0026rsquo;s the output from sealert:\n# sealert -l c1c80b2c-b5df-4114-86c7-ffee98274552 SELinux is preventing /usr/bin/qemu-system-x86_64 from open access on the file /mnt/iso/livecd.iso. ***** Plugin catchall_boolean (89.3 confidence) suggests ******************* If you want to allow virt to use samba Then you must tell SELinux about this by enabling the \u0026#39;virt_use_samba\u0026#39; boolean. You can read \u0026#39;None\u0026#39; man page for more details. Do setsebool -P virt_use_samba 1 The fix is a quick one:\n# setsebool -P virt_use_samba 1 You should be all set after that. Press “Browse Local” in virt-manager when you look for your ISO to boot the virtual machine and navigate over to /mnt/iso for your list of ISO\u0026rsquo;s.\n","date":"7 July 2013","permalink":"/p/boot-vms-with-virt-manager-and-libvirt-with-isos-stored-remotely-via-sambacifs/","section":"Posts","summary":"Pairing virt-manager with KVM makes booting new VM\u0026rsquo;s pretty darned easy.","title":"Boot VM’s with virt-manager and libvirt with ISO’s stored remotely via samba/cifs"},{"content":"","date":null,"permalink":"/tags/samba/","section":"Tags","summary":"","title":"Samba"},{"content":"The confined user support in SELinux is handy for ensuring that users aren\u0026rsquo;t able to do something that they shouldn\u0026rsquo;t. It seems more effective and easier to use than most of the other methods I\u0026rsquo;ve seen before. Thanks to Dan for reminding me about this during his SELinux in the Enterprise talk from this year\u0026rsquo;s Red Hat Summit.\nThere are five main SELinux user types (and a handy chart in the Fedora documentation):\nguest_u: - no X windows, no sudo, and no networking xguest_u: - same as guest_u, but X is allowed and connectivity is allowed to web ports only (handy for kiosks) user_u: - same as xguest_u, but networking isn\u0026rsquo;t restricted staff_u: - same as user_u, but sudo is allowed (su isn\u0026rsquo;t allowed) unconfined_u: - full access (this is the default) One interesting thing to note is that all users are allowed to execute binary applications within their home directories by default. This can be switched off via some booleans (which I\u0026rsquo;ll demonstrate in a moment).\nLet\u0026rsquo;s kick off a demonstration to show the power of these restrictions. First off, let\u0026rsquo;s get a list of the default configuration:\n# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 * By default, all new users come with no restrictions (as shown by unconfined_u). I\u0026rsquo;ll create a new user called selinuxtest and set a password. If I ssh to the server as the selinuxtest user, I see that I\u0026rsquo;m unconfined:\n$ id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 That\u0026rsquo;s what we expected. 
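As a side note, the __default__ entry above is what new accounts inherit, so if you wanted every fresh user to land in a confined type instead of unconfined_u, you could modify that mapping. This is only a sketch - test it on a throwaway box first and keep a root shell open in case you confine yourself out of something you need:
# semanage login -m -s user_u -r s0 __default__                       # confine all unmapped users to user_u
# semanage login -m -s unconfined_u -r s0-s0:c0.c1023 __default__     # revert to the stock behavior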
Let\u0026rsquo;s apply the strongest restrictions to this user and apply guest_u:\n# semanage login -a -s guest_u selinuxtest I\u0026rsquo;ll start a new ssh session as selinuxtest and try out some commands that I\u0026rsquo;d normally expect to work on a Linux server:\n$ ping google.com ping: icmp open socket: Permission denied $ curl google.com curl: (7) Failed to connect to 74.125.225.129: Permission denied $ sudo su - sudo: unable to change to sudoers gid: Operation not permitted $ ./hello Hello world $ file hello hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x5ffb25a7171c3338d6c76147cccc666ddc752dde, not stripped The networking and sudo restrictions applied as we expected. However, I was able to compile a small \u0026ldquo;Hello World\u0026rdquo; binary in C and run it. That could become a problem for some servers. Let\u0026rsquo;s adjust a boolean that will restrict this activity:\n# getsebool -a | grep exec_content auditadm_exec_content --\u0026gt; on guest_exec_content --\u0026gt; on secadm_exec_content --\u0026gt; on staff_exec_content --\u0026gt; on sysadm_exec_content --\u0026gt; on user_exec_content --\u0026gt; on xguest_exec_content --\u0026gt; on # setsebool guest_exec_content off Now I try running the binary again as my selinuxtest user:\n$ ./hello -bash: ./hello: Permission denied I can\u0026rsquo;t execute binary content in my home directory or in /tmp any longer after adjusting the boolean. Let\u0026rsquo;s switch selinuxtest to xguest_u:\n# semanage login -a -s xguest_u selinuxtest And now I\u0026rsquo;ll re-test as the selinuxtest user:\n$ curl -si google.com | head -1 HTTP/1.1 301 Moved Permanently $ ping google.com ping: icmp open socket: Permission denied I have full web connectivity but I can\u0026rsquo;t do anything else on the network. Now for a switch to user_u:\n# semanage login -a -s user_u selinuxtest And testing user_u with selinuxtest reveals:\n$ ping -c 1 google.com PING google.com (74.125.225.134) 56(84) bytes of data. 64 bytes from ord08s09-in-f6.1e100.net (74.125.225.134): icmp_seq=1 ttl=57 time=29.3 ms --- google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 29.332/29.332/29.332/0.000 ms $ curl -si google.com | head -n1 HTTP/1.1 301 Moved Permanently $ sudo su - sudo: PERM_SUDOERS: setresuid(-1, 1, -1): Operation not permitted Networking is wide open but I still don\u0026rsquo;t have sudo. Let\u0026rsquo;s try staff_u:\n# semanage login -a -s staff_u selinuxtest Testing staff_u with selinuxtest gives me the expected results:\n$ sudo su - [sudo] password for selinuxtest: I didn\u0026rsquo;t add selinuxtest to sudoers, so this command would fail. However, I\u0026rsquo;m actually allowed to execute it now.\nThese restrictions could be very helpful when dealing with users that you don\u0026rsquo;t fully trust on your system. You could use these restrictions to add a kiosk user to a Linux machine and allow family members or coworkers to surf the web using your device. 
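If the kiosk idea sounds appealing, the setup is just a couple of commands using the same tools from above (adjust the username to taste; the exec_content boolean is optional but keeps people from running random downloads out of their home directory):
# useradd kiosk
# passwd kiosk
# semanage login -a -s xguest_u kiosk      # web-only networking, no sudo
# setsebool -P xguest_exec_content off     # optional: block executing binaries from $HOME and /tmp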
In addition, you could use the restrictions as an extra layer of protection on heavily shared servers to prevent users from consuming resources or generating malicious traffic.\n","date":"5 July 2013","permalink":"/p/confine-untrusted-users-including-your-children-with-selinux/","section":"Posts","summary":"The confined user support in SELinux is handy for ensuring that users aren\u0026rsquo;t able to do something that they shouldn\u0026rsquo;t.","title":"Confine untrusted users (including your children) with SELinux"},{"content":"Most of my websites run on a pair of Supermicro servers that I purchased from Silicon Mechanics (and I can\u0026rsquo;t say enough good things about them and their servers). One problem that kept cropping up was that the servers would become unresponsive during a reboot. If I issued the reboot command in Linux, the machine would begin the reboot process, power off, and remain powered off.\nNeedless to say, this is highly annoying.\nThe only way to bring the machine back was to use ipmitool on my other server or access the IPMI/iKVM interface on the downed server. I tested Fedora 15 through 19 and confirmed the issue in each OS. Finally, I installed CentOS 6 and the problem disappeared. The servers would reboot and come back online as expected.\nFast forward to this evening. I discovered a helpful forum thread where users were discussing a similar problem on a X9SCA-F Supermicro board. The fix was to blacklist a kernel module like this:\n/etc/modprobe.d/blacklist.conf I tried to rmmod mei and reboot, but the machine stayed powered off again. When I powered it back on with the module blacklisted from the start, I found that I could reboot normally and the server would boot up again. The module is from Intel:\n# modinfo mei | grep desc summary: Intel(R) Management Engine Interface The Intel Management Engine is a BIOS extension that enables Intel Active Management Technology (AMT). Intel has a PDF that gives an overview of AMT:\nIntel® Active Management Technology (Intel® AMT) is a capability embedded in Intel-based platforms that enhances the ability of IT organizations to manage enterprise computing facilities. Intel AMT operates independently of the platform processor and operating system. Remote platform management applications can access Intel AMT securely, even when the platform is turned off, as long as the platform is connected to line power and to a network. Independent software vendors (ISVs) can build applications that take advantage of the features of Intel AMT using the application programming interface (API).\nThat\u0026rsquo;s a mouthful.\nIt essentially allows you to manage large amounts of hardware and keep an inventory. You can also pull event logs from the machine even if it\u0026rsquo;s powered off. 
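For reference, the actual change was a single blacklist line for the mei module. The file name doesn't matter much since modprobe reads anything ending in .conf from that directory, and the initramfs rebuild is my own precaution in case the module gets pulled in early during boot:
# echo "blacklist mei" >> /etc/modprobe.d/blacklist.conf
# dracut -f     # precautionary initramfs rebuild; may not be strictly required
# reboot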
Applications running within the OS on the server can give data to the AMT interface that allows administrators to retrieve the data without needing access to the OS.\nThe blacklisted module hasn\u0026rsquo;t affected the server negatively (as far as I can tell).\n","date":"3 June 2013","permalink":"/p/supermicro-x9scix9sca-server-does-a-shutdown-rather-than-a-reboot/","section":"Posts","summary":"Most of my websites run on a pair of Supermicro servers that I purchased from Silicon Mechanics (and I can\u0026rsquo;t say enough good things about them and their servers).","title":"Supermicro X9SCI/X9SCA server does a shutdown rather than a reboot"},{"content":"It\u0026rsquo;s been a little while since I last posted about installing Xen on Fedora, so I figured that Fedora 19\u0026rsquo;s beta release was as good a time as any to write a new post. To get started, you\u0026rsquo;ll need to get Fedora 19 installed on your favorite hardware (or virtual machine).\nInstall the Xen hypervisor and tools. Also, ensure that both of the necessary daemons are running on each boot:\nyum -y install xen xen-hypervisor xen-libs xen-runtime chkconfig xend on chkconfig xendomains on You\u0026rsquo;ll notice that I didn\u0026rsquo;t start the daemons quite yet. We will need the xen hypervisor running before they will be of any use.\nNow, let\u0026rsquo;s configure GRUB2. I wrote a quick post about these steps last year. The Xen kernel entry should already be configured (by grubby), but it\u0026rsquo;s not the default. Fixing that is a quick process:\n# grep ^menuentry /boot/grub2/grub.cfg | cut -d \u0026#34;\u0026#39;\u0026#34; -f2 Fedora, with Linux 3.9.4-300.fc19.x86_64 Fedora, with Linux 0-rescue-4ea51ecfff4f4e64a5ec903c495ee5b6 Fedora, with Xen hypervisor # grub2-set-default \u0026#39;Fedora, with Xen hypervisor\u0026#39; # grub2-editenv list saved_entry=Fedora, with Xen hypervisor At this point, you\u0026rsquo;re ready to reboot. After the reboot, verify that Xen is running:\n# xm dmesg | head __ __ _ _ ____ ____ ____ __ _ ___ \\ \\/ /___ _ __ | || | |___ \\ |___ \\ | ___| / _| ___/ |/ _ \\ \\ // _ \\ \u0026#39;_ \\ | || |_ __) | __) |_|___ \\ | |_ / __| | (_) | / \\ __/ | | | |__ _| / __/ _ / __/|__|__) || _| (__| |\\__, | /_/\\_\\___|_| |_| |_|(_)_____(_)_____| |____(_)_| \\___|_| /_/ (XEN) Xen version 4.2.2 (mockbuild@phx2.fedoraproject.org) (gcc (GCC) 4.8.0 20130412 (Red Hat 4.8.0-2)) Fri May 17 19:39:53 UTC 2013 (XEN) Latest ChangeSet: unavailable (XEN) Bootloader: GRUB 2.00 (XEN) Command line: placeholder If you\u0026rsquo;re adventurous on the command line, you\u0026rsquo;re done here. However, I enjoy using virt-manager for quick access to virtual machines and I also like all of the scripting and remote administration capabilities that libvirt delivers. Let\u0026rsquo;s get the tools and daemons installed and running:\nyum -y install virt-manager dejavu* xorg-x11-xauth yum -y install libvirt-daemon-driver-network libvirt-daemon-driver-storage libvirt-daemon-xen chkconfig libvirtd on service libvirtd start You\u0026rsquo;re now ready to use virt-manager to manage your virtual machines. Simply ssh to your hypervisor with X forwarding enabled (ssh -X hypervisor.mydomain.com) and run virt-manager. You won\u0026rsquo;t have a virtual network or bridge to use for virtual machines quite yet. You have two options: NAT your VM\u0026rsquo;s or configure a network bridge. 
I prefer the bridge but you may require something different in your environment.\nFor the NAT option (the easiest for beginners):\nyum -y install libvirt-daemon-config-network libvirt-daemon-config-nwfilter service libvirtd restart For the network-bridge option, you\u0026rsquo;ll need to adjust your network scripts to create a bridge and add your primary network interface to the bridge. That\u0026rsquo;s a bit outside the scope of this post, but the Fedora Wiki and HowtoForge both have guides that cover it (just ignore the KVM parts of the HowtoForge guide).\nYou now have a working Xen installation on Fedora 19!\nFOR THOSE WHO EMBRACE SECURITY:\nIf you run SELinux in Enforcing mode, there\u0026rsquo;s still a lingering issue where SELinux prevents python (running under xend) from talking to block devices (like logical volumes). I opened a bug about a similar problem before but I need to open another one for the block device issue. If you\u0026rsquo;re itching for a workaround, you can force SELinux into permissive mode for the xend_t context only:\nyum -y install selinux-policy-devel semanage permissive -a xend_t That\u0026rsquo;s not the best option for now, but it\u0026rsquo;s certainly better than setenforce 0. ;)\n","date":"3 June 2013","permalink":"/p/installing-the-xen-hypervisor-on-fedora-19/","section":"Posts","summary":"It\u0026rsquo;s been a little while since I last posted about installing Xen on Fedora, so I figured that Fedora 19\u0026rsquo;s beta release was as good a time as any to write a new post.","title":"Installing the Xen hypervisor on Fedora 19"},{"content":"While rolling through my RSS feeds, I found a great presentation by David Quigley titled \u0026ldquo;Demystifying SELinux\u0026rdquo;. He makes some good comparisons between discretionary/mandatory access controls and dives into what makes SELinux useful. Basic troubleshooting commands are covered within the presentation as well.\nYou can find the presentation over on Speaker Deck. I\u0026rsquo;ve also mirrored a PDF copy here on the site.\nUPDATE: If you\u0026rsquo;re going to OSCON 2013 this year, it appears that David will be presenting this topic during the event.\n","date":"29 May 2013","permalink":"/p/presentation-demystifying-selinux/","section":"Posts","summary":"While rolling through my RSS feeds, I found a great presentation by David Quigley titled \u0026ldquo;Demystifying SELinux\u0026rdquo;.","title":"Presentation: Demystifying SELinux"},{"content":"I\u0026rsquo;ve converted one of my KVM hypervisors from CentOS 6 to Fedora 18 and now comes the task of migrating my virtual machines off of my single remaining CentOS 6 hypervisor. This is definitely on a budget, so there\u0026rsquo;s no shared storage to make this process easier.\nHere\u0026rsquo;s how I did it:\nMigrate the logical volume\nMy first VM to migrate is my Fedora development VM where I build and test new packages. 
I have a 10G logical volume on the old node:\n[root@helium ~]# lvs /dev/mapper/vg_helium-fedora--dev LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert fedora-dev vg_helium -wi-a--- 10.00g I made a 10G logical volume on the new hypervisor:\n[root@hydrogen ~]# lvcreate -n fedora-dev -L10G vg_hydrogen Logical volume \u0026#34;fedora-dev\u0026#34; created After getting ssh keys set up between both hypervisors and installing pv (to track progress), I started the storage migration over ssh:\ndd if=/dev/mapper/vg_helium-fedora--dev | pv | ssh hydrogen dd of=/dev/mapper/vg_hydrogen-fedora--dev Luckily it was only a 10GB logical volume so it transferred over in a few minutes.\nDump and adjust the source VM\u0026rsquo;s XML\nOn the source server, I dumped the VM configuration to an XML file and copied it to the new host:\nvirsh dumpxml fedora-dev \u0026gt; fedora-dev.xml scp fedora-dev.xml hydrogen: Before importing the XML file on the new host, there are some adjustments that need to be made. First off was an adjustment of the storage volume since the new host had the same logical volume name but a different volume group (the source line):\n\u0026lt;disk type=\u0026#39;block\u0026#39; device=\u0026#39;disk\u0026#39;\u0026gt; \u0026lt;driver name=\u0026#39;qemu\u0026#39; type=\u0026#39;raw\u0026#39; cache=\u0026#39;none\u0026#39; io=\u0026#39;native\u0026#39;\u0026gt;\u0026lt;/driver\u0026gt; \u0026lt;source dev=\u0026#39;/dev/vg_hydrogen/fedora-dev\u0026#39;/\u0026gt; \u0026lt;target dev=\u0026#39;vda\u0026#39; bus=\u0026#39;virtio\u0026#39;\u0026gt;\u0026lt;/target\u0026gt; \u0026lt;address type=\u0026#39;pci\u0026#39; domain=\u0026#39;0x0000\u0026#39; bus=\u0026#39;0x00\u0026#39; slot=\u0026#39;0x05\u0026#39; function=\u0026#39;0x0\u0026#39;\u0026gt; \u0026lt;/address\u0026gt; \u0026lt;/disk\u0026gt; Also, there\u0026rsquo;s a mismatch with the machine type (not architecture) between CentOS 6 and Fedora 18. I dumped the XML from a VM running on the Fedora 18 hypervisor and compared the machine type to my old CentOS VM\u0026rsquo;s XML (the XML from the CentOS VM is on top):\n- \u0026lt;type arch=\u0026#39;x86_64\u0026#39; machine=\u0026#39;rhel6.3.0\u0026#39;\u0026gt;hvm\u0026lt;/type\u0026gt; + \u0026lt;type arch=\u0026#39;x86_64\u0026#39; machine=\u0026#39;pc-1.2\u0026#39;\u0026gt;hvm\u0026lt;/type\u0026gt; I replaced rhel6.3.0 with pc-1.2. If you forget this step, your VM won\u0026rsquo;t start. You\u0026rsquo;ll get some errors about a mismatched machine type before the VM boots.\nThere\u0026rsquo;s one last fix: the path to the qemu-kvm emulator:\n- \u0026lt;emulator\u0026gt;/usr/libexec/qemu-kvm\u0026lt;/emulator\u0026gt; + \u0026lt;emulator\u0026gt;/usr/bin/qemu-kvm\u0026lt;/emulator\u0026gt; Replace /usr/libexec/qemu-kvm with /usr/bin/qemu-kvm and save your XML file.\nImport the VM configuration and launch the VM\nImporting the VM on the Fedora 18 hypervisor was easy:\nvirsh define fedora-dev.xml That causes the configuration to load into libvirt and it should appear in virt-manager or virsh list by this point. If not, double check your previous steps and look for error messages in your logs. That doesn\u0026rsquo;t actually start the virtual machine, so I started it on the command line:\nvirsh start fedora-dev Within a few moments, the VM was up and responding to pings.\nIt\u0026rsquo;s a good idea to hop into virt-manager and verify that the VM configuration is what you expect. Some configuration options don\u0026rsquo;t line up terribly well between CentOS 6 and Fedora 18. 
You might need to adjust a few to match the performance you expect to see.\n","date":"22 May 2013","permalink":"/p/migrate-kvm-virtual-machines-from-centos-6-to-fedora-18-without-the-luxury-of-shared-storage/","section":"Posts","summary":"I\u0026rsquo;ve converted one of my KVM hypervisors from CentOS 6 to Fedora 18 and now comes the task of migrating my virtual machines off of my single remaining CentOS 6 hypervisor.","title":"Migrate KVM virtual machines from CentOS 6 to Fedora 18 without the luxury of shared storage"},{"content":"This post is a quick one but I wanted to share it since I taught it to someone new today. When you have bash output with colors, less doesn\u0026rsquo;t handle the color codes properly by default:\n$ colordiff chunk/functions.php chunk-old/functions.php | less ESC[0;32m22a23,27ESC[0;0m ESC[0;34m\u0026gt; * Load up our functions for grabbing content from postsESC[0;0m ESC[0;34m\u0026gt; */ESC[0;0m ESC[0;34m\u0026gt; require( get_template_directory() . \u0026#39;/content-grabbers.php\u0026#39; );ESC[0;0m ESC[0;34m\u0026gt; ESC[0;0m Toss in the -R flag and you\u0026rsquo;ll be able to see the colors properly (no colors to see here, but use your imagination):\n$ colordiff chunk/functions.php chunk-old/functions.php | less -R 22a23,27 \u0026gt; * Load up our functions for grabbing content from posts \u0026gt; */ \u0026gt; require( get_template_directory() . \u0026#39;/content-grabbers.php\u0026#39; ); \u0026gt; \u0026gt; /** The man page for less explains the feature in greater detail:\n-R or --RAW-CONTROL-CHARS Like -r, but only ANSI \u0026#34;color\u0026#34; escape sequences are output in \u0026#34;raw\u0026#34; form. Unlike -r, the screen appear- ance is maintained correctly in most cases. ANSI \u0026#34;color\u0026#34; escape sequences are sequences of the form: ESC [ ... m where the \u0026#34;...\u0026#34; is zero or more color specification characters For the purpose of keeping track of screen appearance, ANSI color escape sequences are assumed to not move the cursor. You can make less think that characters other than \u0026#34;m\u0026#34; can end ANSI color escape sequences by setting the environment variable LESSANSIENDCHARS to the list of characters which can end a color escape sequence. And you can make less think that characters other than the standard ones may appear between the ESC and the m by setting the environment variable LESSANSIMIDCHARS to the list of characters which can appear. ","date":"22 May 2013","permalink":"/p/handling-terminal-color-escape-sequences-in-less/","section":"Posts","summary":"This post is a quick one but I wanted to share it since I taught it to someone new today.","title":"Handling terminal color escape sequences in less"},{"content":"Changing my ssh port from the default port (22) has been one of my standard processes for quite some time when I build new servers or virtual machines. However, I see arguments crop up regularly about it (like this reddit thread or this other one).\nBefore I go any further, let\u0026rsquo;s settle the \u0026ldquo;security through obscurity\u0026rdquo; argument. (This could probably turn into its own post but I\u0026rsquo;ll be brief for now.) Security should always be applied in layers. This provides multiple levels of protection from initial attacks, like information gathering attempts or casual threats against known vulnerabilities. 
In addition, these layers of security should be applied within the environment so that breaking into one server after getting a pivot point in the environment should be just as difficult (if not more difficult) than the original attack that created the pivot point. If \u0026ldquo;security through obscurity\u0026rdquo; tactics make up one layer of a multi-layered solution, I\u0026rsquo;d encourage you to obscure your environment as long as it doesn\u0026rsquo;t affect your availability.\nThe key takeaway is:\nSecurity through obscurity is effective if it\u0026rsquo;s one layer in a multi-layer security solution\nLet\u0026rsquo;s get back to the original purpose of the post.\nThe biggest benefit to changing the port is to avoid being seen by casual scans. The vast majority of people hunting for any open ssh servers will look for port 22. Some will try the usual variants, like 222 and 2222, but those are few and far between. I ran an experiment with a virtual machine exposed to the internet which had sshd listening on port 22. The server stayed online for one week and then I changed the ssh port to 222. The number of attacks dropped by 98%. Even though this is solely empirical evidence, it\u0026rsquo;s clear that moving off the standard ssh port reduces your server\u0026rsquo;s profile.\nIf it\u0026rsquo;s more difficult to scan for your ssh server, your chances of being attacked with an ssh server exploit are reduced. A determined attacker can still find the port if they know your server\u0026rsquo;s IP address via another means (perhaps via a website you host) and they can launch attacks once they find it. Paranoid server administrators might want to check into port knocking to reduce that probability even further.\nRemembering the non-standard ssh port can be annoying, but if you have a standard set of workstations that you use for access your servers, just utilize your ~/.ssh/config file to specify certain ports for certain servers. For example:\nHost *.mycompany.com Port 4321 Host nonstandard.mypersonalstuff.com Port 2345 Host *.mypersonalstuff.com Port 5432 If you run into SELinux problems with a non-standard ssh port, there are plenty of guides on this topic.. The setroubleshoot-server package helps out with this as well.\n# semanage port -a -t ssh_port_t -p tcp 4321 # semanage port -l | grep ssh ssh_port_t tcp 4321,22 Here is my list of ssh lockdown practices when I build a new server:\nUpdate the ssh server package and ensure that automatic updates are configured Enable SELinux and allow a non-standard ssh port Add my ssh public key to the server Disable password logins for ssh Adjust my AllowUsers setting in sshd_config to only allow my user Disable root logins For servers with sensitive data, I install fail2ban ","date":"15 May 2013","permalink":"/p/changing-your-ssh-servers-port-from-the-default-is-it-worth-it/","section":"Posts","summary":"Changing my ssh port from the default port (22) has been one of my standard processes for quite some time when I build new servers or virtual machines.","title":"Changing your ssh server’s port from the default: Is it worth it?"},{"content":"A coworker heard me grumbling about Linux system administration standards and recommended that I review the CIS Security Benchmarks. After downloading the Red Hat Enterprise Linux 6 security benchmark PDF, I quickly started to see the value of the document. 
Some of the standards were the installation defaults, some were often forgotten settings, and some were completely brand new to me.\nAutomating the standards can be a little treacherous simply due to the number of things to adjust and check. I\u0026rsquo;ve created a kickstart for CentOS 6 and tossed it on Github:\nhttps://github.com/rackerhacker/securekickstarts Be sure to read the disclaimers in the README before getting started. Also, keep in mind that the kickstarts are in no way approved by or affiliated with the Center for Internet Security in any way. This is just something I\u0026rsquo;m offering up to the community in the hope that it helps someone.\n","date":"26 April 2013","permalink":"/p/automate-centos-6-deployments-with-cis-security-benchmarks-already-applied/","section":"Posts","summary":"A coworker heard me grumbling about Linux system administration standards and recommended that I review the CIS Security Benchmarks.","title":"Automate CentOS 6 deployments with CIS Security Benchmarks already applied"},{"content":"","date":null,"permalink":"/tags/kickstart/","section":"Tags","summary":"","title":"Kickstart"},{"content":"The wheel group exists for a critical purpose and Wikipedia has a concise definition:\nIn computing, the term wheel refers to a user account with a wheel bit, a system setting that provides additional special system privileges that empower a user to execute restricted commands that ordinary user accounts cannot access. The term is derived from the slang phrase big wheel, referring to a person with great power or influence.\nOn Red Hat systems (including Fedora), the default sudo configuration allows users in the wheel group to use sudo while all others are restricted from using it in /etc/sudoers:\n## Allows people in group wheel to run all commands %wheel ALL=(ALL) ALL However, the su command can be used by all users by default (which is something I often forget). Fixing it is easy once you take a look at /etc/pam.d/su:\n# Uncomment the following line to require a user to be in the \u0026#34;wheel\u0026#34; group. #auth\trequired\tpam_wheel.so use_uid Uncomment the line and access to su will only be available for users in the wheel group.\n","date":"26 April 2013","permalink":"/p/limit-access-to-the-su-command/","section":"Posts","summary":"The wheel group exists for a critical purpose and Wikipedia has a concise definition:","title":"Limit access to the su command"},{"content":"","date":null,"permalink":"/tags/pam/","section":"Tags","summary":"","title":"Pam"},{"content":"This article appeared in SC Magazine and I\u0026rsquo;ve posted it here as well. For those of you who were left wanting more from my previous SELinux post, this should help. If it doesn\u0026rsquo;t help, leave a comment. ;)\nThe push to cloud transforms the way we apply information security principles to systems and applications. Perimeters of the past, secured heavily with traditional network devices in the outermost ring, lose effectiveness day by day. Shifting the focus to \u0026ldquo;defense in depth\u0026rdquo; brings the perimeter down to the individual cloud instances running your application. Security-Enhanced Linux, or SELinux, forms an effective part of that perimeter.\nSELinux operates in the realm of mandatory access control, or MAC. The design of MAC involves placing constraints on what a user (a subject) can do to a particular object (a target) on the system. 
In contrast, discretionary access control, or DAC, allows a user with certain access to use discretion to limit or allow access to certain files, directories, or devices. You can set any file system permissions that you want but SELinux can override them with ease at the operating system level.\nConsider a typical server running a web application. An attacker compromises the web application and executes malicious code via the web server daemon itself. SELinux has default policies that prevent the daemon from initiating communication on the network. That limits the attacker’s options to attack other services or servers.\nIn addition, SELinux sets policies on which files and directories the web server can access, regardless of any file system permissions. This protection limits the attacker’s access to other sensitive parts of the file system even if the administrator set the files to be readable to the world.\nThis is where SELinux shines. Oddly enough, this is the point where many system administrators actually disable SELinux on their systems.\nTroubleshooting these events, called AVC denials, without some helpful tools is challenging and frustrating. Each denial flows into to your audit log as a cryptic message. Most administrators will check the usual suspects, like firewall rules and file system permissions. As frustration builds, they disable SELinux and notice that their application begins working as expected. SELinux remains disabled and hundreds of helpful policies lie dormant solely because one policy caused a problem.\nDisabling SELinux without investigation frustrated me to the point where I started a site at stopdisablingselinux.com. The site is a snarky response to Linux administrators who reach for the disable switch as soon as SELinux gets in their way.\nAll jokes aside, here are some helpful tips to use SELinux effectively:\nUse the setroubleshoot helpers to understand denials\nWorking through denials is easy with the setroubleshoot-server package. When a denial occurs, you still receive a cryptic log message in your audit logs. However, you also receive a message via syslog that is very easy to read. Your server can email you these messages as well. The message contains guidance about adjusting SELinux booleans, setting contexts, or generating new SELinux policies to work around a really unusual problem. When I say guidance, I mean that the tools give you commands to copy and paste to adjust your policies, booleans and contexts.\nReview SELinux booleans for quick adjustments\nAlthough the myriad of SELinux user-space tools isn’t within the scope of this article, getsebool and togglesebool deserve a mention. Frequently adjusted policies are controlled by booleans that are toggled on and off with togglesebool. Start with getsebool –a for a full list of booleans and then use togglesebool to enable or disable the policy.\nQuickly restore file or directory contexts\nShuffling files or directories around a server can cause SELinux denials due to contexts not matching their original values. This happens to me frequently if I move a configuration file from one system to another. Correcting the context problem involves one of two simple commands. The restorecon command applies the default contexts specific to the file or directory. 
If you have a file in the directory with the correct context, use chcon to fix the context on the wrong file by giving it the path to the file with the correct context.\nHere are some additional links with helpful SELinux documentation:\nSELinux Project Wiki Red Hat Enterprise Linux 6 SELinux Guide Dan Walsh\u0026rsquo;s Blog ","date":"19 April 2013","permalink":"/p/reprint-stop-disabling-selinux/","section":"Posts","summary":"This article appeared in SC Magazine and I\u0026rsquo;ve posted it here as well.","title":"Reprint: Stop Disabling SELinux!"},{"content":"After many discussions with fellow Linux users, I\u0026rsquo;ve come to realize that most seem to disable SELinux rather than understand why it\u0026rsquo;s denying access. In an effort to turn the tide, I\u0026rsquo;ve created a new site as a public service to SELinux cowards everywhere: stopdisablingselinux.com.\nHere are some relatively useful SELinux posts from the blog:\nGetting started with SELinux Receive email reports for SELinux AVC denials Edit: The goal of the post was to poke some fun at system administrators who disable SELinux immediately without learning how it works or why they\u0026rsquo;re seeing certain operations being denied. Obviously, if your particular workload or demands don\u0026rsquo;t allow for the use of SELinux, then I\u0026rsquo;m going to be the last person to encourage you to use it. Many system administrators have found that it doesn\u0026rsquo;t provide a good ratio of work required to benefit gained, which I totally understand.\n","date":"16 April 2013","permalink":"/p/seriously-stop-disabling-selinux/","section":"Posts","summary":"After many discussions with fellow Linux users, I\u0026rsquo;ve come to realize that most seem to disable SELinux rather than understand why it\u0026rsquo;s denying access.","title":"Seriously, stop disabling SELinux"},{"content":"","date":null,"permalink":"/tags/command-lines/","section":"Tags","summary":"","title":"Command Lines"},{"content":"I\u0026rsquo;m in the process of moving back to a postfix/dovecot setup for hosting my own mail and I wanted a way to remove the more sensitive email headers that are normally generated when I send mail. My goal is to hide the originating IP address of my mail as well as my mail client type and version.\nTo get started, make a small file with regular expressions in /etc/postfix/header_checks:\n/^Received:.*with ESMTPSA/ IGNORE /^X-Originating-IP:/ IGNORE /^X-Mailer:/ IGNORE /^Mime-Version:/ IGNORE The \u0026ldquo;ESMTPSA\u0026rdquo; match works for me because I only send email via port 465. I don\u0026rsquo;t allow SASL authentication via port 25. You may need to adjust the regular expression if you accept SASL authentication via smtp.\nNow, add the following two lines to your /etc/postfix/main.cf:\nmime_header_checks = regexp:/etc/postfix/header_checks header_checks = regexp:/etc/postfix/header_checks Rebuild the hash table and reload the postfix configuration:\npostmap /etc/postfix/header_checks postfix reload Now, send a test email. 
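If you want to sanity-check the regular expressions without waiting on a round trip through a real mail client, postmap can evaluate a header against the map directly (the X-Mailer value below is just a made-up example):
$ echo "X-Mailer: SomeMailClient 1.2.3" | postmap -q - regexp:/etc/postfix/header_checks
postmap prints the matching action for any header that hits one of the expressions; no output means nothing matched.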
View the headers and you should see the original received header (with your client IP address) removed, along with details about your mail client.\n","date":"15 April 2013","permalink":"/p/remove-sensitive-information-from-email-headers-with-postfix/","section":"Posts","summary":"I\u0026rsquo;m in the process of moving back to a postfix/dovecot setup for hosting my own mail and I wanted a way to remove the more sensitive email headers that are normally generated when I send mail.","title":"Remove sensitive information from email headers with postfix"},{"content":"The latest versions of virt-manager don\u0026rsquo;t release the mouse pointer when you\u0026rsquo;re doing X forwarding to a machine running OS X. This can lead to a rather frustrating user experience since your mouse pointer is totally stuck in the window. Although this didn\u0026rsquo;t affect me with CentOS 6 hosts, Fedora 18 hosts were a problem.\nThere\u0026rsquo;s a relatively elegant fix from btm.geek that solved it for me. On your Mac, exit X11/Xquartz and create an ~/.Xmodmap file containing this:\nclear Mod1 keycode 66 = Alt_L keycode 69 = Alt_R add Mod1 = Alt_L add Mod1 = Alt_R Start X11/Xquartz once more and virt-manager should release your mouse pointer if you hold the left control key and left option at the same time.\n","date":"20 March 2013","permalink":"/p/virt-manager-wont-release-the-mouse-when-using-ssh-forwarding-from-os-x/","section":"Posts","summary":"The latest versions of virt-manager don\u0026rsquo;t release the mouse pointer when you\u0026rsquo;re doing X forwarding to a machine running OS X.","title":"virt-manager won’t release the mouse when using ssh forwarding from OS X"},{"content":"I dragged out an old Aopen MP57-D tonight that was just sitting in the closet and decided to load up kvm on Fedora 18. I soon found myself staring at a very brief error message upon bootup:\nkvm: disabled by bios After a reboot, the BIOS screen was up and I saw that Virtualization and VT-d were both enabled. Trusted execution (TXT) was disabled, so I enabled it for kicks and rebooted. Now I had two errors:\nkvm: disable TXT in the BIOS or activate TXT before enabling KVM kvm: disabled by bios Time for another trip to the BIOS. I disabled TXT, rebooted, and I was back to the same error where I first started. A quick check of /proc/cpuinfo showed that I had the right processor extensions. Even the output of lshw showed that I should be ready to go. 
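For anyone retracing these steps, the checks I'm referring to boil down to a few one-liners (roughly what I ran; adjust to taste):
$ egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU advertises VT-x or AMD-V
$ lsmod | grep kvm                      # kvm plus kvm_intel (or kvm_amd) should load once the BIOS cooperates
$ dmesg | grep -i kvm                   # any "disabled by bios" complaints show up here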
Some digging in Google led me to a blog post for a fix on Dell Optiplex hardware.\nThe fix was to do this:\nWithin the BIOS, disable virtualization, VT-d, and TXT Save the BIOS configuration, reboot, and pull power to the computer at grub Within the BIOS, enable virtualization and VT-d but leave TXT disabled Save the BIOS configuration, reboot, and pull power to the computer at grub Boot up the computer normally Although it seems a bit archaic, this actually fixed my problem and set me on my way.\n","date":"20 March 2013","permalink":"/p/late-night-virtualization-frustration-with-kvm/","section":"Posts","summary":"I dragged out an old Aopen MP57-D tonight that was just sitting in the closet and decided to load up kvm on Fedora 18.","title":"Late night virtualization frustration with kvm"},{"content":"","date":null,"permalink":"/tags/google-reader/","section":"Tags","summary":"","title":"Google Reader"},{"content":"","date":null,"permalink":"/tags/php/","section":"Tags","summary":"","title":"Php"},{"content":" It\u0026rsquo;s no secret that Google Reader is a popular way to keep up with your RSS feeds, but it\u0026rsquo;s getting shelved later this year. Most folks suggested Feedly as a replacement but I found the UI quite clunky in a browser and on Android devices.\nThen someone suggested Tiny Tiny RSS. I couldn\u0026rsquo;t learn more about it on the day Google Reader\u0026rsquo;s shutdown was announced because the site was slammed. In a nutshell, Tiny Tiny RSS is a well-written web UI for managing feeds and a handy API for using it with mobile applications. The backend code is written in PHP and it supports MySQL and Postgres.\nThere\u0026rsquo;s also an Android application that gives you a seven day trial once you install it. The pro key costs $1.99.\nThe installation took me a few minutes and then I was off to the races. I\u0026rsquo;d recommend implementing SSL for accessing your installation (unless you like passing credentials in the clear) and enable keepalive connections in Apache. The UI in the application drags down a ton of javascript as it works and enabling keepalives will keep your page load times low.\nIf you want to get your Google Reader feeds moved over in bulk, just export them from Google Reader:\nClick the settings cog at the top right of Google Reader and choose Reader Settings Choose Import/Export from the menu Press Export, head over to Google Takeout and download your zip file Unzip the file and find the .xml file. Open up a browser, access Tiny Tiny RSS and do this:\nClick Actions \u0026gt; Preferences Click the Feeds tab Click the OPML button at the bottom Import the xml file that was in the zip file from Google From there, just choose a method for updating feeds and you should be all set!\n","date":"17 March 2013","permalink":"/p/survive-the-google-reader-exodus-with-tiny-tiny-rss/","section":"Posts","summary":"It\u0026rsquo;s no secret that Google Reader is a popular way to keep up with your RSS feeds, but it\u0026rsquo;s getting shelved later this year.","title":"Survive the Google Reader exodus with Tiny Tiny RSS"},{"content":"This year\u0026rsquo;s RSA Conference was full of very useful content but the most useful session for me was a peer to peer discussion regarding BYOD on mobile devices. The session had room for about 25 people and many companies were represented. 
Some companies were huge, household names, while others were very small.\nThe discussion started around how to authenticate and manage mobile devices, but it soon ended up covering the handling of data on personal and company-issued devices. A corporate security leader for a large company said the healthiest shift for them was when they stopped focusing on the devices themselves and moved their focus to the data they wanted to protect. They found that they could lock down all the devices in the world, but their employees would mishandle the data no matter what actions they took to protect the endpoint.\nThat led me to start a ruckus on Twitter:\nHow does a corporate security team keep sensitive data out of products like Evernote and Dropbox effectively? It's a tall order. \u0026mdash; Major Hayden (@majorhayden) March 2, 2013 Which I soon followed with this:\nMy last question got a lot of good responses. Thanks! But how do you *ENFORCE* a corporate policy against something like Dropbox/Evernote? \u0026mdash; Major Hayden (@majorhayden) March 3, 2013 The responses started piling up in a hurry. (To see the verbatim responses for yourself, click the date on one the embedded tweets above.) Here\u0026rsquo;s a quick summary of the suggested ways to attack the problem from the tweets I received:\nEducation \u0026amp; awareness - Ensure that users not only understand where they should keep confidential data but also ensure they understand how to classify the data they\u0026rsquo;re handling. Provide alternatives - If users like the functionality of a particular product, try to purchase an enterprise version of the product or re-create the product internally. Users will be more likely to use the approved version of the product and the company will have a bit more control over the data. Top-down policies \u0026amp; enforcement - Make policies that define where data can and cannot go and follow that up with enforcement and accountability. Deny access - Set firewall or DLP policies to disallow access to certain products while on the corporate network. This doesn\u0026rsquo;t cover situations where employees are off the corporate network. Many people suggested a blend between educating, providing alternatives, and enforcement. This is a real change for corporate IT and security departments that would normally opt for denying access to unapproved applications entirely. This quickly turns into a game of cat-and-mouse in which there are no clear winners.\nTake an example like Evernote. If I was blocked from accessing it at work, I could VPN into another location and send Evernote over the VPN. If VPN access was blocked, I could start an ssh proxy and send the Evernote traffic through it. If ssh was blocked, I could remotely access another system via RDP or VNC where Evernote was installed and use it there. The truly frustrated user might invest in a 3G/4G device and use that in the office instead. That\u0026rsquo;s even worse for the security department since none of their traffic would be passing through the corporate network.\nHere are my suggestions for protecting data at a modern company:\nListen to your users - Find out why users like a particular third party application and why they don\u0026rsquo;t like the current tools provided by the company. Learn about the types of data they\u0026rsquo;re storing on that third party application. 
Regain some control of your data through alternatives - If your users prefer a particular application, try to purchase an enterprise or self-hosted version of the application. Your users will be pleased since they get the functionality they expect and the security teams can gain a little more control over the data stored in the application. Make a solid data classification policy - Creating an easy to use data classification policy is the first step to securing your data through awareness. Employees need to identify the sensitivity of the data they\u0026rsquo;re handling before they can know what they can and can\u0026rsquo;t do with it. Make the data classifications easy to identify and ensure that users have an escalation point they can use when they have questions or they need to release sensitive data. Create enforcement policies - If a user deliberately disobeys corporate policy, this where the rubber meets the road. Ensure that the policy is fair to users of various technical levels within the company and vet it thoroughly with your legal and HR departments. These enforcement policies may be required by various compliance programs, so check to see if they\u0026rsquo;re on paper but not enforced. Educate users about sensitive data - Humanize your data classification policy and help users understand how to identify and handle sensitive data. Remind employees about the importance of company data and what can happen if it was misplaced or stolen. There will be a significant amount of questions coming from this process so be sure that you\u0026rsquo;re ready to tackle them. If you do this right, you\u0026rsquo;ll get employees policing themselves and their peers. Rinse and repeat - Regularly check in with users to verify that the internal applications are meeting their needs. Go through the awareness work on a regular basis. When policies become dormant or ineffective, revise them to meet the current needs. This problem isn\u0026rsquo;t going away anytime soon and it\u0026rsquo;s rapidly evolving. Your corporate security department must evolve with it. A coworker of mine hit the nail on the head with this:\n@rackerhacker that's probably the number 1 security dilemma for the next two years. \u0026mdash; letterj (@letterj) March 3, 2013 The best thing about this approach is that it scales better and is more effective than denying access. It takes a significant amount of work up front for a corporate security department, but it pays off in the end. Employees soon call out other employees for poor security hygiene and they become informal delegates of the corporate security team. 
Security can go viral in your organization just like the usage of third party tools.\nThe key to success is driving security innovation within your company that equals or outpaces the innovation coming from third party applications.\nNew tools and services may appear on a daily basis, but if your employees know what belongs there and what doesn\u0026rsquo;t, they\u0026rsquo;ll do your work for you.\n","date":"3 March 2013","permalink":"/p/controlling-sensitive-company-data-means-losing-some-control-of-it/","section":"Posts","summary":"This year\u0026rsquo;s RSA Conference was full of very useful content but the most useful session for me was a peer to peer discussion regarding BYOD on mobile devices.","title":"Controlling sensitive company data means losing some control of it"},{"content":"","date":null,"permalink":"/tags/encryption/","section":"Tags","summary":"","title":"Encryption"},{"content":"","date":null,"permalink":"/tags/pgp/","section":"Tags","summary":"","title":"Pgp"},{"content":" I\u0026rsquo;ve been a big fan of the GPGTools suite for Mac for quite a while but I discovered some neat features when right-clicking on a file in Finder today. It\u0026rsquo;s a bit disappointing that I didn\u0026rsquo;t find these sooner!\nEncrypting files is simple: just click OpenPGP: Encrypt File and a window will pop asking you which key you\u0026rsquo;d like to use for encryption. You also have the option of encrypting it with a password. Decrypting, signing, and validating files is easy and extremely fast. In addition, you\u0026rsquo;ll get Growl notifications upon success or failure.\nGPGTools also integrates with Mail.app to allow for seamless signing, encrypting, decrypting and verification of email content. There\u0026rsquo;s a preview version available that integrates quite well with Mountain Lion\u0026rsquo;s Mail.app, but you can only acquire it via donation.\n","date":"8 February 2013","permalink":"/p/quick-access-to-openpgp-tasks-with-gpgtools-in-os-x/","section":"Posts","summary":"I\u0026rsquo;ve been a big fan of the GPGTools suite for Mac for quite a while but I discovered some neat features when right-clicking on a file in Finder today.","title":"Quick access to OpenPGP tasks with GPGTools in OS X"},{"content":"My new role has caused me to look at information security in a different way. It\u0026rsquo;s always been a hobby for me but I enjoy the challenge of making it my focus each day.\nMany companies seem to make a natural progression in security as they grow larger, bring on larger accounts, or find themselves subject to regulation or compliance requirements. That gradual process is usually more straightforward than the reactive process brought on by a security breach and it ends up delivering better overall results for the company.\nThis reactive process seems oddly similar to the way my son has learned to eat. Confused? Keep reading.\nEntirely oblivious\nThis is how my son first got started. He was so busy trying to figure out how to eat that he had no idea how much of a mess he was making. Eventually, someone would either step in all of the dropped food or spilled juice and it would be all over the kitchen.\nIf you replace the food and juice with information at a small company, you can see how the same would apply. Many startups and small businesses are focused so heavily on building a product or brand that they forget about the importance of securing the data they are generating and collecting. 
Everything from trade secrets to sensitive customer data is at risk of being lost. Basic security measures are taken and there\u0026rsquo;s usually no way to know if a breach has occurred and how deep the breach has gone.\nPurely reactionary\nEventually my son realized that making a mess wasn\u0026rsquo;t a good thing and he started to react whenever he ended up with a lap full of spaghetti. He would notice the problem and cry for someone else to come and help. I\u0026rsquo;d clean him up and he was back to normal again. The food would end up in his lap again, he would cry, and I\u0026rsquo;d be back to clean it up.\nCompanies find themselves in this situation when they\u0026rsquo;ve been hit with a breach previously and a new issue has appeared. Their security stance has only changed a little and they\u0026rsquo;re able to determine that something has happened after it has happened. Companies in this stage may consider creating a team focused on security issues or they may look to outside contractors or consultants for help. Much of the focus now shifts to answering \u0026ldquo;how do we prevent this from happening again?\u0026rdquo;\nPartially proactive\nAs my son became more skillful at working with a fork and a spoon, he was able to be more focused on eating and he made fewer messes. They may have occurred less frequently but when they did occur, his clothes still needed to be washed and he was still quite upset. He knew what to watch out for and he knew which foods were going to present a particular challenge. It was obvious that he was putting in much more effort to eat spaghetti than he would with something simple like crackers.\nThis stage in a company\u0026rsquo;s development usually involves a dedicated or semi-dedicated security team that is beginning to understand the threats and risks involved with the company\u0026rsquo;s operation. They\u0026rsquo;re putting focus in certain higher-risk areas but there\u0026rsquo;s still not a lot of proactive work being done to limit the damage from security breaches. For example, a company might institute stricter firewall rules and OS patching for their most important servers but they might not have any security within their internal network. This would allow an attacker free reign over the environment if they can take over one of the servers.\nPassionately proactive\nWhen my son eats, he does quite a few things to ensure success. First off, he sits down and asks for his chair to be pushed in before he eats. He wants a paper towel close by in case something bad happens. With certain foods, he knows the chance of making a mess is higher and he tries to put less of it on his fork. He\u0026rsquo;s determined to not let food get in his lap, and when it does, he wants to ensure that his clothes stay as clean as possible.\nCompanies that reach this stage have now realized the risks involved in the operation of their business and they\u0026rsquo;ve determined how to reduce the impact of a breach. They\u0026rsquo;re consciously aware that they\u0026rsquo;re a target and they are taking an offensive security stance. These companies often test their own security measures to make sure that they\u0026rsquo;re effective against the most frequently seen threats. Their security posture isn\u0026rsquo;t perfect, but they are able to react more efficiently (and with less chaos) when a serious issue presents itself.\nSo let\u0026rsquo;s summarize…\nSome readers may think this post is way too generalized. 
However, the generalization is the point I\u0026rsquo;m trying to make. Creating a security mindset within a company is generally the easy part; applying it is where things get tough. The concept of information security is actually quite simple: ensure that information is readily available to people who should be able to access it and ensure it\u0026rsquo;s not available for people who shouldn\u0026rsquo;t. If you\u0026rsquo;re starting a small business or you\u0026rsquo;re working for one right now, build your products and your infrastructure with security in mind. Your other option is to retrofit it later, but you\u0026rsquo;ll surely make a mess.\n","date":"13 January 2013","permalink":"/p/what-my-toddler-taught-me-about-information-security/","section":"Posts","summary":"My new role has caused me to look at information security in a different way.","title":"What my toddler taught me about information security"},{"content":"","date":null,"permalink":"/tags/display/","section":"Tags","summary":"","title":"Display"},{"content":"Although the X1 Carbon has a much better looking display than the T430s, it still looked a bit washed out when I compared it to other monitors right next to it. The entire display had a weak blue tint and it was difficult to use for extended periods, especially at maximum brightness.\nA quick Google search took me to a LaunchPad entry about a better ICC profile for the X1 Carbon. After applying the ICC file via GNOME Control Center\u0026rsquo;s Color panel, the display looks fantastic.\nFeel free to download a copy of the color profile and try it for yourself:\nOriginal Link ","date":"8 January 2013","permalink":"/p/fixing-the-lenovo-x1-carbons-washed-out-display/","section":"Posts","summary":"Although the X1 Carbon has a much better looking display than the T430s, it still looked a bit washed out when I compared it to other monitors right next to it.","title":"Fixing the Lenovo X1 Carbon’s washed out display"},{"content":"","date":null,"permalink":"/tags/mint/","section":"Tags","summary":"","title":"Mint"},{"content":"UPDATE: I\u0026rsquo;ve found a better configuration via another X1 Carbon user and there\u0026rsquo;s a new post with all the details.\nThe Lenovo X1 Carbon comes with a pretty useful clickpad just below the keyboard, but the default synaptics settings in X from a Fedora 17 installation aren\u0026rsquo;t the best for this particular laptop. I found some tips about managing clickpads in a Github Gist about the Samsung Series 9 and I adjusted the values for the X1. 
To get my configuration, just create /etc/X11/xorg.conf.d/10-synaptics.conf and toss this data in there:\nSection \u0026#34;InputClass\u0026#34; Identifier \u0026#34;touchpad catchall\u0026#34; Driver \u0026#34;synaptics\u0026#34; MatchIsTouchpad \u0026#34;on\u0026#34; MatchDevicePath \u0026#34;/dev/input/event*\u0026#34; Option \u0026#34;TapButton1\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;TapButton2\u0026#34; \u0026#34;3\u0026#34; Option \u0026#34;TapButton3\u0026#34; \u0026#34;2\u0026#34; Option \u0026#34;VertTwoFingerScroll\u0026#34; \u0026#34;on\u0026#34; Option \u0026#34;HorizTwoFingerScroll\u0026#34; \u0026#34;on\u0026#34; Option \u0026#34;HorizHysteresis\u0026#34; \u0026#34;50\u0026#34; Option \u0026#34;VertHysteresis\u0026#34; \u0026#34;50\u0026#34; Option \u0026#34;PalmDetect\u0026#34; \u0026#34;1\u0026#34; Option \u0026#34;PalmMinWidth\u0026#34; \u0026#34;5\u0026#34; Option \u0026#34;PalmMinZ\u0026#34; \u0026#34;40\u0026#34; EndSection There are a few important settings here to note:\nTapButtonX – this sets up the single, double and triple taps to match up to left, right and middle mouse clicks respectively Vert/HorizHysteresis – reduces movement during and between taps Palm* – enables palm detection while you\u0026rsquo;re typing with some reasonable settings You will need to restart X (or reboot) to apply these settings from the configuration file. If you want to test the settings before restarting, you can apply individual adjustments with synclient without any restarts:\nsynclient \u0026#34;HorizHysteresis=50\u0026#34; ","date":"28 December 2012","permalink":"/p/handy-settings-for-the-touchpadclickpad-in-the-lenovo-x1-carbon/","section":"Posts","summary":"UPDATE: I\u0026rsquo;ve found a better configuration via another X1 Carbon user and there\u0026rsquo;s a new post with all the details.","title":"Handy settings for the touchpad/clickpad in the Lenovo X1 Carbon"},{"content":"","date":null,"permalink":"/tags/synaptics/","section":"Tags","summary":"","title":"Synaptics"},{"content":"Ever since I saw QuickSilver for the first time, I\u0026rsquo;ve been hooked on quick application launchers. I\u0026rsquo;ve struggled to find a barebones, auto-completing application launcher in Linux for quite some time. My search has ended with dmenu.\nI stumbled upon dmenu after trying out the i3 tiling window manager and I was hooked almost immediately. It\u0026rsquo;s extremely fast, unobtrusive, and the auto-completion is really intuitive. Another added bonus is that there is no daemon or window manager hook required for the launcher to operate.\nInstalling dmenu on Fedora is as easy as:\nyum install dmenu XFCE is my desktop environment of choice and the dmenu integration is pretty simple:\nApplications Menu \u0026gt; Settings \u0026gt; Keyboard Click the Application Shortcuts tab Click Add In the Command box, enter /usr/bin/dmenu and press OK On the next screen, enter a key combination to launch dmenu (I use LCTRL-SPACE) Click OK From now on, you can press your key combination and start typing the name of any executable application in your path for dmenu to run. 
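The launcher's appearance can also be tuned with a handful of command-line flags, and most dmenu packages ship a dmenu_run wrapper that builds the list of executables in your path and runs whatever you pick. A rough example of a fancier binding (the colors are placeholders, so season to taste):
/usr/bin/dmenu_run -i -b -nb '#222222' -nf '#cccccc' -sb '#285577' -sf '#ffffff'
The -i flag makes the matching case-insensitive and -b moves the bar to the bottom of the screen.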
If you launch dmenu accidentally, just press ESC to close it.\n","date":"27 December 2012","permalink":"/p/launch-applications-quickly-with-dmenu-in-xfce/","section":"Posts","summary":"Ever since I saw QuickSilver for the first time, I\u0026rsquo;ve been hooked on quick application launchers.","title":"Launch applications quickly with dmenu in XFCE"},{"content":"","date":null,"permalink":"/tags/xfce/","section":"Tags","summary":"","title":"Xfce"},{"content":"Python\u0026rsquo;s virtual environment capability is extremely handy for situations where you don\u0026rsquo;t want the required modules for a particular python project to get mixed up with your system-wide installed modules. If you work on large python projects (like OpenStack), you\u0026rsquo;ll find that the applications may require certain versions of python modules to operate properly. If these versions differ from the system-wide python modules you already have installed, you might get unexpected results when you try to run the unit tests.\nIf you build a virtual environment and inspect the files found within the bin directory of the virtual environment, you\u0026rsquo;ll find that the first line in the executable scripts is set to use the python version specific to that virtual environment. Here\u0026rsquo;s an example from a virtual environment containing the OpenStack glance project:\n#!/home/major/glance/.venv/bin/python # EASY-INSTALL-SCRIPT: \u0026#39;glance==2013.1\u0026#39;,\u0026#39;glance-api\u0026#39; __requires__ = \u0026#39;glance==2013.1\u0026#39; import pkg_resources pkg_resources.run_script(\u0026#39;glance==2013.1\u0026#39;, \u0026#39;glance-api\u0026#39;) However, what if I wanted to take this virtual environment and place it somewhere else on the server where multiple people could use it? The path in the first line of the scripts in bin will surely break.\nThe first option is to make the virtual environment relocatable. This can produce unexpected results for some software projects, so be sure to test it out before trying to use it in a production environment.\n$ virtualenv --relocatable .venv A quick check of the same python file now shows this:\n#!/usr/bin/env python2.6 import os; activate_this=os.path.join(os.path.dirname(os.path.realpath(__file__)), \u0026#39;activate_this.py\u0026#39;); execfile(activate_this, dict(__file__=activate_this)); del os, activate_this # EASY-INSTALL-SCRIPT: \u0026#39;glance==2013.1\u0026#39;,\u0026#39;glance-api\u0026#39; This allows for the path to the activate_this.py script to be determined at runtime and allows you to move your virtual environment wherever you like.\nIn situations where one script within bin would import another script within bin, things can get a little dicey. These are edge cases, of course, but you can get a similar effect by adjusting the path in the first line of each file within bin to the new location of the virtual environment. If you move the virtual environment again, be sure to alter the paths again with sed.\n","date":"25 November 2012","permalink":"/p/relocating-a-python-virtual-environment/","section":"Posts","summary":"Python\u0026rsquo;s virtual environment capability is extremely handy for situations where you don\u0026rsquo;t want the required modules for a particular python project to get mixed up with your system-wide installed modules.","title":"Relocating a python virtual environment"},{"content":"The biggest gripe I have about my Android phone is that the Bluetooth connectivity is very finicky with my car. 
Sometimes the phone and car won\u0026rsquo;t connect automatically when I start my car and there are other times where the initial connection is fine but then the car loses the connection to the phone while I\u0026rsquo;m driving. The problem crops up in multiple cars and the biggest suspect I\u0026rsquo;ve found so far is the Galaxy S III\u0026rsquo;s use of Bluetooth Low Energy (BLE).\nI stumbled upon an application in the Google Play Store called Bluetooth Keepalive and decided to spend $1.50 to see if it could fix my problem. The application itself is quite simple:\nI configured it to start at boot and run as a background service via the configuration menu. After two days of using the application, I haven\u0026rsquo;t had any weird Bluetooth issues in the car. My phone connects as soon as I start my car and it stays connected throughout my trip. There were some situations where my phone used to think it was connected to my car even when I was miles away and those problems are gone as well. Battery life seems to be unaffected by the change.\nI\u0026rsquo;m currently running CyanogenMod 10 Nightly w/Android 4.1.2 on an AT\u0026amp;T Galaxy S III (SGH-I747). Your mileage might vary on other ROM\u0026rsquo;s and models.\n","date":"20 November 2012","permalink":"/p/fixing-finicky-bluetooth-on-the-samsung-galaxy-s-iii/","section":"Posts","summary":"The biggest gripe I have about my Android phone is that the Bluetooth connectivity is very finicky with my car.","title":"Fixing finicky Bluetooth on the Samsung Galaxy S III"},{"content":"I\u0026rsquo;m still quite pleased with my Samsung Galaxy SIII but there are some finicky Bluetooth issues with my car that I simply can\u0026rsquo;t figure out. After discovering logcat, I wondered if there was a way to get logs sent from an Android device to a remote syslog server. It\u0026rsquo;s certainly possible and it actually works quite well.\nMy phone is currently rooted with CyanogenMod 10 installed. Some of these steps will require rooting your device. Be sure to fully understand the implications of gaining root access on your particular device before trying it.\nGet started by installing Titanium Backup and Logcat to UDP. Once they\u0026rsquo;re installed, you\u0026rsquo;ll need to enable USB debugging by accessing Settings \u0026gt; Developer Options:\nNow, run Titanium Backup and click the Backup/Restore tab at the top. Find the “Logcat to UDP 0.5” application and hold your finger on it for a few seconds. Press Convert to system app and wait for that to complete:\nNow, run the Logcat to UDP application and configure it. Put in a server IP address for the remote syslog server and choose a remote port where your syslog server is listening. Be sure to check the Filter log messages box and put in a reasonable set of things to watch. My standard filter is:\nSensors:S dalvikvm:S MP-Decision:S overlay:S RichInputConnection:S *:V That filter says that I don\u0026rsquo;t want to see data from the Sensors process (and some other chatty daemons) but I want verbose logs from everything else. The full details on logcat filters can be found in Google\u0026rsquo;s Android Developer Documentation.\nWhen all that is done, you can begin receiving syslog data pretty quickly on a CentOS or Fedora server. 
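Before touching the syslog daemon, a quick packet capture on the server is an easy way to prove that the phone is actually sending something (this assumes the default port of 514, so adjust it to match whatever you picked in the app):
# tcpdump -n -i any udp port 514
Once you see UDP datagrams arriving from the phone, the rest is just syslog configuration.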
For CentOS, you only need to make a small adjustment to /etc/rsyslog.conf to begin receiving logs:\n# Provides UDP syslog reception $ModLoad imudp $UDPServerRun 514 The standard port is 514, but be sure to change it to match your configuration in the Logcat to UDP application on your phone. Restart rsyslog and you should be able to see logs flowing in from your Android device:\n# /etc/init.d/rsyslog restart Shutting down system logger: [ OK ] Starting system logger: [ OK ] # tail /var/log/messages Nov 4 20:44:04 home.local Iridium: E/ThermalDaemon( 264): ACTION: CPU - Setting CPU[0] to 1512000 Nov 4 20:44:04 home.local Iridium: E/ThermalDaemon( 264): ACTION: CPU - Setting CPU[1] to 1512000 Nov 4 20:44:04 home.local Iridium: E/ThermalDaemon( 264): Fusion mitigation failed - QMI registration incomplete Nov 4 20:44:07 home.local Iridium: I/ActivityManager( 624): START {act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10200000 cmp=com.cyanogenmod.trebuchet/.Launcher u=0} from pid 624 Nov 4 20:44:07 home.local Iridium: I/ActivityManager( 624): START {act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10200000 cmp=com.cyanogenmod.trebuchet/.Launcher u=0} from pid 624 Nov 4 20:44:07 home.local Iridium: I/ActivityManager( 624): START {act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10200000 cmp=com.cyanogenmod.trebuchet/.Launcher u=0} from pid 624 Nov 4 20:44:08 home.local Iridium: I/ActivityManager( 624): START {act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10200000 cmp=com.cyanogenmod.trebuchet/.Launcher u=0} from pid 624 Nov 4 20:44:09 home.local Iridium: E/ThermalDaemon( 264): Sensor \u0026#39;tsens_tz_sensor0\u0026#39; - alarm raised 1 at 57.0 degC Nov 4 20:44:09 home.local Iridium: E/ThermalDaemon( 264): ACTION: CPU - Setting CPU[0] to 1134000 Nov 4 20:44:09 home.local Iridium: E/ThermalDaemon( 264): ACTION: CPU - Setting CPU[1] to 1134000 If you\u0026rsquo;re not seeing logs on your remote server, be sure to check the remote server\u0026rsquo;s firewall since the default rules on a CentOS or Fedora server will block syslog traffic. If you want to generate logs quickly for testing in CyanogenMod, just repeatedly press the home button. A log line from the trebuchet launcher should appear each time.\n","date":"4 November 2012","permalink":"/p/log-android-events-remotely-to-a-syslog-server/","section":"Posts","summary":"I\u0026rsquo;m still quite pleased with my Samsung Galaxy SIII but there are some finicky Bluetooth issues with my car that I simply can\u0026rsquo;t figure out.","title":"Log Android events remotely to a syslog server"},{"content":"I had a peculiar situation today where I cloned a repository into a directory which was inside another repository. Here\u0026rsquo;s what I was doing:\n$ git clone git://gitserver/repo1.git repo1 $ cd repo1 $ git clone git://gitserver/repo2.git repo2 $ git clean -fxd Removing repo2/ $ ls -d repo2 repo2 The second repository existed even after a git clean -fxd. 
I stumbled upon a GitHub page within the capistrano project that explained the problem - an extra -f was required:\n$ git clean -ffxd Removing repo2/ $ ls -d repo2 ls: cannot access repo2: No such file or directory ","date":"24 October 2012","permalink":"/p/using-git-clean-to-remove-subdirectories-containing-git-repositories/","section":"Posts","summary":"I had a peculiar situation today where I cloned a repository into a directory which was inside another repository.","title":"Using git clean to remove subdirectories containing git repositories"},{"content":"","date":null,"permalink":"/tags/customer-service/","section":"Tags","summary":"","title":"Customer Service"},{"content":"This post covers the second half of my experience moving back to a Linux desktop but I figured it was a good opportunity to focus on the ThinkPad T430s itself as well as the Lenovo ordering experience. If you follow me on Twitter, you know about my service experience. I\u0026rsquo;ll save that for the end of this post.\nThis post is a little on the long side, so here\u0026rsquo;s a TL;DR for you if you\u0026rsquo;re in a big hurry:\nGood: build quality, port quantity/location, input devices, battery life, quiet operation Bad: LCD display is very washed out and has a blue tint, poor sales support from Lenovo Suggestions: Don\u0026rsquo;t buy via Lenovo, try GovConnection and get faster delivery with better service The Laptop\nIf you asked me for a one-sentence description of the T430s, I\u0026rsquo;d have to say it\u0026rsquo;s a well-built, lightweight laptop with a good keyboard and a less than mediocre screen.\nThe island-style keyboard was very easy for me to use coming from a MacBook Pro with chiclet keys. The spacing between the keys and the size of the keys themselves were good. I kept pushing the Function key when I meant to push Control, but that can be quickly swapped in the BIOS to make things easier.\nEven coming from the MacBook\u0026rsquo;s amazing trackpad, the trackpad on the ThinkPad was superb. It tracked gestures and taps extremely well without much configuration in Linux or Windows. It\u0026rsquo;s light years ahead of the Samsung Series 9\u0026rsquo;s trackpad and it\u0026rsquo;s marginally better than the latest Dell laptops. The \u0026ldquo;nipple\u0026rdquo; controller wedged in the keyboard was easy to use and the extra set of mouse buttons below the keyboard (but above the trackpad) were convenient.\nI really like having the hardware WiFi on/off switch on the front right side of the laptop for situations where I want to ensure my laptop doesn\u0026rsquo;t start searching for access points before I can be sure it\u0026rsquo;s connecting to the right one. The USB ports were well placed and the \u0026ldquo;always on\u0026rdquo; port on the back is handy for charging phones and tablets when the laptop is powered off (you can disable that in the BIOS if you prefer). The fingerprint reader hasn\u0026rsquo;t been tested since there aren\u0026rsquo;t any open source drivers available for it in Linux.\nIt\u0026rsquo;s apparent that build quality is above average with this laptop. It\u0026rsquo;s certainly not terribly attractive (when compared to a Mac), but for a solid business laptop, it\u0026rsquo;s ahead of the curve. The screen hinges are tight and they don\u0026rsquo;t flex even when typing on a wobbly surface. This really helps when you\u0026rsquo;re using the webcam with the laptop resting on your legs. 
The ThinkLight above the webcam is a little quirky (this is my first ThinkPad) but it is really useful in lowlight situations.\nNow, about that screen. I ordered my laptop with the 1600×900 HD+ screen (best one available). The color representation is downright terrible. Almost everything is washed out with a blue tint. If you open a web page with a mostly white background, the text is readable but it hurts my eyes to read it. You can almost see gaps between the pixels on the screen at regular intervals and it gets really distracting when you\u0026rsquo;re editing photos. Even after applying different monitor profiles in Linux and Windows, I\u0026rsquo;ve found the screen to be frustrating to use. The panel on mine is a Samsung panel and I\u0026rsquo;d expect a better performing screen from them.\nOutside of the screen itself, the video performance of the Intel HD4000 is impressive. Onboard GPU\u0026rsquo;s have really come a long way. I hooked up a second monitor via the DisplayPort and found that the graphics performance was still extremely good. You can play games like Civilization V on this laptop with onboard graphics pretty easily.\nAll in all, I really do like the ThinkPad. If the screen doesn\u0026rsquo;t bother you, the remainder of this laptop is very convenient and powerful. However, for my use, I need something that performs well for business work as well as creative work. I\u0026rsquo;ve yet to find something better than the MacBooks for this kind of workload.\nThe Service\nI generally try to start out with something positive when I try to review something, but the only positive thing I can say about Lenovo\u0026rsquo;s ordering experience is that it\u0026rsquo;s consistent. Consistently bad. Here\u0026rsquo;s a timeline of my first order:\nSep 3 - Ordered laptop. Sales page said laptop would ship around Sep 12. Sep 4 - Order confirmed. Ship date pushed to Sep 27. Sep 8 - Order status showed \u0026ldquo;Released to Manufacturing\u0026rdquo;. Sep 10 - Order status page showed laptop shipped Sep 9 and would be delivered Sep 16. Sep 12 - Received an email saying a part had delayed my shipment. Sep 17 - Order status page shows laptop shipped Sep 14 and would be delivered Sep 21. I emailed Lenovo for more detail. Sep 18 - Lenovo representative replies saying the graphics card was the constrained part. Due to ship in 3-4 weeks or less. I emailed back asking about canceling the order. My email was never acknowledged. Sep 19 - I emailed Lenovo stating that I wanted my order cancelled immediately. My email was never acknowledged. I received an automated email stating that my order was delayed and would ship within 30 days. I called Lenovo\u0026rsquo;s support line to cancel and waited on hold for almost two hours. Gave up. Sep 20 - I called Lenovo again and my call was answered after 90 minutes on hold. The representative tried to talk me out of canceling several times and then finally canceled it. My cancellation was only a \u0026ldquo;request\u0026rdquo;, not a guarantee. I was referred to sales and they wanted me to order a different laptop - I declined. Sep 21 - My order was confirmed cancelled. The salesperson suggested ordering a different laptop without the NVIDIA Optimus graphics card, so I did that on September 21. The order page showed a ship date of September 27, so I was quite pleased. As soon as I paid for the order, the ship date immediately slid out to October 16. 
Needless to say, I felt like I\u0026rsquo;d been bait-and-switched once again.\nIt\u0026rsquo;s important to note that through both orders, Lenovo\u0026rsquo;s public-facing order status page worked while the internal order status page accessed via my account page showed timeouts. The internal order status page hasn\u0026rsquo;t worked before, during or after shipping at all after multiple attempts. I\u0026rsquo;ve notified them more than once about it so they could make repairs.\nSomeone on Twitter suggested trying GovConnection to order a ThinkPad since they keep models in stock with fast shipping. I ordered one on a Sunday and it shipped on a Monday. The back panel behind the screen was badly damaged and they cross-shipped me a replacement the next day. Their service has been superb and they provided timely updates for my order.\nIn the end, Lenovo did actually ship my second laptop (without the NVIDIA Optimus card) and it did arrive slightly ahead of schedule. I\u0026rsquo;ll be sending it back to them since I already received a ThinkPad from GovConnection.\nSome of you might be saying that I should expect some delays when a laptop is built to order. I\u0026rsquo;m generally fine with that and I\u0026rsquo;ve had minor delays from Dell and Apple in the past with previous orders. The big difference is that the other companies warned me about the delays prior to purchase and also warned me about the parts I added that might cause delays to my order. That bit of forward thinking allowed me to decide whether a certain part was important to me or if I was able to wait for the product to arrive. When it comes down to communication, Lenovo has a lot to learn.\n","date":"21 October 2012","permalink":"/p/lenovo-thinkpad-t430s-review/","section":"Posts","summary":"This post covers the second half of my experience moving back to a Linux desktop but I figured it was a good opportunity to focus on the ThinkPad T430s itself as well as the Lenovo ordering experience.","title":"Lenovo ThinkPad T430s review"},{"content":"Troy Toman delivered a great keynote this morning about OpenStack and how Rackspace uses it:\nI\u0026rsquo;m extremely glad to be a tiny part of the OpenStack story and I\u0026rsquo;m proud to see where it\u0026rsquo;s going today. My goals are to make OpenStack environments easier to deploy, maintain and administer. Along with other Rackers, we\u0026rsquo;re making this happen each day and we plan to share our successes and hardships with the community.\nIf you\u0026rsquo;re interested in working on OpenStack at Rackspace, feel free to reach out to me or learn more about our open positions.\n","date":"18 October 2012","permalink":"/p/proud-to-be-a-part-of-openstack-at-rackspace/","section":"Posts","summary":"Troy Toman delivered a great keynote this morning about OpenStack and how Rackspace uses it:","title":"Proud to be a part of OpenStack at Rackspace"},{"content":"Although I\u0026rsquo;ve been exclusively using a Mac for everything but servers since about 2008, I found myself considering a move back to Linux on the desktop after seeing how some people were using it at LinuxCon. My conversion from the iPhone to Android was rocky for a very brief period and now I can\u0026rsquo;t think of a reason to ever go back. 
I approached Linux in the same way and ordered a new ThinkPad shortly after returning from the conference.\nThe ThinkPad ordering experience was one of the worst retail experiences I\u0026rsquo;ve had so far but that\u0026rsquo;s a separate discussion for a separate post (that\u0026rsquo;s on the way soon). This post is focused only on my experience getting back into a Linux desktop for the first time in four years.\nThe Good\nLinux hardware support has come a really long way over the past few years. All of my hardware was recognized and configured in Fedora 17 without any action on my part. The fingerprint reader has some proprietary firmware that couldn\u0026rsquo;t be automatically loaded (which didn\u0026rsquo;t bother me much). Getting network connectivity via ethernet, WiFi and even a 4G USB stick was surprisingly simple. Battery life was longer in Linux than in Windows and I was glad to see that the power management features were working well and already configured how I\u0026rsquo;d like them to be.\nI knew I wasn\u0026rsquo;t a fan of GNOME 3 already, so I loaded up KDE and XFCE. Both worked extremely well with great performance. Desktop effects were really responsive and I never saw flickering, crashes, or artifacts. Those were a lot more frequent previously. I eventually settled into the i3 window manager and got into a keyboard-based workflow with tiled windows. It was shocking to see how much time I could save with a tiled window manager when I wasn\u0026rsquo;t pushing and resizing windows every time I opened a new app or a new window.\nThe raw X performance has improved drastically. In i3, I rarely found myself waiting for anything to render and any changes to my desktop were speedy. Font smoothing and rendering has also come a long way. OS X still leads this category for me, but I was glad to see some serious advancements in Linux in this area.\nOne of the biggest worries I had about Linux was email. I need Exchange connectivity with good calendaring support for work and I didn\u0026rsquo;t enjoy using Evolution in the past. My choice this time around was Thunderbird with the Lightning plugin for calendaring and Enigmail for GPG signing and encryption. I stacked that on the davmail gateway for Exchange connectivity. This worked surprisingly well. The performance could have been a bit better, but as far as functionality is concerned, everything I tried worked. Creating meeting invitations, responding to meeting invitations, handling email, and searching the GAL was a relatively smooth experience.\nChrome was easy to install and very stable. Using Skype with video was a breeze and I was on TeamSpeak calls with my team at work within a few minutes. I had a slew of terminals to choose from and I settled on terminator since I could have tiled terminals in my tiled window manager (which I\u0026rsquo;m sure Xzibit would approve).\nVirtualization was simple and I was a few package installs away from running KVM. Xen also worked well via virt-manager and the performance was excellent. I installed VMWare Workstation since it was my favorite before, but it caused stack traces in the kernel and I eventually had to remove it.\nThe Bad\nI have yet to find a Twitter client for Linux that I enjoy using. It took me forever to find even one application which used Twitter\u0026rsquo;s streaming API (which was released almost three years ago). Many of the applications were either difficult to use, had confusing UI\u0026rsquo;s, or wasted so much screen real estate that they became a nuisance. 
Text-based clients looked good at first glance but then I became frustrated with the inability to quickly see conversations or see what a particular reply was referring to in my timeline. My current Mac client is Yorufukurou.\nMusic management was another sore spot. Some applications, like audacious, fit the bill perfectly for basic internet radio streaming and playing small albums. If I tried to look for an application to replace iTunes (library management, internet radio, podcasts, and sync with a mobile device), I ended up with Songbird, amarok and rhythmbox. Songbird was fair but lacked a lot of features that I was eager to get. At first, amarok and rhythmbox looked like winners but managing a library with them was taking much more time than I was willing to invest.\nThe ThinkPad screen is very washed out by default but I found quite a few forum posts talking about applying ICM profiles to correct it. Quite a few people made that adjustment in Windows with some very good results. I tried to do the same in Fedora 17 but struggled after working through several different methods. Fedora 18 is going to have gnome-color-manager from the start and it will probably make that process a little easier. Getting a DisplayLink adapter working in Fedora 17 was problematic but I\u0026rsquo;ve read that native support for configuring these devices is coming soon.\nConclusion\nLinux on the desktop has really improved a substantial amount but I\u0026rsquo;m leaning back towards the Mac. Although a portion of that decision centers around the Mac hardware, the majority of the decision hinges on my workflow and the quality of the applications available for my specific needs. I found myself much less distracted in Linux mainly because it was more difficult for me to interact with my coworkers and friends than it was on the Mac.\nFor a fair fight, I may try to get Linux going on my MacBook to ensure I\u0026rsquo;m comparing apples to apples. I\u0026rsquo;ll save that for later this year when Fedora 18 is released.\nKeep in mind that everyone has a unique workflow and mine may be much different than yours. I\u0026rsquo;m eager to read your comments and I welcome any feedback you have.\n","date":"12 October 2012","permalink":"/p/going-back-to-linux-as-a-desktop/","section":"Posts","summary":"Although I\u0026rsquo;ve been exclusively using a Mac for everything but servers since about 2008, I found myself considering a move back to Linux on the desktop after seeing how some people were using it at LinuxCon.","title":"Going back to Linux as a desktop"},{"content":"Automating package updates in CentOS 6 is a quick process and it ensures that your system receives the latest available security patches, bugfixes and enhancements. Although it\u0026rsquo;s easy and available right from yum on a normal CentOS 6 system, I still find that many people aren\u0026rsquo;t aware of it.\nBefore you enable automatic updates, you\u0026rsquo;ll want to ensure that you\u0026rsquo;re excluding certain packages which may be integral to your system. You can either make a list of those packages now or configure the automated updates so that you\u0026rsquo;re emailed a report of what needs to be installed rather than having those packages installed automatically.\nTo get started, install yum-cron:\nyum -y install yum-cron By default, it\u0026rsquo;s configured to download all of the available updates and apply them immediately after downloading. Reports will be emailed to the root user on the system. 
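Those reports only help if someone actually reads root's mailbox, so consider forwarding root's mail to an address you check. A minimal sketch using /etc/aliases (the address is obviously a placeholder, and this assumes a local MTA like postfix or sendmail is installed):
# echo "root: you@example.com" >> /etc/aliases
# newaliases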
To change these settings, just open /etc/sysconfig/yum-cron in your favorite text editor and adjust these lines:\n# Default - check for updates, download, and apply CHECK_ONLY=no DOWNLOAD_ONLY=no # Download the updates and email a report CHECK_ONLY=no DOWNLOAD_ONLY=yes # Don\u0026#39;t download the updates, just email a report CHECK_ONLY=yes DOWNLOAD_ONLY=no As mentioned earlier, if you want to exclude certain packages from these updates, just edit your /etc/yum.conf and add:\nexclude=kernel* mysql* The cron jobs from the yum-cron package are active immediately after installing the package and there\u0026rsquo;s no extra configuration necessary. The job will be run when your normal daily cron jobs are set to run.\n","date":"21 September 2012","permalink":"/p/automatic-package-updates-in-centos-6/","section":"Posts","summary":"Automating package updates in CentOS 6 is a quick process and it ensures that your system receives the latest available security patches, bugfixes and enhancements.","title":"Automatic package updates in CentOS 6"},{"content":"After getting Android-envy at LinuxCon, I decided to push myself out of my comfort zone and ditch my iPhone 4 for a Samsung Galaxy S III. It surprised a lot of people I know since I\u0026rsquo;ve been a big iPhone fan since the original model was released in 2007. I\u0026rsquo;ve carried the original iPhone, the 3GS, and then the 4. There have been good times and bad times, but the devices have served me pretty well overall.\nThe Good Stuff\nOne of my coworkers summed up Android devices pretty succinctly: \u0026ldquo;This will be the first phone that feels like your phone.\u0026rdquo; That\u0026rsquo;s what I like about it the most. I have so much more control over what my phone does and when it does it. It seems like there\u0026rsquo;s a checkbox or option list for almost every possible setting on the phone. Everything feels customizable (to a reasonable point). Even trivial things like configuring home screens and adjusting Wi-Fi settings seem to be more user-friendly.\nThe raw performance of the S3 handset is impressive. All of the menus are responsive and I rarely find myself waiting on the phone to do something. 4G LTE is extremely fast (but it does chow down on your battery) and it\u0026rsquo;s hard to tell when I\u0026rsquo;m on Wi-Fi and when I\u0026rsquo;m not. Photo adjustments are instantaneous and moving around in Chrome is snappy.\nAnother big benefit is that applications can harness the power of the Linux system under the hood (although some may require getting root access on your phone). Using rsync, ssh, FTP, and samba makes transferring data and managing the device much easier. It also allows you to set up automated backups to remote locations or to another SD card in your phone.\nThe Not-So-Good Stuff\nIf you\u0026rsquo;ve ever used a Mac along with Apple\u0026rsquo;s music devices, you know that the integration is tight and well planned. Moving over to Android has been really rough for me and the ways that I manage music. I gave DoubleTwist and AirSync a try but then I found that all of my music was being transcoded on the fly from AAC to another format. Syncing music took forever, quality was reduced, and the DoubleTwist music player on the phone was difficult to use. I downloaded SongBird and then tried to use Google Play Music but both felt inefficient and confusing.\nEventually, I found SSHDroid and started transferring music via ssh. 
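For the curious, the transfer itself is nothing fancy. Something along these lines does the trick, although the address, port, and paths here are only placeholders since SSHDroid lets you choose your own:
$ scp -P 2222 -r ~/Music/SomeAlbum root@192.168.1.50:/sdcard/Music/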
That worked out well but then I couldn\u0026rsquo;t find any of the music I uploaded on my phone. A friend recommended SDRescan since it forces the device to scan itself for any new media files. My current work flow involves uploading the music via ssh, rescanning for media files, and then listening to the new files with Apollo (from CyanogenMod, more on that later).\nBattery life on the S3 is well below what I expected but it sounds like it might be more the shortfall of the device rather than the software. The screen is large and it\u0026rsquo;s very bright even on the lowest settings. The battery settings panel on the phone regularly shows the screen as the largest consumer of energy on the phone. I did make some adjustments, like allowing Wi-Fi to switch off when the phone is asleep, which has helped with battery life. Disabling push email or IMAP IDLE has helped but it\u0026rsquo;s prevented me from getting some of the functionality I want.\nFinally, the pre-installed Samsung software was absolutely terrible. There were background processes running that were eating the battery and the interface was hard to use. I\u0026rsquo;m not sure what their target audience is, but it made coming over from the iPhone pretty difficult.\nTo Flash or Not To Flash\nVoiding the warranty and flashing the phone had me pretty nervous, but then again, I had quite a few coworkers who were experienced in the process and they had rarely experienced problems. Luckily, there is a great wiki page that walks you through the process. It\u0026rsquo;s a bit technical but I found it reasonably straightforward to follow. One of the nightly builds caused some problems with the GPS functionality on the phone but that was corrected in a day or two with another nightly build.\nUpgrading to new nightly ROMs is unbelievably simple. You can download them manually to your phone and then reboot into recovery mode to flash the phone or you can load up an application on the phone itself which will download the ROM images and install the new image after a quick reboot with one key press. Don\u0026rsquo;t forget to make backups just in case something goes wrong, though.\nMy Application List\nHere are my favorite applications so far:\n1Password Reader ConnectBot ES File Explorer Google Authenticator GPS Test K-9 Mail Notify My Android RunKeeper SDRescan SSHDroid Titanium Backup TouchDown Wifi Analyzer More Changes\nI\u0026rsquo;m waiting on my new ThinkPad T430s to ship and I\u0026rsquo;m told that Android phones are a bit easier to use within Linux than they are on a Mac. Not having the integrated USB support on the Mac is pretty frustrating. I\u0026rsquo;ll probably amend this post or write another one once I\u0026rsquo;m running Linux on my laptop and using my Android with it regularly.\n","date":"7 September 2012","permalink":"/p/one-week-with-android/","section":"Posts","summary":"After getting Android-envy at LinuxCon, I decided to push myself out of my comfort zone and ditch my iPhone 4 for a Samsung Galaxy S III.","title":"One week with Android"},{"content":"After a recent issue I had with some users in the Puppy Linux forums, I thought it might be prudent to write a post about how to monitor and protect your reputation online. This guide is mainly geared towards technical people who maintain some type of public presence. 
That should include folks who talk at conferences, contribute to high profile open source projects, or those who utilize social media to connect with other users and contributors.\nThe first part is monitoring. A monitoring solution should ideally be inexpensive, have a low lag time between a new mention and a notification, and it should be able to search a lot of resources.\nFor me, it made sense to use Google Alerts. I have as-it-happens searches in place for several things:\nmy full name frequently used handles/usernames on various communication mediums (like IRC, twitter, etc) the URL\u0026rsquo;s of web sites I maintain new links to web sites I maintain Google Alerts allow me to get notifications very quickly about new blog posts, forum posts, or other websites which mention something I find to be sensitive. The signal to noise ratio for my searches is quite good but it has taken some time to hone the queries down and reduce the useless notifications.\nIf you frequent certain IRC channels, you ought to consider setting up an IRC bouncer if the server administrators allow it. You\u0026rsquo;ll have the benefit of getting all of the logs from the channel even when you\u0026rsquo;re not actively at your computer and you may be able to spot things that need attention.\nProtecting your reputation is multi-faceted and immensely critical. The same communication mediums that you depend upon to spread your message and meet other people can be used against you in an instant. How many times have you seen hacked Twitter and Facebook accounts and then wondered: \u0026ldquo;I never would have thought someone would have targeted that person. I also figured that they would have protected their account a little more aggressively.\u0026rdquo;\nI\u0026rsquo;ve seen people with giant piles of alphabet soup (certifications) after their name (including CSO, CISSP, Security+) have their Twitter accounts hacked and I\u0026rsquo;ve had to tell them about it. It can happen to anyone but it\u0026rsquo;s up to you to make it extremely difficult for it to happen. Here are some tips which apply specifically to Twitter but could be loosely applied to almost anything you use daily:\nuse very strong passwords along with a solid password manager regularly audit the applications which have access to your account (via OAuth, API\u0026rsquo;s, etc) for critical accounts, force yourself to change the password regularly If you don\u0026rsquo;t get anything from this post, please understand this. The most critical piece of your personal infrastructure to protect is your email account. Think about it - where do your password resets go? Where do your domain name renewal notifications go? It\u0026rsquo;s the crux of your personal security. Even if you have a 100-character password with upper/lower-case letters, numbers, symbols and unicode characters, you\u0026rsquo;re totally unprotected when an attacker forces a password reset email and finds that your email account password is \u0026ldquo;p455w0rd\u0026rdquo;.\nFor those providers that offer two-factor authentication, you really should consider using it. The pain of two-factor auth may be annoying at first, but imagine the pain when you find your bank account emptied, credit card filled, iPhone/iPad/laptop wiped and your personal identification information stolen.\nI\u0026rsquo;ll wrap up this post by talking about what I mentioned at the start of this post: responding to someone who has dragged your name through the mud on false information. 
Respond promptly and succinctly. Let them know who you are (with proof via links or other means), that their statements are false, and then provide proof and redirection. You certainly don\u0026rsquo;t want to be overly agressive and condescending, but you don\u0026rsquo;t want to be passive about it either. Be assertive and protect what\u0026rsquo;s yours.\nMy grade school journalism teacher summed it up pretty well (I\u0026rsquo;ll paraphrase):\nYour credibility and reputation are the two best things you\u0026rsquo;ve got. Money and fame will come and go but you\u0026rsquo;ll always land on your feet if you keep your credibility. Your greatest asset is something that nobody else will help you protect.\n","date":"6 August 2012","permalink":"/p/monitoring-and-protecting-your-reputation-online/","section":"Posts","summary":"After a recent issue I had with some users in the Puppy Linux forums, I thought it might be prudent to write a post about how to monitor and protect your reputation online.","title":"Monitoring and protecting your reputation online"},{"content":"If you install vpnc via MacPorts on OS X, you\u0026rsquo;ll find that you have no openssl support after it\u0026rsquo;s built:\n$ sudo port install vpnc ---\u0026gt; Computing dependencies for vpnc ---\u0026gt; Cleaning vpnc ---\u0026gt; Scanning binaries for linking errors: 100.0% ---\u0026gt; No broken files found. $ sudo vpnc vpnc was built without openssl: Can\u0026#39;t do hybrid or cert mode. This will cause some problems if you\u0026rsquo;re trying to use VPN with a Cisco VPN concentrator which uses SSL VPN technology. The fix is an easy one. You\u0026rsquo;ll find a variant within the portfile itself:\n$ sudo port edit --editor cat vpnc | tail -7 variant hybrid_cert description \u0026#34;Enable the support for hybrid and cert modes in vpnc\u0026#34; { depends_lib-append port:openssl build.args-append \u0026#34;OPENSSL_GPL_VIOLATION=-DOPENSSL_GPL_VIOLATION OPENSSLLIBS=-lcrypto\u0026#34; } livecheck.type regex livecheck.url ${homepage} livecheck.regex \u0026#34;${name}-(\\\\d+(?:\\\\.\\\\d+)*)${extract.suffix}\u0026#34; Simply specify that you want the hybrid_cert variant on the command line when you install vpnc and you should be all set:\n$ sudo port install vpnc +hybrid_cert ---\u0026gt; Computing dependencies for vpnc ---\u0026gt; Deactivating vpnc @0.5.3_0 ---\u0026gt; Cleaning vpnc ---\u0026gt; Activating vpnc @0.5.3_0+hybrid_cert ---\u0026gt; Cleaning vpnc ---\u0026gt; Scanning binaries for linking errors: 100.0% ---\u0026gt; No broken files found. $ sudo vpnc unknown host `\u0026lt;gateway\u0026gt;\u0026#39; \u0026lt;/gateway\u0026gt; ","date":"1 August 2012","permalink":"/p/building-vpnc-with-openssl-support-via-macports-on-mac-os-x/","section":"Posts","summary":"If you install vpnc via MacPorts on OS X, you\u0026rsquo;ll find that you have no openssl support after it\u0026rsquo;s built:","title":"Building vpnc with openssl support via MacPorts on Mac OS X"},{"content":"Vitalie Cherpec contacted me back in May about his new hosted DNS offering, Luadns. I gave it a try and I offered to write a review about the service.\nDISCLAIMER: I don\u0026rsquo;t write many reviews on this blog, but I want to make sure a few things are clear. Vitalie was kind enough to set up an account for me to test with which would have normally cost me $9/month. 
However, he didn\u0026rsquo;t give me any compensation of any kind for the review itself and there was nothing done for me outside of what a customer would receive at a paid service level at Luadns. In other words, this is an honest review and I haven\u0026rsquo;t been paid for a favorable (or unfavorable) response.\nAt first glance, Luadns looks like many of the other hosted DNS services out there. Their DNS servers run tinydns and there are globally distributed DNS servers in Germany (Hetzner), California (Linode), New Jersey (Linode), Netherlands (LeaseWeb), and Japan (KDDI). The latency to the two US locations were reasonable from my home in San Antonio (on Time Warner Cable, usually under 70ms) but the overseas servers had reasonable latency except for the server in Germany. I was regularly seeing round trip times of over 300ms to that server.\nWhat makes Luadns unique is how you update your DNS records. You can put your DNS zone files into a git repository in GitHub or BitBucket and then set up a post-commit hook to nudge Luadns when you make an update. This process gives you a good audit trail of when DNS changes were made, who changed them, and what was changed.\nAs soon as you push your changes, Luadns is notified and they can go about updating the DNS records on their servers around the world. You also get the option to do manual updates if your business processes require a thorough review of DNS changes prior to their public release. You\u0026rsquo;ll receive an email confirmation each time Luadns is nudged with changes to your zone files.\nIn my experience, I saw pretty reasonable delays for updates. Here are the times I measured for DNS changes to propagate to all five Luadns servers:\nUpdates to an existing zone: 15-25 seconds (regardless of the amount of updates) Adding a totally new zone: 30-45 seconds Deleting a zone: 5-6 minutes (see following paragraph) I contacted Vitalie about the long delay in deleting entire zones from Luadns and he made some adjustments to the domain deletion priority. After his change, deletions were processed in under 20 seconds every time I tried it.\nAll of my testing was done with basic BIND zone files but Luadns allows you to write your zones in Lua if you prefer. That allows you to do some pretty slick automation with templates and you won\u0026rsquo;t have to be quite so repetitive as you normally would with BIND zone files.\nSummary\nLuadns provides a nice twist on the available DNS hosting solutions available today. Committing zone changes into a git repository allows for some great auditing and opens the door for pull requests that get a look from another team member before the DNS changes are released. The GitHub and Bitbucket integration is well done and the post-commit hooks seemed to work every time I tried them. The delays for zone updates are very reasonable and the pricing seems fair. I operate 48 domains and my bill each month would probably be $19 for the base plan. I\u0026rsquo;d easily go over the 4M queries/month so I\u0026rsquo;d expect to be paying extra.\nI\u0026rsquo;d like to see Luadns improve by getting a more reliable European location that Hetzner since I can\u0026rsquo;t get good round trip times from various locations that I\u0026rsquo;ve tried. Anycasted DNS servers would be a big plus, but that\u0026rsquo;s a tough thing for a small company to do. 
I\u0026rsquo;d also like to see other development languages available other than Lua (python and ruby, perhaps).\nOverall, I\u0026rsquo;d recommend Luadns for DNS hosting due to the convenience provided by GitHub/Bitbucket and the audit trail provided by both. Vitalie was easy to work with and he was quick to respond to any inquiry I sent. There\u0026rsquo;s a free pricing tier - why not give it a try?\n","date":"22 July 2012","permalink":"/p/dns-service-review-luadns/","section":"Posts","summary":"Vitalie Cherpec contacted me back in May about his new hosted DNS offering, Luadns.","title":"DNS Service Review: Luadns"},{"content":"","date":null,"permalink":"/tags/lua/","section":"Tags","summary":"","title":"Lua"},{"content":"","date":null,"permalink":"/tags/review/","section":"Tags","summary":"","title":"Review"},{"content":"","date":null,"permalink":"/tags/scripting/","section":"Tags","summary":"","title":"Scripting"},{"content":"Although GRUB 2 does give us some nice benefits, changing its configuration can be a bit of a challenge if you\u0026rsquo;re used to working with the original GRUB for many, many years. I\u0026rsquo;ve recently installed some Fedora 17 systems with Xen and I\u0026rsquo;ve had to go back to the documentation to change the default GRUB 2 boot option. Hopefully this post will save you some time.\nA good place to start reading is on Fedora\u0026rsquo;s own page about GRUB 2 and the helpful commands provided to manage its configuration.\nI\u0026rsquo;ll assume you\u0026rsquo;ve installed the xen packages already and those packages have configured a (non-default) menu entry in your GRUB 2 configuration. Start by getting a list of your grub menu entry options (without the submenu options):\n[root@remotebox ~]# grep ^menuentry /boot/grub2/grub.cfg | cut -d \u0026#34;\u0026#39;\u0026#34; -f2 Fedora Fedora, with Xen hypervisor We obviously wan\u0026rsquo;t the second one to be our default option. Let\u0026rsquo;s adjust the GRUB 2 settings and then check our work:\n[root@remotebox ~]# grub2-set-default \u0026#39;Fedora, with Xen hypervisor\u0026#39; [root@remotebox ~]# grub2-editenv list saved_entry=Fedora, with Xen hypervisor The configuration file hasn\u0026rsquo;t been written yet! I prefer to disable the graphical framebuffer and I like to see all of the kernel boot messages each time I reboot. Some of those messages can be handy if you have failing hardware or a bad configuration somewhere in your boot process. Open up /etc/sysconfig/grub in your favorite text editor and remove rhgb quiet from the line that starts with GRUB_CMDLINE_LINUX.\nWrite your new GRUB 2 configuration file:\n[root@remotebox ~]# grub2-mkconfig -o /boot/grub2/grub.cfg Reboot your server. Once it\u0026rsquo;s back, check to see if you loaded the right boot option. 
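One quick check that doesn't involve digging through logs is the sysfs node that the kernel's Xen support exposes; on a working dom0 it should simply report the hypervisor type (assuming sysfs is mounted in the usual place):
[root@remotebox ~]# cat /sys/hypervisor/type
xen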
Even without any Xen daemons running, you should be able to check for the presence of the hypervisor:\n[root@i7tiny ~]# dmesg | grep -i \u0026#34;xen version\u0026#34; [ 0.000000] Xen version: 4.1.2 (preserve-AD) ","date":"16 July 2012","permalink":"/p/boot-the-xen-hypervisor-by-default-in-fedora-17-with-grub-2/","section":"Posts","summary":"Although GRUB 2 does give us some nice benefits, changing its configuration can be a bit of a challenge if you\u0026rsquo;re used to working with the original GRUB for many, many years.","title":"Boot the Xen hypervisor by default in Fedora 17 with GRUB 2"},{"content":"","date":null,"permalink":"/tags/emergency/","section":"Tags","summary":"","title":"Emergency"},{"content":"","date":null,"permalink":"/tags/lvm/","section":"Tags","summary":"","title":"Lvm"},{"content":"LVM snapshots can be really handy when you\u0026rsquo;re trying to take a backup of a running virtual machine. However, mounting the snapshot can be tricky if the logical volume is partitioned.\nI have a virtual machine running zoneminder on one of my servers at home and I needed to take a backup of the instance with rdiff-backup. I made a snapshot of the logical volume and attempted to mount it:\n[root@i7tiny ~]# lvcreate -s -n snap -L 5G /dev/vg_i7tiny/vm_zoneminder Logical volume \u0026#34;snap\u0026#34; created [root@i7tiny ~]# mount /dev/vg_i7tiny/snap /mnt/snap/ mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_i7tiny-snap, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Oops. The logical volume has partitions. We will need to mount the volume with an offset so that we can get the right partition. Figuring out the offset can be done fairly easily with fdisk:\n[root@i7tiny ~]# fdisk -l /dev/vg_i7tiny/vm_zoneminder Disk /dev/vg_i7tiny/vm_zoneminder: 53.7 GB, 53687091200 bytes 255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0007a1d5 Device Boot Start End Blocks Id System /dev/vg_i7tiny/vm_zoneminder1 * 2048 1026047 512000 83 Linux /dev/vg_i7tiny/vm_zoneminder2 1026048 102825983 50899968 83 Linux /dev/vg_i7tiny/vm_zoneminder3 102825984 104857599 1015808 82 Linux swap / Solaris It looks like we have a small boot partition, a big root partition and a swap volume. We want to mount the second volume to copy files from the root filesystem. There are two critical pieces of information here that we need:\nthe sector where the partition starts (the Start column from fdisk) the number of bytes per sector (512 in this case - see the third line of the fdisk output) Let\u0026rsquo;s calculate how many bytes we need to skip when we mount the partition and then mount it:\n[root@i7tiny ~]# echo \u0026#34;512 * 1026048\u0026#34; | bc 525336576 [root@i7tiny ~]# mount -o offset=525336576 /dev/mapper/vg_i7tiny-snap /mnt/snap/ [root@i7tiny ~]# ls /mnt/snap/ bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var The root filesystem from the virtual machine is now mounted and we can copy some files from it. 
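If you find yourself doing this often, the arithmetic is easy to script. Here's a rough sketch that pulls the start sector of the second partition out of fdisk and hands the offset straight to mount - it assumes 512-byte sectors and a partition table that looks like the one above (a boot flag in the row would shift the columns):
[root@i7tiny ~]# START=$(fdisk -l /dev/vg_i7tiny/vm_zoneminder | awk '/vm_zoneminder2 /{print $2}')
[root@i7tiny ~]# mount -o offset=$((START * 512)) /dev/mapper/vg_i7tiny-snap /mnt/snap/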
Don\u0026rsquo;t forget to clean up when you\u0026rsquo;re finished:\n[root@i7tiny ~]# umount /mnt/snap/ [root@i7tiny ~]# lvremove -f /dev/vg_i7tiny/snap Logical volume \u0026#34;snap\u0026#34; successfully removed If you need to do this with file-backed virtual machine storage or with a flat file you made with dd/dd_rescue, read my post from 2010 about tackling that similar problem.\n","date":"15 July 2012","permalink":"/p/mounting-an-lvm-snapshot-containing-partitions/","section":"Posts","summary":"LVM snapshots can be really handy when you\u0026rsquo;re trying to take a backup of a running virtual machine.","title":"Mounting an LVM snapshot containing partitions"},{"content":"This problem came up in conversation earlier this week and I realized that I\u0026rsquo;d never written a post about it. Has this ever happened to you before?\n$ ssh -YC remotebox [major@remotebox ~]$ xterm xterm: Xt error: Can\u0026#39;t open display: xterm: DISPLAY is not set I\u0026rsquo;ve scratched my head on this error message when the remote server is a minimally-installed CentOS, Fedora, or Red Hat system. It turns out that the xorg-x11-xauth package wasn\u0026rsquo;t installed with the minimal package set and I didn\u0026rsquo;t have any authentication credentials ready to hand off to the X server on the remote machine.\nLuckily, the fix is a quick one:\n[root@remotebox ~]# yum -y install xorg-x11-xauth Close the ssh connection to your remote server and give it another try:\n$ ssh -YC remotebox [major@remotebox ~]$ xterm You should now have an xterm from the remote machine on your local computer.\nThe source of the problem is that you don\u0026rsquo;t have a MIT-MAGIC-COOKIE on the remote system. The Xsecurity man page explains it fairly well:\nMIT-MAGIC-COOKIE-1\nWhen using MIT-MAGIC-COOKIE-1, the client sends a 128 bit \u0026ldquo;cookie\u0026rdquo; along with the connection setup information. If the cookie presented by the client matches one that the X server has, the connection is allowed access. The cookie is chosen so that it is hard to guess; xdm generates such cookies automatically when this form of access control is used. The user\u0026rsquo;s copy of the cookie is usually stored in the .Xauthority file in the home directory, although the environment variable XAUTHORITY can be used to specify an alternate location. Xdm automatically passes a cookie to the server for each new login session, and stores the cookie in the user file at login.\nYour home directory on the remote server should have a small file called .Xauthority with the magic cookie in binary:\n[major@remotebox ~]$ ls -al ~/.Xauthority -rw-------. 1 major major 61 Jul 14 19:28 /home/major/.Xauthority [major@remotebox ~]$ file ~/.Xauthority /home/major/.Xauthority: data ","date":"14 July 2012","permalink":"/p/x-forwarding-over-ssh-woes-display-is-not-set/","section":"Posts","summary":"This problem came up in conversation earlier this week and I realized that I\u0026rsquo;d never written a post about it.","title":"X forwarding over ssh woes: DISPLAY is not set"},{"content":"If you try to run Xen without libvirt on Fedora 17 with SELinux in enforcing mode, you\u0026rsquo;ll be butting heads with SELinux in no time. You\u0026rsquo;ll probably be staring at something like this:\n# xm create -c fedora17 Using config file \u0026#34;/etc/xen/fedora17\u0026#34;. 
Error: Disk isn\u0026#39;t accessible If you have setroubleshoot and setroubleshoot-server installed, you should have a friendly message in /var/log/messages telling you the source of the problem:\nsetroubleshoot: SELinux is preventing /usr/bin/python2.7 from read access on the blk_file dm-1. For complete SELinux messages. run sealert -l 4d890105-d9a4-4b3e-a674-ba7e952942dc The Xen daemon (the python process mentioned in the SELinux denial) is running with a context type of xend_t but the block device I\u0026rsquo;m trying to use for the VM has fixed_disk_device_t:\n# ps axZ | grep xend system_u:system_r:xend_t:s0 953 ? SLl 0:40 /usr/bin/python /usr/sbin/xend # ls -alZ /dev/dm-1 brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/dm-1 SELinux isn\u0026rsquo;t going to allow this to work. However, even if we fix this, SELinux will balk about three additional issues and we\u0026rsquo;ll need to adjust the contexts on every new fixed block device we make. To get over the hump, change the context type on your block device to xen_image_t and re-run the xm create:\n# chcon -t xen_image_t /dev/dm-1 # ls -alZ /dev/dm-1 brw-rw----. root disk system_u:object_r:xen_image_t:s0 /dev/dm-1 # xm create -c fedora17 Using config file \u0026#34;/etc/xen/fedora17\u0026#34;. Error: out of pty devices You\u0026rsquo;ll find three new denials in /var/log/messages:\nsetroubleshoot: SELinux is preventing /usr/bin/python2.7 from read access on the file group. For complete SELinux messages. run sealert -l b1392df4-dda4-4b82-914c-1e20c62fc898 setroubleshoot: SELinux is preventing /usr/bin/python2.7 from setattr access on the chr_file 1. For complete SELinux messages. run sealert -l 3e09edc3-aeb7-49f5-96e1-d8148afda48f setroubleshoot: SELinux is preventing /usr/bin/python2.7 from execute access on the file pt_chown. For complete SELinux messages. run sealert -l 86395f09-5f33-4f66-8d02-519b61e54139 As much as it pains me to suggest it, you can create a custom module to allow all four of these operations by xend:\n# grep xend /var/log/audit/audit.log | audit2allow -M custom_xen WARNING: Policy would be downgraded from version 27 to 26. ******************** IMPORTANT *********************** To make this policy package active, execute: semodule -i custom_xen.pp # semodule -i custom_xen.pp You should now be able to start your VM without any complaints from SELinux. I\u0026rsquo;ll reiterate that this isn\u0026rsquo;t ideal, but it\u0026rsquo;s the best balance of security and convenience that I\u0026rsquo;ve found so far.\n","date":"10 July 2012","permalink":"/p/selinux-xen-and-block-devices-in-fedora-17/","section":"Posts","summary":"If you try to run Xen without libvirt on Fedora 17 with SELinux in enforcing mode, you\u0026rsquo;ll be butting heads with SELinux in no time.","title":"SELinux, Xen, and block devices in Fedora 17"},{"content":"Anyone who has been a system administrator for even a short length of time has probably used traceroute at least once. Although the results often seem simple and straightforward, Richard Steenbergen pointed out in a NANOG presentation [PDF] that many people misinterpret the results and chase down the wrong issues.\nRichard makes some great points about where latency comes from and when people often make the wrong assumptions regarding the source and location of the latency. 
For example, it\u0026rsquo;s important to keep in mind that many routers de-prioritize ICMP packets sent directly to them and although you may think a particular hop has a ton of latency, it may just be caused by the router prioritizing the handling of other packets before yours. In addition, different routers measure latency with varying precision (4ms for Cisco).\nHe also covers tricky routing paths that you might not consider without intimate knowledge of the remote network configuration. Technologies like MPLS can hide parts of the network path from view and those hidden devices could be causing network problems for your traffic.\nI sent Richard an email to thank him for assembling this guide and he linked me to a tablet-handy, book-like version. Both versions have some great information for system and network administrators.\nI\u0026rsquo;ve mirrored the PDF\u0026rsquo;s here just in case the links above stop working:\nA Practical Guide to (Correctly) Troubleshooting with Traceroute (NANOG presentation slides) Traceroute (Book format) ","date":"13 June 2012","permalink":"/p/guide-to-using-and-understanding-traceroute/","section":"Posts","summary":"Anyone who has been a system administrator for even a short length of time has probably used traceroute at least once.","title":"Great guide for using traceroute and understanding its results"},{"content":"The feedback from my last lengthy post (Lessons learned in the ambulance pay dividends in the datacenter) about analogies between EMS and server administration was mostly positive, so I decided to do it again!\nOur ceiling fan in our living room died the night before we had all of our floors replaced and I knew that a portion of my weekend would be lost trying to replace it. The motor was totally dead but at least the lights still worked. However, in Texas, if the motor isn\u0026rsquo;t running, no air is moving and the fan is worthless. Replacing the fan wouldn\u0026rsquo;t be an easy task: our living room is 14 feet tall (that\u0026rsquo;s about 4.3 meters) and our replacement fan was a pretty heavy one. Add in an almost-two-year-old running around the living room during the process and it gets a little tougher.\nI took the old fan off pretty easily and was immediately stumped about the new one. The instructions had a method of installation that wasn\u0026rsquo;t compatible with the outlet box in the ceiling and I didn\u0026rsquo;t have the right bolts and washers for the job. A quick trip to Lowe\u0026rsquo;s solved that and I was back in the game. The motor was soon hung, the wiring was connected, and I tested the wall switch. The motor didn\u0026rsquo;t move.\nAt this point, I figured that the light assembly was required for the motor to run. I screwed everything in, connected the light assembly, and still no movement. I thought that the light switch had possibly gone bad since my wife saw the lights flicker last week when she turned off the fan. Another trip to Lowe\u0026rsquo;s yielded a new switch. Installed that - still no movement. Double-checked the breaker. Re-did the wiring in the fan. Tried the switch again. No movement.\nThe confusion soon started. The fan was new, the motor was new, the switch was new, and the wiring was verified. I called my stepfather in the hopes that he could think of something I couldn\u0026rsquo;t but he said I\u0026rsquo;d thought of everything. He came over with a voltage tester and verified that the switch had power and so did the fan. He re-did the wiring and tried again. 
Still no movement.\nHe tilted his head for a second, then looked down at me:\nYou did try pulling the chain for the fan, right? Usually the factory sets them up so that the lights and fan are off when you hang it. You know, for safety.\nAfter a quick tug of the chain the motor was flying. I felt like an idiot and he had a good chuckle at my expense.\nWhat\u0026rsquo;s the point?\nWe all do things like this when we administer servers. I touched on this back in January 2010 and it\u0026rsquo;s probably important enough to mention again. Go for the simplest solutions first. They\u0026rsquo;re not only easier and faster to verify, but you\u0026rsquo;ll be guaranteed to forget about them if you dive right into the more complicated stuff at first. Also, bear in mind that the same set of instructions won\u0026rsquo;t fit all scenarios and situations. Trust your instincts when you know they\u0026rsquo;re right.\nSometimes situations crop up where you really need a second set of eyes. We\u0026rsquo;re all eager to find the solution ourselves and avoid bothering others, but when you find yourself flailing for a solution, the best remedy may be to share your troubles with someone you trust.\n","date":"11 June 2012","permalink":"/p/what-installing-a-ceiling-fan-can-teach-you-about-administering-servers/","section":"Posts","summary":"The feedback from my last lengthy post (Lessons learned in the ambulance pay dividends in the datacenter) about analogies between EMS and server administration was mostly positive, so I decided to do it again!","title":"What installing a ceiling fan can teach you about administering servers"},{"content":"It\u0026rsquo;s no secret that I\u0026rsquo;m a fan of Twitter and OpenStack. I found myself needing a better way to follow the rapid pace of OpenStack development and I figured that a Twitter bot would be a pretty good method for staying up to date.\nI\u0026rsquo;d like to invite you to check out @openstackwatch.\nFirst things first, it\u0026rsquo;s a completely unofficial project that I worked on during my spare time and it\u0026rsquo;s not affiliated with OpenStack in any way. If it breaks, it\u0026rsquo;s most likely my fault.\nThe bot watches for ticket status changes in OpenStack\u0026rsquo;s Gerrit server and makes a tweet about the change within a few minutes. Every tweet contains the commit\u0026rsquo;s project, owner, status, and a brief summary of the change. In addition, you\u0026rsquo;ll get a link directly to the review page on the Gerrit server. Here\u0026rsquo;s an example:\nHey! It's Dan! If you\u0026rsquo;re not a fan of Twitter, there\u0026rsquo;s a link to the RSS feed in the bio section, or you can just add this URL to your RSS feed reader:\nhttp://api.twitter.com/1/statuses/user_timeline.rss?screen_name=openstackwatch If you can come up with any ideas for improvements, please let me know!\n","date":"8 June 2012","permalink":"/p/keep-tabs-on-openstack-development-with-openstack-watch-on-twitter/","section":"Posts","summary":"It\u0026rsquo;s no secret that I\u0026rsquo;m a fan of Twitter and OpenStack.","title":"Keep tabs on OpenStack development with OpenStack Watch on Twitter"},{"content":"While cleaning up a room at home in preparation for some new flooring, I found my original documents from when I first became certified as an Emergency Medical Technician (EMT) in Texas. That was way back in May of 2000 and I received it just before I graduated from high school later in the month. 
After renewing it twice, I decided to let my certification go this year. It expires today and although I\u0026rsquo;m sad to see it go, I know that sometimes you have to let one thing go so that you can excel in something else.\nI mentioned this yesterday on Twitter and Jesse Newland from GitHub came back with a good reply:\nThe tweet that inspired this post It began to make more sense the more I thought about it (and once Mark Imbriaco and Jerry Chen asked for it as well). Working in Operations in a large server environment has a lot of similarities to working on an ambulance:\nboth involve fixing things (whether it\u0026rsquo;s technology or an illness/injury) there are plenty of highly stressful situations in both occupations lots of money is riding on the decisions made at a keyboard or at a stretcher if you can\u0026rsquo;t work as a team, you can\u0026rsquo;t do either job effectively there is always room for improvement (and I do mean always) not having all the facts can lead to perilous situations Without further ado, here are some lessons I learned on the ambulance which have really helped me as a member of an operations team. I\u0026rsquo;ve broken them up into separate chunks (more on that lesson shortly) to make it a little easier to read:\nWhatever happens, keep your cool\nOne of the worst situations you can have on an ambulance is when an EMT or paramedic feels overwhelmed to the point that they can\u0026rsquo;t function. Imagine rolling up with your partner on a multi-car collision with several injured drivers and passengers. It\u0026rsquo;s just the two of you at the scene and you need to start working. You\u0026rsquo;re obviously outnumbered and you won\u0026rsquo;t be able to treat everyone at once. Now, imagine that your partner hasn\u0026rsquo;t seen this type of situation and is actively buckling under the pressure. The quality of care you\u0026rsquo;re trained to deliver and the efficiency at which you can deliver it has now been slashed in half. Even worse, getting your partner back on track might take some work and this may slow you down even more.\nThe same can be said about working on large incidents affecting your customers. You\u0026rsquo;re probably going to be outnumbered by the amount of servers having a problem and you won\u0026rsquo;t get them back online any sooner if you\u0026rsquo;re beginning to freak out. Just remember, as with servers and as with people (most of the time), they were running fine at one time and they\u0026rsquo;ll be running fine again soon. Your job is to bridge the gap between those times and try to get to the end goal as soon as possible.\nYou might miss some things or not complete certain tasks as well as you\u0026rsquo;d like to. You might slip and make things worse than they were before. One step backward and two steps forward is painful, but it\u0026rsquo;s still progress. Keep your mind clear and focused so that you can use your knowledge, skills, and experience to pave a path out.\nTriage, triage, triage\nGoing back to the multi-car collision scenario, you\u0026rsquo;re well aware that you won\u0026rsquo;t be able to take care of everyone at once. This is where skillful triaging is key. Find the people who are in the most dire situations and treat them first. Although it seems counterproductive, you may have to pass over the people who are hurt so badly that they have little chance of survival. Spending additional time with those people may cause patients with treatable conditions to deteriorate further unnecessarily. 
It may sound callous, but I\u0026rsquo;d rather have a few people with serious injuries get treated than lose all of them while I\u0026rsquo;m treating someone who is essentially near death.\nLots of this can be carried over into maintaining servers. When a big problem occurs, you can spend all of your time wrestling with servers that are beyond repair only to watch the remainder of your environment crash around you. Find ways to stop the bleeding first and then figure out some solid fixes.\nFor example, if your database cluster gets out of sync, think of the things you can do to reduce the amount of bad data coming in. Could you have your load balancer send traffic elsewhere? Could you disable your application until the database problem is solved? If you lose sight of what\u0026rsquo;s causing you immediate pain, you may spend all day trying to fix the broken database cluster only to find that you have many multitudes more data to sort out due to your application running throughout the whole process.\nFlickr via jar0d\nLearn from your mistakes and don\u0026rsquo;t dwell on them\nMedical mistakes can range anywhere from unnoticeable to career-endingly serious. One missed tidbit of a patient\u0026rsquo;s medical history, one small math error when administering drugs, or one slip of the hand can make a bad situation much worse. I\u0026rsquo;ve made mistakes on the ambulance and I\u0026rsquo;ve been very fortunate that almost all of them were very small and inconsequential. If I made one that went unnoticed, I made an effort to notify my supervisor and whoever would be taking over care of my patient. For the mistakes I didn\u0026rsquo;t even notice on my own, my partners would often be quick to point out the error.\nGetting called out on a mistake (even if you call yourself out on it) hurts. Funnel the frustration from it into a plan to fix it. Do some reading to understand the right solution. Learn mnemonics to remember in stressful situations. Make notes for yourself. Practice. Those small steps will reduce your mistakes through increasing your confidence.\nAlthough most Ops engineers should survive big incidents with their lives intact, mistakes are still made and they can be costly. Mistakes can turn into a positive learning experience for everyone on the team. There\u0026rsquo;s a great post on Etsy\u0026rsquo;s \u0026ldquo;Code as Craft\u0026rdquo; blog about this topic.\nJohn Allspaw wrote:\nA funny thing happens when engineers make mistakes and feel safe when giving details about it: they are not only willing to be held accountable, they are also enthusiastic in helping the rest of the company avoid the same error in the future. They are, after all, the most expert in their own error.\nThe only true mistake is the one which is made but never learned from. Accept it, learn from it, teach others to avoid it and move forward.\nGet all the facts to avoid assumptions\nMy mother (an English teacher) always told me to put the most important things at the beginning and the end when I write. If there\u0026rsquo;s anything more important than keeping your cool under duress, it\u0026rsquo;s that you should have as many facts as you can before you get started.\nOn the ambulance, you\u0026rsquo;re always looking for the very small clues to ensure that your patient is getting the proper treatment. You may walk up to a patient with slurred speech who can\u0026rsquo;t walk straight. You may think he\u0026rsquo;s drunk until you see a small bottle of insulin and a blood glucose meter. 
Wait, did his blood sugar bottom out? Did he take his insulin at the wrong time? Did he take the wrong amount? Missing that small bit of information may lead you to put your \u0026ldquo;drunk\u0026rdquo; patient onto a stretcher without the proper treatment only to find that you\u0026rsquo;re dealing with a diabetic coma as you get to the hospital. That incorrect assumption could have turned a serious situation into a possibly fatal one.\nResponding to incidents with servers is much the same. Skipping over a server with data corruption or not realizing that a change was made (and documented) earlier in the day could lead to serious damage. Forgetting to check log files, streams of exceptions, or reports from customers can lead to bad assumptions which could extend your downtime or cause the loss of data.\n*In summary, here\u0026rsquo;s my internal runbook from when I was working full time as an EMT:\nStop the bleeding Find the root cause of the problem Make a plan (or plans) to fix it Vet out your best plan with your partner if it seems risky Execute the plan Monitor the results Review the plan\u0026rsquo;s success or failure with a trusted expert When I\u0026rsquo;m fighting outages at work, I reach back into this runbook and try my best to follow the steps. It helps me keep my cool, reduce mistakes, and proceed with better plans. I\u0026rsquo;d be curious to hear your feedback about how this runbook could work for your Operations team or if you have ideas for edits.\n","date":"31 May 2012","permalink":"/p/lessons-learned-in-the-ambulance-pay-dividends-in-the-datacenter/","section":"Posts","summary":"While cleaning up a room at home in preparation for some new flooring, I found my original documents from when I first became certified as an Emergency Medical Technician (EMT) in Texas.","title":"Lessons learned in the ambulance pay dividends in the datacenter"},{"content":"","date":null,"permalink":"/tags/operations/","section":"Tags","summary":"","title":"Operations"},{"content":"Kristóf Kovács has a fantastic post about some lesser-known Linux tools that can really come in handy in different situations.\nIf you haven\u0026rsquo;t tried dstat (I hadn\u0026rsquo;t until I saw Kristóf\u0026rsquo;s post), this is a great one to try. 
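On the dstat builds I have used, a column selection that roughly matches the sample output further down looks like this (the trailing 5 is the refresh interval in seconds):\n$ dstat -cglmnpry --tcp 5 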
You can keep a running tally on various server metrics including load average, network transfer, and disk operations.\nHere is some sample output:\n----total-cpu-usage---- ---paging-- ---load-avg--- ------memory-usage----- -net/total- ---procs--- --io/total- ---system-- ----tcp-sockets---- usr sys idl wai hiq siq| in out | 1m 5m 15m | used buff cach free| recv send|run blk new| read writ| int csw |lis act syn tim clo 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M|1314B 180B| 0 0 0| 0 0 | 70 80 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M|1779B 1004B| 0 0 0| 0 0 | 84 78 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M| 904B 362B|1.0 0 1.0| 0 0 | 75 86 | 13 9 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 386M|2203B 1559B| 0 0 0| 0 0 | 180 127 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 386M| 260B 130B| 0 0 0| 0 0 | 53 66 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M| 52B 114B| 0 0 0| 0 0 | 54 77 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M|2271B 872B| 0 0 0| 0 0 | 94 79 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M| 52B 130B| 0 0 0| 0 0 | 54 74 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M|1126B 1254B| 0 0 0| 0 24.0 | 80 87 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.07 0.25 0.25| 866M 249M 537M 387M|1030B 130B| 0 0 0| 0 0 | 88 82 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M| 578B 114B| 0 0 0| 0 0 | 53 64 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M|1597B 890B| 0 0 0| 0 0 | 85 79 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M| 552B 114B| 0 0 0| 0 0 | 63 77 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M|1624B 1254B| 0 0 0| 0 0 | 81 75 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M| 478B 114B| 0 0 0| 0 0 | 67 73 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M| 418B 114B| 0 0 0| 0 0 | 59 74 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M|1265B 874B| 0 0 0| 0 0 | 82 73 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M| 758B 114B| 0 0 0| 0 0 | 60 80 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M|1236B 1255B| 0 0 0| 0 4.00 | 93 79 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.06 0.24 0.25| 866M 249M 537M 387M| 52B 130B| 0 0 0| 0 0 | 71 70 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.05 0.23 0.25| 866M 249M 537M 387M| 214B 114B| 0 0 0| 0 0 | 55 73 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.05 0.23 0.25| 866M 249M 537M 387M|1201B 890B| 0 0 0| 0 0 | 80 80 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.05 0.23 0.25| 866M 249M 537M 387M| 108B 114B| 0 0 0| 0 0 | 53 66 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.05 0.23 0.25| 866M 249M 537M 387M|1344B 1254B| 0 0 0| 0 10.0 | 119 85 | 13 7 0 0 5 0 0 100 0 0 0| 0 0 |0.05 0.23 0.25| 866M 249M 537M 387M| 172B 130B| 0 0 0| 0 8.00 | 80 82 | 13 7 0 0 5 Learn more about dstat on Dag Wieërs\u0026rsquo; site.\n","date":"11 May 2012","permalink":"/p/lesser-known-but-extremely-handy-linux-tools/","section":"Posts","summary":"Kristóf Kovács has a fantastic post about some lesser-known Linux tools that can really come in handy in different situations.","title":"Lesser-known but extremely handy Linux tools"},{"content":"It\u0026rsquo;s been a few years since I started a little project to operate a service to return your IPv4 and IPv6 address. 
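If you have never used the service, it simply returns your public address as plain text, which makes it easy to drop into scripts:\n$ curl -4 icanhazip.com # force IPv4\n$ curl -6 icanhazip.com # force IPv6, if you have connectivity 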
Although there are a bunch of other sites that offer this service as well, I\u0026rsquo;ve been amazed by the gradually increasing traffic to icanhazip.com.\nHere\u0026rsquo;s a sample of the latest statistics:\nHits per day: 1.8 million (about 21 hits/second) Unique IP addresses per day: 25,555 Hits per day from IPv6 addresses: 1,069 (a little sad) Bandwidth used per day: ~ 400MB The site is now running on multiple Cloud Servers at Rackspace behind a load balancer cluster. In addition, the DNS records are hosted with Rackspace\u0026rsquo;s Cloud DNS service.\nThis should allow the site to reply more quickly and reliably. If you have suggestions for other improvements, let me know!\n","date":"18 April 2012","permalink":"/p/performance-and-redundancy-boost-for-icanhazip-com/","section":"Posts","summary":"It\u0026rsquo;s been a few years since I started a little project to operate a service to return your IPv4 and IPv6 address.","title":"Performance and redundancy boost for icanhazip.com"},{"content":"You\u0026rsquo;ve probably noticed that the blog has slowed down a bit recently. Part of the slowdown is due to an uptick in work required to get OpenStack Nova and its related software up and running at Rackspace for Cloud Servers and another part of it is a severe case of writer\u0026rsquo;s block. I threw out some questions on Twitter about the topics people would like to see covered in some new posts and a commonly requested topic was employment at Rackspace.\nFirst things first, getting a job at Rackspace isn\u0026rsquo;t easy. We don\u0026rsquo;t intentionally make the process difficult. It\u0026rsquo;s just that the work we do is unique and demanding.\nWe work in a fast-paced, extremely dynamic team-centric environment. While some people in the company work in extremely small teams or sometimes all by themselves, that\u0026rsquo;s pretty few and far between. We look for people who can survive and flourish in this atmosphere and we look for people who can do it all while working as a team. Even with all of this hustle and bustle, we still remember why we\u0026rsquo;re doing it: the pursuit of Fanatical Support for our customers.\nAnother thing to keep in mind is that there\u0026rsquo;s no true secret for making it through the application process. There\u0026rsquo;s no magic combination of skills or \u0026ldquo;silver bullet\u0026rdquo; that will scoot you through. Every candidate is reviewed individually for each position. There have been several times at the end of an interview where we\u0026rsquo;ve gotten together and said: \u0026ldquo;Wow, this candidate is solid, but they\u0026rsquo;re just not right for this position. Let\u0026rsquo;s find the right spot and see if there\u0026rsquo;s a spot open.\u0026rdquo; We look for the right candidate for the right position at the right time.\nOne of the best ways to get ahead in the screening or interview process is to do a little homework about Rackspace and the products we offer. Much of this is covered in a post I wrote in 2011. You\u0026rsquo;ll go into the interviews with more confidence and it will be much more obvious that you\u0026rsquo;re really interested in the position.\nDon\u0026rsquo;t be discouraged if the process takes a little longer than you expected. When I was hired in 2006, I went through two phone pre-screens and then three back-to-back interviews in person. 
Things have changed a little since then and I\u0026rsquo;ve heard of some candidates receiving two to three pre-screens via telephone and then one or two interviews in person. The additional screening and interviews may be due to Rackers trying to find the right fit for a particular applicant. As I said previously, we look for the right fit for each applicant. We may consider you for a different position than you applied for if we feel like your skill set or personality fits that role better.\nA very common question is what to wear to a Rackspace interview. It\u0026rsquo;s confusing to know exactly what\u0026rsquo;s expected since we have Rackers in the building wearing everything from suits to flip-flops. This is where you really have to go with your gut. Interviewing for a customer-facing sales position while wearing a hoodie and shorts is probably going to bring a suboptimal result. Keep in mind that there\u0026rsquo;s really nothing negative about overdressing (but keep your tuxedo in the closet, seriously). I wore a shirt and tie for my interviews in 2006 but my tie got caught in the car door and was shredded. After a lot of cursing, I took off the tie and decided to wing it with my dress shirt. Nobody ever said a word about it.\nRemember to be flexible during the interviews. You might be asked to draw a solution on a whiteboard or think through a really complicated situation. Roll with it and keep your confidence up. When you don\u0026rsquo;t know something, admit it, but then talk about how you\u0026rsquo;d research an answer.\nThere\u0026rsquo;s one last thing to keep in mind and it\u0026rsquo;s really critical. If you\u0026rsquo;re ever asked about how you would solve a problem or how you solved a problem in the past, don\u0026rsquo;t divulge any information which is confidential or proprietary to your current company. Just tell the interviewers that you\u0026rsquo;ve solved the solution in the past but you\u0026rsquo;ll need to keep things vague to maintain confidentiality. We will definitely understand and we will encourage you to maintain that confidentiality.\nLeave your comments if you have any! I\u0026rsquo;ll be glad to answer any questions you have.\n","date":"9 April 2012","permalink":"/p/getting-a-technical-job-at-rackspace/","section":"Posts","summary":"You\u0026rsquo;ve probably noticed that the blog has slowed down a bit recently.","title":"Getting a Technical Job at Rackspace"},{"content":"","date":null,"permalink":"/tags/interview/","section":"Tags","summary":"","title":"Interview"},{"content":"","date":null,"permalink":"/tags/rhca/","section":"Tags","summary":"","title":"Rhca"},{"content":"I originally wrote this post for the Rackspace Blog but I decided to post it here in case some of my readers might have missed it. Please feel free to leave your comments at the end of the post.\nSometimes people talk to me about posts I\u0026rsquo;ve written on my blog, or posts they wish I would write. At some point during the discussion, I\u0026rsquo;ll almost always ask the person why they don\u0026rsquo;t start up their own blog or contribute to someone else\u0026rsquo;s. Very few people actually seem interested when I probe them about writing posts on technical topics.\nMy mother was always the one who told me (and her students) that everyone has a story. She said that writing could be therapeutic in ways you probably won\u0026rsquo;t consider until you\u0026rsquo;ve written something that someone else enjoys. 
Just as software developers exist to write software for their users, writers exist to write stories for their readers. There\u0026rsquo;s nothing that says technical people can\u0026rsquo;t become excellent writers who inspire others to learn and share their knowledge with others.\nThe goal of this post is to encourage technical people to enjoy writing, write efficiently and feel comfortable doing it. I\u0026rsquo;ll roll through some of the most common responses I\u0026rsquo;ve received about why technical people don\u0026rsquo;t blog about what they know.\nI don\u0026rsquo;t think I\u0026rsquo;m really an expert on anything. I\u0026rsquo;m not an authority on any topic I can think of.\nI\u0026rsquo;m leading off with this response because it\u0026rsquo;s the most critical to refute. If you don\u0026rsquo;t take away anything else from this post, let it be this: you don\u0026rsquo;t need to be an expert on a topic to write about it.\nYou can find examples of this by rolling through some of the posts on my blog. I\u0026rsquo;d consider myself to be an expert on one, maybe two topics, but I\u0026rsquo;ve written over 450 posts in the span of just over five years. I certainly didn\u0026rsquo;t write all of those about the one or two topics I know best.\nWrite about what you know and don\u0026rsquo;t be afraid to do a little research to become an authority on something. A great example of this was my post, entitled \u0026ldquo;Kerberos for haters.\u0026rdquo; I had almost no expertise in Kerberos. In fact, I couldn\u0026rsquo;t even configure it properly for my RHCA exam! However, I did a ton of research and began to understand how most of the pieces fit together. Many other people were just as confused and I decided to pack all of the knowledge I had about Kerberos into a blog post. Positive and negative feedback rolled in and it was obvious that my post taught some readers, inspired some others and angered a few.\nWhat a great way to lead into the next response:\nWhat if I say something that isn\u0026rsquo;t correct? I\u0026rsquo;ll look like an idiot in front of the whole internet!\nBeen there, done that. Every writer makes errors and comes up with bad assumptions at least once. Readers will call you out on your mistakes (some do it delicately while others don\u0026rsquo;t) and it\u0026rsquo;s your duty to correct your post or correct the reader. I\u0026rsquo;ve written posts with errors, and I\u0026rsquo;ve gotten a little lazy on my fact-checking from time to time. As my middle school journalism teacher always reminded me, the most important part of a mistake is what you do to clean it up and learn from it.\nIn short: you\u0026rsquo;ll make mistakes. As long as you\u0026rsquo;ve done your due diligence to minimize them and respond to them promptly, your readers should forgive you.\nSpeaking of errors:\nI\u0026rsquo;m great at a command prompt but my spelling and grammar are awful. I write terribly.\nThis is easily fixed. If you\u0026rsquo;re one of those folks who live the do-it-yourself type of lifestyle, pick up a copy of The Elements of Style by Strunk \u0026amp; White. There are free PDF versions online or you can borrow one from your nearest journalist. No matter the situation you\u0026rsquo;re in, this book has details about where punctuation should and shouldn\u0026rsquo;t be, how to structure sentences and paragraphs, and how to properly cite your sources (really vital for research posts).\nHauling around a copy of an ultra-dry reference book may not be your thing. 
If that\u0026rsquo;s the case, find someone you know who has a knack for writing. You can usually find helpful folks in marketing or corporate communications in most big companies who will take your post and return it covered in red ink ready for corrections (thanks, Garrett!). I\u0026rsquo;ve even spotted some folks on Fiverr who will do this for as low as $5.\nI\u0026rsquo;ll wrap up with the second most common response:\nI don\u0026rsquo;t know who I\u0026rsquo;m writing for? What if I write about something simple and the really technical folks think I\u0026rsquo;m a noob? What if I write something crazy complex and it goes over most people\u0026rsquo;s heads?\nI\u0026rsquo;ve done both of these. Most Linux system administrators worth their salt know how to add and remove iptables rules, and they\u0026rsquo;d consider it to be pretty trivial work. Would it surprise you to know that out of over 450 posts, my post about deleting a single iptables rule is in the top five most accessed posts per month? I receive just over 11 percent of my monthly hits to this post. People are either learning from it or they can\u0026rsquo;t remember how to delete the rule and they want to use the post as a quick reference. Either way, the post is valuable to many people even if I think it\u0026rsquo;s the simplest topic possible.\nOn the flip side, I went nuts and wrote up a complete how-to for a redundant cloud hosting configuration complete with LVS, glusterfs, MySQL on DRBD, memcached, haproxy and ldirectord. I thought it would be valuable knowledge to a few folks but that it might sail over the heads of most of my readers. Again, I was wrong. The post is constantly in the top 10 most visited posts on the blog and I\u0026rsquo;ve probably received more feedback via comments, email and IRC about that post than any other. Once again, a post I thought would be mostly useless turned into a real conversation starter.\nLet\u0026rsquo;s conclude and wrap up. Keep these things in mind if you feel discouraged about writing:\nWrite about what interests you whether you\u0026rsquo;re an expert on it or not Don\u0026rsquo;t be afraid to fail Be responsive to your readers Even if you think nobody will read your post, write it Always ensure your voice shines through in your writing — this is what makes it special and appealing ","date":"30 March 2012","permalink":"/p/why-technical-people-should-blog-but-dont/","section":"Posts","summary":"I originally wrote this post for the Rackspace Blog but I decided to post it here in case some of my readers might have missed it.","title":"Why technical people should blog (but don’t)"},{"content":"My quest to get better at Python led me to create a new project on GitHub. It\u0026rsquo;s called mysql-json-bridge and it\u0026rsquo;s ready for you to use.\nWhy do we need a JSON API for MySQL?\nThe real need sprang from a situation I was facing daily at Rackspace. We have a lot of production and pre-production environments which are in flux but we need a way to query data from various MySQL servers for multiple purposes. Some folks need data in ruby or python scripts while others need to drag in data with .NET and Java. 
Wrestling with the various adapters and all of the user privileges on disparate database servers behind different firewalls on different networks was less than enjoyable.\nThat\u0026rsquo;s where this bridge comes in.\nThe bridge essentially gives anyone the ability to talk to multiple database servers across different environments by talking to a single endpoint with easily configurable security and encryption. As long as the remote user can make an HTTP POST and parse some JSON, they can query data from multiple MySQL endpoints.\nHow does it work?\nIt all starts with a simple HTTP POST. I\u0026rsquo;ve become a big fan of the Python requests module. If you\u0026rsquo;re using it, this is all you need to submit a query:\nimport requests payload = {\u0026#39;sql\u0026#39;: \u0026#39;SELECT * FROM some_tables WHERE some_column=some_value\u0026#39;} url = \u0026#34;http://localhost:5000/my_environment/my_database\u0026#34; r = requests.post(url, data=payload) print r.text The bridge takes your query and feeds it into the corresponding MySQL server. When the results come back, they\u0026rsquo;re converted to JSON and returned via the same HTTP connection.\nWhat technology does it use?\nFlask does the heavy lifting for the HTTP requests and Facebook\u0026rsquo;s Tornado database class wraps the MySQLdb module in something a little more user friendly. Other than those modules, PyYAML and requests are the only other modules not provided by the standard Python libraries.\nIs it fast?\nYes. I haven\u0026rsquo;t done any detailed benchmarks on it yet, but the overhead is quite low even with a lot of concurrency. The biggest slowdowns come from network latency between you and the bridge or between the bridge and the database server. Keep in mind that gigantic result sets will take a longer time to transfer across the network and get transformed into JSON.\nI found a bug. I have an idea for an improvement. You\u0026rsquo;re terrible at Python.\nAll feedback (and every pull request) is welcome. I\u0026rsquo;m still getting the hang of Python (hey, I\u0026rsquo;ve only been writing in it seriously for a few weeks!) and I\u0026rsquo;m always eager to learn a new or better way to accomplish something. Feel free to create an issue in GitHub or submit a pull request with a patch.\n","date":"29 March 2012","permalink":"/p/mysql-json-bridge-a-simple-json-api-for-mysql/","section":"Posts","summary":"My quest to get better at Python led me to create a new project on GitHub.","title":"mysql-json-bridge: a simple JSON API for MySQL"},{"content":"I found myself stuck in a particularly nasty situation a few weeks ago where I had two git branches with some commits that were mixed up. Some commits destined for a branch called development ended up in master. To make matters worse, development was rebased on top of master and the history was obviously mangled.\nMy goal was to find out which commits existed in development but didn\u0026rsquo;t exist anywhere in master. From there, I needed to find out which commits existed in master that didn\u0026rsquo;t exist in development. 
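As a rough sketch of one way to build both lists (not necessarily the awful-looking bash mess mentioned below), git cherry compares patch content rather than commit IDs, which matters after a rebase:\n$ git cherry -v master development # + marks commits on development with no equivalent patch in master\n$ git cherry -v development master # flip the arguments for the other direction 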
That would give me all of the commits that needed to be in the development branch.\nI constructed this awful looking bash mess to figure out which commits were in development but not in master:\nI had a list of commits that existed in development but not in master:\n965cf71 Trollface acda854 Some patch 2 bf1f3e2 Some patch 1 db1980c Packaging From there, I could swap MASTER and DEV to figure out which commits existed in master but not in development. Only a couple of commits showed up and these were the ones which were committed and pushed to master inadvertently. After a couple of careful cherry picks and reversions, my branches were back to normal.\n","date":"15 March 2012","permalink":"/p/compare-commits-between-two-git-branches/","section":"Posts","summary":"I found myself stuck in a particularly nasty situation a few weeks ago where I had two git branches with some commits that were mixed up.","title":"Compare commits between two git branches"},{"content":"A fellow Racker showed me httpry about five years ago and I\u0026rsquo;ve had in my toolbox as a handy way to watch HTTP traffic. I\u0026rsquo;d used some crazy tcpdump arguments and some bash one-liners to pull out the information I needed but I never could get the live look that I really wanted.\nHere\u0026rsquo;s an example of what httpry\u0026rsquo;s output looks like on a busy site like icanhazip.com:\nGET\ticanhazip.com\t/\tHTTP/1.1\t-\t- 2012-03-13 23:29:39 192.x.x.x\t186.x.x.x \u0026lt; -\t-\t-\tHTTP/1.1\t200\tOK 2012-03-13 23:29:39 187.x.x.x\t192.x.x.x \u0026gt; GET\ticanhazip.com\t/\tHTTP/1.0\t-\t- 2012-03-13 23:29:39 192.x.x.x\t187.x.x.x \u0026lt; -\t-\t-\tHTTP/1.0\t200\tOK 2012-03-13 23:29:39 188.x.x.x\t192.x.x.x \u0026gt; GET\ticanhazip.com\t/\tHTTP/1.1\t-\t- 2012-03-13 23:29:39 192.x.x.x\t188.x.x.x \u0026lt; -\t-\t-\tHTTP/1.1\t200\tOK 2012-03-13 23:29:39 189.x.x.x\t192.x.x.x \u0026gt; GET\ticanhazip.com\t/\tHTTP/1.1\t-\t- 2012-03-13 23:29:39 192.x.x.x\t189.x.x.x \u0026lt; -\t-\t-\tHTTP/1.1\t200\tOK You can watch the requests come in and the responses go out in real time. It even allows for BPF-style packet filters which allow you to narrow down the source and/or destination IP addresses and ports you want to watch. You can run it as a foreground process or as a daemon depending on your needs.\nIt\u0026rsquo;s now available as a RPM package for Fedora 15, 16, 17 (and rawhide) as well as EPEL 6 (for RHEL/CentOS/SL 6).\n","date":"14 March 2012","permalink":"/p/new-fedora-and-epel-package-httpry/","section":"Posts","summary":"A fellow Racker showed me httpry about five years ago and I\u0026rsquo;ve had in my toolbox as a handy way to watch HTTP traffic.","title":"New Fedora and EPEL package: httpry"},{"content":"","date":null,"permalink":"/tags/scientific-linux/","section":"Tags","summary":"","title":"Scientific Linux"},{"content":"Getting XenServer installed on some unusual platforms takes a bit of work and the AOpen MP57 is a challenging platform for a XenServer 6.0.2 installation.\nMy MP57 box came with the i57QMx-vP motherboard. If yours came with something else, this post may or may not work for you.\nYou\u0026rsquo;ll need the XenServer 6 installation ISO burned to a CD to get started. Boot the CD in your MP57 and wait for the initial boot screen to appear. Type safe at the prompt and press enter. Go through the normal installation steps and reboot.\nAfter the reboot, you\u0026rsquo;ll notice that there\u0026rsquo;s no video output for dom0. 
Hop on another nearby computer and ssh to your XenServer installation using the root user and the password that you set during the installation process. Open up /boot/extlinux.conf in your favorite text editor and make sure the label xe section looks like this:\nlabel xe # XenServer kernel mboot.c32 append /boot/xen.gz mem=1024G dom0_max_vcpus=4 dom0_mem=752M lowmem_emergency_pool=1M crashkernel=64M@32M acpi=off console=vga --- /boot/vmlinuz-2.6-xen root=LABEL=root-aouozuoo ro xencons=hvc console=hvc0 console=tty0 vga=785 --- /boot/initrd-2.6-xen.img The console=vga adjustment ensures that the dom0 console is piped to the vga output and acpi=off fixes the lockup that will occur when the vga output is sent to your display. I also removed splash and quiet from the kernel line so that I could see all of the boot messages in detail.\n","date":"12 March 2012","permalink":"/p/installing-xenserver-6-0-2-on-an-aopen-mp57/","section":"Posts","summary":"Getting XenServer installed on some unusual platforms takes a bit of work and the AOpen MP57 is a challenging platform for a XenServer 6.","title":"Installing XenServer 6.0.2 on an AOpen MP57"},{"content":"","date":null,"permalink":"/tags/dtrace/","section":"Tags","summary":"","title":"Dtrace"},{"content":"I\u0026rsquo;m a big fan of Linux tools which allow you to monitor things in great detail. Some of my favorites are strace, the systemtap tools, and sysstat. Finding tools similar to these on a Mac is a little more difficult.\nThere\u0026rsquo;s a great blog post from Brendan Gregg\u0026rsquo;s blog that covers a lot of detail around dtrace and its related tools:\nhttp://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/ One of the handier tools is iosnoop. It gives you a much easier to read (and easier to generate) view of the disk I/O on your Mac. If you remember, I talked about how to do this in Linux in the systemtap post as well as the post about finding elusive sources of iowait. This could give you a lot of handy information if you\u0026rsquo;re staring at beachballs regularly while your disk drive churns.\n","date":"10 March 2012","permalink":"/p/handy-hints-for-using-dtrace-on-the-mac/","section":"Posts","summary":"I\u0026rsquo;m a big fan of Linux tools which allow you to monitor things in great detail.","title":"Handy hints for using dtrace on the Mac"},{"content":"","date":null,"permalink":"/tags/systemtap/","section":"Tags","summary":"","title":"Systemtap"},{"content":"I originally wrote this post for the Rackspace Blog but I\u0026rsquo;ve posted it here just in case anyone following my blog\u0026rsquo;s feed finds it useful. Feel free to share your feedback!\nGetting yourself ready for any type of examination is usually a stressful experience that involves procrastination and some late nights leading up to the test. Every time I take one, I always say to myself, “I’m really going to get ahead of this next time and study early. This last minute stuff is terrible.” But I always forget all of this as the next exam rolls around.\nQuick note: As you read through the remainder of the post, you may wonder why some of it is a bit vague. Every Red Hat test taker is under a NDA to prevent disclosure of test information that may reduce the security of the exam itself. Penalties start with losing credit for the exams previously taken and they can escalate up to legal action. 
I hope you’ll understand why I’m not able to go into details about certain portions of the Red Hat examinations.\nI’ve taken seven Red Hat exams already: two for the RHCE and five for the RHCA. These tests certainly aren’t easy, but there are some good guidelines and tips you can use to make your studying efforts less stressful and more productive. Without further ado, here are my recommendations for prospective Red Hat examinees:\nBuild a flexible study environment #This is critical. You’ll need some spare servers or some available virtual machines to practice the objectives on each exam. However, don’t feel like you need to spend the money on a Red Hat subscription to get your studying done. Most of the test objectives on the majority of exams can be completed with very similar Linux distributions, like Scientific Linux or CentOS. Look for a version of the distribution that is closest to what you’ll be tested on at exam time. Your study environment should meet some basic criteria:\nYou should be able to quickly build and tear down servers or virtual machines Keep the latency to your environment low to avoid getting frustrated Use applications like VirtualBox, VMWare Fusion/Workstation to practice on your own computer Consider using VMs from cloud providers if you’re under a time crunch Some exams may require some bare-metal access to the server itself (especially EX442), so keep that in mind when you’re looking for a good practice environment. You may need some specific network or storage setups for some exams (as with EX436). If you’re not sure what you need, be sure to ask your instructor or someone else you know who has taken the exam already.\nPrioritize doing over reading #The Red Hat exams are all hands-on, practical exams. You won’t find any essays or multiple-choice questions in these exams. Although the materials from Red Hat are full of good information, reading this information can only get you so far. You need to practice setting up the services on your own to be fully prepared for the test. If you’re not pressed for time, reading through the book can give you some details about the lab sequences, which you might miss by solely reading through labs themselves.\nResearch the why, not the what, to remember #This is especially important for the RHCA exam track. You may find that there is a ton of material to cover for the exam and that it’s difficult to remember each command to bring a certain service online or to repair a problem. Instead of thinking through the problem as “first, I do this, then I do this”, try to understand why each step is important in the first place.\nHere’s a good example. I’ll be the first one to admit that Kerberos drives me crazy. I’ve even written posts about it. The commands seemed really archaic, the daemons didn’t make sense, and the lack of readline support in the Kerberos tools made me want to throw my computer out the window (come on, MIT!). I put my class materials aside, went to Google in a browser, and started researching Kerberos.\nI read some of MIT’s documentation, ventured over to Wikipedia, and poked at some of the documentation within the Kerberos RPM packages. After a while, I began to realize how it all fit together. “Okay,” I thought to myself, “I need principals in a keytab to do these things, but I need to have a database for the admin stuff first.” Suddenly, the order of things in my head wasn’t just memorized any longer. 
The process of operations seemed to make logical sense because I fully understood how the pieces of a Kerberos infrastructure fit together.\nIf you start to get discouraged, take a break and learn more about why you’re doing what you’re doing. Once it becomes second nature, working through the problems on the exam becomes much easier.\nLean on your available resources #Don’t forget that there are other knowledgeable folks available to talk to when you get bogged down. Lean on other RHCE’s, RHCA’s, or experienced Linux users to get the answers or explanations you need. If you already have a Red Hat certification, head over to the Red Hat Certification Forums and meet up with other examinees that are discussing test preparation.\nAlso, you’ll find some knowledgeable (but sometimes snarky or quirky) people on IRC who are eager to point you in the right direction. Try the #rhel, #centos, or #fedora channels if you’re struggling through the configuration of a certain service. Many Linux users may roll their eyes about it, but Twitter is also a pretty good way to reach out to people who have a lot of Linux experience.\nSummary #Remember to lean on the knowledge of others, get hands-on with the test objectives and do your research when you’re frustrated. The exams from Red Hat are generally difficult and cover a lot of material, but with the right amount of preparation and determination you can pass the exams and get the certifications you want.\n","date":"28 February 2012","permalink":"/p/preparing-for-red-hat-exams/","section":"Posts","summary":"I originally wrote this post for the Rackspace Blog but I\u0026rsquo;ve posted it here just in case anyone following my blog\u0026rsquo;s feed finds it useful.","title":"Preparing for Red Hat Exams"},{"content":"","date":null,"permalink":"/tags/certifications/","section":"Tags","summary":"","title":"Certifications"},{"content":"The grades came back last Friday and I\u0026rsquo;ve passed the last exam in the requirements to become a Red Hat Certified Architect (RHCA). I was fortunate enough to be part of Rackspace\u0026rsquo;s RHCA pilot program and we took our first exam back at the end of 2010. It\u0026rsquo;s definitely a good feeling to be finished and I\u0026rsquo;m definitely ready to give back some knowledge to the readers of this blog.\nFirst things first: there are going to be many part of this post which probably aren\u0026rsquo;t as specific as you\u0026rsquo;d like. A lot of that is due to the NDA that all Red Hat examinees agree to when they take an exam. We aren\u0026rsquo;t allowed to talk about what was on the exam or our experiences during the exam. If we do, penalties range from smaller things like losing certifications all the way up to serious stuff like legal action. It goes without saying that I want to protect the security of the exams, I don\u0026rsquo;t want to lose my certifications, and I don\u0026rsquo;t want to hire a lawyer. Please try to keep this in mind if you yearn for more specifics than I\u0026rsquo;m able to give.\nRed Hat Certified Engineer\nThe RHCSA and RHCE exams are the first step on the path to the RHCA. You can\u0026rsquo;t take any of the RHCA prerequisite exams without it. These exams cover a really broad spectrum of material including apache configuration, NFS, iptables and mail services. 
The two links above will take you to the exam objectives for each exam.\nI\u0026rsquo;ve always recommended the RHCE exam for Linux administrators who are trying to sharpen their skills and get to the next level whether they use Red Hat or not. The exam covers a lot of good material that makes a solid foundation for any Linux user without throwing in too many Red Hat-specific knowledge.\nThe exam (like all Red Hat exams) is fully practical. There are no multiple choice questions or essays. You\u0026rsquo;ll have to meet all of the objectives by logging into a local Red Hat system and making the system do what it needs to do.\nQuick tips for the RHCSA/RHCE exams:\nKeep your eye on the clock. Time can really get away from you if you get stuck in the weeds on a problem that should be relatively straightforward. Leave time at the end to check your work. When you set up a lot of services, it\u0026rsquo;s inevitable that you might configure a service for one problem that breaks the functionality required by a problem you completed already. Always reboot before you leave. We all forget to use chkconfig when we\u0026rsquo;re in a hurry. Practice, practice, practice. There\u0026rsquo;s not one objective on this exam that you can\u0026rsquo;t test in a VM on your own. Red Hat Enterprise System Monitoring and Performance Tuning\nOur group at Rackspace started off with EX442 and it was a very difficult way to start off the RHCA track. Take a look at the objectives and you\u0026rsquo;ll see that much of the exam is related to tweaking system performance and then monitoring that performance with graphs and raw data. You\u0026rsquo;ll have to turn a lot of knobs on the kernel and you\u0026rsquo;ll need to know where to store these configurations so they\u0026rsquo;ll be persistent.\nIn addition, the objective regarding TCP buffers and related settings is a real challenge. You\u0026rsquo;ll have to wrestle with some math that appears to be relatively simple, but can get confusing quickly. Some of the settings can\u0026rsquo;t really be checked to know if your setting is correct. The objectives mention tuning disk scheduling - you don\u0026rsquo;t really have the time or tools to know if your setting is ideal.\nQuick tips for EX442:\nUse the documentation available to you. Install the kernel-doc package while you practice and during the exam. Be careful with your math. You have a Linux machine in front of you! Don\u0026rsquo;t forget about bc. Watch your units. Know the difference between a kilobyte (KB) and a kibibyte (KiB). Make comments in files where you adjust kernel configurations. It will help you keep track of which question the kernel adjustment is meant to satisfy. Red Hat Enterprise Storage Management\nI\u0026rsquo;m surprised to say this now, but I actually enjoyed EX436. I\u0026rsquo;ve always used other clustering tools like heartbeat and pacemaker, but I\u0026rsquo;ve never had the need to use the Red Hat Cluster Suite. Although RHCS definitely has a lot of quirks and rough edges, it\u0026rsquo;s pretty solid once you get familiar with the GUI and command line tools.\nYou get the opportunity to mess around with some pretty useful technology like iSCSI, GFS, and clustered LVM. These are things that you\u0026rsquo;re probably already using or will be using soon in a large server environment. 
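If you want to rehearse the iSCSI piece on your own hardware, the client side boils down to a discovery followed by a login (the portal address and IQN below are made up for this example):\n# iscsiadm -m discovery -t sendtargets -p 192.0.2.10\n# iscsiadm -m node -T iqn.2012-02.com.example:storage.lun1 -p 192.0.2.10 --login 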
The web interface for RHCS is quite peculiar and you may find yourself wanting to put your fist through the screen when you\u0026rsquo;re staring down the endless animated GIFs when the cluster is syncing its configuration. Do your best to be patient because you certainly don\u0026rsquo;t want to short circuit the cluster sync.\nQuick tips for EX436:\nBe patient. You\u0026rsquo;ll feel like the RHCS web interface is mocking you when you\u0026rsquo;re pressed for time. Watch the clock. It\u0026rsquo;s extremely easy to burn a lot of time on this exam if you get stuck on a particular problem. Double check your entries in the web interface. Make sure you\u0026rsquo;re doing things in the right order and that you\u0026rsquo;ve set up the prerequisites before adding services to the cluster. If you get it wrong, you could put your cluster into a weird state. Use man pages. If you don\u0026rsquo;t mess with GFS a lot, the man pages will save you in a pinch. Red Hat Enterprise Deployment and Systems Management\nIf there\u0026rsquo;s one exam where time management is critical, it\u0026rsquo;s EX401. Importing data into the Satellite Server takes quite a bit of time and there\u0026rsquo;s almost nothing you can do to speed it up. It probably goes without saying, but as with most long-running tasks, you\u0026rsquo;ll want to run it in screen. The last thing you\u0026rsquo;d ever want is to abort the import due to an errant click or CTRL-C (I did it while practicing - it\u0026rsquo;s aggravating).\nThere are other test objectives which you can either complete or partially complete while you wait for the import to finish.\nAlso, take the time to really dig into the Satellite Server web interface while your practicing for the exam. Knowing where to find the most common configuration items will really save some time when you\u0026rsquo;re in the exam. You can sometimes get pretty bogged down in the interface so don\u0026rsquo;t forget to use multiple tabs to keep your work organized.\nI felt like this exam was the easiest out of the bunch since you could go back and test every single question with good time management. Did I mention how important time management was on this exam already? If I forgot to mention it earlier, be sure to focus on time management for this test.\nQuick tips for EX401:\nTime management will make or break you on this test. Keep an eye on the clock and make sure you\u0026rsquo;ve done absolutely every piece of the exam that you can while you wait for the server to do its work. Scour the web interface. Keep a mental map in your mind where the big chunks of configuration items are. Go back and test everything. If you manage your time well, you should have enough time to verify each and every objective on this exam. Red Hat Enterprise Directory Services and Authentication\nAt first, EX423 looks pretty straightforward. Red Hat\u0026rsquo;s authentication configuration tools make LDAP authentication setup pretty easy. However, this exam comes with a lot of curveballs.\nThe GUI interface for the Directory Services component is a little frustrating to use. I found that the GUI stopped responding to keyboard input occasionally unless I clicked on another window and came back. If you misconfigure the SSL certificates in the interface, your LDAP server is down for the count. 
If you don\u0026rsquo;t input the correct data into the setup scripts at the beginning, you might not notice it until much later when it\u0026rsquo;s either too difficult to dig yourself out of the hole or it\u0026rsquo;s too late to start over with a clean configuration.\nI didn\u0026rsquo;t feel pressed for time on this exam too much and that was pretty refreshing after taking the EX401 test. It\u0026rsquo;s extremely critical to watch what you type and click on this exam. Some mistakes can be quickly corrected while others may require you to blow away the LDAP server configuration and re-provision the whole thing.\nQuick tips for EX423:\nAlways watch what you\u0026rsquo;re typing. A simple mistake can lead to confusion or bigger issues down the road. Don\u0026rsquo;t ignore the LDIF objectives. As you practice, you\u0026rsquo;ll find that manipulating LDIF files is a little more involved than you expected. Practice starting over. Throw out your Directory Services configuration and get the experience of what it\u0026rsquo;s like to start over and get back in the game. Red Hat Enterprise Security: Network Services\nThere\u0026rsquo;s no sugar coating it - EX333 is a beast. It\u0026rsquo;s a six hour exam broken into two three-hour chunks. It covers a ton of material and I refer to it as \u0026ldquo;the RHCE on steroids.\u0026rdquo; You might argue that I thought it was hard since it was the last test and I was ready to be finished, but I really think this exam is a tough one.\nPracticing for the Kerberos and DNS objectives was the hardest for me. I just couldn\u0026rsquo;t understand Kerberos, no matter how hard I tried. The realization that I would really have to learn it soon set in. I dug into the Kerberos design documentation on MIT\u0026rsquo;s site, read the summaries on Wikipedia, and scoured the documentation available in the Kerberos RPM packages. Once I understood why Kerberos is set up the way it is and why the security measures are present, everything began to come together. I was able to remember the steps not because I was memorizing them, but because I understood how Kerberos worked.\nWhen you\u0026rsquo;re working through the DNS objectives, keep an eye out for punctuation. I blew through a good 20 minutes in what seemed like the blink of an eye when I forgot a period in my TSIG key configuration while studying. Make sure you use the resources available to you, like system-config-bind and sample configs in /usr/share/doc/bind*/examples/. Get to know commands like dig really well.\nIf you\u0026rsquo;re overwhelmed by OpenSSL\u0026rsquo;s command line syntax, check out the /etc/pki/tls/misc/CA script. There are some handy comments at the top of the script that explain how to use it. You can also pluck OpenSSL commands right out of the script if you need to run them yourself.\nDon\u0026rsquo;t just memorize. Do some research to understand how everything fits together. Manage your time. DNS and Kerberos have lots of small nuances that can become time sinks when done incorrectly. Use the available documentation and tools. Try practicing without study materials so that you\u0026rsquo;re forced to use the docs and tools available within the server. Ranking the exams\nA couple of folks on Twitter asked me to rank the exams from most difficult to least difficult. 
Keep in mind that these are a little subjective since I was more familiar with some objectives than others for certain tests.\nEX333 - Enterprise Security: Network Services: a tubload of material and a very long exam EX442 - System Monitoring and Performance Tuning: very difficult to check your work, lots of calculations EX423 - Directory Services and Authentication: not a lot of material to cover, but tons of curveballs EX436 - Storage Management: the web interface made things much easier, lots of documentation available EX401 - Deployment and Systems Management: every objective can be tested, I build RPM\u0026rsquo;s already ","date":"13 February 2012","permalink":"/p/looking-back-at-the-long-road-to-becoming-a-red-hat-certified-architect/","section":"Posts","summary":"The grades came back last Friday and I\u0026rsquo;ve passed the last exam in the requirements to become a Red Hat Certified Architect (RHCA).","title":"Looking back at the long road to becoming a Red Hat Certified Architect"},{"content":"Getting Fedora 16 working in XenServer isn\u0026rsquo;t the easiest thing to do, but I\u0026rsquo;ve put together a repository on GitHub that should help. The repository contains a kickstart file along with some brief instructions to help with the installation. If you\u0026rsquo;re ready to get started right now, just clone the repository:\ngit clone git://github.com/rackerhacker/kickstarts.git kickstarts There are some big issues with Fedora 16 which cause problems for installations within XenServer:\nthe installer sets up a console on something other than hvc0 anaconda won\u0026rsquo;t start without being in serial mode anaconda tries to use GPT partitions by default grub2 is now standard, but it causes problems for older XenServer versions My kickstart works around the grub2 problem by throwing down an old-style grub configuration file and creating the proper symlinks. This config will still be updated when you upgrade kernels (at least in Fedora 16). It also sets up a very simple partitioning schema with one root and one swap partition. A DOS partition table is used in lieu of a GPT partition table.\nWhen you start the installation, be sure to review the README.md in the git repository. It has some special instructions for boot options to meet the requirements of Fedora 16 and the kickstart file.\n","date":"12 February 2012","permalink":"/p/installing-fedora-16-in-xenserver/","section":"Posts","summary":"Getting Fedora 16 working in XenServer isn\u0026rsquo;t the easiest thing to do, but I\u0026rsquo;ve put together a repository on GitHub that should help.","title":"Installing Fedora 16 in XenServer"},{"content":"One of the handiest tools in the OpenSSL toolbox is s_client. You can quickly view lots of details about the SSL certificates installed on a particular server and diagnose problems. For example, use this command to look at Google\u0026rsquo;s SSL certificates:\nopenssl s_client -connect encrypted.google.com:443 You\u0026rsquo;ll see the chain of certificates back to the original certificate authority where Google bought its certificate at the top, a copy of their SSL certificate in plain text in the middle, and a bunch of session-related information at the bottom.\nThis works really well when a site has one SSL certificate installed per IP address (this used to be a hard requirement). With Server Name Indication (SNI), a web server can have multiple SSL certificates installed on the same IP address. 
SNI-capable browsers will specify the hostname of the server they\u0026rsquo;re trying to reach during the initial handshake process. This allows the web server to determine the correct SSL certificate to use for the connection.\nIf you try to connect to rackerhacker.com with s_client, you\u0026rsquo;ll find that you receive the default SSL certificate installed on my server and not the one for this site:\n$ openssl s_client -connect rackerhacker.com:443 Certificate chain 0 s:/C=US/ST=Texas/L=San Antonio/O=MHTX Enterprises/CN=*.mhtx.net i:/C=US/O=SecureTrust Corporation/CN=SecureTrust CA 1 s:/C=US/O=SecureTrust Corporation/CN=SecureTrust CA i:/C=US/O=Entrust.net/OU=www.entrust.net/CPS incorp. by ref. (limits liab.)/OU=(c) 1999 Entrust.net Limited/CN=Entrust.net Secure Server Certification Authority Add on the -servername argument and s_client will do the additional SNI negotiation step for you:\n$ openssl s_client -connect rackerhacker.com:443 -servername rackerhacker.com Certificate chain 0 s:/OU=Domain Control Validated/OU=PositiveSSL/CN=rackerhacker.com i:/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=PositiveSSL CA 1 s:/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=PositiveSSL CA i:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/OU=http://www.usertrust.com/CN=UTN-USERFirst-Hardware 2 s:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/OU=http://www.usertrust.com/CN=UTN-USERFirst-Hardware i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root 3 s:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root You may be asking yourself this question:\nWhy doesn\u0026rsquo;t the web server just use the Host: header that my browser already sends to figure out which SSL certificate to use?\nKeep in mind that the SSL negotiation must occur prior to sending the HTTP request through to the remote server. That means that the browser and the server have to do the certificate exchange earlier in the process and the browser wouldn\u0026rsquo;t get the opportunity to specify which site it\u0026rsquo;s trying to reach. SNI fixes that by allowing a Host: header type of exchange during the SSL negotiation process.\n","date":"7 February 2012","permalink":"/p/using-openssls-s_client-command-with-web-servers-using-server-name-indication-sni/","section":"Posts","summary":"One of the handiest tools in the OpenSSL toolbox is s_client.","title":"Using OpenSSL’s s_client command with web servers using Server Name Indication (SNI)"},{"content":"","date":null,"permalink":"/tags/nis/","section":"Tags","summary":"","title":"Nis"},{"content":" As promised in my earlier post entitled Kerberos for haters, I\u0026rsquo;ve assembled the simplest possible guide to get Kerberos up and running on two CentOS 5 servers.\nAlso, I don\u0026rsquo;t really hate Kerberos. It\u0026rsquo;s a bit of an inside joke with my coworkers who are studying for some of the RHCA exams at Rackspace. The additional security provided by Kerberos is quite good but the setup involves a lot of small steps. If you miss one of the steps or if you get something done out of order, you may have to scrap the whole setup and start over unless you can make sense of the errors in the log files. 
A lot of my dislike for Kerberos comes from the number of steps required in the setup process and the difficulty in tracking down issues when they crop up.\nTo complete this guide, you\u0026rsquo;ll need the following:\ntwo CentOS, Red Hat Enterprise Linux or Scientific Linux 5 servers or VM\u0026rsquo;s some patience Here\u0026rsquo;s how I plan to name my servers:\nkdc.example.com – the Kerberos KDC server at 192.168.250.2 client.example.com – the Kerberos client at 192.168.250.3 CRITICAL STEP: Before getting started, ensure that both systems have their hostnames properly set and both systems have the hostnames and IP addresses of both systems in /etc/hosts. Your server and client must know the IP address and hostname of the other system as well as their own.\nFirst off, we will need NIS working to serve up the user information for our client. Install the NIS server components on the KDC server:\n[root@kdc ~]# yum install ypserv Set the NIS domain and set a static port for ypserv to make it easier to firewall off. Edit /etc/sysconfig/network on the KDC server:\nNISDOMAIN=EXAMPLE.COM YPSERV_ARGS=\"-p 808\" Manually set the NIS domain on the KDC server and add it to /etc/yp.conf:\n[root@kdc ~]# nisdomainname EXAMPLE.COM [root@kdc ~]# echo \"domain EXAMPLE.COM server kdc.example.com\" \u003e\u003e /etc/yp.conf Adjust /var/yp/securenets on the KDC server for additional security:\n[root@kdc ~]# echo \"255.0.0.0 127.0.0.0\" \u003e\u003e /var/yp/securenets [root@kdc ~]# echo \"255.255.255.0 192.168.250.0\" \u003e\u003e /var/yp/securenets Start the NIS server and generate the NIS maps:\n[root@kdc ~]# /etc/init.d/ypserv start; chkconfig ypserv on [root@kdc ~]# make -C /var/yp I usually like to prepare my iptables rules ahead of time so that it doesn\u0026rsquo;t derail me later on. Paste this into the KDC\u0026rsquo;s terminal:\niptables -N SERVICES iptables -I INPUT -j SERVICES iptables -A SERVICES -p tcp --dport 111 -j ACCEPT -m comment --comment \"rpc\" iptables -A SERVICES -p udp --dport 111 -j ACCEPT -m comment --comment \"rpc\" iptables -A SERVICES -p tcp --dport 808 -j ACCEPT -m comment --comment \"nis\" iptables -A SERVICES -p udp --dport 808 -j ACCEPT -m comment --comment \"nis\" iptables -A SERVICES -p tcp --dport 88 -j ACCEPT -m comment --comment \"kerberos\" iptables -A SERVICES -p udp --dport 88 -j ACCEPT -m comment --comment \"kerberos\" iptables -A SERVICES -p udp --dport 464 -j ACCEPT -m comment --comment \"kerberos\" iptables -A SERVICES -p tcp --dport 749 -j ACCEPT -m comment --comment \"kerberos\" /etc/init.d/iptables save We need our time in sync for Kerberos to work properly. Install NTP on both nodes, start it, and ensure it comes up at boot time:\n[root@kdc ~]# yum -y install ntp \u0026\u0026 chkconfig ntpd on \u0026\u0026 /etc/init.d/ntpd start [root@client ~]# yum -y install ntp \u0026\u0026 chkconfig ntpd on \u0026\u0026 /etc/init.d/ntpd start Now we\u0026rsquo;re ready to set up Kerberos. Start by installing some packages on the KDC:\n[root@kdc ~]# yum install krb5-server krb5-workstation We will need to make some edits to /etc/krb5.conf on the KDC to set up our KDC realm. 
Ensure that the default_realm is set:\ndefault_realm = EXAMPLE.COM The [realms] section should look like this:\n[realms] EXAMPLE.COM = { kdc = 192.168.250.2:88 admin_server = 192.168.250.2:749 } The [domain_realm] section should look like this:\n[domain_realm] kdc.example.com = EXAMPLE.COM client.example.com = EXAMPLE.COM Add validate = true within the pam { } block of the [appdefaults] section:\n[appdefaults] pam = { validate = true Adjust /var/kerberos/krb5kdc/kdc.conf on the KDC:\n[realms] EXAMPLE.COM = { master_key_type = des-hmac-sha1 default_principal_flags = +preauth } There\u0026rsquo;s one last configuration file to edit on the KDC! Ensure that /var/kerberos/krb5kdc/kadm5.acl looks like this:\n*/admin@EXAMPLE.COM\t* We\u0026rsquo;re now ready to make a KDC database to hold our sensitive Kerberos data. Create the database and set a good password which you can remember. This command also stashes your password on the KDC so you don\u0026rsquo;t have to enter it each time you start the KDC:\nkdb5_util create -r EXAMPLE.COM -s On the KDC, create a principal for the admin user as well as user1 (which we\u0026rsquo;ll create shortly). Also, export the admin details to the kadmind keytab. You\u0026rsquo;ll get some extra output after each one of these commands but I\u0026rsquo;ve snipped it to reduce the length of the post.\n[root@kdc ~]# kadmin.local kadmin.local: addprinc root/admin kadmin.local: addprinc user1 kadmin.local: ktadd -k /var/kerberos/krb5kdc/kadm5.keytab kadmin/admin kadmin.local: ktadd -k /var/kerberos/krb5kdc/kadm5.keytab kadmin/changepw kadmin.local: exit Let\u0026rsquo;s start the Kerberos KDC and kadmin daemons:\n[root@kdc ~]# /etc/init.d/krb5kdc start; /etc/init.d/kadmin start [root@kdc ~]# chkconfig krb5kdc on; chkconfig kadmin on Now that the administration work is done, let\u0026rsquo;s create a principal for our KDC server and stick it in its keytab:\n[root@kdc ~]# kadmin.local kadmin.local: addprinc -randkey host/kdc.example.com kadmin.local: ktadd host/kdc.example.com Transfer your /etc/krb5.conf from the KDC server to the client. Hop onto the client server, install the Kerberos client package and add some host principals:\n[root@client ~]# yum install krb5-workstation [root@client ~]# kadmin kadmin: addprinc -randkey host/client.example.com kadmin: ktadd host/client.example.com There aren\u0026rsquo;t any daemons on the client side, so the configuration is pretty much wrapped up there for Kerberos. However, we now need to tell both servers to use Kerberos for auth and your client server needs to use NIS to get user data.\nOn the KDC: run authconfig-tui choose Use Kerberos from the second column press Next don\u0026rsquo;t edit the configuration (authconfig got the data from /etc/krb5.conf) press OK On the client: run authconfig-tui choose Use NIS and Use Kerberos press Next enter your NIS domain (EXAMPLE.COM) and NIS server (kdc.example.com or 192.168.250.2) press Next don\u0026rsquo;t edit the Kerberos configuration (authconfig got the data from /etc/krb5.conf) press OK Got NIS problems? If the NIS connection stalls on the client, ensure that you have the iptables rules present on the KDC that we added near the beginning of this guide. Also, if you forgot to add both hosts to both servers\u0026rsquo; /etc/hosts, go do that now.\nLet\u0026rsquo;s make our test user on the KDC. Don\u0026rsquo;t add this user to the client — we\u0026rsquo;ll get the user information via NIS and authenticate via Kerberos shortly. 
We\u0026rsquo;ll also rebuild our NIS maps after adding the user:\n[root@kdc ~]# useradd user1 [root@kdc ~]# passwd user1 [root@kdc ~]# make -C /var/yp/ On the client, see if you can get the password hash for the user1 account via NIS:\n[root@client ~]# ypcat -d EXAMPLE.COM -h kdc.example.com passwd | grep user1 user1:$1$sUlSTlCv$riK5El3z8N4y.mi5Fe3Q60:500:500::/home/user1:/bin/bash You can see why NIS isn\u0026rsquo;t a good way to authenticate users. Someone could easily pull the hash for any account and brute force the hash on their own server. Go back to the KDC and lock out the user account:\n[root@kdc ~]# usermod -p '!!' user1 Go back to the client and try to pull the password hash now:\n[root@client ~]# ypcat -d EXAMPLE.COM -h kdc.example.com passwd | grep user1 user1:!!:500:500::/home/user1:/bin/bash On the plus side, the user\u0026rsquo;s password hash is now gone. On the negative side, you\u0026rsquo;ve just prevented this user from logging in locally or via NIS. Don\u0026rsquo;t worry, the user can log in via Kerberos now. Let\u0026rsquo;s prepare a home directory on the client for the user:\n[root@client ~]# mkdir /home/user1 [root@client ~]# cp -av /etc/skel/.bash* /home/user1/ [root@client ~]# chown -R user1:user1 /home/user1/ Note: In a real-world scenario, you\u0026rsquo;d probably want to export this user\u0026rsquo;s home directory via NFS so they didn\u0026rsquo;t get a different home directory on every server.\nWhile you\u0026rsquo;re still on the client, try to log into the client as that user. Use the password that you used when you created the user1 principal on the KDC.\n[root@client ~]# ssh user1@localhost user1@localhost's password: [user1@client ~]$ whoami user1 List your Kerberos tickets and you should see one for your user principal:\n[user1@client ~]$ klist Ticket cache: FILE:/tmp/krb5cc_500_fCKPnZ Default principal: user1@EXAMPLE.COM Valid starting Expires Service principal 02/05/12 14:18:53 02/06/12 00:18:53 krbtgt/EXAMPLE.COM@EXAMPLE.COM renew until 02/05/12 14:18:53 Your KDC should have a couple of lines in its /var/log/krb5kdc.log showing the authentication:\nFeb 05 14:18:53 kdc.example.com krb5kdc[4694](info): AS_REQ (12 etypes {18 17 16 23 1 3 2 11 10 15 12 13}) 192.168.250.3: ISSUE: authtime 1328473133, etypes {rep=16 tkt=16 ses=16}, user1@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM Feb 05 14:18:53 kdc.example.com krb5kdc[4694](info): TGS_REQ (7 etypes {18 17 16 23 1 3 2}) 192.168.250.3: ISSUE: authtime 1328473133, etypes {rep=16 tkt=18 ses=18}, user1@EXAMPLE.COM for host/client.example.com@EXAMPLE.COM The first line shows that the client made an Authentication Server Request (AS_REQ) and the second line shows that the client then made a Ticket Granting Server Request (TGS_REQ). In layman\u0026rsquo;s terms, the client first asked for a ticket-granting ticket (TGT) so it could authenticate to other services. When it actually tried to log in via ssh, it asked for a ticket (and received it).\nYOU JUST CONFIGURED KERBEROS!\nFrom here, the sky\u0026rsquo;s the limit. Another popular implementation of Kerberos is encrypted NFSv4. You can even go crazy and use Kerberos with apache.\nLet me know if you have any questions about this post or if you spot any errors. With this many steps, there\u0026rsquo;s bound to be a typo or two in this guide. Keep in mind that there are some obvious spots for network-level and service-level security improvements. 
This guide was intended to give you the basics and it doesn\u0026rsquo;t cover all of the security implications involved with a Kerberos implementation.\n","date":"5 February 2012","permalink":"/p/the-kerberos-haters-guide-to-installing-kerberos/","section":"Posts","summary":"As promised in my earlier post entitled Kerberos for haters, I\u0026rsquo;ve assembled the simplest possible guide to get Kerberos up and running on two CentOS 5 servers.","title":"The Kerberos-hater’s guide to installing Kerberos"},{"content":"Scientific Linux installations have a package called yum-autoupdate by default and the package contains two files:\n# rpm -ql yum-autoupdate /etc/cron.daily/yum-autoupdate /etc/sysconfig/yum-autoupdate The cron job contains the entire script to run automatic updates once a day and the configuration file controls its behavior. However, you can\u0026rsquo;t get the same functionality as Fedora\u0026rsquo;s yum-updatesd package where you can receive notifications for updates rather than automatically updating the packages.\nTo get those notifications in Scientific Linux, just make two small edits to this portion of /etc/cron.daily/yum-autoupdate:\n173 echo \u0026#34; Starting Yum with command\u0026#34; 174 echo \u0026#34; /usr/bin/yum -c $TEMPCONFIGFILE -e 0 -d 1 -y update\u0026#34; 175 fi 176 /usr/bin/yum -c $TEMPCONFIGFILE -e 0 -d 1 -y update \u0026gt; $TEMPFILE 2\u0026gt;\u0026amp;1 177 if [ -s $TEMPFILE ] ; then Adjust the update commands to look like this:\n173 echo \u0026#34; Starting Yum with command\u0026#34; 174 echo \u0026#34; /usr/bin/yum -c $TEMPCONFIGFILE -e 0 -d 1 -y check-update\u0026#34; 175 fi 176 /usr/bin/yum -c $TEMPCONFIGFILE -e 0 -d 1 -y check-update \u0026gt; $TEMPFILE 2\u0026gt;\u0026amp;1 177 if [ -s $TEMPFILE ] ; then Since you won\u0026rsquo;t be auto-updating with this script any longer, you may want to comment out the EXCLUDE= line in /etc/sysconfig/yum-autoupdate so that you\u0026rsquo;ll receive notifications for all packages with updates. Also, to avoid having your changes overwritten by a newer yum-autoupdate package later, add the package to your list of excluded packages in /etc/yum.conf.\n","date":"4 February 2012","permalink":"/p/get-notifications-instead-of-automatic-updates-in-scientific-linux/","section":"Posts","summary":"Scientific Linux installations have a package called yum-autoupdate by default and the package contains two files:","title":"Get notifications instead of automatic updates in Scientific Linux"},{"content":"I\u0026rsquo;ll be the first one to admit that Kerberos drives me a little insane. It\u0026rsquo;s a requirement for two of the exams in Red Hat\u0026rsquo;s RHCA certification track and I\u0026rsquo;ve been forced to learn it. It provides some pretty nice security features for large server environments. You get central single sign-on, encrypted authentication, and bidirectional validation. However, getting it configured can be a real pain due to some rather archaic commands and shells.\nHere\u0026rsquo;s Kerberos in a nutshell within a two-server environment: One server is a Kerberos key distribution center (KDC) and the other is a Kerberos client. The KDC has the list of users and their passwords. 
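Before walking through the flow, it helps to know the three client-side commands you\u0026rsquo;ll use to watch it happen. This is only a sketch and it assumes a realm is already configured and that a user1@EXAMPLE.COM principal exists:\nkinit user1@EXAMPLE.COM # ask the KDC for a TGT\nklist # list the tickets this login session holds\nkdestroy # throw the tickets away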
Consider a situation where a user tries to ssh into the Kerberos client:\nsshd calls to pam to authenticate the user pam calls to the KDC for a ticket granting ticket (TGT) to see if the user can authenticate the KDC replies to the client with a TGT encrypted with the user\u0026rsquo;s password pam (on the client) tries to decrypt the TGT with the password that the user provided via ssh if pam can decrypt the TGT, it knows the user is providing the right password Now that the client has a TGT for that user, it can ask for tickets to access other network services. What if the user who just logged in wants to access another Kerberized service in the environment?\nclient calls the KDC and asks for a ticket to grant access to the other service KDC replies with two copies of the ticket: one copy is encrypted with the user\u0026rsquo;s current TGT a second copy is encrypted with the password of the network service the user wants to access the client can decrypt the ticket which was encrypted with the current TGT since it has the TGT already client makes an authenticator by taking the decrypted ticket and encrypting it with a timestamp client passes the authenticator and the second copy of the ticket it received from the KDC to the other network service the other network service decrypts the second copy of the ticket and verifies the password the other network service uses the decrypted ticket to decrypt the authenticator it received from the client if the timestamp looks good, the other network service allows the user access Okay, that\u0026rsquo;s confusing. Let\u0026rsquo;s take it one step further. Enabling pre-authentication requires that clients send a request containing a timestamp encrypted with the user\u0026rsquo;s password prior to asking for a TGT. Without this requirement, an attacker can ask for a TGT one time and then brute force the TGT offline. Pre-authentication forces the client to send a timestamped request encrypted with the user\u0026rsquo;s password back to the KDC before they can ask for a TGT. This means the attacker is forced to try different passwords when encrypting the timestamp in the hopes that they\u0026rsquo;ll get a TGT to work with eventually. One would hope that you have something configured on the KDC to set off an alarm for multiple failed pre-authentication attempts.\nOh, but we can totally kick it up another notch. What if an attacker is able to give a bad password to a client but they\u0026rsquo;re also able to impersonate the KDC? They could reply to the TGT request (as the KDC) with a TGT encrypted with whichever password they choose and get access to the client system. Enabling mutual authentication stops this attack since it forces the client to ask the KDC for the client\u0026rsquo;s own host principal password (this password is set when the client is configured to talk to the KDC). The attacker shouldn\u0026rsquo;t have any clue what that password is and the attack will be thwarted.\nBy this point, you\u0026rsquo;re either saying \u0026ldquo;Oh man, I don\u0026rsquo;t ever want to do this.\u0026rdquo; or \u0026ldquo;How do I set up Kerberos?\u0026rdquo;. Stay tuned if you\u0026rsquo;re in the second group. 
I\u0026rsquo;ll have a dead simple (or as close to dead simple as one can get with Kerberos) how-to on the blog shortly.\nIn the meantime, here are a few links for extra Kerberos bedtime reading:\nKerberos on Wikipedia MIT\u0026rsquo;s \u0026ldquo;Why Kerberos\u0026rdquo; [PDF] How Kerberos Authentication Works ","date":"3 February 2012","permalink":"/p/kerberos-for-haters/","section":"Posts","summary":"I\u0026rsquo;ll be the first one to admit that Kerberos drives me a little insane.","title":"Kerberos for haters"},{"content":"Regular users of Python\u0026rsquo;s package tools like pip or easy_install are probably familiar with the PyPi repository. It\u0026rsquo;s a one-stop-shop to learn more about available Python packages and get them installed on your server.\nHowever, certain folks may find the need to host a local PyPi repository for their own packages. You may need it to store Python code which you don\u0026rsquo;t plan to release publicly or you may need to add proprietary patches to upstream Python packages. Regardless of the reason to have it, a local PyPi repository is relatively easy to configure.\nYou\u0026rsquo;ll need to start with a base directory for your PyPi repository. For this example, I chose /var/pypi. The directory structure should look something like this:\n/var/pypi/simple/[package_name]/[package_tarball] For a package like pip, you\u0026rsquo;d make a structure like this:\n/var/pypi/simple/pip/pip-1.0.2.tar.gz Once you have at least one package stored locally, it\u0026rsquo;s time to configure apache. Here\u0026rsquo;s a snippet from the virtual host I configured:\nDocumentRoot /var/pypi/ ServerName pypi.example.com Options +Indexes RewriteEngine On RewriteRule ^/robots.txt - [L] RewriteRule ^/icons/.* - [L] RewriteRule ^/index\\..* - [L] RewriteCond /var/pypi/$1 !-f RewriteCond /var/pypi/$1 !-d RewriteRule ^/(.*)/?$ http://pypi.python.org/$1 [R,L] The last set of rewrite directives check to see if the request refers to an existing file or directory under your document root. If it does, your server will reply with a directory listing or with the actual file to download. If the directory or file doesn\u0026rsquo;t exist, apache will send the client a redirection to the main PyPi site.\nReload your apache configuration to bring in your new changes. Let\u0026rsquo;s try to download the pip tarball from our local server in the example I mentioned above:\n$ curl -I http://pypi.example.com/simple/pip/ HTTP/1.1 200 OK $ curl -I http://pypi.example.com/simple/pip/pip-1.0.2.tar.gz HTTP/1.1 200 OK I\u0026rsquo;ve obviously snipped a bit of the response above, but you can see that apache is responding with 200\u0026rsquo;s since it has the directories and files that I was trying to retrieve via curl. Let\u0026rsquo;s try to get something we don\u0026rsquo;t have locally, like kombu:\n$ curl -I http://pypi.example.com/simple/kombu/ HTTP/1.1 302 Found Location: http://pypi.python.org/simple/kombu/ Our local PyPi repository doesn\u0026rsquo;t have kombu so it will refer our Python tools over to the official PyPi repository to get the listing of available package versions for kombu.\nNow we need to tell pip to use our local repository. 
Edit ~/.pip/pip.conf and add:\n[global] index-url = http://pypi.example.com/simple/ If you\u0026rsquo;d rather use easy_install, edit ~/.pydistutils.cfg and add:\n[easy_install] index_url = http://pypi.example.com/simple/ Once your tools are configured, try installing a package you have locally and try to install one that you know you won\u0026rsquo;t have locally. You can add -v to pip install to watch it retrieve different URL\u0026rsquo;s to get the packages it needs. If you spot any peculiar behavior or unexpected redirections, double-check your mod_rewrite rules in your apache configuration and check the spelling of your directories under your document root.\n","date":"1 February 2012","permalink":"/p/create-a-local-pypi-repository-using-only-mod_rewrite/","section":"Posts","summary":"Regular users of Python\u0026rsquo;s package tools like pip or easy_install are probably familiar with the PyPi repository.","title":"Create a local PyPi repository using only mod_rewrite"},{"content":"","date":null,"permalink":"/tags/mod_rewrite/","section":"Tags","summary":"","title":"Mod_rewrite"},{"content":"I used to be one of those folks who would install Fedora, CentOS, Scientific Linux, or Red Hat and disable SELinux during the installation. It always seemed like SELinux would get in my way and keep me from getting work done.\nLater on, I found that one of my servers (which I\u0026rsquo;d previously secured quite thoroughly) had some rogue processes running that were spawned through httpd. Had I actually been using SELinux in enforcing mode, those processes would have probably never even started.\nIf you\u0026rsquo;re trying to get started with SELinux but you\u0026rsquo;re not sure how to do it without completely disrupting your server\u0026rsquo;s workflow, these tips should help:\nGet some good reporting and monitoring\nTwo of the most handy SELinux tools are setroubleshoot and setroubleshoot-server. If you\u0026rsquo;re running a server without X, you can use my guide for configuring setroubleshoot-server. You will receive email alerts within seconds of an AVC denial and the emails should contain tips on how to resolve the denial if the original action should be allowed. If the AVC denial caught something you didn\u0026rsquo;t expect, you\u0026rsquo;ll know about the potential security breach almost immediately.\nStart out with SELinux in permissive mode\nIf you\u0026rsquo;re overly concerned about SELinux getting in your way, or if you\u0026rsquo;re enabling SELinux on a server that has been running without SELinux since it was installed, start out with SELinux in permissive mode. To make the change effective immediately, just run:\n# setenforce 0 # getenforce Permissive Edit /etc/sysconfig/selinux to make it persistent across reboots:\n# This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. SELINUX=permissive Adjust booleans before adding your own custom modules\nThere are a lot of booleans you can toggle to get the functionality you need without adding your own custom SELinux modules with audit2allow. 
If you wanted to see all of the applicable booleans for httpd, just use getsebool:\n# getsebool -a | grep httpd httpd_builtin_scripting --\u0026gt; on httpd_can_check_spam --\u0026gt; off httpd_can_network_connect --\u0026gt; on httpd_can_network_connect_cobbler --\u0026gt; off httpd_can_network_connect_db --\u0026gt; off httpd_can_network_memcache --\u0026gt; off httpd_can_network_relay --\u0026gt; on httpd_can_sendmail --\u0026gt; on ... and so on ... Toggling booleans is easy with togglesebool:\n# togglesebool httpd_can_network_memcache httpd_can_network_memcache: active Now httpd can talk to memcache. You can also use setsebool if you want to be specific about your setting (this is good for scripts):\n# setsebool httpd_can_network_memcache on Tracking your history of AVC denials\nAll of your AVC denials are logged by auditd in /var/log/audit/audit.log but it\u0026rsquo;s not the easiest file to read and parse. That\u0026rsquo;s where aureport comes in:\n# aureport --avc | tail -n 5 45. 01/24/2012 04:23:29 postdrop unconfined_u:system_r:httpd_t:s0 4 fifo_file getattr system_u:object_r:postfix_public_t:s0 denied 1061 46. 01/24/2012 04:23:29 postdrop unconfined_u:system_r:httpd_t:s0 2 fifo_file write system_u:object_r:postfix_public_t:s0 denied 1062 47. 01/24/2012 04:23:29 postdrop unconfined_u:system_r:httpd_t:s0 2 fifo_file open system_u:object_r:postfix_public_t:s0 denied 1062 48. 01/24/2012 14:01:58 sendmail unconfined_u:system_r:httpd_t:s0 160 process setrlimit unconfined_u:system_r:httpd_t:s0 denied 1123 49. 01/24/2012 14:01:58 postdrop unconfined_u:system_r:httpd_t:s0 4 dir search system_u:object_r:postfix_public_t:s0 denied 1124 Summary\nThere\u0026rsquo;s no need to be scared of or be annoyed by SELinux in your server environment. While it takes some getting used to (and what new software doesn\u0026rsquo;t?), you\u0026rsquo;ll have an extra layer of security and access restrictions which should let you sleep a little better at night.\n","date":"26 January 2012","permalink":"/p/getting-started-with-selinux/","section":"Posts","summary":"I used to be one of those folks who would install Fedora, CentOS, Scientific Linux, or Red Hat and disable SELinux during the installation.","title":"Getting started with SELinux"},{"content":"","date":null,"permalink":"/tags/seliux/","section":"Tags","summary":"","title":"Seliux"},{"content":"","date":null,"permalink":"/tags/mdadm/","section":"Tags","summary":"","title":"Mdadm"},{"content":"","date":null,"permalink":"/tags/raid/","section":"Tags","summary":"","title":"Raid"},{"content":"Although Citrix recommends against using software RAID with XenServer due to performance issues, I\u0026rsquo;ve had some pretty awful experiences with hardware RAID cards over the last few years. In addition, the price of software RAID makes it a very desirable solution.\nBefore you get started, go through the steps to disable GPT. That post also explains an optional adjustment to get a larger root partition (which I would recommend). 
You cannot complete the steps in this post if your XenServer installation uses GPT.\nYou should have three partitions on your first disk after the installation:\n# fdisk -l /dev/sda -- SNIP -- Device Boot Start End Blocks Id System /dev/sda1 * 1 2611 20971520 83 Linux /dev/sda2 2611 5222 20971520 83 Linux /dev/sda3 5222 19457 114345281 8e Linux LVM Here\u0026rsquo;s a quick explanation of your partitions:\n/dev/sda1: the XenServer root partition /dev/sda2: XenServer uses this partition for temporary space during upgrades /dev/sda3: your storage repository lives on this partition (as an LVM physical volume) We need to replicate the same partition structure across each of your drives and the software RAID volume will span across the third partition on each disk. Copying the partition structure from disk to disk is done easily with sfdisk:\nWHOA THERE! NO TURNING BACK! This step is destructive! If your other disks have any data on them, this step will make it (relatively) impossible to retrieve data on those disks again. Back up any data on the other disks in your XenServer machine before running these next commands.\nsfdisk -d /dev/sda | sfdisk --force /dev/sdb sfdisk -d /dev/sda | sfdisk --force /dev/sdc sfdisk -d /dev/sda | sfdisk --force /dev/sdd If you have only two disks, stop with /dev/sdb and you\u0026rsquo;ll be making a RAID 1 array. My machine has four disks and I\u0026rsquo;ll be making a RAID 10 array.\nWe need to destroy the main storage repository, but we need to unplug the physical block device first. Get the storage repository uuid, then use it to find the corresponding physical block device. Once the physical block device is unplugged, the storage repository can be destroyed:\n# xe sr-list name-label=Local\\ storage | head -1 uuid ( RO) : 75264965-f981-749e-0f9a-e32856c46361 # xe pbd-list sr-uuid=75264965-f981-749e-0f9a-e32856c46361 | head -1 uuid ( RO) : ff7e9656-c27c-1889-7a6d-687a561f0ad0 # xe pbd-unplug uuid=ff7e9656-c27c-1889-7a6d-687a561f0ad0 # xe sr-destroy uuid=75264965-f981-749e-0f9a-e32856c46361 All of the LVM data from /dev/sda3 should now be gone:\n# lvdisplay \u0026amp;\u0026amp; vgdisplay \u0026amp;\u0026amp; pvdisplay # Change the third partition on each physical disk to be a software RAID partition type:\necho -e \u0026#34;t\\n3\\nfd\\nw\\n\u0026#34; | fdisk /dev/sda echo -e \u0026#34;t\\n3\\nfd\\nw\\n\u0026#34; | fdisk /dev/sdb echo -e \u0026#34;t\\n3\\nfd\\nw\\n\u0026#34; | fdisk /dev/sdc echo -e \u0026#34;t\\n3\\nfd\\nw\\n\u0026#34; | fdisk /dev/sdd Stop here and reboot your XenServer box to pick up the new partition changes. Once the server comes back from the reboot, start up a software RAID volume with mdadm:\n// RAID 1 for two drives mdadm --create /dev/md0 -l 1 -n 2 /dev/sda3 /dev/sdb3 // RAID 10 for four drives mdadm --create /dev/md0 -l 10 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 Check to see that your RAID array is building:\n# cat /proc/mdstat Personalities : [raid10] md0 : active raid10 sdd3[3] sdc3[2] sdb3[1] sda3[0] 228690432 blocks 64K chunks 2 near-copies [4/4] [UUUU] [\u0026gt;....................] resync = 0.3% (694272/228690432) finish=16.4min speed=231424K/sec Although you don\u0026rsquo;t have to wait for the resync to complete, just be aware that XenServer doesn\u0026rsquo;t do well with a lot of disk I/O within dom0. You may notice unusually slow performance in dom0 until it finishes. 
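If you want to keep an eye on the rebuild without re-running that command by hand, a quick sketch like this works from the dom0 console (it assumes the watch utility is present, which it normally is in the CentOS-based dom0):\n# refresh the mdstat output every ten seconds until the resync finishes\nwatch -n 10 cat /proc/mdstat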
Save the array\u0026rsquo;s configuration in /etc/mdadm.conf so that it persists across reboots (mdadm --detail --scan will print the ARRAY line for you). Edit the /etc/mdadm.conf file and append auto=yes to the end of the line (but leave everything on one line):\nARRAY /dev/md0 level=raid10 num-devices=4 metadata=0.90 \\ UUID=2876748c:5117eed5:ce4d62d3:9592bd84 auto=yes Create a new storage repository on the RAID volume with thin provisioning (thanks to Spherical Chicken for the command):\nxe sr-create content-type=user type=ext device-config:device=/dev/md0 shared=false name-label=\u0026#34;Local storage\u0026#34; This command takes some time to complete since it makes logical volumes and then makes an ext3 filesystem for the new storage repository. Bigger RAID arrays will take more time and it\u0026rsquo;s guaranteed to take longer than you\u0026rsquo;d expect if your RAID array is still building. As soon as it completes, you\u0026rsquo;ll be given the uuid of your new storage repository and it should appear within the XenCenter interface.\nTIP: If you run into any problems during reboots, open /boot/extlinux.conf and remove splash and quiet from the label xe boot section. This removes the framebuffer during boot-up and it causes a lot more output to be printed to the console. It won\u0026rsquo;t affect the display once your XenServer box has fully booted.\n","date":"16 January 2012","permalink":"/p/xenserver-6-storage-repository-on-software-raid/","section":"Posts","summary":"Although Citrix recommends against using software RAID with XenServer due to performance issues, I\u0026rsquo;ve had some pretty awful experiences with hardware RAID cards over the last few years.","title":"XenServer 6: Storage repository on software RAID"},{"content":"XenServer 6 is a solid virtualization platform, but the installer doesn\u0026rsquo;t give you many options for customized configurations. By default, it installs with a 4GB root partition and uses GUID Partition Tables (GPT). GPT is new in XenServer 6.\nI\u0026rsquo;d rather use MBR partition tables and get a larger root partition. If you want to make these adjustments in your XenServer 6 installation, follow these steps after booting into the XenServer 6 install disc:\nWhen the installer initially boots, press F2 to access the advanced installation options.\nType shell and press enter. The installer should begin booting into a pre-installation shell where you can make your adjustments.\nOnce you\u0026rsquo;ve booted into the pre-installation shell, type vi /opt/xensource/installer/constants.py and press enter.\nChange GPT_SUPPORT = True to GPT_SUPPORT = False to disable GPT and use MBR partition tables. Adjust the value of root_size from 4096 (the default) to a larger number to get a bigger root partition. The size is specified in MB, so 4096 is 4GB. 
Save the file and exit vim.\nType exit and the installer should start.\nOnce the installation is complete, you should have a bigger root partition on an MBR partition table:\n# df -h / Filesystem Size Used Avail Use% Mounted on /dev/sda1 20G 1.8G 17G 10% / # fdisk -l /dev/sda Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 2611 20971520 83 Linux /dev/sda2 2611 5222 20971520 83 Linux /dev/sda3 5222 19457 114345281 8e Linux LVM ","date":"13 January 2012","permalink":"/p/xenserver-6-disable-gpt-and-get-a-larger-root-partition/","section":"Posts","summary":"XenServer 6 is a solid virtualization platform, but the installer doesn\u0026rsquo;t give you many options for customized configurations.","title":"XenServer 6: Disable GPT and get a larger root partition"},{"content":"It\u0026rsquo;s no secret that I\u0026rsquo;m a big fan of the Routerboard devices and the RouterOS software from Mikrotik that runs on them. The hardware is solid, the software is stable and feature-rich, and I found a great vendor that ships quickly.\nI recently added a RB493G (~ $230 USD) to sit in front of a pair of colocated servers. The majority of the setup routine was the same as with my previous devices except for the IPv6 configuration.\nIn the past, I\u0026rsquo;ve set up IPv6 tunnels with Hurricane Electric and it\u0026rsquo;s been mostly a cut-and-paste operation from the sample configuration in their IPv6 tunnel portal. Setting up native IPv6 involved a little more legwork.\nIf your provider will give you two /64\u0026rsquo;s or an entire /48, getting IPv6 connectivity for your WAN/LAN interfaces is simple. However, if you can only get one /64, you\u0026rsquo;ll have to see if your provider can route it to you via your Mikrotik\u0026rsquo;s link local interface (I wouldn\u0026rsquo;t recommend this for many reasons).\nI split my Mikrotik into two interfaces: wan and lanbridge. The lanbridge bridge joins all of the LAN ethernet ports (ether2-9 on the RB493G) and the wan interface connects to the upstream switch.\nMy configuration:\n/ipv6 address add address=2001:DB8:0:1::2/64 advertise=yes disabled=no eui-64=no interface=wan add address=2001:DB8:0:2::1/64 advertise=yes disabled=no eui-64=no interface=lanbridge /ipv6 route add disabled=no distance=1 dst-address=::/0 gateway=2001:DB8:0:1::1 scope=30 \\ target-scope=10 /ipv6 nd add advertise-dns=no advertise-mac-address=yes disabled=no hop-limit=64 \\ interface=all managed-address-configuration=no mtu=unspecified \\ other-configuration=no ra-delay=3s ra-interval=3m20s-10m ra-lifetime=30m \\ reachable-time=unspecified retransmit-interval=unspecified /ipv6 nd prefix default set autonomous=yes preferred-lifetime=1w valid-lifetime=4w2d Explanation:\n/ipv6 address add address=2001:DB8:0:1::2/64 advertise=yes disabled=no eui-64=no interface=wan add address=2001:DB8:0:2::1/64 advertise=yes disabled=no eui-64=no interface=lanbridge These two lines configure the IPv6 addresses for the firewall\u0026rsquo;s interfaces. My provider\u0026rsquo;s router holds the 2001:DB8:0:1::1/64 address and routes the remainder of that /64 to me via 2001:DB8:0:1::2/64. The second /64 is on the lanbridge interface and my LAN devices take their IP addresses from that block. 
My provider routes that second /64 to me via the 2001:DB8:0:1::2/64 IP on my wan interface.\n/ipv6 route add disabled=no distance=1 dst-address=::/0 gateway=2001:DB8:0:1::1 scope=30 \\ target-scope=10 I\u0026rsquo;ve set a gateway for IPv6 traffic so that the Mikrotik knows where to send internet-bound IPv6 traffic (in this case, to my ISP\u0026rsquo;s core router).\n/ipv6 nd add advertise-dns=no advertise-mac-address=yes disabled=no hop-limit=64 \\ interface=lanbridge managed-address-configuration=no mtu=unspecified \\ other-configuration=no ra-delay=3s ra-interval=3m20s-10m ra-lifetime=30m \\ reachable-time=unspecified retransmit-interval=unspecified /ipv6 nd prefix default set autonomous=yes preferred-lifetime=1w valid-lifetime=4w2d These last two lines configure the neighbor discovery on my lanbridge interface. This allows my LAN devices to do stateless autoconfiguration (which gives them an IPv6 address as well as the gateway).\nWant to read up on IPv6?\nLinux IPv6 HOWTO IPv6 on Wikipedia IPv6 Cheat Sheet [PDF] IPv6 Subnetting Card ","date":"11 January 2012","permalink":"/p/native-ipv6-connectivity-in-mikrotiks-routeros/","section":"Posts","summary":"It\u0026rsquo;s no secret that I\u0026rsquo;m a big fan of the Routerboard devices and the RouterOS software from Mikrotik that runs on them.","title":"Native IPv6 connectivity in Mikrotik’s RouterOS"},{"content":"If you want to forward e-mail from root to another user, you can usually place a .forward file in root\u0026rsquo;s home directory and your mail server will take care of the rest:\n/root/.forward With SELinux, you\u0026rsquo;ll end up getting an AVC denial each time your mail server tries to read the contents of the .forward file:\ntype=AVC msg=audit(1325543823.787:7416): avc: denied { open } for pid=9850 comm=\u0026#34;local\u0026#34; name=\u0026#34;.forward\u0026#34; dev=md0 ino=17694734 scontext=system_u:system_r:postfix_local_t:s0 tcontext=unconfined_u:object_r:admin_home_t:s0 tclass=file The reason is that your .forward file doesn\u0026rsquo;t have the right SELinux contexts. You can set the correct context quickly with restorecon:\n# ls -Z /root/.forward -rw-r--r--. root root unconfined_u:object_r:admin_home_t:s0 /root/.forward # restorecon -v /root/.forward restorecon reset /root/.forward context unconfined_u:object_r:admin_home_t:s0-\u0026gt;system_u:object_r:mail_forward_t:s0 # ls -Z /root/.forward -rw-r--r--. root root system_u:object_r:mail_home_t:s0 /root/.forward Try to send another e-mail to root and you should see the mail server forward the e-mail properly without any additional AVC denials.\n","date":"2 January 2012","permalink":"/p/selinux-and-forward-files/","section":"Posts","summary":"If you want to forward e-mail from root to another user, you can usually place a .","title":"SELinux and .forward files"},{"content":"Anyone who has used a 3G ExpressCard or USB stick knows how handy they can be when you need internet access away from home (and away from Wi-Fi). I\u0026rsquo;ve run into some situations recently where I needed to share my 3G connection with more than one device without using internet sharing on my MacBook Pro.\nThat led me to pick up a CradlePoint PHS-300 (discontinued by the manufacturer, but available from Amazon for about $35). It\u0026rsquo;s compatible with my AT\u0026amp;T USBConnect Mercury (a.k.a. 
Sierra Wireless Compass 885/885U) USB stick.\nConfiguring the PHS-300 was extremely easy since I could just associate with the wireless network and enter the password printed on the bottom of the unit. However, getting the 3G stick to work was an immense pain. If you\u0026rsquo;re trying to pair up these products, these steps should help:\nAccess the PHS-300\u0026rsquo;s web interface Click the Modem tab Click Settings on the left Click Always on under Reconnect Mode Uncheck Aggressive Modem Reset Put the following into the AT Dial Script text box: ATE0V1\u0026amp;F\u0026amp;D2\u0026amp;C1S0=0 ATDT*99***1# Add ISP.CINGULAR to the Access Point Name (APN) box Flip the Connect Mode under Dual WiMAX/3G Settings to 3G Only Scroll up and push Save Settings and then Reboot Now Once the PHS-300 reboots, the USB stick may light up, then turn off, and the display on the PHS-300 might show a red light for the 3G card. Wait about 10-15 seconds for the light to turn green. The lights on the 3G stick should be glowing and blinking as well.\nSo how did I figure this out?\nAfter scouring Google search results, Sierra Wireless FAQ\u0026rsquo;s, CradlePoint\u0026rsquo;s support pages, and trolling through minicom (yes, minicom), I thought I\u0026rsquo;d try connecting with my MacBook Pro using the 3G Watcher application provided by Sierra Wireless. Before connecting, I opened up Console.app and watched the ppp.log file. Sure enough, two lines popped up that were quite relevant to my interests:\nFri Dec 16 00:37:51 2011 : Initializing phone: ATE0V1\u0026amp;F\u0026amp;D2\u0026amp;C1S0=0 Fri Dec 16 00:37:51 2011 : Dialing: ATDT*99***1# I didn\u0026rsquo;t have the exact initialization string in the PHS-300 and that was the cause of the failure the entire time.\nIf you\u0026rsquo;d like to talk to your USBConnect Mercury stick with minicom, just install minicom from macports (sudo port -v install minicom) and start it up like so:\nsudo minicom -D /dev/cu.sierra04 For other Sierra Wireless cards and adapters, there\u0026rsquo;s a helpful page on Sierra Wireless\u0026rsquo; site for Eee PC users.\n","date":"16 December 2011","permalink":"/p/getting-online-with-a-cradlepoint-phs-300-and-an-att-usbconnect-mercury/","section":"Posts","summary":"Anyone who has used a 3G ExpressCard or USB stick knows how handy they can be when you need internet access away from home (and away from Wi-Fi).","title":"Getting online with a CradlePoint PHS-300 and an AT\u0026T USBConnect Mercury"},{"content":"When you install Scientific Linux, it will keep you on the same point release that you installed. For example, if you install it from a 6.0 DVD, you\u0026rsquo;ll stay on 6.0 and get security releases for that point release only.\nGetting it to behave like Red Hat Enterprise Linux and CentOS is a painless process. 
Just install the sl6x repository with yum:\nyum install yum-conf-sl6x Check to ensure that you\u0026rsquo;re getting updates from the new repository:\n# yum repolist repo id repo name status sl Scientific Linux 6.1 - x86_64 6,251 sl-security Scientific Linux 6.1 - x86_64 - security updates 548 sl6x Scientific Linux 6x - x86_64 6,251 sl6x-security Scientific Linux 6x - x86_64 - security updates 548 repolist: 13,598 ","date":"23 November 2011","permalink":"/p/automatically-upgrading-to-new-point-releases-of-scientific-linux/","section":"Posts","summary":"When you install Scientific Linux, it will keep you on the same point release that you installed.","title":"Automatically upgrading to new point releases of Scientific Linux"},{"content":"I added a DisplayLink USB to DVI adapter to my MacBook Pro a while back and it occasionally has some issues where it won\u0026rsquo;t start the display after connecting the USB cable. My logs in Console.app usually contain something like this:\nThe IOUSBFamily is having trouble enumerating a USB device that has been plugged in. It will keep retrying. (Port 4 of Hub at 0xfa100000) The IOUSBFamily was not able to enumerate a device. The IOUSBFamily is having trouble enumerating a USB device that has been plugged in. It will keep retrying. (Port 4 of Hub at 0xfa100000) The IOUSBFamily was not able to enumerate a device. The IOUSBFamily is having trouble enumerating a USB device that has been plugged in. It will keep retrying. (Port 4 of Hub at 0xfa100000) The IOUSBFamily gave up enumerating a USB device after 10 retries. (Port 4 of Hub at 0xfa100000) The IOUSBFamily was not able to enumerate a device. The solution is a bit goofy, but here\u0026rsquo;s what you can do:\nUnplug the adapter from the USB port. Disconnect the DVI cable from the DisplayLink adapter. Power off the display you normally use with the adapter. Connect the USB cable between your computer and the DisplayLink adapter. Wait for your displays to flash (as if a new display was connected). The light on your DisplayLink adapter should be on now. Connect the DVI cable to the DisplayLink adapter. Wait a few seconds and then power on the display connected to the adapter. If this process doesn\u0026rsquo;t work, try a reboot and repeat the process once Finder finishes starting up.\n","date":"17 November 2011","permalink":"/p/displaylink-usb-to-dvi-issues-in-os-x-lion/","section":"Posts","summary":"I added a DisplayLink USB to DVI adapter to my MacBook Pro a while back and it occasionally has some issues where it won\u0026rsquo;t start the display after connecting the USB cable.","title":"DisplayLink USB to DVI issues in OS X Lion"},{"content":"Before we get started, I really ought to drop this here:\nUpgrading Fedora via yum is not the recommended method. Your first choice for upgrading Fedora should be to use preupgrade. Seriously. This begs the question: When should you use another method to upgrade Fedora? What other methods are there?\nYou have a few other methods to get the upgrade done:\nToss in a CD or DVD: You can upgrade via the anaconda installer provided on the CD, DVD or netinstall media. My experiences with this method for Fedora (as well as CentOS, Scientific Linux, and Red Hat) haven\u0026rsquo;t been too positive, but your results may vary. Download the newer release\u0026rsquo;s fedora-release RPM, install it with rpm, and yum upgrade: This is the really old way of doing things. Don\u0026rsquo;t try this (read the next bullet). 
Use yum\u0026rsquo;s distro-sync functionality: If you can\u0026rsquo;t go the preupgrade route, I\u0026rsquo;d recommend giving this a try. However, leave plenty of time to fix small glitches after it\u0026rsquo;s done (and after your first reboot). Personal anecdote time (Keep scrolling for the meat and potatoes)\nI have a dedicated server at Joe\u0026rsquo;s Datacenter (love those folks) with IPMI and KVM-over-LAN access. The preupgrade method won\u0026rsquo;t work for me because my /boot partition is on a software RAID volume. There\u0026rsquo;s a rat\u0026rsquo;s nest of a Bugzilla ticket over on Red Hat\u0026rsquo;s site about this problem. I\u0026rsquo;m really only left with a live upgrade using yum.\nLive yum upgrade process\nBefore even beginning the upgrade, I double-checked that I\u0026rsquo;d applied all of the available updates for my server. Once that was done, I realized I was one kernel revision behind and I rebooted to ensure I was running the latest Fedora 15 kernel.\nA good practice here is to run package-cleanup --orphans (it\u0026rsquo;s in the yum-utils package) to find any packages which don\u0026rsquo;t exist on any Fedora mirrors. In my case, I had two old kernels and a JungleDisk package. I removed the two old kernels (probably wasn\u0026rsquo;t necessary) and left JungleDisk alone (it worked fine after the upgrade). If you have any external repositories, such as Livna or RPMForge, you may want to disable those until the upgrade is done. Should the initial upgrade checks bomb out, try adding as few repositories back in as possible to see if it clears up the problem.\nOnce you make it this far, just follow the instructions available in Fedora\u0026rsquo;s documentation: Upgrading Fedora using yum. I set SELinux to permissive mode during the upgrade just in case it caused problems.\nI\u0026rsquo;d recommend skipping the grub2-install portion since your original grub installation will still be present after the upgrade. If your server has EFI (not BIOS), don\u0026rsquo;t use grub2 yet. Keep an eye on the previously mentioned documentation page to see if the problems get ironed out between grub2 and EFI.\nBefore you reboot, be sure to get a list of your active processes and daemons. After your reboot, some old SysVinit scripts will be converted into Systemd service scripts. They might not start automatically and you might need to enable and/or start some services.\nNew to Systemd? This will be an extremely handy resource: SysVinit to Systemd Cheatsheet.\nI haven\u0026rsquo;t seen too many issues after cleaning up some daemons that didn\u0026rsquo;t start properly. There is a problem between asterisk and SELinux that I haven\u0026rsquo;t nailed down yet but it\u0026rsquo;s not a showstopper.\nGood luck during your upgrades. Keep in mind that Fedora 15 could be EOL\u0026rsquo;d as early as May or June 2012 when Fedora 17 is released.\n","date":"15 November 2011","permalink":"/p/live-upgrading-fedora-15-to-fedora-16-using-yum/","section":"Posts","summary":"Before we get started, I really ought to drop this here:","title":"Live upgrade Fedora 15 to Fedora 16 using yum"},{"content":"","date":null,"permalink":"/tags/nova/","section":"Tags","summary":"","title":"Nova"},{"content":" My work at Rackspace has changed a bit in the last few weeks and I\u0026rsquo;ve shifted from managing a team of engineers to a full technical focus on OpenStack Nova. 
Although it was difficult to leave my management position, I\u0026rsquo;m happy to get back to my roots and dig into the technical stuff again.\nOne of the first things I wanted to tackle was understanding how a build request flows through Nova to a XenServer hypervisor. Following this process through the code is a bit tricky (I\u0026rsquo;m still learning python, so that could explain it). Here are the basic steps:\nClient requests a build via the API. The API runs some checks (quotas, auth, etc) and hands the build off to the scheduler. The scheduler figures out where the instance should go. The scheduler drops a message in queue specific to one compute node (where the instance will be built). The API responds to the client and the client is now unblocked and free to do other things. The compute node updates the database with the instance details and calls to the hypervisor to assemble block devices for the instance. A message is dropped into the network node\u0026rsquo;s queue (from the compute node) to begin assembling networks for the instance. The compute node blocks and waits while this step completes. Once the networking details come back (via the queue), the compute node does the remaining adjustments on the hypervisor and starts up the actual instance. When the instance starts successfully (or fails to do so), the database is updated and a message is dropped onto another message queue as a notification that the build is complete. Click on the thumbnail on the right to see the flow chart I created to explain this process.\nPlease note: This information should be accurate to the Nova code as of November 1, 2011. There could be some refactoring of these build processes before Essex is released.\n","date":"7 November 2011","permalink":"/p/tracing-a-build-through-openstack-compute-nova/","section":"Posts","summary":"My work at Rackspace has changed a bit in the last few weeks and I\u0026rsquo;ve shifted from managing a team of engineers to a full technical focus on OpenStack Nova.","title":"Tracing a build through OpenStack Compute (Nova)"},{"content":"I\u0026rsquo;ve floated back and forth between graphical IRC clients and terminal-based clients for a long time. However, I was sad to see that irssi wouldn\u0026rsquo;t build via MacPorts on OS X Lion. During the build, I saw quite a few errors from the compiler:\n-E, -S, -save-temps and -M options are not allowed with multiple -arch flags Sure enough, when I looked at the lines in the output, both x86_64 and i386 were passed to the compiler:\n... -pipe -O2 -arch x86_64 -arch i386 -fno-common ... I opened a ticket in trac and began looking for a workaround. Another trac ticket (from four years ago) on the MacPorts site gave some pointers on how to work around the bug for a previous version.\nI changed up the instructions a bit since we\u0026rsquo;re not dealing with the ppc architecture any longer:\nsudo port -v clean irssi +perl sudo port -v configure irssi +perl cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_irc_irssi/irssi/work/ sudo find . 
-type f -exec sed -i \u0026#34;\u0026#34; -e \u0026#34;s/-arch i386//g\u0026#34; {} \\; cd sudo port -v install irssi +perl The build worked!\n$ irssi -v irssi 0.8.15 (20100403 1617) ","date":"30 September 2011","permalink":"/p/installing-irssi-via-macports-on-os-x-lion-10-7-1/","section":"Posts","summary":"I\u0026rsquo;ve floated back and forth between graphical IRC clients and terminal-based clients for a long time.","title":"Installing irssi via MacPorts on OS X Lion 10.7.1"},{"content":"","date":null,"permalink":"/tags/irssi/","section":"Tags","summary":"","title":"Irssi"},{"content":"","date":null,"permalink":"/tags/macports/","section":"Tags","summary":"","title":"Macports"},{"content":"Fedora 15 was released with some updates to allow for consistent network device names. Once it\u0026rsquo;s installed, you\u0026rsquo;ll end up with network devices that are named something other than eth0, eth1, and so on.\nFor example, all onboard ethernet adapters are labeled as emX (em1, em2…) and all PCI ethernet adapters are labeled as pXpX (p[slot]p[port], like p7p1 for port 1 on slot 7). Ethernet devices within Xen virtual machines aren\u0026rsquo;t adjusted.\nThis may make sense to people who swap out the chassis on servers regularly and they don\u0026rsquo;t want to mess with hard-coding MAC addresses in network configuration files. Also, it should give users predictable names even if a running system\u0026rsquo;s drives are inserted into a newer hardware revision of the same server.\nHowever, I don\u0026rsquo;t like this on my personal dedicated servers and I prefer to revert back to the old way of doing things. Getting back to eth0 is pretty simple and it only requires a few configuration files to be edited followed by a reboot.\nFirst, add biosdevname=0 to your grub.conf on the kernel line:\ntitle Fedora (2.6.40.4-5.fc15.x86_64) root (hd0,0) kernel /boot/vmlinuz-2.6.40.4-5.fc15.x86_64 ro root=/dev/md0 SYSFONT=latarcyrheb-sun16 KEYTABLE=us biosdevname=0 quiet LANG=en_US.UTF-8 initrd /boot/initramfs-2.6.40.4-5.fc15.x86_64.img Open /etc/udev/rules.d/70-persistent-net.rules in your favorite text editor (create it if it doesn\u0026rsquo;t exist) and add in the following:\n# Be sure to put your MAC addresses in the fields below SUBSYSTEM==\u0026#34;net\u0026#34;, ACTION==\u0026#34;add\u0026#34;, DRIVERS==\u0026#34;?*\u0026#34;, ATTR{address}==\u0026#34;00:11:22:33:44:10\u0026#34;, ATTR{dev_id}==\u0026#34;0x0\u0026#34;, ATTR{type}==\u0026#34;1\u0026#34;, KERNEL==\u0026#34;eth*\u0026#34;, NAME=\u0026#34;eth0\u0026#34; SUBSYSTEM==\u0026#34;net\u0026#34;, ACTION==\u0026#34;add\u0026#34;, DRIVERS==\u0026#34;?*\u0026#34;, ATTR{address}==\u0026#34;00:11:22:33:44:11\u0026#34;, ATTR{dev_id}==\u0026#34;0x0\u0026#34;, ATTR{type}==\u0026#34;1\u0026#34;, KERNEL==\u0026#34;eth*\u0026#34;, NAME=\u0026#34;eth1\u0026#34; Be sure to rename your ifcfg-* files in /etc/sysconfig/network-scripts/ to match the device names you\u0026rsquo;ve assigned. Just for good measure, I add in the MAC address in /etc/sysconfig/network-scripts/ifcfg-ethX:\n... HWADDR=00:11:22:33:44:10 ... 
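\nIf it helps, a complete ifcfg-eth0 for a static setup might look something like the sketch below; the MAC address and IP values here are just placeholders, so swap in your own:\nDEVICE=eth0 HWADDR=00:11:22:33:44:10 BOOTPROTO=none IPADDR=192.168.1.10 NETMASK=255.255.255.0 GATEWAY=192.168.1.1 ONBOOT=yes\n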
Reboot the server and you should be back to eth0 and eth1 after a reboot.\n","date":"25 September 2011","permalink":"/p/getting-back-to-using-eth0-in-fedora-15/","section":"Posts","summary":"Fedora 15 was released with some updates to allow for consistent network device names.","title":"Getting back to using eth0 in Fedora 15"},{"content":"","date":null,"permalink":"/tags/messagebus/","section":"Tags","summary":"","title":"Messagebus"},{"content":"SELinux isn\u0026rsquo;t a technology that\u0026rsquo;s easy to tackle for newcomers. However, there\u0026rsquo;s been a lot of work to smooth out the rough edges while still keeping a tight grip on what applications and users are allowed to do on a Linux system. One of the biggest efforts has been around setroubleshoot.\nThe purpose behind setroubleshoot is to let users know when access has been denied, to help them resolve it if necessary, and to reduce overall frustration while working through tight security restrictions in the default SELinux policies. The GUI frontend for setroubleshoot is great for users who run Linux desktops or those who run servers with a display attached. Don\u0026rsquo;t worry, you can configure setroubleshoot on remote servers to send alerts elsewhere when a GUI alert isn\u0026rsquo;t an option.\nInstall a few packages to get started:\nyum install setroubleshoot{-server,-plugins,-doc} Open /etc/setroubleshoot/setroubleshoot.conf in your favorite text editor and adjust the [email] section to fit your server:\nrecipients_filepath = /var/lib/setroubleshoot/email_alert_recipients smtp_port = 25 smtp_host = localhost from_address = selinux@myserver.com subject = [MyServer] SELinux AVC Alert You could probably see it coming, but you need to put the e-mail addresses for your recipients into /var/lib/setroubleshoot/email_alert_recipients:\necho \u0026#34;selinux@mycompany.com\u0026#34; \u0026gt;\u0026gt; /var/lib/setroubleshoot/email_alert_recipients You\u0026rsquo;ll notice that setroubleshoot doesn\u0026rsquo;t have an init script and it doesn\u0026rsquo;t exist in systemd in Fedora 15. It runs through the dbus-daemon and a quick bounce of the messagebus via its init script brings in the necessary components to run setroubleshoot:\nservice messagebus restart A really easy (and safe) test is to ask sshd to bind to a non-standard port. Simply define an additional port in your /etc/ssh/sshd_config like this:\nPort 22 Port 222 When you restart sshd, it will bind to port 22 with success, but it won\u0026rsquo;t be allowed to bind to port 222 (since that\u0026rsquo;s blocked by SELinux as a non-standard port for the ssh_port_t port type). DON\u0026rsquo;T WORRY! Your sshd server will still be listening on port 22. If you wait a moment, you\u0026rsquo;ll get an e-mail (perhaps two) that not only notifies you of the denial, but also makes suggestions for how to fix it:\nSELinux is preventing /usr/sbin/sshd from name_bind access on the tcp_socket port 222. ***** Plugin bind_ports (99.5 confidence) suggests ************************* If you want to allow /usr/sbin/sshd to bind to network port 222 Then you need to modify the port type. Do # semanage port -a -t PORT_TYPE -p tcp 222 where PORT_TYPE is one of the following: ... 
For this particular example, the quick fix would be to run:\nsemanage port -a -t ssh_port_t -p tcp 222 Much of this post\u0026rsquo;s information was gathered from the detailed documentation on Fedora\u0026rsquo;s setroubleshoot User\u0026rsquo;s FAQ as well as Dan Walsh\u0026rsquo;s setroubleshoot blog post.\n","date":"16 September 2011","permalink":"/p/receive-e-mail-reports-for-selinux-avc-denials/","section":"Posts","summary":"SELinux isn\u0026rsquo;t a technology that\u0026rsquo;s easy to tackle for newcomers.","title":"Receive e-mail reports for SELinux AVC denials"},{"content":"","date":null,"permalink":"/tags/server/","section":"Tags","summary":"","title":"Server"},{"content":" I\u0026rsquo;m using SELinux more often now on my Fedora 15 installations and I came up against a peculiar issue today on a new server. My PHP installation is configured to store its sessions in memcached and I brought over some working configurations from another server. However, each time I accessed a page which tried to initiate a session, the page load would hang for about a minute and I\u0026rsquo;d find this in my apache error logs:\n[Thu Sep 08 03:23:40 2011] [error] [client 11.22.33.44] PHP Warning: Unknown: Failed to write session data (memcached). Please verify that the current setting of session.save_path is correct (127.0.0.1:11211) in Unknown on line 0 I ran through my usual list of checks:\nnetstat showed memcached bound to the correct ports/interfaces memcached was running and I could reach it via telnet memcached-tool could connect and pull stats from memcached double-checked my php.ini tested memcached connectivity via a PHP and ruby script - they worked Even after all that, I still couldn\u0026rsquo;t figure out what was wrong. I ran strace on memcached while I ran a curl against the page which creates a session and I found something significant - memcached wasn\u0026rsquo;t seeing any connections whatsoever at that time. A quick check of the lo interface with tcpdump showed the same result. Just before I threw a chair, I remembered one thing:\nSELinux.\nA quick check for AVC denials showed the problem:\n# aureport --avc | tail -n 1 4021. 09/08/2011 03:23:38 httpd system_u:system_r:httpd_t:s0 42 tcp_socket name_connect system_u:object_r:memcache_port_t:s0 denied 31536 I\u0026rsquo;m far from being a guru on SELinux, so I leaned on audit2allow for help:\n# grep memcache /var/log/audit/audit.log | audit2allow #============= httpd_t ============== #!!!! This avc can be allowed using one of the these booleans: # httpd_can_network_relay, httpd_can_network_memcache, httpd_can_network_connect allow httpd_t memcache_port_t:tcp_socket name_connect; The boolean we\u0026rsquo;re looking for is httpd_can_network_memcache. Flipping the boolean can be done in a snap:\n# setsebool -P httpd_can_network_memcache 1 # getsebool httpd_can_network_memcache httpd_can_network_memcache --\u0026gt; on After adjusting the boolean, apache was able to make connections to memcached without a hitch. My page which created sessions loaded quickly and I could see data being stored in memcached. 
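\nTo double-check from the command line that sessions really are landing in memcached, something like this should show the item counters climbing as pages create sessions (assuming memcached is still listening on 127.0.0.1:11211 as above):\nmemcached-tool 127.0.0.1:11211 stats | grep -E \u0026#39;curr_items|total_items\u0026#39;\n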
If you want to check the status of all of the apache-related SELinux booleans, just use getsebool:\n# getsebool -a | grep httpd | grep off$ allow_httpd_anon_write --\u0026gt; off allow_httpd_mod_auth_ntlm_winbind --\u0026gt; off allow_httpd_mod_auth_pam --\u0026gt; off allow_httpd_sys_script_anon_write --\u0026gt; off httpd_can_check_spam --\u0026gt; off httpd_can_network_connect_cobbler --\u0026gt; off httpd_can_network_connect_db --\u0026gt; off httpd_can_network_relay --\u0026gt; off httpd_can_sendmail --\u0026gt; off httpd_dbus_avahi --\u0026gt; off httpd_enable_ftp_server --\u0026gt; off httpd_enable_homedirs --\u0026gt; off httpd_execmem --\u0026gt; off httpd_read_user_content --\u0026gt; off httpd_setrlimit --\u0026gt; off httpd_ssi_exec --\u0026gt; off httpd_tmp_exec --\u0026gt; off httpd_unified --\u0026gt; off httpd_use_cifs --\u0026gt; off httpd_use_gpg --\u0026gt; off httpd_use_nfs --\u0026gt; off If you\u0026rsquo;re interested in SELinux, a good way to get your feet wet is to head over to the CentOS Wiki and review their SELinux Howtos\n","date":"8 September 2011","permalink":"/p/getting-apache-php-and-memcached-working-with-selinux/","section":"Posts","summary":"I\u0026rsquo;m using SELinux more often now on my Fedora 15 installations and I came up against a peculiar issue today on a new server.","title":"Getting apache, PHP, and memcached working with SELinux"},{"content":"","date":null,"permalink":"/tags/memcached/","section":"Tags","summary":"","title":"Memcached"},{"content":"Standard e-mail etiquette is pretty obvious to most of us and if you\u0026rsquo;re good at it, you\u0026rsquo;ll get your point across more often without stepping on toes or causing unneeded confusion. Simple things like identifying yourself well, avoiding sarcasm and adding context to statements are all extremely beneficial. However, writing e-mails to highly technical developers, system administrators, and engineers is a little trickier. These types of e-mail recipients don\u0026rsquo;t really enjoy handling e-mail (inbound or outbound) and most find that e-mail is just a speed bump which interrupts their productivity.\nIf you\u0026rsquo;re not technical, you might be asking yourself: \u0026ldquo;I need to e-mail technical people and they need to take what I say seriously? How do I do it?\u0026rdquo; It\u0026rsquo;s not impossible, but the rest of this blog post should help.\nBrevity is key #There are some people who thrive on receiving e-mail, sending e-mail, and talking about e-mail that they\u0026rsquo;ve sent or received. Most nerds don\u0026rsquo;t feel this way.\nYou need to get your point across concisely and succinctly so that your e-mail is seen as less of a distraction. Avoid adding a lot of context where it isn\u0026rsquo;t needed and try to summarize business needs and processes unless details are absolutely critical. If you need to send your e-mail to multiple recipients and some of those recipients need additional details, provide an abstract at the beginning of the e-mail.\nLearn the ways of TL;DR #I\u0026rsquo;ve heard quite a few conversations like these around the office:\nNerd 1: \u0026ldquo;Did you get that e-mail from [name here]?\u0026rdquo;\nNerd 2: \u0026ldquo;The six page one with four PDF files attached?\u0026rdquo;\nNerd 1: \u0026ldquo;Yeah. That one.\u0026rdquo;\nNerd 2: \u0026ldquo;TL;DR dude, seriously. Did you read it?\u0026rdquo;\nNerd 1: \u0026ldquo;Nah. 
I might read it later.\u0026rdquo;\nIf someone\u0026rsquo;s ever mentioned \u0026ldquo;TL;DR\u0026rdquo; (too long; didn\u0026rsquo;t read) when your e-mail came up, don\u0026rsquo;t fret. It\u0026rsquo;s a quick fix. Just add a quick summary to the top of your e-mail prefaced with \u0026ldquo;TL;DR\u0026rdquo;. Provide a really brief summary (bulleted lists are a plus) of your e-mail in the section and then start your e-mail right afterwards. Here\u0026rsquo;s an example:\nTL;DR * next software release deploys Monday * two bugs remaining to fix * we will get started at 8AM Saturday, yeaaaaah Missed the joke? Head over to Wikipedia.\nIf one of the summary points interests a recipient, they\u0026rsquo;ll scan your e-mail for the pertinent sections. Some recipients may only need to see what\u0026rsquo;s in the summary and they won\u0026rsquo;t bother reading the remainder. Either way, the effectiveness of your e-mail increases by leaps and bounds.\nPlain text # Users of mutt prefer plain text e-mails If you only take away one thing from this entire post, let it be this section. Writing e-mails in plain text is *highly recommended* if you want a technical person to take your e-mail seriously. Many system administrators I know use mutt, a text-based console-only e-mail reader. Click the thumbnail at the right and imagine what your e-mails would look like if they\u0026rsquo;re full of images, stylesheets and background images. Better yet, imagine if your entire e-mail was in an image and the e-mail itself had no text.\nHere are a few more tips under this category:\nDon\u0026rsquo;t use Outlook stationery. Never send e-mails with an image as the e-mail itself. No Comic Sans at any time. Period. Avoid graphical e-mail signatures (more on that in a moment). E-mail signatures #Brevity can definitely be applied to e-mail signatures, too. How many times have you seen e-mails that end like this:\nFrank Frankelton MCSE, RHCSA, RHCE, CCNA, RHCA, LPIC-3, Ph.D., M.D., Esq., CMDBA Systems Administrator Extraordinaire, Database Administrator, All-around great guy Office: 210-555-1212 Mobile: 210-555-1213 Other Mobile: 210-555-1214 Fax: 210-555-1215 VOIP: 210-555-1216 AIM: frankeltonia Twitter: @frankyfrank Jabber: frankfurter@frankeltonisinthehouse.com Big Company, Inc You might think that nobody would ever send out e-mails with a signature like the one above, but I\u0026rsquo;ve seen some that are actually worse. Keep the signature short and only put in the information that people really need to know. Generally, your name and title or department is sufficient for e-mail signatures (unless your local/federal laws require otherwise). Always preface it with a double dash \u0026ldquo;--\u0026rdquo; on a line by itself to signify that the remainder of the e-mail is the signature.\nSummary #Keep it simple, keep it brief, and keep it relevant. While the suggestions above might not apply to every business or every person, following the suggestions will increase the effectiveness of your e-mails and ensure that your voice is heard on the other end.\nI\u0026rsquo;m really interested to hear your comments. Are there some suggestions you have that I missed in the post? Did I make some suggestions which didn\u0026rsquo;t make sense or don\u0026rsquo;t apply to you? 
Let me know!\n","date":"26 August 2011","permalink":"/p/how-to-write-e-mails-to-nerds-that-they-will-actually-read/","section":"Posts","summary":"Standard e-mail etiquette is pretty obvious to most of us and if you\u0026rsquo;re good at it, you\u0026rsquo;ll get your point across more often without stepping on toes or causing unneeded confusion.","title":"How to write e-mails to nerds (that they will actually read)"},{"content":"Before I get started, I\u0026rsquo;d like to give a big thanks to all of the visitors who dropped by and participated in the contest last week. Also, thanks to ThinkGeek for offering to pay for (and double) one of the prizes!\nHere are the list of winners:\nGrand Prize ($50 at ThinkGeek): Dan Udey Runners-Up ($25 at ThinkGeek): Joe Wright, Susan Price, and Giovanni Tirloni Dan\u0026rsquo;s comment rang true with me since much of a sysadmin\u0026rsquo;s job involves responding to crises regardless of how much planning you put forth:\nKeep a cool head. Focus. Work methodically. Figure out what to do and get it done, and people will remember you as the person who performs under pressure. Once you can do that, you\u0026rsquo;re a sysadmin.\nJoe touched on a critical point about system administration:\nTell the truth. If you break something, \u0026lsquo;fess up and fix it. If you don\u0026rsquo;t know how to do something, admit it and learn how to do the task. Create your own culture of honesty on the job; others will respect and follow your example.\nSusan offered some inspiration for system administrators stuck in frustrating situations:\nI know, I know - dumb users, RTFM. Believe me, I\u0026rsquo;ve been there. In fact - one of your strategies should be to establish a trusted community where you can VENT about these issues, and get support for yourself. Ask for answers when you don\u0026rsquo;t know them. Restock on the compassion and patience.\nGiovanni talked about the basics and what every system administrator should know to get started in a career. We probably take this for granted, but this is critical to keep in mind:\nIf you are starting in the system administration area, don\u0026rsquo;t praise yourself only because you (blindly?) fixed an issue or helped that friend with his/her server. Ask yourself: Why what I did fixed the issue? Why that was happening in the first place? And more importantly, how to avoid it for all eternity? You won\u0026rsquo;t but it doesn\u0026rsquo;t hurt to aim high.\nEven though it isn\u0026rsquo;t a runner-up, Paul\u0026rsquo;s comment certainly deserves an honorable mention. His comment is actually a true story (with a slight amount of embellishment, of course) and it serves as a reminder that system administrators and developers must stand up for their beliefs even if it goes against the beliefs of their superiors. If your managers don\u0026rsquo;t value the feedback, it might be a sign that a career change is in order.\nOnce again, a big thanks goes out to everyone who submitted a comment. I\u0026rsquo;ll reach out to the winners today and get the gift certificates sent out to them.\n","date":"22 August 2011","permalink":"/p/contest-winners-from-the-inspire-a-sysadmin-contest/","section":"Posts","summary":"Before I get started, I\u0026rsquo;d like to give a big thanks to all of the visitors who dropped by and participated in the contest last week.","title":"Contest winners from the “Inspire a sysadmin” contest"},{"content":"UPDATE: THE STAKES ARE RAISED! 
Check the end of this post for details.\nToday is my birthday and I\u0026rsquo;m doing things in reverse - you are getting gifts today. I\u0026rsquo;m giving away four $25 gift certificates to ThinkGeek today (yep, that\u0026rsquo;s $100 out of my pocket) but you\u0026rsquo;ll have to do something to earn them.\nI\u0026rsquo;m looking for words of wisdom and guidance from the readers of my blog for system administrators who are just getting started. I talk to brand new sysadmins and college graduates regularly and they\u0026rsquo;re all hungry for what the seasoned folks in the industry know. They\u0026rsquo;re not specifically on the hunt for hard facts and how-to\u0026rsquo;s; they\u0026rsquo;re looking for guidance on how to gain experience, reduce errors and learn efficiently.\nLet\u0026rsquo;s get to the important stuff: How does this contest work?\nWrite a comment. Put an inspirational story, anecdote, or random words of wisdom for system administrators who are new to the industry in a comment on this post. Although it doesn\u0026rsquo;t have to be extraordinarily lengthy, try to write more than just a sentence or two. Give me a way to contact you. Add something to your comment so I can contact you if you\u0026rsquo;re the winner. Do it soon. The contest ends at 11:59PM CDT tonight. I\u0026rsquo;ll be the judge of the comments and I\u0026rsquo;m going to choose the winners based on the content of the comment. The more inspirational and profound your comment is, the better chance you have of winning. Any comment written in LOLCats caption style will lose points immediately. ;)\nOne last thing: This contest isn\u0026rsquo;t affiliated with my employer or ThinkGeek. I\u0026rsquo;m doing this on my own. However, I\u0026rsquo;m a big fan of both my employer and ThinkGeek, but that\u0026rsquo;s irrelevant right now.\nUPDATE: The folks at ThinkGeek decided to not only pay for one of the gift certificates, but they\u0026rsquo;re going to double it. There\u0026rsquo;s now a $50 certificate for the best entry and three more $25 certificates for second, third and fourth best entries. Thanks again to ThinkGeek for offering this up!\nUPDATE: The winners have been announced!\n","date":"17 August 2011","permalink":"/p/inspire-a-sysadmin-get-a-thinkgeek-gift-certificate/","section":"Posts","summary":"UPDATE: THE STAKES ARE RAISED!","title":"Inspire a sysadmin, get a ThinkGeek gift certificate"},{"content":"My daily work involves working with a large number of servers and one of my frustrations with Firefox is that it\u0026rsquo;s not possible to select an entire IP address with a double click with the default settings. Although it works right out of the box with Safari, you have to make a configuration adjustment in Firefox to get the same behavior.\nTo change the setting in Firefox, open up a new Firefox tab and go to about:config in the browser. Paste word_select.stop in the search bar that appears below your tab bar and double click the layout.word_select.stop_at_punctuation line. It should become bold and the value on the end will flip from true to false.\nGo back to another tab and open a web page which displays an IP address. 
Double click on any portion of the IP address and Firefox should highlight the entire address.\n","date":"16 August 2011","permalink":"/p/highlight-ip-addresses-with-a-double-click-in-firefox/","section":"Posts","summary":"My daily work involves working with a large number of servers and one of my frustrations with Firefox is that it\u0026rsquo;s not possible to select an entire IP address with a double click with the default settings.","title":"Highlight IP addresses with a double click in Firefox"},{"content":"","date":null,"permalink":"/tags/windows/","section":"Tags","summary":"","title":"Windows"},{"content":"If you haven\u0026rsquo;t noticed already, full Xen dom0 support was added in the Linux 3.0 kernel. This means there\u0026rsquo;s no longer a need to drag patches forward from old kernels and work from special branches and git repositories when building a kernel for dom0.\nSomething else you might not have noticed is that the Fedora kernel team has quietly slipped Linux 3.0 into Fedora 15\u0026rsquo;s update channels in disguise. Click that link, scroll down, and you\u0026rsquo;ll see “Rebase to 3.0. Version reports as 2.6.40 for compatibility with older userspace.” Although I\u0026rsquo;m not a fan of calling something what it isn\u0026rsquo;t (2.6.40 doesn\u0026rsquo;t exist on kernel.org), I can understand some of the reasoning behind the choice.\nThis change makes the Xen installation on Fedora 15 pretty trivial. To get started, update your kernel to the latest if you\u0026rsquo;re not already on Fedora\u0026rsquo;s 2.6.40 kernels:\nyum -y upgrade kernel We need three more packages (quite a few dependencies will roll in with them):\nyum -y install xen libvirt python-virtinst The xen package reels in the hypervisor itself along with libraries and command line tools (like xl and xm). Libvirt gives us easy access to VM management with the virsh command and python-virtinst gives us the handy virt-install command to make OS installations easy.\nOnce those packages are installed, we need to make some adjustments in your grub configuration. Open /boot/grub/menu.lst in your text editor of choice and add something like this at the bottom:\ntitle Fedora + Xen (2.6.40-4.fc15.x86_64) root (hd0,1) kernel /boot/xen.gz module /boot/vmlinuz-2.6.40-4.fc15.x86_64 ro root=/dev/sda1 module /boot/initramfs-2.6.40-4.fc15.x86_64.img Ensure that the root (hd0,1) is applicable to your system (adjust it if it isn\u0026rsquo;t). Also, check the kernel version to ensure it matches your installed kernel and adjust the root= portion to match your root volume. Flip the default line to a value which will boot your new grub entry and ensure the timeout is set to a reasonable number if you need to temporarily switch back to your original grub entry at boot time. (Hey, we all make mistakes.)\nI take one extra precaution and change the UPDATEDEFAULT=yes line to no in /etc/sysconfig/kernel. This ensures that future kernel updates don\u0026rsquo;t trample the entry you\u0026rsquo;ve just made. Keep in mind that you\u0026rsquo;ll need to manually update your grub configuration when you do kernel upgrades later.\nCross your fingers and reboot. If your system doesn\u0026rsquo;t reboot properly, reboot it again and choose your old kernel from the grub menu. Double-check your configuration for fat-fingering and give it another try. If your system boots and pings but you have no output via a monitor, don\u0026rsquo;t fret. There\u0026rsquo;s a patch for the problem which should appear soon in Linux 3.0. 
The impatient can snag a kernel source RPM, add the patch file, and build a local kernel (or you can download my local build from when I did it).\nLog in and verify that you booted into the dom0:\n[root@xenbox ~]# xm dmesg | head -n 5 __ __ _ _ _ _ ____ __ _ ____ \\ \\/ /___ _ __ | || | / | / | |___ \\ / _| ___/ | ___| \\ // _ \\ '_ \\ | || |_ | | | |__ __) | | |_ / __| |___ \\ / \\ __/ | | | |__ _|| |_| |__/ __/ _| _| (__| |___) | /_/\\_\\___|_| |_| |_|(_)_(_)_| |_____(_)_| \\___|_|____/ Once you\u0026rsquo;re done with that, make sure libvirtd is running:\n/etc/init.d/libvirtd start; chkconfig libvirtd on Try installing a VM:\nvirt-install \\ --paravirt \\ --name=testvm \\ --ram=512 \\ --vcpus=4 \\ --file /dev/vmstorage/testvm \\ --graphics vnc,port=5905 --noautoconsole \\ --autostart --noreboot \\ --location=http://mirrors.kernel.org/debian/dists/squeeze/main/installer-amd64/ You should have a VM installation underway pretty quickly and it will be visible via port 5905 on the local host. Enjoy the power and freedom of your brand new type 1 hypervisor.\n","date":"6 August 2011","permalink":"/p/xen-4-1-on-fedora-15-with-linux-3-0/","section":"Posts","summary":"If you haven\u0026rsquo;t noticed already, full Xen dom0 support was added in the Linux 3.","title":"Xen 4.1 on Fedora 15 with Linux 3.0"},{"content":"This is a copy of a post I wrote for the Rackspace Talent blog. Much of it probably applies to the job of a system administrator, so I figured it would be a good idea to post it here as well. Let me know what you think!\nAlthough Rackspace has one of the best work environments of any company I’ve worked for, there are plenty of opportunities to become stressed.\nStress can come from a variety of sources. Some of the obvious ones involve dealing with outages or tight deadlines, but there are some that aren’t so obvious, such as maintaining the customers’ trust and interpersonal issues.\nThere’s one thing you must remember: stress doesn’t have to rule your life. I’ve learned (and sometimes stumbled upon) some good techniques to prevent many of the negative effects of stressful situations at work and they’re definitely worth a try.\nKnow what you’re up against\nIt’s hard to battle a source of stress if you don’t know why it’s bothering you. Take the problem you’re facing and break it down into pieces. There are going to be some things you can and can’t change. Put the things you can’t change aside and focus on the things you’re able to change. As you tackle the list of things you can change, you might find ways to work around the things you can’t.\nInterpersonal issues are easy\nStress that comes from dealing with coworkers may seem insurmountable at times. However, this type of stress is easily fixed and it normally stems from insufficient communication or conflicting goals. There’s an informal policy I’ve had on most of my teams called “Take it to the Racker” and it’s been quite successful. The basic idea is that if you have problems with another Racker, whether it’s something personal or work-related, take the grievance to them directly (in private, of course) and find common ground.\nMore often than not, this process leads to a good work relationship. It also improves communication drastically in the short term and it generally lasts if the people involved keep up the communication over time. 
I’ve seen Rackers who are so upset that they refuse to sit next to each other and after this process, they’re eating lunch together and working on the same projects.\nDon’t fight your battles alone\nYour best resources for fighting stress are all around you. Lean on your manager or your coworkers for help. Remember what your mother always told you: a trouble shared is a trouble halved. Your coworker might have a solution to a particular problem which frees up an hour for you each day and allows you to work on other projects. Your manager might not know that a particular task doesn’t fit your strengths and they might be able to provide you with another project that plays to your strengths.\nIs it possible to reduce your stress level to zero at work? I don’t think so. However, you should always have a goal to reduce it when it makes sense.\nAs always, I’m interested to hear your comments. Which stress-reduction strategies work best for you? What is the source of most of your stress?\n","date":"22 July 2011","permalink":"/p/success-with-stress/","section":"Posts","summary":"This is a copy of a post I wrote for the Rackspace Talent blog.","title":"Success with stress"},{"content":"Some might call me paranoid, but I get nervous when my package manager automatically removes a kernel. I logged into my Fedora 15 VM this morning and found this:\n================================================================================ Package Arch Version Repository Size ================================================================================ Installing: kernel x86_64 2.6.35.13-92.fc14 updates 22 M Removing: kernel x86_64 2.6.35.11-83.fc14 @updates 104 M Transaction Summary ================================================================================ Install 1 Package(s) Remove 1 Package(s) Fedora 15\u0026rsquo;s default behavior is to keep three kernels: the latest one and the two previous versions. However, this behavior may be counter-productive if you compile your own modules, or if you have compatibility issues with subsequent kernel versions.\nYou can change how yum handles kernel packages with some simple changes to your /etc/yum.conf. The installonly_limit option controls how many old packages are kept:\ninstallonly_limit Number of packages listed in installonlypkgs to keep installed at the same time. Setting to 0 disables this feature. Default is \u0026lsquo;0\u0026rsquo;.\nI disabled the functionality altogether by setting installonly_limit to 0:\n#installonly_limit=3 installonly_limit=0 It\u0026rsquo;s important to keep in mind that you will need to purge these packages from your system yourself now. Kernel packages can occupy a fair amount of disk space, so make a note to go back and clean them up when you no longer need them.\n","date":"16 June 2011","permalink":"/p/keep-all-old-kernels-when-upgrading-via-yum/","section":"Posts","summary":"Some might call me paranoid, but I get nervous when my package manager automatically removes a kernel.","title":"Keep all old kernels when upgrading via yum"},{"content":"It\u0026rsquo;s no secret that I\u0026rsquo;m a big fan of the RouterBoard network devices paired with Mikrotik\u0026rsquo;s RouterOS. I discovered today that these devices offer Cisco NetFlow-compatible statistics gathering which can be directed to a Linux box running ntop. 
Mikrotik calls it “traffic flow” and it\u0026rsquo;s much more efficient than setting up a mirrored or spanned port and then using ntop to dump traffic on that interface.\nThese instructions are for Fedora 15, but they should be pretty similar on most other Linux distributions. Install ntop first:\nyum -y install ntop Adjust /etc/ntop.conf so that ntop listens on something other than localhost:\n# limit ntop to listening on a specific interface and port --http-server 0.0.0.0:3000 --https-server 0.0.0.0:3001 I had to comment out the sched_yield() option to get ntop to start:\n# Under certain circumstances, the sched_yield() function causes the ntop web # server to lock up. It shouldn't happen, but it does. This option causes # ntop to skip those calls, at a tiny performance penalty. # --disable-schedyield Set an admin password for ntop:\nntop --set-admin-password Once you set the password, you may need to press CTRL-C to get back to a prompt in some ntop versions.\nStart ntop:\n/etc/init.d/ntop start Open a web browser and open http://example.com:3000 to access the ntop interface. Roll your mouse over the Plugins menu, then NetFlow, and then click Activate. Roll your mouse over the Plugins menu again, then NetFlow, and then click Configure. Click Add NetFlow Device and fill in the following:\nType “Mikrotik” in the NetFlow Device section and click Set Interface Name. Type 2055 in the Local Collector UDP Port section and click Set Port. Type in your router\u0026rsquo;s IP/netmask in the Virtual NetFlow Interface Network Address section and click Set Interface Address. Enabling traffic flow on the Mikrotik can be done with just two configuration lines:\n/ip traffic-flow set enabled=yes interfaces=all /ip traffic-flow target add address=192.168.10.65:2055 disabled=no version=5 Wait about a minute and then try reviewing some of the data in the ntop interface. Depending on the amount of traffic on your network, you might see data in as little as 10-15 seconds.\n","date":"5 June 2011","permalink":"/p/measure-traffic-flows-with-mikrotiks-routeros-and-ntop-on-fedora-15/","section":"Posts","summary":"It\u0026rsquo;s no secret that I\u0026rsquo;m a big fan of the RouterBoard network devices paired with Mikrotik\u0026rsquo;s RouterOS.","title":"Measure traffic flows with Mikrotik’s RouterOS and ntop on Fedora 15"},{"content":"","date":null,"permalink":"/tags/ntop/","section":"Tags","summary":"","title":"Ntop"},{"content":"If you find yourself forgetting bits and pieces about network topics, Packet Life\u0026rsquo;s cheat sheets should be a handy resource for you. Lots of topics are included, such as VOIP, NAT, MPLS, BGP, and IOS basics. They\u0026rsquo;re all in PDF form and free to download.\nCheat Sheets - Packet Life\nThanks to @speude for mentioning the site on Twitter.\n","date":"25 May 2011","permalink":"/p/handy-networking-cheat-sheets-from-packet-life/","section":"Posts","summary":"If you find yourself forgetting bits and pieces about network topics, Packet Life\u0026rsquo;s cheat sheets should be a handy resource for you.","title":"Handy networking cheat sheets from Packet Life"},{"content":"If you work for a growing company like I do, it\u0026rsquo;s inevitable that you\u0026rsquo;ll have to do your fair share of interviewing. I love it when I leave an interview with a good feeling about the candidate. That \u0026ldquo;wow, they really nailed it\u0026rdquo; feeling is always great to have when you need to fill a critical role. 
Most often, the successful candidates are the ones who do their homework before they ever walk in our office doors.\nWhat do I mean by \u0026ldquo;do your homework?\u0026rdquo; Here are some bullet points to get you on your way:\nKnow what the company does.\nThis one is critical and it should be easy. However, make sure to do thorough research first. For example, if you interviewed at a company like Apple, becoming familiar with their hardware lineup should be a no-brainer. That\u0026rsquo;s their bread and butter. On the other hand, remember that Apple isn\u0026rsquo;t solely a hardware company; they write lots of software, provide online productivity services, and they distribute music, movies, and other entertainment media.\nWhile you\u0026rsquo;re doing this research, try to discover what makes the company unique. Sure, Apple sells laptops and desktops (just like a lot of other companies), but what makes their particular products unique? Is there something unique about the way they provide their services? Have they cornered a certain market segment by providing a combination of products and services to that group of consumers? Answering these simple questions may help you tip the scales in the interview process.\nTry one or more of the company\u0026rsquo;s products.\nThe feasibility of trying a company\u0026rsquo;s product before an interview could be debatable. For example, if you wanted to interview at Cray, you probably don\u0026rsquo;t need to drop $2M USD on your own XE6 before walking in the door. For companies where the barrier to entry for purchasing a product is much lower, such as cloud computing companies, there\u0026rsquo;s no excuse to not try things out first. Amazon has a free tier and a Rackspace Cloud Server could cost you as little as $2.50 per week.\nIt\u0026rsquo;s concerning when I talk to an applicant about a job working with Rackspace\u0026rsquo;s Cloud Servers and they haven\u0026rsquo;t tried out any cloud products from any provider. How can I take a candidate\u0026rsquo;s interest seriously when they haven\u0026rsquo;t shown interest in any portion of my group\u0026rsquo;s market segment?\nKnow what the company\u0026rsquo;s competitors do.\nIt\u0026rsquo;s often more impressive to an interviewer to know what a company\u0026rsquo;s competitors are doing and how it compares to what that company is doing in the market. For example, if you can walk into an interview and say \u0026ldquo;I like the way your company makes these widgets, but Company X is able to make them more lightweight, and I value that more than the added customer service your company offers.\u0026rdquo; This shows the interviewer that you\u0026rsquo;re familiar with various products in the segment and you\u0026rsquo;ve used them enough to understand what makes them different.\nSome of you might be thinking: \u0026ldquo;Why would I say something like that to the interviewer? They\u0026rsquo;ll think I\u0026rsquo;m being too negative about their product.\u0026rdquo; That\u0026rsquo;s always possible, but you can guard against it by wording everything carefully. Make sure you have a solid reason for the way you feel that is based on something substantial (usability, price, features, etc). 
I\u0026rsquo;ve had candidates talk for five to ten minutes about why one of our product is inferior to one of our competitors\u0026rsquo; products and I was very impressed.\nOne quick gotcha: your interviewer might turn your comments back on you and ask you how you would improve one of the inferior products (I do this regularly). Make sure that you\u0026rsquo;re prepared for that question and consider offering up a suggestion before the question is presented to you.\nCan\u0026rsquo;t get the information you need? Ask!\nWhen you reach the end of the interview and the interviewer asks if you have questions, be sure to ask any questions about topics you had trouble researching. Going back to the Cray example, compare what you know about an XE6 to servers you\u0026rsquo;ve used before. You could mention a problem you had with the density of your previous configurations and ask how they overcame that hurdle at Cray. If it\u0026rsquo;s a proprietary trade secret, you might not get an answer, but they\u0026rsquo;ll know that you did some serious research ahead of time. If they can share the answer, they might still be impressed, and you might end up learning something you didn\u0026rsquo;t know prior to the interview.\nConclusion\nIn summary, doing your homework and learning about the company shows the interviewers that you not only have what it takes to do the work, but that the work interests you as well. I\u0026rsquo;ve interviewed folks in the past who lacked on technical ability but had plenty of desire and drive. More often than not, those people are now Rackers.\n","date":"3 May 2011","permalink":"/p/do-your-homework-before-a-technical-interview/","section":"Posts","summary":"If you work for a growing company like I do, it\u0026rsquo;s inevitable that you\u0026rsquo;ll have to do your fair share of interviewing.","title":"Do your homework before a technical interview"},{"content":"Anyone who says management is easy obviously hasn\u0026rsquo;t done it for very long or they\u0026rsquo;re not doing their job very well. Coordinating the activities and personal development of each person on the team is always a challenge and it introduces an unbelievable number of variables into an already difficult job. However, watching members of the team grow and succeed in their work is tremendously rewarding.\nTaking on the job of a technical manager presents its own unique challenges. It\u0026rsquo;s easy for a technical manager to lose focus and get down in the weeds of daily work. It\u0026rsquo;s also very difficult to let go of the reins and resign to the fact that the direct involvement in technical work will have to be reduced.\nThese problems resonate with me as I\u0026rsquo;ve recently taken on another technical management role at Rackspace. My previous experience involved managing a team of technicians at various skill levels who were working on customer environments made up of dedicated servers and network equipment. The current position has quite a few differences. I\u0026rsquo;m now managing a small group of highly technical and extremely dedicated Linux engineers and we\u0026rsquo;re responsible for maintaining the systems and networks which run the Cloud Servers product.\nOne of my goals of this blog is to help others learn things much more easily than I have. 
Here are some things I\u0026rsquo;ve had to learn the hard way while working as a technical manager:\nGet out of the mindset of an individual contributor\nWhen you\u0026rsquo;re a system administrator on a team (or by yourself), you\u0026rsquo;re often judged on your personal job performance. Team interaction is important for some companies (especially at Rackspace), but not for others. Breaking the mindset of being an individual contributor was extremely difficult for me to do.\nManagers are judged on the success of the team as a whole. Encouraging your team members to succeed and playing an active role in their personal and professional development is key. Each time you find yourself buried in the weeds of a problem rather than facilitating your team\u0026rsquo;s work on the problem is when your performance as a manager will drop. If you do it more often, you may find that your team members aren\u0026rsquo;t getting the support they need.\nDon\u0026rsquo;t be afraid of your team becoming smarter than you\nOne of the biggest things I\u0026rsquo;ve heard from my team is: \u0026ldquo;Aren\u0026rsquo;t you worried about losing your technical skills when you\u0026rsquo;re a manager?\u0026rdquo; My answer: \u0026ldquo;Of course.\u0026rdquo; Anyone who has technical abilities will always be afraid of watching those abilities wane over time. However, as your team becomes stronger, you should be able to continue learning not through your own work, but through theirs. When your team members see that you\u0026rsquo;re still interested in learning and you\u0026rsquo;re now able to learn from them, they\u0026rsquo;ll become more energized about their own work.\nIf you find yourself thinking negatively about a potential job candidate because they\u0026rsquo;re smarter than you, step back and think for a moment. Put your own ego aside and consider what\u0026rsquo;s best for you, your team, and your company. Your goal is to build a strong and successful team, not to pad your own ego. If your managers are judging you (as a technical manager) on your technical ability, then you need to solve that problem first.\nInspire instead of direct\nEvery manager faces the challenge of working with team members who disagree with a particular company policy or with the direction of their particular infrastructure. Keep in mind that your team members are probably not intending to be insubordinate and they might have something useful to contribute.\nWhen you find yourself locking horns with your team members, inspire a discussion about the problem. Break out the disagreement onto a whiteboard and let the team make suggestions for improvements. Even if the entire discussion leads back to the fact that the original problem is inevitable, fostering that feedback loop is critical. You\u0026rsquo;ll learn more about your team while they find ways to express their opinions and feel empowered.\nThe really tough part is when your team comes up with an alternative plan and you find yourself presenting to your leadership team. Always remember to take it seriously and know that you may need to refine the plan many times over before you find something acceptable for your team and the business.\nDe-stress by staying on task\nIf you\u0026rsquo;re anything like me, you need some way to keep tabs on action items coming from meetings, e-mails, phone calls, and walk-ups. I\u0026rsquo;ve heard great things about applications like OmniFocus and Things, but I settled on 2Do. 
I really enjoy a strong to-do list which allows me to set priorities, due dates, and write extended notes about a particular task.\nThe best way to tackle a wall of tasks is to keep them organized into a concise list. Even if it\u0026rsquo;s a small task, get it into your list so it\u0026rsquo;s on your radar and you won\u0026rsquo;t forget it. Work through the simple tasks and the high priority ones first but watch out for tasks with due dates.\nConclusion\nAll of these processes get easier over time and although your job will surely have challenges and pitfalls, the enjoyment will greatly increase. I feel privileged to lead a team of talented people who work on a complex and ever-expanding product.\nAlso, I\u0026rsquo;d like to mention that I\u0026rsquo;m not an expert on management! There are probably much better ways to do much of this than I\u0026rsquo;ve outlined in this post. Be sure to share your ideas in the comments section below.\n","date":"29 March 2011","permalink":"/p/how-to-survive-as-a-technical-manager/","section":"Posts","summary":"Anyone who says management is easy obviously hasn\u0026rsquo;t done it for very long or they\u0026rsquo;re not doing their job very well.","title":"How to survive as a technical manager"},{"content":"","date":null,"permalink":"/tags/management/","section":"Tags","summary":"","title":"Management"},{"content":"","date":null,"permalink":"/tags/strategy/","section":"Tags","summary":"","title":"Strategy"},{"content":"There are few things which will rattle systems administrators more than a compromised server. It gives you the same feeling that you would have if someone broke into your house or car, except that it\u0026rsquo;s much more difficult (with a server) to determine how to clean up the compromise and found out how the attacker gained access. In addition, leaving a compromise in place for an extended period can lead to other problems:\nyour server could be used to gain access other servers data could be stolen from your server\u0026rsquo;s databases or storage devices an attacker could capture data from your server\u0026rsquo;s local network denial of service attacks could be launched using your server as an active participant The best ways to limit your server\u0026rsquo;s attack surface are pretty obvious: limit network access, keep your OS packages up to date, and regularly audit any code which is accessible externally or internally. As we all know, your server can still become compromised even with all of these preventative measures in place.\nHere are some tips which will allow you to rapidly detect a compromise on your servers:\nAbnormal network usage patterns and atypical bandwidth consumption\nMost sites will have a fairly normal traffic pattern which repeats itself daily. If your traffic graph suddenly has a plateau or spikes drastically during different parts of the day, that could signify that there is something worth reviewing. Also, if your site normally consumes about 2TB of traffic per month and you\u0026rsquo;re at the 1.5TB mark on the fifth day of the month, you might want to examine the server more closely.\nOn the flip side, look for dips in network traffic as well. This may mean that a compromise is interfering with the operation of a particular daemon, or there may be a rogue daemon listening on a trusted port during certain periods.\nMany compromises consist of simple scripts which scan for other servers to infect or participate in large denial of service attacks. 
The scans may show up as a large amount of packets, but the denial of service attacks will usually consume a large amount of bandwidth. Keeping tabs on network traffic is easily done with open source software like munin, cacti, or MRTG.\nUnusual open ports\nIf you run a web server on port 80, but netstat -ntlp shows something listening on various ports over 1024, those processes are worth reviewing. Use commands like lsof to probe the system for the files and network ports held open by the processes. You can also check within /proc/[pid] to find the directory where the processes were originally launched.\nWatch out for processes started within directories like /dev/shm, /tmp or any directories in which your daemons have write access. You might see that some processes were started in a user\u0026rsquo;s home directory. If that\u0026rsquo;s the case, it might be a good time to reset that user\u0026rsquo;s password or clear out their ssh key. Review the output from last authentication logs to see if there are account logins from peculiar locations. If you know the user lives in the US, but there are logins from various other countries over a short period, you\u0026rsquo;ve got a serious problem.\nI\u0026rsquo;ve used applications like chkrootkit and rkhunter in the past, but I still prefer a keen eye and netstat on most occasions.\nCommand output is unusual\nI\u0026rsquo;ve seen compromises in the past where the attacker actually took the time to replace integral applications like ps, top and lsof to hide the evidence of the ongoing compromise. However, a quick peek in /proc revealed that there was a lot more going on.\nIf you suspect a compromise like this one, you may want to use the functionality provided by rpm to verify the integrity of the packages currently installed. You can quickly hunt for changed files by running rpm -Va | grep ^..5.\nKeeping tabs on changing files can be a challenge, but applications like tripwire and good ol\u0026rsquo; logwatch can save you in a pinch.\nSummary\nWe can all agree that the best way to prevent a compromise is to take precautions before putting anything into production. In real life, something will always be forgotten, so detection is a must. It\u0026rsquo;s critical to keep in mind that monitoring a server means more than keeping track on uptime. Keeping tabs on performance anomalies will allow you to find the compromise sooner and that keeps the damage done to a minimum.\n","date":"10 March 2011","permalink":"/p/strategies-for-detecting-a-compromised-linux-server/","section":"Posts","summary":"There are few things which will rattle systems administrators more than a compromised server.","title":"Strategies for detecting a compromised Linux server"},{"content":"","date":null,"permalink":"/tags/cluster/","section":"Tags","summary":"","title":"Cluster"},{"content":"As promised in one of my previous posts about dual-primary DRBD and OCFS2, I\u0026rsquo;ve compiled a step-by-step guide for Fedora. These instructions should be somewhat close to what you would use on CentOS or Red Hat Enterprise Linux. However, CentOS and Red Hat don\u0026rsquo;t provide some of the packages needed, so you will need to use other software repositories like RPMFusion or EPEL.\nIn this guide, I\u0026rsquo;ll be using two Fedora 14 instances in the Rackspace Cloud with separate public and private networks. 
The instances are called server1 and server2 to make things easier to follow.\nNOTE: All of the instructions below should be done on both servers unless otherwise specified.\n*First, we need to set up DRBD with two primary nodes. I\u0026rsquo;ll be using loop files for this setup since I don\u0026rsquo;t have access to raw partitions.\nyum -y install drbd-utils dd if=/dev/zero of=/drbd-loop.img bs=1M count=1000 Put this loop file initialization init script in /etc/init.d/loop-for-drbd and finish setting it up:\nchmod a+x /etc/init.d/loop-for-drbd chkconfig loop-for-drbd on /etc/init.d/loop-for-drbd start Place this DRBD resource file in /etc/drbd.d/r0.res. Be sure to adjust the server names and IP addresses for your servers.\nresource r0 { meta-disk internal; device /dev/drbd0; disk /dev/loop7; syncer { rate 1000M; } net { allow-two-primaries; after-sb-0pri discard-zero-changes; after-sb-1pri discard-secondary; after-sb-2pri disconnect; } startup { become-primary-on both; } on server1 { address 10.181.76.0:7789; } on server2 { address 10.181.76.1:7789; } } The net section is telling DRBD to do the following:\nallow-two-primaries – Generally, DRBD has a primary and a secondary node. In this case, we will allow both nodes to have the filesystem mounted at the same time. Do this only with a clustered filesystem. If you do this with a non-clustered filesystem like ext2/ext3/ext4 or reiserfs, you will have data corruption. Seriously! after-sb-0pri discard-zero-changes – DRBD detected a split-brain scenario, but none of the nodes think they\u0026rsquo;re a primary. DRBD will take the newest modifications and apply them to the node that didn\u0026rsquo;t have any changes. after-sb-1pri discard-secondary – DRBD detected a split-brain scenario, but one node is the primary and the other is the secondary. In this case, DRBD will decide that the secondary node is the victim and it will sync data from the primary to the secondary automatically. after-sb-2pri disconnect – DRBD detected a split-brain scenario, but it can\u0026rsquo;t figure out which node has the right data. It tries to protect the consistency of both nodes by disconnecting the DRBD volume entirely. You\u0026rsquo;ll have to tell DRBD which node has the valid data in order to reconnect the volume. Use extreme caution if you find yourself in this scenario. If you\u0026rsquo;d like to read about DRBD split-brain behavior in more detail, review the documentation.\nI generally turn off the usage reporting functionality in DRBD within /etc/drbd.d/global_common.conf:\nglobal { usage-count no; } Now we can create the volume and start DRBD:\ndrbdadm create-md r0 /etc/init.d/drbd start \u0026\u0026 chkconfig drbd on You may see some errors thrown about having two primaries but neither are up to date. That can be fixed by running the following command on the primary node only:\ndrbdsetup /dev/drbd0 primary -o If you run cat /proc/drbd on the secondary node, you should see the DRBD sync running:\nversion: 8.3.8 (api:88/proto:86-94) srcversion: 299AFE04D7AFD98B3CA0AF9 0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r---- ns:0 nr:210272 dw:210272 dr:0 al:0 bm:12 lo:1 pe:2682 ua:0 ap:0 ep:1 wo:b oos:813660 [===\u003e................] sync'ed: 20.8% (813660/1023932)K queue_delay: 0.0 ms finish: 0:01:30 speed: 8,976 (6,368) want: 1024,000 K/sec Before you go any further, wait for the DRBD sync to fully finish. 
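\nOne easy way to keep an eye on the progress is to let watch refresh the status output every couple of seconds (purely optional, but handy on a slow sync):\nwatch -n2 cat /proc/drbd\n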
When it completes, it should look like this:\nversion: 8.3.8 (api:88/proto:86-94) srcversion: 299AFE04D7AFD98B3CA0AF9 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:1023932 dw:1023932 dr:0 al:0 bm:63 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 Now, on the secondary node only make it a primary node as well:\ndrbdadm primary r0 You should see this on the secondary node if you\u0026rsquo;ve done everything properly:\nversion: 8.3.8 (api:88/proto:86-94) srcversion: 299AFE04D7AFD98B3CA0AF9 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r---- ns:1122 nr:1119 dw:2241 dr:4550 al:2 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 We\u0026rsquo;re now ready to move on to configuring OCFS2. Only one package is needed:\nyum -y install ocfs2-tools Ensure that you have your servers and their private IP addresses in /etc/hosts before proceeding. Create the /etc/ocfs2 directory and place the following configuration in /etc/ocfs2/cluster.conf (adjust the server names and IP addresses):\ncluster: node_count = 2 name = web node: ip_port = 7777 ip_address = 10.181.76.0 number = 1 name = server1 cluster = web node: ip_port = 7777 ip_address = 10.181.76.1 number = 2 name = server2 cluster = web Now it\u0026rsquo;s time to configure OCFS2. Run service o2cb configure and follow the prompts. Use the defaults for all of the responses except for two questions:\nAnswer “y” to “Load O2CB driver on boot” Answer “web” to “Cluster to start on boot” Start OCFS2 and enable it at boot up:\nchkconfig o2cb on \u0026\u0026 chkconfig ocfs2 on /etc/init.d/o2cb start \u0026\u0026 /etc/init.d/ocfs2 start Create an OCFS2 partition on the primary node only:\nmkfs.ocfs2 -L \"web\" /dev/drbd0 Mount the volumes and configure them to automatically mount at boot time. You might be wondering why I do the mounting within /etc/rc.local. I chose to go that route since mounting via fstab was often unreliable for me due to the incorrect ordering of events at boot time. Using rc.local allows the mounts to work properly upon every reboot.\nmkdir /mnt/storage echo \"/dev/drbd0 /mnt/storage ocfs2 noauto,noatime 0 0\" \u003e\u003e /etc/fstab mount /dev/drbd0 echo \"mount /dev/drbd0\" \u003e\u003e /etc/rc.local At this point, you should be all done. If you want to test OCFS2, copy a file into your /mnt/storage mount on one node and check that it appears on the other node. If you remove it, it should be gone instantly on both nodes. This is a great opportunity to test reboots of both machines to ensure that everything comes up properly at boot time.\n","date":"14 February 2011","permalink":"/p/dual-primary-drbd-with-ocfs2/","section":"Posts","summary":"As promised in one of my previous posts about dual-primary DRBD and OCFS2, I\u0026rsquo;ve compiled a step-by-step guide for Fedora.","title":"Dual-primary DRBD with OCFS2"},{"content":"","date":null,"permalink":"/tags/high-availability/","section":"Tags","summary":"","title":"High Availability"},{"content":"","date":null,"permalink":"/tags/ocfs2/","section":"Tags","summary":"","title":"Ocfs2"},{"content":"","date":null,"permalink":"/tags/storage/","section":"Tags","summary":"","title":"Storage"},{"content":"","date":null,"permalink":"/tags/fudcon/","section":"Tags","summary":"","title":"Fudcon"},{"content":"FUDCon 2011 in Tempe hasn\u0026rsquo;t even fully started yet, but it\u0026rsquo;s been well worth the trip already. 
We put quite a few names with faces (or IRC nicks with faces) and discussed our initial forays into Linux when we were young.\nFrom what I was told last night, this is the first conference organized by folks not already working for Red Hat (even though some of them were hired on after planning was underway) and presentations are done in BarCamp format. This morning kicks off with the BarCamp pitches themselves and they are supposed to last only 20 seconds each. I\u0026rsquo;m new to this format of conferences but I\u0026rsquo;m eager to see how it works.\nQuite a few people on Twitter have asked me if I could toss some summaries of some of the talks onto the blog. I will certainly try my best to do so!\nHere\u0026rsquo;s a sampling of the photos I\u0026rsquo;ve taken so far:\niPad being used as a laptop List of sponsors (hey, it\u0026rsquo;s Rackspace!) Ian Weller has a great job title Strange \u0026ldquo;Loaf Love\u0026rdquo; truck in the hotel parking lot My conference badge along with a handy QR barcode Sunrise over Tempe ","date":"29 January 2011","permalink":"/p/gearing-up-for-fudcon-2011/","section":"Posts","summary":"FUDCon 2011 in Tempe hasn\u0026rsquo;t even fully started yet, but it\u0026rsquo;s been well worth the trip already.","title":"Gearing up for FUDCon 2011"},{"content":"After reading the title of this post, you might wonder “Why would someone pay for a Mac Mini and then not use OS X with it?” Well, if you have a somewhat older Mac Mini you want to use as a server with Linux, these instructions will come in handy.\nTo get started, you\u0026rsquo;ll need a few things:\nMac OS X Install Disc Your favorite Linux distribution\u0026rsquo;s install or live CD/DVD A CD with refit on it First off, boot the Mac into your normal OS X installation first and mute the sound. This will get rid of the Mac chime on bootup. It\u0026rsquo;s really difficult to get this done properly outside of OS X, so take the time to do it now. Put your Linux CD/DVD in the drive and reboot. While it\u0026rsquo;s rebooting, hold down the Option key (alt key if you\u0026rsquo;re using a PC keyboard) and you\u0026rsquo;ll have the option to boot from the disc when it boots up. The boot screen might say “Windows” for the Linux CD/DVD, but choose it anyway.\nWhen I installed Fedora, I had to switch the hard drive\u0026rsquo;s partition table from GPT to a plain old “msdos” partition table. Hop into a terminal, start parted on your main hard disk and type mklabel msdos. This will instantly erase the hard drive — make sure you\u0026rsquo;re ready for this step. If you\u0026rsquo;re using an anaconda-based installation, you can get to a root shell by pressing CTRL-ALT-F2. When you\u0026rsquo;re done with parted in that terminal, switch back to anaconda with CTRL-ALT-F6.\nAt this point, you shouldn\u0026rsquo;t have any partitions on your disk and you\u0026rsquo;ll be ready to install your Linux distribution normally. I generally put everything in one giant partition as it makes the “bless” step a little easier later on.\nEject the Linux CD/DVD once the installation is complete and toss in the refit CD that you burned previously. Reboot the Mini again while holding Option (or alt key) and choose the disc again at bootup. When refit appears, choose the second icon from the left in the bottom row and press enter. It might say that your GPT partition is empty — that\u0026rsquo;s okay.\nReboot again, but hold down the Eject key (or F12 on PC keyboards) during boot to eject the refit disc. 
Pop in the OS X install disc (may need to reboot again to get it to boot) and open a terminal once the install disc fully boots. Once you\u0026rsquo;re in the terminal, run diskutil list to figure out which partition is your boot partition. If you did one giant partition, this should be /dev/disk0s1. Just “bless” the partition to make it valid for booting:\nbless --device /dev/disk0s1 --setBoot --legacy --verbose Reboot again while holding Eject (or F12) to get the OS X disc out of the drive. At this point, you should be ready to go for hands-off booting. My Mac Mini went through about 10-20 seconds of wild screen flickering from grey to black to grey to black but then I saw the familiar Fedora framebuffer.\nIf you intend to run the Mac Mini headless with Linux, you\u0026rsquo;re going to run into a problem. The legacy BIOS used to boot Linux requires a monitor to be attached, but there are some workarounds. Also, if you want the Mini to power back on in case of a power failure, just run this at each boot:\nsetpci -s 0:1f.0 0xa4.b=0 Helpful resources:\nhttp://mac.linux.be/content/single-boot-linux-without-delay\nhttp://www.alphatek.info/2009/07/22/natively-run-fedora-11-on-an-intel-mac/\n","date":"26 January 2011","permalink":"/p/single-boot-linux-on-an-intel-mac-mini/","section":"Posts","summary":"After reading the title of this post, you might wonder “Why would someone pay for a Mac Mini and then not use OS X with it?","title":"Single boot Linux on an Intel Mac Mini"},{"content":"","date":null,"permalink":"/tags/mutt/","section":"Tags","summary":"","title":"Mutt"},{"content":"E-mailing a binary e-mail attachment from a Linux server has always been difficult for me because I never found a reliable method to get it done. I\u0026rsquo;ve used uuencode to pipe data into mail on various systems but the attachment is often unreadable by many e-mail clients.\nSomeone finally showed me a simple, fool-proof method to send binary attachments reliably from various Linux systems:\necho \u0026#34;Cheeseburger\u0026#34; | mutt -s \u0026#34;OHAI!\u0026#34; -a lolcat.jpg -- recipient@domain.com If you e-mail doesn\u0026rsquo;t arrive, remember to consider the size of the file that you\u0026rsquo;re sending and the restrictions of the receiver\u0026rsquo;s e-mail server. Keep in mind that encoding the binary attachment will cause the size of the e-mail to creep up a bit more (about 1.37x plus a little extra with Base64).\n","date":"11 January 2011","permalink":"/p/sending-binary-e-mail-attachments-from-the-command-line-with-mutt/","section":"Posts","summary":"E-mailing a binary e-mail attachment from a Linux server has always been difficult for me because I never found a reliable method to get it done.","title":"Sending binary e-mail attachments from the command line with mutt"},{"content":"Although it\u0026rsquo;s not a glamorous subject for system administrators, backups are necessary for any production environment. Those who run their systems without backups generally learn from their errors in a very painful way. 
However, the way you store your backups may sometimes prove to be just as vital as the methods you use to back up your data.\nFor my environments, I follow a strategy like this: I have some backups immediately accessible, others that are accessible very quickly (but not instantly), and others that are offsite and may take a bit more time to access.\nImmediately accessible backups\nOne of the easiest ways to have an immediately accessible backup is to have multiple machines online running the same versions of code or databases in a high availability group. If you have a node which fails, the remaining nodes should be able to handle the requests immediately. You may not consider this to be a backup under the traditional definition of what a backup should be, but it\u0026rsquo;s functionally similar.\nBackups that are accessible quickly\nThis second level of backups should be stored very close to your environment or within the environment itself. If you have multiple database and web server nodes, you could consider storing your web backups on the database servers and vice versa. For those who run very sensitive applications, this may violate the provisions of different certifications and regulations. A server dedicated to holding backups may be a viable alternative for additional security.\nOffsite backup storage\nThese are the backups that need to be geographically distant from your main environment. Also, you should always consider storing these backups on more than one medium with more than one company.\nFor example, if your hosting provider offers a storage service, it\u0026rsquo;s fine to store one set of your backups there, but consider storing them with a competitor as well. If you store your backups with your hosting provider in multiple places, you could be caught by a provider issue and lose access to your backups entirely. Hosting with multiple providers will allow you to access at least one copy of your backups even if there are billing or technical issues with a particular provider.\nAnother thing to keep in mind with offsite backup storage is how long it will take to transfer the backups to your hosting environment in case of an emergency. If your hosting environment is in Texas, but your backups are stored in Australia, you\u0026rsquo;re going to have a longer wait when you transfer your data back.\nA specific example\nMy environments are all in Dallas, Texas and I have a highly available environment with multiple instances. My second layer of backups is stored within the environment as well as in Rackspace\u0026rsquo;s Cloud Files in Dallas. My third layer of backups is stored with Amazon S3 via Jungle Disk and at my home on a RAID array.\nWhile I hope you never need to access your backups under duress, these tips should help to reduce your stress if you need to restore data in a hurry.\n","date":"10 January 2011","permalink":"/p/strategies-for-storing-backups/","section":"Posts","summary":"Although it\u0026rsquo;s not a glamorous subject for system administrators, backups are necessary for any production environment.","title":"Strategies for storing backups"},{"content":"My daily work requires me to work with a lot of customer data and much of it involves IP address allocations. If you find that you need to sort a list by IP address with GNU sort on a Linux server, just use these handy arguments for sort:\nsort -n -t . 
-k 1,1 -k 2,2 -k 3,3 -k 4,4 somefile.txt For this to work, the file you\u0026rsquo;re sorting needs to have the IP address as the first item on each line.\n","date":"6 January 2011","permalink":"/p/using-gnu-sort-to-sort-a-list-by-ip-address/","section":"Posts","summary":"My daily work requires me to work with a lot of customer data and much of it involves IP address allocations.","title":"Using GNU sort to sort a list by IP address"},{"content":"This situation might not affect everyone, but it struck me today and left me scratching my head. Consider a situation where you need to clone one drive to another with dd or when a hard drive is failing badly and you use dd_rescue to salvage whatever data you can.\nLet\u0026rsquo;s say you cloned data from a drive using something like this:\n# dd if=/dev/sda of=/mnt/nfs/backup/harddrive.img Once that\u0026rsquo;s finished, you should end up with your partition table as well as the grub data from the MBR in your image file. If you run file against the image file you made, you should see something like this:\n# file harddrive.img harddrive.img: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200, GRUB version 0.97; partition 1: ID=0x83, active, starthead 1, startsector 63, 33640047 sectors, code offset 0x48 What if you want to pull some files from this image without writing it out to another disk? Mounting it like a loop file isn\u0026rsquo;t going to work:\n# mount harddrive /mnt/temp mount: you must specify the filesystem type The key is to mount the file with an offset specified. In the output from file, there is a particular portion of the output that will help you:\n... startsector 63 ... This means that the filesystem itself starts on sector 63. You can also view this with fdisk -l:\n# fdisk -l harddrive.img Device Boot Start End Blocks Id System harddrive.img * 63 33640109 16820023+ 83 Linux Since we need to scoot 63 sectors ahead, and each sector is 512 bytes long, we need to use an offset of 32,256 bytes. Fire up the mount command and you\u0026rsquo;ll be on your way:\n# mount -o ro,loop,offset=32256 harddrive.img /mnt/loop # mount | grep harddrive.img /root/harddrive.img on /mnt/loop type ext3 (ro,loop=/dev/loop1,offset=32256) If you made this image under duress (due to a failing drive or other emergency), you might have to check and repair the filesystem first. Doing that is easy if you make a loop device:\n# losetup --offset 32256 /dev/loop2 harddrive.img # fsck /dev/loop2 Once that\u0026rsquo;s complete, you can save some time and mount the loop device directly:\n# mount /dev/loop2 /mnt/loop ","date":"15 December 2010","permalink":"/p/mounting-a-raw-partition-file-made-with-dd-or-dd_rescue-in-linux/","section":"Posts","summary":"This situation might not affect everyone, but it struck me today and left me scratching my head.","title":"Mounting a raw partition file made with dd or dd_rescue in Linux"},{"content":"It\u0026rsquo;s not easy remembering which RPM packages contain certain files. If I asked you which files you\u0026rsquo;d find in packages like postfix-2.7.1-1.fc14 and bash-4.1.7-3.fc14, you would be able to name some obvious executables. However, would you be able to do the same if I mentioned a package like util-linux-ng-2.18-4.6.fc14? If the RPM is already installed, you can quickly use rpm -ql to list the files within it.\nHowever, what if the RPM isn\u0026rsquo;t installed already? 
How do you figure out which one to install?\nFedora has well over 20,000 packages in the standard repositories without adding additional repositories like RPM Fusion. Narrowing that list down to find the package you want can be daunting, but you can use yum to help.\nConsider this: you\u0026rsquo;re following a guide online and the author says you need to run deallocvt:\n# deallocvt -bash: deallocvt: command not found Perhaps it\u0026rsquo;s in a package with deallocvt in the name:\n# yum search deallocvt Warning: No matches found for: deallocvt No Matches found This is where yum\u0026rsquo;s whatprovides (provides works in recent yum versions) command works really well:\n# yum whatprovides */deallocvt kbd-1.15-11.fc14.x86_64 : Tools for configuring the console Repo : fedora Matched from: Filename : /usr/bin/deallocvt From there, you can install the kbd RPM package via yum and you\u0026rsquo;ll be on your way.\nAuthor\u0026rsquo;s note: Regular readers will probably think this is pretty basic, but I often find people who don\u0026rsquo;t know this functionality exists in yum.\nUPDATE: I forgot to include another handy command in this article (thanks to Jason Gill for reminding me). If you have file on your system already, but you need to know which RPM package it came from, you can do this very quickly:\n# rpm -qf /usr/bin/free procps-3.2.8-14.fc14.x86_64 ","date":"9 December 2010","permalink":"/p/locate-rpm-packages-which-contain-a-certain-file/","section":"Posts","summary":"It\u0026rsquo;s not easy remembering which RPM packages contain certain files.","title":"Locate RPM packages which contain a certain file"},{"content":"","date":null,"permalink":"/tags/advanced/","section":"Tags","summary":"","title":"Advanced"},{"content":"One of the most interesting topics I\u0026rsquo;ve seen so far during my RHCA training at Rackspace this week is SystemTap. In short, SystemTap allows you to dig out a bunch of details about your running system relatively easily. It takes scripts, converts them to C, builds a kernel module, and then runs the code within your script.\nHOLD IT: The steps below are definitely not meant for those who are new to Linux. Utilizing SystemTap on a production system is a bad idea — it can chew up significant resources while it runs and it can also cause a running system to kernel panic if you\u0026rsquo;re not careful with the packages you install.\nThese instructions will work well with Fedora, CentOS and Red Hat Enterprise Linux. Luckily, the SystemTap folks put together some instructions for Debian and Ubuntu as well.\nBefore you can start working with SystemTap on your RPM-based distribution, you\u0026rsquo;ll need to get some prerequisites together:\nyum install gcc systemtap systemtap-runtime systemtap-testsuite kernel-devel yum --enablerepo=*-debuginfo install kernel-debuginfo kernel-debuginfo-common WHOA THERE: Ensure that the kernel-devel and kernel-debuginfo* packages that you install via yum match up with your running kernel. If there\u0026rsquo;s a newer kernel available from your yum repo, yum will pull that one. If it\u0026rsquo;s been a while since you updated, you\u0026rsquo;ll either need to upgrade your current kernel to the latest and reboot or you\u0026rsquo;ll need to hunt down the corresponding kernel-devel and kernel-debuginfo* packages from a repository. Installing the wrong package version can lead to kernel panics. 
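A quick sanity check before running stap can save you from that headache (a minimal sketch; the exact debuginfo package names can vary slightly between Fedora and Red Hat/CentOS releases):
# The running kernel version...
uname -r
# ...should match these package versions exactly
rpm -q kernel-devel kernel-debuginfo kernel-debuginfo-common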
Also, bear in mind that the debuginfo packages are quite large: almost 200MB in Red Hat/CentOS and almost 300MB in Fedora.\nYou can\u0026rsquo;t write the script in just any language. SystemTap uses an odd syntax to get things going:\n#! /usr/bin/env stap probe begin { println(\u0026#34;hello world\u0026#34;) exit () } Just run the script with stap:\n# stap -v helloworld.stp Pass 1: parsed user script and 73 library script(s) using 94380virt/21988res/2628shr kb, in 140usr/30sys/167real ms. Pass 2: analyzed script: 1 probe(s), 1 function(s), 0 embed(s), 0 global(s) using 94776virt/22516res/2692shr kb, in 10usr/0sys/5real ms. Pass 3: using cached /root/.systemtap/cache/bc/stap_bc368822da380b943d4e845ee15ed047_773.c Pass 4: using cached /root/.systemtap/cache/bc/stap_bc368822da380b943d4e845ee15ed047_773.ko Pass 5: starting run. hello world Pass 5: run completed in 0usr/20sys/285real ms. The systemtap-testsuite package gives you a tubload of extremely handy SystemTap scripts. For example:\n# cd /usr/share/systemtap/testsuite/systemtap.examples/io/ # stap iotime.stp 15138470 6351 (httpd) access /usr/share/cacti/index.php read: 0 write: 0 15142243 6351 (httpd) access /usr/share/cacti/include/auth.php read: 0 write: 0 15143780 6351 (httpd) access /usr/share/cacti/include/global.php read: 0 write: 0 15144099 6351 (httpd) access /etc/cacti/db.php read: 0 write: 0 15187641 6351 (httpd) access /usr/share/cacti/lib/adodb/adodb.inc.php read: 106486 write: 0 15187664 6351 (httpd) iotime /usr/share/cacti/lib/adodb/adodb.inc.php time: 218 15194965 6351 (httpd) access /usr/share/cacti/lib/adodb/adodb-time.inc.php read: 0 write: 0 15195692 6351 (httpd) access /usr/share/cacti/lib/adodb/adodb-iterator.inc.php read: 0 write: 0 ... output continues ... The iotime.stp script dumps out the reads and writes occurring on the system in real time. After starting the script above, I accessed my cacti instance on the server and immediately started seeing some reads as apache began picking up PHP files to parse.\nConsider a situation in which you need to decrease interrupts on a Linux machine. This is vital for laptops and systems that need to remain in low power states. Some might suggest powertop for that, but why not give SystemTap a try?\n# cd /usr/share/systemtap/testsuite/systemtap.examples/interrupt/ # stap interrupts-by-dev.stp ohci_hcd:usb3 : 1 ohci_hcd:usb4 : 1 hda_intel : 1 eth0 : 2 eth0 : 2 eth0 : 2 eth0 : 2 eth0 : 2 eth0 : 2 On this particular system, it\u0026rsquo;s pretty obvious that the ethernet interface is causing a lot of interrupts.\nIf you want more examples, keep hunting around in the systemtap-testsuite package (remember rpm -ql systemtap-testsuite) or review the giant list of examples on SystemTap\u0026rsquo;s site.\nThanks again to Phil Hopkins at Rackspace for giving us a detailed explanation of system profiling during training.\n","date":"8 December 2010","permalink":"/p/tap-into-your-linux-system-with-systemtap/","section":"Posts","summary":"One of the most interesting topics I\u0026rsquo;ve seen so far during my RHCA training at Rackspace this week is SystemTap.","title":"Tap into your Linux system with SystemTap"},{"content":"The guide to redundant cloud hosting that I wrote recently will need some adjustments as I\u0026rsquo;ve fallen hard for the performance and reliability of DRBD and OCFS2. As a few of my sites were gaining in popularity, I noticed that GlusterFS simply couldn\u0026rsquo;t keep up. 
High I/O latency and broken replication threw a wrench into my love affair with GlusterFS and I knew there had to be a better option.\nI\u0026rsquo;ve shared my configuration with my coworkers and I\u0026rsquo;ve received many good questions about it. Let\u0026rsquo;s get to the Q\u0026amp;A:\nHow does the performance compare to GlusterFS?\nOn Gluster\u0026rsquo;s best days, the data throughput speeds were quite good, but the latency to retrieve the data was often much too high. Page loads on this site were taking upwards of 3-4 seconds with GlusterFS latency accounting for well over 75% of the delays. For small files, GlusterFS\u0026rsquo;s performance was about 20-25x slower than accessing the disk natively. The performance hit for DRBD and OCFS2 is usually between 1.5-3x for small files and difficult to notice for large file transfers.\nCouldn\u0026rsquo;t you keep the data separate and then sync it with rsync?\nEveryone knows that rsync can be a resource consuming monster and it seems wasteful to call rsync via a cron job to keep my data in sync. There are some periods of the day where the actual data on the web root rarely changes. There are other times where it changes rapidly and I\u0026rsquo;d end up with nodes out of sync for a few minutes.\nTo get the just-in-time synchronization that I want, I\u0026rsquo;d have to run rsync at least once a minute. If the data isn\u0026rsquo;t changing over a long period, rsync would end up crushing the disk and consuming CPU for no reason. DRBD only syncs data when data changes. Also, all reads with DRBD are done locally. This makes is a highly efficient and effective choice for instant synchronization.\nWhy OCFS2? Isn\u0026rsquo;t that overkill?\nWhen you use DRBD in dual-primary mode, it\u0026rsquo;s functionally equivalent to having a raw storage device (like a SAN) mounted in two places. If you threw an ext4 filesystem onto a LUN on your SAN and then mounted it on two different servers, you\u0026rsquo;d be in bad shape very quickly. Non-clustered filesystems like ext3 or ext4 can\u0026rsquo;t handle being mounted in more than one environment.\nOCFS2 is built primarily to be mounted in more than one place and it comes with its own distributed locking manager (DLM). The configuration files for OCFS2 are extremely simple and you mount it like any other filesystem. It\u0026rsquo;s been part of the mainline Linux kernel since 2.6.19.\nWhat happens when you lose one of the nodes?\nThe configuration shown above can operate with just one node in an emergency. When the failed node comes back online, DRBD will resync the block device and you can mount the OCFS2 filesystem as you normally would.\nYou\u0026rsquo;re using an Oracle product? Really?\nYou\u0026rsquo;ve got me there. I\u0026rsquo;m not a fan of how they treat the open source community with regards to some of their projects, but the OCFS2 filesystem is robust, free, and it meets my needs.\nWhere\u0026rsquo;s the how-to?\nIt\u0026rsquo;s coming soon! Stay tuned.\n","date":"3 December 2010","permalink":"/p/keep-web-servers-in-sync-with-drbd-and-ocfs2/","section":"Posts","summary":"The guide to redundant cloud hosting that I wrote recently will need some adjustments as I\u0026rsquo;ve fallen hard for the performance and reliability of DRBD and OCFS2.","title":"Keep web servers in sync with DRBD and OCFS2"},{"content":"The pv command is one that I really enjoy using but it\u0026rsquo;s also one that I often forget about. 
You can\u0026rsquo;t get a much more concise definition of what pv does than this one:\npv allows a user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA.\nThe usage certainly isn\u0026rsquo;t complicated:\nTo use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error.\nA great application of pv is when you\u0026rsquo;re restoring large amounts of data into MySQL, especially if you\u0026rsquo;re restoring data under duress due to an accidentally-dropped table or database. (Who hasn\u0026rsquo;t been there before?) The standard way of restoring data is something we\u0026rsquo;re all familiar with:\n# mysql my_database \u0026lt; database_backup.sql The downside of this method is that you have no idea how quickly your restore is working or when it might be done. You could always open another terminal to monitor the tables and databases as they\u0026rsquo;re created, but that can be hard to follow.\nToss in pv and that problem is solved:\n# pv database_backup.sql | mysql my_database 96.8MB 0:00:17 [5.51MB/s] [==\u003e ] 11% ETA 0:02:10 When it comes to MySQL, your restore rate is going to be different based on some different factors, so the ETA might not be entirely accurate.\n","date":"24 November 2010","permalink":"/p/monitor-mysql-restore-progress-with-pv/","section":"Posts","summary":"The pv command is one that I really enjoy using but it\u0026rsquo;s also one that I often forget about.","title":"Monitor MySQL restore progress with pv"},{"content":"If you offer a web service that users query via scripts or other applications, you\u0026rsquo;ll probably find that some people will begin to abuse the service. My icanhazip.com site is no exception.\nWhile many of the users have reasonable usage patterns, there are some users that query the site more than once per second from the same IP address. If you haven\u0026rsquo;t used the site before, all it does is return your public IP address in plain text. Unless your IP changes rapidly, you may not need to query the site more than a few times an hour.\nI added the following to my icanhazip.com virtual host definition to get the message across to those users that abuse the service:\nErrorDocument 403 \u0026#34;No can haz IP. Stop abusing this service. \\ Contact major at mhtx dot net for details.\u0026#34; RewriteEngine On RewriteCond %{REMOTE_ADDR} ^12.23.34.45$ [OR] RewriteCond %{REMOTE_ADDR} ^98.87.76.65$ RewriteRule .* nocanhaz [F] The users that are caught on the business end of these 403 responses will see something like this:\n$ curl -i icanhazip.com HTTP/1.1 403 Forbidden Date: Wed, 17 Nov 2010 13:42:55 GMT Server: Apache Content-Length: 84 Connection: close Content-Type: text/html; charset=iso-8859-1 No can haz IP. Stop abusing this service. Contact major at mhtx dot net for details. 
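If a whole netblock is hammering the service rather than a couple of individual addresses, the same approach works with a broader regular expression. Here\u0026rsquo;s a sketch using a made-up 12.23.34.0/24 range; adjust the pattern for your own abusers:
RewriteEngine On
# Match any client in 12.23.34.0/24 and return the 403 response defined above
RewriteCond %{REMOTE_ADDR} ^12\.23\.34\.
RewriteRule .* nocanhaz [F]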
","date":"17 November 2010","permalink":"/p/throwing-thoughtful-403-forbidden-responses-with-apache/","section":"Posts","summary":"If you offer a web service that users query via scripts or other applications, you\u0026rsquo;ll probably find that some people will begin to abuse the service.","title":"Throwing thoughtful “403 Forbidden” responses with apache"},{"content":" Diagram: OpenVPN to Rackspace Cloud Servers and Slicehost\nA recent blog post from Mixpanel inspired me to write a quick how-to for Fedora users on using OpenVPN to talk to instances privately in the Rackspace Cloud.\nThe diagram at the right gives an idea of what this guide will allow you to accomplish. Consider a situation where you want to talk to the MySQL installation on db1 directly without requiring extra ssh tunnels or MySQL over SSL via the public network. If you tunnel into one of your instances, you can utilize the private network to talk between your instances very easily.\nThere\u0026rsquo;s one important thing to keep in mind here: even though you\u0026rsquo;ll be utilizing the private network between your tunnel endpoint and your other instances, your traffic will still traverse the public network. That means that the instance with your tunnel endpoint will still get billed for the traffic flowing through your tunnel.\nYou\u0026rsquo;ll only need the openvpn package on the server side:\nyum -y install openvpn Throw down this simple configuration file into /etc/openvpn/server.conf:\nport 1194 proto tcp dev tun persist-key persist-tun server 10.66.66.0 255.255.255.0 ifconfig-pool-persist ipp.txt #push \u0026#34;route 10.0.0.0 255.0.0.0\u0026#34; push \u0026#34;route 10.176.0.0 255.248.0.0\u0026#34; keepalive 10 120 ca /etc/openvpn/my_certificate_authority.pem cert /home/major/vpn_server_cert.pem key /home/major/vpn_server_key.pem dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem status log/openvpn-status.log verb 3 Here\u0026rsquo;s a bit of explanation for some things you may want to configure:\npush: These are the routes that will be sent over the VPN that are pushed to the clients. If you don\u0026rsquo;t use any IP addresses in the 10.0.0.0/8 network block in your office, you can probably use the commented out line above. However, you may want to be more specific with the routes if you happen to use any 10.0.0.0/8 space in your office. server: These are the IP addresses that the VPN server will assign and NAT out through the private interface. I\u0026rsquo;ve used a /24 above, but you may want to adjust the netmask if you have a lot of users making tunnels to your VPN endpoint. ca, cert, key: You will need to create a certificate authority as well as a certificate/key pair for your VPN endpoint. I already use SimpleAuthority on my Mac to manage some other CA\u0026rsquo;s and certificates, but you can use openvpn\u0026rsquo;s easy-rsa scripts if you wish. They are already included with the openvpn installation. Build your Diffie-Hellman parameters file:\ncd /etc/openvpn/easy-rsa/2.0/ \u0026amp;\u0026amp; ./build-dh Tell iptables that you want to NAT your VPN endpoint traffic out to all 10.x.x.x IP addresses on the private network:\niptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth1 -j MASQUERADE The last step on the server side is to ensure that the kernel will forward packets from the VPN endpoint out through the private interface. 
Ensure that your /etc/sysctl.conf looks like this:\n# Controls IP packet forwarding net.ipv4.ip_forward = 1 Adjusting your sysctl.conf ensures that forwarding is enabled at boot time, but you\u0026rsquo;ll need to enable it on your VPN endpoint right now:\necho 1 \u0026gt; /proc/sys/net/ipv4/ip_forward Start the openvpn server:\n/etc/init.d/openvpn start If all is well, you should see openvpn listening on port 1194:\n[root@lb2 ~]# netstat -ntlp | grep openvpn tcp 0 0 0.0.0.0:1194 0.0.0.0:* LISTEN 2020/openvpn You\u0026rsquo;ll need to configure a client to talk to your VPN now. This involves three steps: creating a new certificate/key pair for the client (same procedure as making your server certificates), signing the client\u0026rsquo;s certificate with your CA certificate (same one that you used above to sign your server certificates), and then configuring your client application to access the VPN.\nThere are many openvpn clients out there to choose from.\nIf you\u0026rsquo;re using a Linux desktop, you may want to consider using the built-in VPN functionality in NetworkManager. For Mac users, I\u0026rsquo;d highly recommend using Viscosity ($9), but there\u0026rsquo;s also tunnelblick (free).\n","date":"16 November 2010","permalink":"/p/accessing-rackspace-cloud-servers-and-slicehost-slices-privately-via-openvpn/","section":"Posts","summary":"Diagram: OpenVPN to Rackspace Cloud Servers and Slicehost","title":"Accessing Rackspace Cloud Servers and Slicehost slices privately via OpenVPN"},{"content":"","date":null,"permalink":"/tags/slicehost/","section":"Tags","summary":"","title":"Slicehost"},{"content":"","date":null,"permalink":"/tags/vpn/","section":"Tags","summary":"","title":"Vpn"},{"content":"","date":null,"permalink":"/tags/cloud-servers/","section":"Tags","summary":"","title":"Cloud Servers"},{"content":"On most systems, using Fedora\u0026rsquo;s preupgrade package is the most reliable way to update to the next Fedora release. However, this isn\u0026rsquo;t the case with Slicehost and Rackspace Cloud Servers.\nHere are the steps for an upgrade from Fedora 13 to Fedora 14 via yum:\nyum -y upgrade wget http://mirror.rackspace.com/fedora/releases/14/Fedora/x86_64/os/Packages/fedora-release-14-1.noarch.rpm rpm -Uvh fedora-release-14-1.noarch.rpm yum -y install yum yum -y upgrade If you happen to be upgrading a 32-bit instance on Slicehost, simply replace x86_64 with i386 in the url shown above.\n","date":"3 November 2010","permalink":"/p/upgrading-fedora-13-to-fedora-14-on-slicehost-and-rackspace-cloud-servers/","section":"Posts","summary":"On most systems, using Fedora\u0026rsquo;s preupgrade package is the most reliable way to update to the next Fedora release.","title":"Upgrading Fedora 13 to Fedora 14 on Slicehost and Rackspace Cloud Servers"},{"content":"After a discussion amongst coworkers about professional certifications in e-mail signatures yesterday, I decided to throw the question out to Twitter to gather some feedback:\nrackerhacker: Quick Twitter poll for the nerds: How many certification abbreviations do you put in your e-mail signature? [Permalink]\nThe question must have struck a nerve with folks as I had over 50 replies in less than 10-15 minutes. I expected to hear a lot of people say \u0026ldquo;zero\u0026rdquo;, and there were quite a few responses that didn\u0026rsquo;t surprise me:\nminter: @RackerHacker Zero\nstwange: @RackerHacker none it\u0026rsquo;s pretentious\nnickboldt: @RackerHacker Zero. 
My cert-fu is weak.\nscassiba: @RackerHacker none, I don\u0026rsquo;t feel it\u0026rsquo;s necessary to fluff up an email signature with certifications\nerrr_: @RackerHacker 0\nchrstphrbrwn: @RackerHacker Zero. I don\u0026rsquo;t use an email signature.\nDamianZaremba: @RackerHacker none, makes you look a bit stuck up imo\nsaiweb: @RackerHacker 0\njirahcox: @RackerHacker 0. Job title only if absolutely necessary.\njtimberman: @RackerHacker None mine are almost all expired anyway :)\nckeck: @RackerHacker zero\nbillblum: @RackerHacker None.\npuppetmasterd: @RackerHacker zero, or preferably fewer\nripienaar: @RackerHacker zero, they dont add value. Much rather link me to your github account so I can make up my own mind :)\nredbluemagenta: @RackerHacker None. People can see for themselves through other avenues (blog, github, references) if you\u0026rsquo;re any good.\nubuntusoren: @RackerHacker none\nThere were a few people who disagreed:\nbwwhite: @RackerHacker Just one because that\u0026rsquo;s all I have :) But I think 2 should be the limit. Pick the 2 most relevant to your current role\njwgoerlich: Generally 2. Depends on the email topic.\nwhitenheimer: @RackerHacker just one, some of them aren\u0026rsquo;t worth putting\nrussjohnson: @RackerHacker Currently none but have done upto 4\nrbp1987: @RackerHacker I only put the most relevant or the highest level of cert that i have. Why what do you do at the moment?\nhotshotsphoto: @RackerHacker MCP, MCP+I, CNA MCSE ITIL Practitioner….but only when I\u0026rsquo;m working in the IT field\nThere were quite a few that were strongly worded or humorous:\nrjamestaylor: @RackerHacker I hate them - R Taylor, SCJP, MCP, RHCP, BSci, SAG\nmshuler: .@RackerHacker I regard email signatures similar to SUVs - the size is relevant to the compensation factor\niota: @RackerHacker zero; as number of reported certifications increase, respect for sender decreases - my law #1514\nraykrueger: @RackerHacker http://theoatmeal.com/comics/email\nhjv: @RackerHacker I\u0026rsquo;m Ebay A+++ Certified.\nunixdaemon: @puppetmasterd @RackerHacker I\u0026rsquo;d love to see \u0026ldquo;Failed my MCP due to realising 10 minutes in that it would taint my soul. Forever.\u0026rdquo; on a CV.\nswimsaftereatin: @mshuler @RackerHacker And ASCII art is to the signature as a huge purple spoiler is to a pickup truck.\nanoopbhat: @RackerHacker none. unless there is a cert whose acronym is BADASS.\nsarahvdv: @RackerHacker None because I only have one and it\u0026rsquo;s almost embarrassing to put \u0026ldquo;CompTIA Network+ Certified\u0026rdquo; in my signature.\n0x44: @RackerHacker Zero. Certifications are nerd short-hand for \u0026ldquo;Don\u0026rsquo;t hire me.\u0026rdquo;\nI\u0026rsquo;m certainly not against certifications - they\u0026rsquo;re a good way for vendors to ensure that there are trained professionals that meet a certain set of minimum knowledge levels about their product. When you hire someone with a particular certification, you should be able to assume that they have this minimum knowledge level (for most certifications).\nHowever, a certification says absolutely nothing about how a job candidate has actually applied these skills to their previous work. For example, consider a systems administrator with a CCNA. 
If you ask the job applicant something like \u0026ldquo;So, how much experience do you have working with Cisco\u0026rdquo; for a Cisco-heavy job position and they reply that they\u0026rsquo;ve set up a Cisco PIX a few times, but they mainly focus on Linux administration, then what is that certification worth to your company?\nAs for e-mail signatures, I\u0026rsquo;d leave out the certifications. If you\u0026rsquo;re sending e-mails to coworkers that you already know, there shouldn\u0026rsquo;t any reason for you to \u0026ldquo;fluff\u0026rdquo; your signature with those abbreviations. They should already be familiar with your abilities and the addition of certifications to the e-mail doesn\u0026rsquo;t add anything valuable to the e-mail itself. If you\u0026rsquo;re sending e-mails to people you don\u0026rsquo;t know (especially for a job), it makes your e-mail look pretentious.\n","date":"16 October 2010","permalink":"/p/do-professional-certifications-belong-in-your-e-mail-signature/","section":"Posts","summary":"After a discussion amongst coworkers about professional certifications in e-mail signatures yesterday, I decided to throw the question out to Twitter to gather some feedback:","title":"Do professional certifications belong in your e-mail signature?"},{"content":"One of the most common questions that I see in my favorite IRC channel is: “How can I secure sshd on my server?” There\u0026rsquo;s no single right answer, but most systems administrators combine multiple techniques to provide as much security as possible with the least inconvenience to the end user.\nHere are my favorite techniques listed from most effective to least effective:\nSSH key pairs\nBy disabling password-based authentication and requiring ssh key pairs, you reduce the chances of compromise via a brute force attack. This can also help you protect against weak account passwords since a valid private key is required to gain access to the server. However, a weak account password is still a big problem if you allow your users to use sudo.\nIf you\u0026rsquo;re new to using ssh keys, there are many great guides that can walk you through the process.\nFirewall\nLimiting the source IP addresses that can access your server on port 22 is simple and effective. However, if you travel on vacation often or your home IP address changes frequently, this may not be a convenient way to limit access. Acquiring a server with trusted access through your firewall would make this method easier to use, but you\u0026rsquo;d need to consider the security of that server as well.\nThe iptables rules would look something like this:\niptables -A INPUT -j ACCEPT -p tcp --dport 22 -s 10.0.0.20 iptables -A INPUT -j ACCEPT -p tcp --dport 22 -s 10.0.0.25 iptables -A INPUT -j DROP -p tcp --dport 22 Use a non-standard port\nI\u0026rsquo;m not a big fan of security through obscurity and it doesn\u0026rsquo;t work well for ssh. If someone is simply scanning a subnet to find ssh daemons, you might not be seen the first time. However, if someone is targeting you specifically, changing the ssh port doesn\u0026rsquo;t help at all. They\u0026rsquo;ll find your ssh banner quickly and begin their attack.\nIf you prefer this method, simply adjust the Port configuration parameter in your sshd_config file.\nLimit users and groups\nIf you have only certain users and groups who need ssh access to your server, setting user or group limits can help increase security. Consider a server which needs ssh access for developers and a manager. 
Adding this to to your sshd_config would allow only those users and groups to access your ssh daemon:\nAllowGroups developers AllowUsers jsmith pjohnson asamuels Keep in mind that any users or groups not included in the sshd_config won\u0026rsquo;t be able to access your ssh server.\nTCP wrappers\nWhile TCP wrappers are tried and true, I consider them to be a bit old-fashioned. I\u0026rsquo;ve found that many new systems administrators may not think of TCP wrappers when they diagnose server issues and this could possibly cause delays when adjustments need to be made later.\nIf you\u0026rsquo;re ready to use TCP wrappers to limit ssh connections, check out Red Hat\u0026rsquo;s extensive documentation.\nfail2ban and denyhosts\nFor those systems administrators who want to take a bit more active stance on blocking brute force attacks, there\u0026rsquo;s always fail2ban or denyhosts. Both fail2ban and denyhosts monitor your authentication logs for repeated failures, but denyhosts can only work with your ssh daemon. You can use fail2ban with other applications like web servers and FTP servers.\nThe only downside of using these applications is that if a valid user accidentally tries to authenticate unsuccessfully multiple times, they may be locked out for a period of time. This could be a big problem if you\u0026rsquo;re in the middle of a server emergency.\nA quick search on Google will give you instructions on fail2ban configuration as well as denyhosts configuration.\nPort knocking\nAlthough port knocking is another tried and true method to prevent unauthorized access, it can be annoying to use unless you have users who are willing to jump through additional hoops. Port knocking involves a “knock” on an arbitrary port that then allows the ssh daemon to be exposed to the user who sent the original knock.\nLinux Journal has a great article explaining how port knocking works and it provides some sample configurations as well.\nConclusion\nThe best way to secure your ssh daemon is to apply more than one of these methods to your servers. Weighing security versus convenience of access isn\u0026rsquo;t an easy task and it will be different for every environment. Regardless of the method or methods you choose, ensure that the rest of your team is comfortable with the changes and capable of adapting to them efficiently.\n","date":"12 October 2010","permalink":"/p/securing-your-ssh-server/","section":"Posts","summary":"One of the most common questions that I see in my favorite IRC channel is: “How can I secure sshd on my server?","title":"Securing your ssh server"},{"content":"Installing Xen can be a bit of a challenge for a beginner and it\u0026rsquo;s made especially difficult by distribution vendors who aren\u0026rsquo;t eager to include it in their current releases. I certainly don\u0026rsquo;t blame the distribution vendors for omitting it; the code to support Xen\u0026rsquo;s privileged domain isn\u0026rsquo;t currently in upstream kernels.\nHowever, Pasi Kärkkäinen has written a detailed walkthrough about how to get Xen 4 running on Fedora 13. 
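Once you boot into the Xen-enabled kernel, a couple of quick commands from dom0 will confirm that the hypervisor and toolstack are actually alive (this assumes the xm toolstack that ships with Xen 4.0):
# Show hypervisor version, memory and capabilities
xm info
# List running domains; a healthy host shows Domain-0 here
xm list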
Although there are quite a few steps involved, it\u0026rsquo;s worked well for me so far.\n","date":"10 September 2010","permalink":"/p/installing-xen-4-on-fedora-13/","section":"Posts","summary":"Installing Xen can be a bit of a challenge for a beginner and it\u0026rsquo;s made especially difficult by distribution vendors who aren\u0026rsquo;t eager to include it in their current releases.","title":"Installing Xen 4 on Fedora 13"},{"content":"Let\u0026rsquo;s go ahead and get this out of the way: The following post contains only my personal opinions. These are not the opinions of my employer and should not be considered as such.\nThe term \u0026ldquo;cloud hosting\u0026rdquo; has become more popular over the past few years and it seems like everyone is talking about it. I\u0026rsquo;m often asked by customers and coworkers about what cloud hosting really is. Where does traditional dedicated hosting end and cloud begin? Do they overlap? Who needs cloud and who doesn\u0026rsquo;t?\nYou can\u0026rsquo;t talk about cloud hosting without defining it first. When I think of \u0026ldquo;cloud\u0026rdquo;, these are the things that come to mind:\nquickly add/remove resources with little or no lead time hosting platforms that allow for quick provisioning of highly available systems self-service adjustment of tangible and intangible resources that normally require human intervention That list may seem a bit vague at first, but try to let it sink in just a bit. Hosting applications in a \u0026ldquo;cloud\u0026rdquo; shouldn\u0026rsquo;t mean that you must have a virtual instance running on Xen, KVM or VMWare, and it shouldn\u0026rsquo;t mean that you must have an account with Rackspace Cloud, Amazon EC2, or Microsoft Azure. It means that your hosting operations are highly automated and you can rapidly allocate and deallocate resources for the requirements of your current projects.\nConsider this: a customer of a traditional dedicated hosting provider decides to take their applications and host them on one VPS at a leading commercial provider. That provider allows the customer to spin up new VM\u0026rsquo;s in a matter of minutes and re-image the VM\u0026rsquo;s whenever they like. Is that cloud hosting? I\u0026rsquo;d say yes - even if it\u0026rsquo;s one single virtual instance. That customer has moved from a hosting system with manual interventions and extended lead times to a system where they have instant control over their resources.\nIt\u0026rsquo;s not possible to talk about what cloud is without talking about what it isn\u0026rsquo;t.\nCloud is not infinitely scalable. If any provider ever claims that their solution is \u0026ldquo;infinitely scalable\u0026rdquo;, you should be skeptical. Regardless of the provider, everyone eventually runs out of datacenter space, servers, network bandwidth, or power. (If you know of a provider that is infinitely scalable, please let me know as I\u0026rsquo;d love to see their facilities and review their supply chain.) Cloud isn\u0026rsquo;t right for everybody. Some applications have demands that cloud hosting might not be able to meet (yet). If an application depends on proprietary hardware that is difficult to virtualize or rapidly allocate, cloud hosting is probably not the answer for that particular application. Cloud doesn\u0026rsquo;t mean VPS. VPS doesn\u0026rsquo;t mean cloud. As I said before, having a virtual private server environment is not a pre-requisite for cloud hosting. 
Also, not all VPS solutions fit my definition of cloud as they don\u0026rsquo;t allow for rapid deployments and resource adjustments. It\u0026rsquo;s important to remember that cloud hosting is a marketing term. As for the technology of cloud, it\u0026rsquo;s what you make of it. You should be looking to reduce costs, solidify availability and increase performance every day. If the ideals of cloud hosting help you do that, it might be the right option for you.\n","date":"25 August 2010","permalink":"/p/a-nerds-perspective-on-cloud-hosting/","section":"Posts","summary":"Let\u0026rsquo;s go ahead and get this out of the way: The following post contains only my personal opinions.","title":"A nerd’s perspective on cloud hosting"},{"content":"","date":null,"permalink":"/tags/opinion/","section":"Tags","summary":"","title":"Opinion"},{"content":"","date":null,"permalink":"/tags/benchmarks/","section":"Tags","summary":"","title":"Benchmarks"},{"content":"","date":null,"permalink":"/tags/glusterfs/","section":"Tags","summary":"","title":"Glusterfs"},{"content":"I\u0026rsquo;ve been getting requests for GlusterFS benchmarks from every direction lately and I\u0026rsquo;ve been a bit slow on getting them done. You may suspect that you know the cause of the delays, and you\u0026rsquo;re probably correct. ;-)\nQuite a few different sites argue that the default GlusterFS performance translator configuration from glusterfs-volgen doesn\u0026rsquo;t allow for good performance. You can find other sites which say you should stick with the defaults that come from the script. I decided to run some simple tests to see which was true in my environment.\nHere\u0026rsquo;s the testbed:\nGlusterFS 3.0.5 running on RHEL 5.4 Xen guests with ext3 filesystems one GlusterFS client and two GlusterFS servers are running in separate Xen guests cluster/replicate translator is being used to keep the servers in sync the instances are served by a gigabit network It\u0026rsquo;s about time for some pretty graphs, isn\u0026rsquo;t it?\nThe test run on the left used default stock client and server volume files as they come from glusterfs-volgen. The test run on the right used a client volume file with no performance translators (the server volume file was untouched). Between each test run, the GlusterFS mount was unmounted and remounted. I repeated this process four times (for a total of five runs) and averaged the data.\nYou\u0026rsquo;ll have to forgive the color mismatches and the lack of labeling on the legend (that\u0026rsquo;s KB/sec transferred) as I\u0026rsquo;m far from an Excel expert.\nThe graphs show that running without any translators at all will drastically hinder read caching in GlusterFS - exactly as I expected. Without any translators, the performance is very even across the board. Since my instances had 256MB of RAM each, their iocache translator was limited to about 51MB of cache. That\u0026rsquo;s reflected in the graph on the left - look for the vertical red/blue divider between the 32MB and 64MB file sizes. I\u0026rsquo;ll be playing around with that value soon to see how it can improve performance for large and small files.\nKeep in mind that this test was very unscientific and your results may vary depending on your configuration. 
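If you want to pull a similarly rough number out of your own setup, timing a couple of dd runs through the GlusterFS mount is the quickest way to do it (a crude sketch only; it assumes the volume is mounted at /mnt/glusterfs, and a real benchmark should use a proper tool with multiple runs):
# Rough write throughput through the mount; conv=fsync flushes before dd reports a rate
dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=1M count=512 conv=fsync
# Rough read throughput; remount first if you want to defeat local caching
dd if=/mnt/glusterfs/ddtest of=/dev/null bs=1M
rm /mnt/glusterfs/ddtest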
While I hope to have more detailed benchmarks soon, this should help some of the folks who have been asking for something basic and easy to understand.\n","date":"13 August 2010","permalink":"/p/very-unscientific-glusterfs-benchmarks/","section":"Posts","summary":"I\u0026rsquo;ve been getting requests for GlusterFS benchmarks from every direction lately and I\u0026rsquo;ve been a bit slow on getting them done.","title":"Very unscientific GlusterFS benchmarks"},{"content":"As many of you might have noticed from my previous GlusterFS blog post and my various tweets, I\u0026rsquo;ve been working with GlusterFS in production for my personal hosting needs for just over a month. I\u0026rsquo;ve also been learning quite a bit from some of the folks in the #gluster channel on Freenode. On a few occasions I\u0026rsquo;ve even been able to help out with some configuration problems from other users.\nThere has been quite a bit of interest in GlusterFS as of late and I\u0026rsquo;ve been inundated with questions from coworkers, other system administrators and developers. Most folks want to know about its reliability and performance in demanding production environments. I\u0026rsquo;ll try to do my best to cover the big points in this post.\nFirst off, here\u0026rsquo;s now I\u0026rsquo;m using it in production: I have two web nodes that keep content in sync for various web sites. They each run a GlusterFS server instance and they also mount their GlusterFS share. I\u0026rsquo;m using the replicate translator to keep both web nodes in sync with client side replication.\nHere are my impressions after a month:\nI/O speed is often tied heavily to network throughput\nThis one may seem obvious, but it\u0026rsquo;s not always true in all environments. If you deal with a lot of small files like I do, a 40mbit/sec link between the Xen guests is plenty. Adding extra throughput didn\u0026rsquo;t add any performance to my servers. However, if you wrangle large files on your servers regularly, you may want to consider higher throughput links between your servers. I was able to push just under 900mbit/sec by using dd to create a large file within a GlusterFS mount.\nNetwork and I/O latency are big factors for small file performance\nIf you have a busy network and the latency creeps up from time to time, you\u0026rsquo;ll find that your small file performance will drop significantly (especially with the replicate translator). Without getting too nerdy (you\u0026rsquo;re welcome to read the technical document on replication), replication is an intensive process. When a file is accessed, the client goes around to each server node to ensure that it not only has a copy of the file being read, but that it has the correct copy. If a server didn\u0026rsquo;t save a copy of a file (due to disk failure or the server being offline when the file was written), it has to be synced across the network from one of the good nodes.\nWhen you write files on replicated servers, the client has to roll through the same process first. Once that\u0026rsquo;s done, it has to lock the file, write to the change log, then do the write operation, drop the change log entries, and then unlock the file. All of those operations must be done on all of the servers. High latency networks will wreak havoc on this process and cause it to take longer than it should.\nIt\u0026rsquo;s quite obvious that if you have a fast, low-latency network between your servers, slow disks can still be a problem. 
If the client is waiting on the server nodes\u0026rsquo; disks to write data, the read and write performance will suffer. I\u0026rsquo;ve tested this in environments with fast networks and very busy RAID arrays. Even if the network was very underutilized, slow disks could cut performance drastically.\nMonitoring GlusterFS isn\u0026rsquo;t easy\nWhen the client has communication problems with the server nodes, some weird things can happen. I\u0026rsquo;ve seen situations where the client loses connections to the servers (see the next section on reliability) and the client mount simply hangs. In other situations, the client has been knocked offline entirely and the process is missing from the process tree by the time I logged in. Your monitoring will need to ensure that the mount is active and is responding in a timely fashion.\nThere\u0026rsquo;s a handy script which allows you to monitor GlusterFS mounts via nagios that Ian Rogers put together. Also, you can get some historical data with acrollet\u0026rsquo;s munin-glusterfs plugin.\nGlusterFS 3.x is pretty reliable\nWhen I first started working with GlusterFS, I was using a version from the 2.x tree. The Fedora package maintainer hadn\u0026rsquo;t updated the package in quite some time, but I figured it should work well enough for my needs. I found that the small file performance was lacking and the nodes often had communication issues when many files were being accessed or written simultaneously. This improved when I built my own RPMs of 3.0.4 (and later 3.0.5) and began using those instead.\nI did some failure testing by hard cycling the server and client nodes and found some interesting results. First off, abruptly pulling clients had no effects on the other clients or the server nodes. The connection eventually timed out and the servers logged the timeout as expected.\nAbruptly pulling servers led to some mixed results. In the 2.x branch, I saw client hangs and timeouts when I abruptly removed a server. This appears to be mostly corrected in the 3.x branch. If you\u0026rsquo;re using replicate, it\u0026rsquo;s important to keep in mind that the first server volume listed in your client\u0026rsquo;s volume file is the one that will be coordinating the file and directory locking. Should that one fall offline quickly, you\u0026rsquo;ll see a hiccup in performance for a brief moment and the next server will be used for coordinating the locking. When your original server comes back up, the locking coordination will shift back.\nConclusion\nI\u0026rsquo;m really impressed with how much GlusterFS can do with the simplicity of how it operates. Sure, you can get better performance and more features (sometimes) from something like Lustre or GFS2, but the amount of work required to stand up that kind of cluster isn\u0026rsquo;t trivial. GlusterFS really only requires that your kernel have FUSE support (it\u0026rsquo;s been in mainline kernels since 2.6.14).\nThere are some things that GlusterFS really needs in order to succeed:\nDocumentation - The current documentation is often out of date and confusing. I\u0026rsquo;ve even found instances where the documentation contradicts itself. While there are some good technical documents about the design of some translators, they really ought to do some more work there. Statistics gathering - It\u0026rsquo;s very difficult to find out what GlusterFS is doing and where it can be optimized. Profiling your environment to find your bottlenecks is nearly impossible with the 2.x and 3.x branches. 
It doesn\u0026rsquo;t make it easier when some of the performance translators actually decrease performance. Community involvement - This ties back into the documentation part a little, but it would be nice to see more participation from Gluster employees on IRC and via the mailing lists. They\u0026rsquo;re a little better with mailing list responses than other companies I\u0026rsquo;ve seen, but there is still room for improvement. If you\u0026rsquo;re considering GlusterFS for your servers but you still have more questions, feel free to leave a comment or find me on Freenode (I\u0026rsquo;m \u0026lsquo;rackerhacker\u0026rsquo;).\n","date":"11 August 2010","permalink":"/p/one-month-with-glusterfs-in-production/","section":"Posts","summary":"As many of you might have noticed from my previous GlusterFS blog post and my various tweets, I\u0026rsquo;ve been working with GlusterFS in production for my personal hosting needs for just over a month.","title":"One month with GlusterFS in production"},{"content":"After I wrote a recent post on best practices for iptables, I noticed that I forgot to mention comments for iptables rules. They can be extremely handy if you have some obscure rules for odd situations.\nTo make an iptables rule with a comment, simply add on the following arguments to the rule:\n-m comment --comment \u0026#34;limit ssh access\u0026#34; Depending on your distribution, you may need to load the ipt_comment or xt_comment modules into your running kernel first.\nA full iptables rule to limit ssh access would look something like this:\niptables -A INPUT -j DROP -p tcp --dport 22 -m comment --comment \u0026#34;limit ssh access\u0026#34; ","date":"26 July 2010","permalink":"/p/adding-comments-to-iptables-rules/","section":"Posts","summary":"After I wrote a recent post on best practices for iptables, I noticed that I forgot to mention comments for iptables rules.","title":"Adding comments to iptables rules"},{"content":" Typical configuration for a proxy-type load balancer\nA typical load balancing configuration using hardware devices or software implementations will be organized such that they resemble the diagram at the right. I usually call this a proxy-type load balancing solution since the load balancer proxies your request to some other nodes. The standard order of operations looks like this:\nclient makes a request load balancer receives the request load balancer sends request to a web node the web server sends content back to the load balancer the load balancer responds to the client If you\u0026rsquo;re not familiar with load balancing, here\u0026rsquo;s an analogy. Consider a fast food restaurant. When you walk up to the counter and place an order, you\u0026rsquo;re asking the person at the counter (the load balancer) for a hamburger. The person at the counter is going to submit your order, and then a group of people (web nodes) are going to work on it. Once your hamburger (web request) is ready, your order will be given to the person at the counter and then back to you.\nThis style of organization can become a problem as your web nodes begin to scale. It requires you to ensure that your load balancers can keep up with the requests and sustain higher transfer rates that come from having more web nodes serving a greater number of requests. Imagine the fast food restaurant where you have one person taking the orders but you have 30 people working on the food. 
The person at the counter may be able to take orders very quickly, but they may not be able to keep up with the orders coming out of the kitchen.\nLVS allows for application servers\nto respond to clients directly\nThis is where Linux Virtual Server (LVS) really shines. LVS operates a bit differently:\nclient makes a request load balancer receives the request load balancer sends request to a web node the web server sends the response directly to the client The key difference is that the load balancer sends the unaltered request to the web server and the web server responds directly to the client. Here\u0026rsquo;s the fast food analogy again. If you ask the person at the counter (the load balancer) for a hamburger, that person is going to take your order and give it to the kitchen staff (the web nodes) to work on it. This time around, the person at the counter is going to advise the kitchen staff that the order needs to go directly to you once it\u0026rsquo;s complete. When your hamburger is ready, a member of the kitchen staff will walk to the counter and give it directly to you.\nIn the fast food analogy, what are the benefits? As the number of orders and kitchen staff increases, the job of the person at the counter doesn\u0026rsquo;t drastically increase in difficulty. While that person will have to handle more orders and keep tabs on which of the kitchen staff is working on the least amount of orders, they don\u0026rsquo;t have to worry about returning food to customers. Also, the kitchen staff doesn\u0026rsquo;t need to waste time handing orders to the person at the counter. Instead, they can pass these orders directly to the customer that ordered them.\nIn the world of servers, this is a large benefit. Since the web servers\u0026rsquo; responses no longer pass through the load balancer, they can spend more time on what they do best: balancing traffic. This allows for smaller, lower-powered load balancing servers from the beginning. It also allows for increases in web nodes without big changes for the load balancers.\nThere are three main implementations of LVS to consider:\nLVS-DR: Direct Routing\nThe load balancer receives the request and sends the packet directly to a waiting real server to process. LVS-DR has the best performance, but all of your servers must be on the same network subnet and they have to be able to share the same router (with no other routing devices in between them).\nLVS-TUN: Tunneling\nThis is very similar to the direct routing approach, but the packets are encapsulated and sent directly to the real servers once the load balancer receives them. This removes the restriction that all of the devices must be on the same network. Thanks to encapsulation, you can use this method to load balance between multiple datacenters.\nLVS-NAT: Network Address Translation\nUsing NAT for LVS yields the least performance and scaling of all of the implementation options. In this configuration, the incoming requests are rewritten so that they will be transported correctly in a NAT environment. This puts a bigger burden on the load balancer as it must rewrite the requests quickly while still keeping up with how much work is being done by each web server.\nLooking for a Linux Virtual Server HOWTO? Stay tuned. 
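In the meantime, here is a taste of what an LVS-DR configuration looks like with ipvsadm. This is only a minimal sketch with made-up addresses: 192.168.10.100 is the virtual IP and the real servers sit at 192.168.10.11 and 192.168.10.12 (keep in mind that the real servers also need the virtual IP bound to a loopback alias with ARP responses suppressed):\nipvsadm -A -t 192.168.10.100:80 -s rr\nipvsadm -a -t 192.168.10.100:80 -r 192.168.10.11:80 -g\nipvsadm -a -t 192.168.10.100:80 -r 192.168.10.12:80 -g The -g flag selects direct routing; swap it for -i to get tunneling or -m to get NAT. 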
I\u0026rsquo;m preparing one for my next post.\n","date":"27 June 2010","permalink":"/p/modern-implementation-and-explanation-of-linux-virtual-server-lvs/","section":"Posts","summary":"Typical configuration for a proxy-type load balancer","title":"A modern implementation and explanation of Linux Virtual Server (LVS)"},{"content":"My curiosity is always piqued when I find new ways to manipulate command line output in simple ways. While working on a solution to parse /proc/mdstat output, I stumbled upon the paste utility.\nThe man page offers a very simple description of its features:\nWrite lines consisting of the sequentially corresponding lines from each FILE, separated by TABs, to standard output.\nHere\u0026rsquo;s an example of how it works. Let\u0026rsquo;s say you want to parse some software raid output that looks like this:\n# mdadm --brief --verbose --detail /dev/md0 ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=7bea4601:d5a02f5c:2da69848:3184a367 devices=/dev/sda1,/dev/sdb1 It would be handy if we had both on one line as that would make it easier to parse with a script. Of course, you can do this with utilities like awk and tr, but paste makes it so much easier:\n# mdadm --brief --verbose --detail /dev/md0 | paste - - ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=7bea4601:d5a02f5c:2da69848:3184a367 devices=/dev/sda1,/dev/sdb1 By default, paste uses tabs to separate the lines, but you can use the -d argument to specify any delimiter you like:\n# mdadm --brief --verbose --detail /dev/md0 | paste -d\"*\" - - ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=7bea4601:d5a02f5c:2da69848:3184a367* devices=/dev/sda1,/dev/sdb1 ","date":"14 June 2010","permalink":"/p/parsing-mdadm-output-with-paste/","section":"Posts","summary":"My curiosity is always piqued when I find new ways to manipulate command line output in simple ways.","title":"Parsing mdadm output with paste"},{"content":"","date":null,"permalink":"/tags/scripts/","section":"Tags","summary":"","title":"Scripts"},{"content":"NOTE: This post is out of date and is relevant only for GlusterFS 2.x.\n*High availability is certainly not a new concept, but if there\u0026rsquo;s one thing that frustrates me with high availability VM setups, it\u0026rsquo;s storage. If you don\u0026rsquo;t mind going active-passive, you can set up DRBD, toss your favorite filesystem on it, and you\u0026rsquo;re all set.\nIf you want to go active-active, or if you want multiple nodes active at the same time, you need to use a clustered filesystem like GFS2, OCFS2 or Lustre. These are certainly good options to consider but they\u0026rsquo;re not trivial to implement. They usually rely on additional systems and scripts to provide reliable fencing and STONITH capabilities.\nWhat about the rest of us who want multiple active VM\u0026rsquo;s with simple replicated storage that doesn\u0026rsquo;t require any additional elaborate systems? This is where GlusterFS really shines. GlusterFS can ride on top of whichever filesystem you prefer, and that\u0026rsquo;s a huge win for those who want a simple solution. However, that means that it has to use fuse, and that will limit your performance.\nLet\u0026rsquo;s get this thing started!\nConsider a situation where you want to run a WordPress blog on two VM\u0026rsquo;s with load balancers out front. You\u0026rsquo;ll probably want to use GlusterFS\u0026rsquo;s replicated volume mode (RAID 1-ish) so that the same files are on both nodes all of the time. 
To get started, build two small Slicehost slices or Rackspace Cloud Servers. I\u0026rsquo;ll be using Fedora 13 in this example, but the instructions for other distributions should be very similar.\nFirst things first — be sure to set a new root password and update all of the packages on the system. This should go without saying, but it\u0026rsquo;s important to remember. We can clear out the default iptables ruleset since we will make a customized set later:\n# iptables -F # /etc/init.d/iptables save iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ] GlusterFS communicates over the network, so we will want to ensure that traffic only moves over the private network between the instances. We will need to add the private IP\u0026rsquo;s and a special hostname for each instance to /etc/hosts on both instances. I\u0026rsquo;ll call mine gluster1 and gluster2:\n10.xx.xx.xx gluster1 10.xx.xx.xx gluster2 You\u0026rsquo;re now ready to install the required packages on both instances:\nyum install glusterfs-client glusterfs-server glusterfs-common glusterfs-devel Make the directories for the GlusterFS volumes on each instance:\nmkdir -p /export/store1 We\u0026rsquo;re ready to make the configuration files for our storage volumes. Since we want the same files on each instance, we will use the --raid 1 option. This only needs to be run on the first node:\n# glusterfs-volgen --name store1 --raid 1 gluster1:/export/store1 gluster2:/export/store1 Generating server volfiles.. for server 'gluster2' Generating server volfiles.. for server 'gluster1' Generating client volfiles.. for transport 'tcp' Once that\u0026rsquo;s done, you\u0026rsquo;ll have four new files:\nbooster.fstab – you won\u0026rsquo;t need this file gluster1-store1-export.vol – server-side configuration file for the first instance gluster2-store1-export.vol – server-side configuration file for the second instance store1-tcp.vol – client side configuration file for GlusterFS clients Copy the gluster1-store1-export.vol file to /etc/glusterfs/glusterfsd.vol on your first instance. Then, copy gluster2-store1-export.vol to /etc/glusterfs/glusterfsd.vol on your second instance. The store1-tcp.vol should be copied to /etc/glusterfs/glusterfs.vol on both instances.\nAt this point, you\u0026rsquo;re ready to start the GlusterFS servers on each instance:\n/etc/init.d/glusterfsd start You can now mount the GlusterFS volume on both instances:\nmkdir -p /mnt/glusterfs glusterfs /mnt/glusterfs/ You should now be able to see the new GlusterFS volume in both instances:\n# df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on /etc/glusterfs/glusterfs.vol 9.4G 831M 8.1G 10% /mnt/glusterfs As a test, you can create a file on your first instance and verify that your second instance can read the data:\n[root@gluster1 ~]# echo \"We're testing GlusterFS\" \u003e /mnt/glusterfs/test.txt ..... [root@gluster2 ~]# cat /mnt/glusterfs/test.txt We're testing GlusterFS If you remove that file on your second instance, it should disappear from your first instance as well.\nObviously, this is a very simple and basic implementation of GlusterFS. You can increase performance by making dedicated VM\u0026rsquo;s just for serving data and you can adjust the default performance options when you mount a GlusterFS volume. Limiting access to the GlusterFS servers is also a good idea.\nIf you want to read more, I\u0026rsquo;d recommend reading the GlusterFS Technical FAQ and the GlusterFS User Guide.\nThank you for your e-mails! 
I\u0026rsquo;ll be expanding on this post later with some sample benchmarks and additional tips/tricks, so please stay tuned.\n","date":"28 May 2010","permalink":"/p/glusterfs-on-the-cheap-with-rackspaces-cloud-servers-or-slicehost/","section":"Posts","summary":"NOTE: This post is out of date and is relevant only for GlusterFS 2.","title":"GlusterFS on the cheap with Rackspace’s Cloud Servers or Slicehost"},{"content":"I\u0026rsquo;ll admit it right now: I love engaging customers and learning more about how what we do at Rackspace can help their business or ideas take flight. Talking with customers can be a little nerve-wracking at first since you\u0026rsquo;re not always sure what their experience level is and which products they really need. However, you can get past that initial nervousness very quickly by getting an idea of what the customer needs and what they\u0026rsquo;ve tried already (that didn\u0026rsquo;t work).\nYou may not have realized it, but I covered the most important part of selling a technical product in the first paragraph without even mentioning the word \u0026ldquo;sell\u0026rdquo;. That was intentional. As a technical person, you have an innate ability to interact with customers without needing to actively sell them the product.\nWhenever I meet a customer at a conference, trade show, or some other relatively informal event, I try to keep a few things in mind. I\u0026rsquo;ll share them with you:\nLearn why your customers are seeking out your product and what they really need\nIt\u0026rsquo;s pretty obvious that this step requires more listening than talking. While the customer is explaining what they need but haven\u0026rsquo;t found, try to keep a running tally in your brain of what technologies are important to them so that you can rank your suggestions for them. Don\u0026rsquo;t think about which product will work best for them yet - just keep keep their general requirements in mind.\nThis is also a good opportunity to relate to what they\u0026rsquo;ve told you. If there\u0026rsquo;s a certain solution that ended up working really well or one that failed miserably, and you\u0026rsquo;re familiar with one of those solutions, tell them briefly about your experiences. This will re-affirm how the customer feels about that solution and it also shows them that you\u0026rsquo;ve been in their shoes before. They\u0026rsquo;ll also appreciate that you\u0026rsquo;ve been listening to their concerns and looking for ways to relate to their unique situation.\nMake thoughtful production suggestions and discuss implementation\nSome folks might say this is where the selling starts, but if you\u0026rsquo;re doing it correctly, you\u0026rsquo;ve been selling your product and your company the whole time. This is where things can get tricky. Most technical people I\u0026rsquo;ve met will try to avoid being pushy when suggesting a product for a customer to use, and that\u0026rsquo;s a good idea.\nYou need to do three things: pick the right product (or group of products), explain what needs it meets, and briefly cover some example implementations. As a technical person, this is where you really shine. Interpreting the customer\u0026rsquo;s needs and turning it into a mini technical sales pitch is a piece of cake when you know the product well and you\u0026rsquo;ve implemented it before.\nIt\u0026rsquo;s great to give a customer multiple options, but it\u0026rsquo;s a bad idea to overwhelm them. 
If you find that you\u0026rsquo;re talking a bit too much, there\u0026rsquo;s no harm in offering to talk about details later during a formal meeting. You can say things like these:\n\u0026ldquo;this product will meet all your needs, but if you want to save a little money, you can use this other product like this.\u0026rdquo; \u0026ldquo;if you combine these two products, you can meet these needs and save some time, but you can just use one and set it up like this…\u0026rdquo; \u0026ldquo;then later on, if you need to expand, you can start using this product by…\u0026rdquo; Think about the customer\u0026rsquo;s future growth\nEven if you have products that meet your customer\u0026rsquo;s needs, they\u0026rsquo;re going to be concerned about what\u0026rsquo;s going to happen down the road. What happens when they scale to a level that they can\u0026rsquo;t even comprehend right now? I don\u0026rsquo;t think any customer would expect you to cover all the bases, but try to think of some basic future-proofing for the customer. Even if it might involve a product that your company doesn\u0026rsquo;t sell, just mention it.\nOf course, there are some things that you shouldn\u0026rsquo;t do:\nDon\u0026rsquo;t overpromise or push hard about a future product. Don\u0026rsquo;t feel obligated to know the answer to every question. Don\u0026rsquo;t use words like \u0026ldquo;infinite\u0026rdquo;, \u0026ldquo;forever\u0026rdquo;, or \u0026ldquo;perfect\u0026rdquo;. Don\u0026rsquo;t talk about cost constantly. Don\u0026rsquo;t force a customer to choose a product, take product literature, or take your contact information. Don\u0026rsquo;t make assumptions about the customer\u0026rsquo;s technical level, needs, or purchasing power. Don\u0026rsquo;t let it bother you if the customer isn\u0026rsquo;t interested in your product - it\u0026rsquo;s not personal. And that\u0026rsquo;s about it. If you follow those three tips and avoid the things you shouldn\u0026rsquo;t do, you\u0026rsquo;ll get the confidence you need to engage the customer and create the beginnings of a relationship with them.\n","date":"27 May 2010","permalink":"/p/how-to-sell-a-guide-for-technical-people/","section":"Posts","summary":"I\u0026rsquo;ll admit it right now: I love engaging customers and learning more about how what we do at Rackspace can help their business or ideas take flight.","title":"How to sell: a guide for technical people"},{"content":"","date":null,"permalink":"/tags/sales/","section":"Tags","summary":"","title":"Sales"},{"content":"It certainly shouldn\u0026rsquo;t be difficult, but I always have a tough time with OAuth. Twitter is dropping support for basic authentication on June 30th, 2010. 
I have some automated Twitter bots that need an upgrade, so I\u0026rsquo;ve been working on a quick solution to generate tokens for my scripts.\nI formulated a pretty simple script using John Nunemaker\u0026rsquo;s twitter gem that will get it done manually for any scripts you have that read from or update Twitter:\n#!/usr/bin/ruby require \u0026#39;rubygems\u0026#39; require \u0026#39;twitter\u0026#39; # These credentials are specific to your *application* and not your *user* # Get these credentials from Twitter directly: http://twitter.com/apps application_token = \u0026#39;[this should be the shorter one]\u0026#39; application_secret = \u0026#39;[this should be the longer one]\u0026#39; oauth = Twitter::OAuth.new(application_token,application_secret) request_token = oauth.request_token.token request_secret = oauth.request_token.secret puts \u0026#34;Request token =\u0026gt; #{request_token}\u0026#34; puts \u0026#34;Request secret =\u0026gt; #{request_secret}\u0026#34; puts \u0026#34;Authentication URL =\u0026gt; #{oauth.request_token.authorize_url}\u0026#34; print \u0026#34;Provide the PIN that Twitter gave you here: \u0026#34; pin = gets.chomp oauth.authorize_from_request(request_token,request_secret,pin) access_token = oauth.access_token.token access_secret = oauth.access_token.secret puts \u0026#34;Access token =\u0026gt; #{oauth.access_token.token}\u0026#34; puts \u0026#34;Access secret =\u0026gt; #{oauth.access_token.secret}\u0026#34; oauth.authorize_from_access(access_token, access_secret) twitter = Twitter::Base.new(oauth) puts twitter.friends_timeline(:count =\u0026gt; 1) When you run the script, it will give you a request token, request secret and a URL to visit. When you access the URL, you\u0026rsquo;ll be given a PIN. Type the PIN into the prompt and you\u0026rsquo;ll get your access token and secret. This is what you can use to continue authenticating with Twitter, so be sure to save the access token and secret.\nFrom then on, you should be able to login with a script like this:\n#!/usr/bin/ruby require \u0026#39;rubygems\u0026#39; require \u0026#39;twitter\u0026#39; application_token = \u0026#39;[this should be the shorter one]\u0026#39; application_secret = \u0026#39;[this should be the longer one]\u0026#39; oauth = Twitter::OAuth.new(application_token,application_secret) oauth.authorize_from_access(access_token, access_secret) twitter = Twitter::Base.new(oauth) puts twitter.friends_timeline(:count =\u0026gt; 1) I hope this helps!\n","date":"20 May 2010","permalink":"/p/idiots-guide-to-oauth-logins-for-twitter/","section":"Posts","summary":"It certainly shouldn\u0026rsquo;t be difficult, but I always have a tough time with OAuth.","title":"Idiot’s guide to OAuth logins for Twitter"},{"content":"","date":null,"permalink":"/tags/oauth/","section":"Tags","summary":"","title":"Oauth"},{"content":"The discussions about the paravirt_ops, or \u0026ldquo;pvops\u0026rdquo;, support in upstream kernels at Xen Summit 2010 last month really piqued my interest.\nQuite a few distribution maintainers have gone to great lengths to keep Xen domU support in their kernels and it\u0026rsquo;s been an uphill battle. Some kernels, such as Ubuntu\u0026rsquo;s linux-ec2 kernels, have patches from 2.6.18 dragged forward into 2.6.32 and even 2.6.33. It certainly can\u0026rsquo;t be enjoyable to keep dragging those patches forward into new kernel trees.\nThe paravirt_ops support for Xen guests was added in 2.6.23 and continues to be included and improved in the latest kernel trees. 
However, there are two significant problems with these new kernels if you\u0026rsquo;re trying to work with legacy environments:\nthe console is on hvc0, not tty1 block devices are now /dev/xvdX rather than /dev/sdX If you only have a few guests, these changes are generally pretty easy. Switching the console just requires some changes to your inittab or upstart configurations. Changing the block device names requires changes to the guest\u0026rsquo;s Xen configuration file and /etc/fstab within the guest itself.\nConsidering the amount of environments I work with daily at Rackspace, changing the guest configuration is definitely not an option. I needed a way to keep the console and block devices unchanged so that our customers could have a consistent experience on our infrastructure.\nLuckily, Soren Hansen offered to pitch in and a solution became apparent. Through some relatively small patches, the legacy console and block device support was available in the latest 2.6.32 version (2.6.32.12 as of this post\u0026rsquo;s writing).\nSo far, I\u0026rsquo;ve tested x86_64 and i386 versions of 2.6.32.12 with the console and block device patches. It\u0026rsquo;s gone through its paces on Xen 3.0.3, 3.1.2, 3.3.0 and 3.4.2. All revisions of Fedora, CentOS, Ubuntu, Debian, Gentoo and Arch made within the last two years are working well with the new kernels.\n","date":"14 May 2010","permalink":"/p/legacy-tty1-and-block-device-support-for-xen-guests-with-pvops-kernels/","section":"Posts","summary":"The discussions about the paravirt_ops, or \u0026ldquo;pvops\u0026rdquo;, support in upstream kernels at Xen Summit 2010 last month really piqued my interest.","title":"Legacy tty1 and block device support for Xen guests with pvops kernels"},{"content":"Anyone who has used iptables before has locked themselves out of a remote server at least once. It\u0026rsquo;s easily avoided, but often forgotten. Lots of people have asked me for a list of best practices for iptables firewalls and I certainly hope this post helps.\nUnderstand how iptables operates\nBefore you can begin using iptables, you need to fully understand how it matches packets with chains and rules. There is a terrific diagram in Wikipedia that will make it easier to understand. It\u0026rsquo;s imperative to remember that iptables rules are read top-down until a matching rule is found. If no matching rule is found, the default policy of the chain will be applied (more on that in a moment).\nDon\u0026rsquo;t set the default policy to DROP\nAll iptables chains have a default policy setting. If a packet doesn\u0026rsquo;t match any of the rules in a relevant chain, it will match the default policy and will be handled accordingly. I\u0026rsquo;ve seen quite a few users set their default policy to DROP, and this can bring about some unintended consequences.\nConsider a situation where your INPUT chain contains quite a few rules allowing traffic, and you\u0026rsquo;ve set the default policy to DROP. Later on, another administrator logs into the server and flushes the rules (which isn\u0026rsquo;t a good practice, either). I\u0026rsquo;ve met quite a few good systems administrators who are unaware of the default policy for iptables chains. Your server will be completely inaccessible immediately. All of the packets will be dropped since they match the default policy in the chain.\nInstead of using the default policy, I normally recommend making an explicit DROP/REJECT rule at the bottom of your chain that matches everything. 
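For example, appending a rule like this at the very bottom of the INPUT chain will catch anything that the earlier rules did not match, without touching the chain policy itself:\niptables -A INPUT -j REJECT --reject-with icmp-host-prohibited Swap REJECT for DROP once you are happy with the rule set, as discussed below. 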
You can leave your default policy set to ACCEPT and this should reduce the chance of blocking all access to the server.\nDon\u0026rsquo;t blindly flush iptables rules\nBefore running iptables -F, always check each chain\u0026rsquo;s default policy. If the INPUT chain is set to DROP, you\u0026rsquo;ll need to set it to ACCEPT if you want to access the server after the rules are flushed. Also, consider the security implications of your network when you clear the rules. Your services will be completely exposed and any masquerading or NAT rules will be removed.\nRemember localhost\nLots of applications require access to the lo interface. Ensure that you set up your rules carefully so that the lo interface is not disturbed.\nSplit complicated rule groups into separate chains\nEven if you\u0026rsquo;re the only systems administrator for your particular network, it\u0026rsquo;s important to keep your iptables rules manageable. If you have a certain subset of rules that may be a little complicated, consider breaking them out into their own chain. You can just add in a jump to that chain from your default set of chains.\nUse REJECT until you know your rules are working properly\nWhen you\u0026rsquo;re writing iptables rules, you\u0026rsquo;ll probably be testing them pretty often. One way to speed up that process is to use the REJECT target rather than DROP. You\u0026rsquo;ll get an immediate rejection of your traffic (a TCP reset) instead of wondering if your packet is being dropped or if it\u0026rsquo;s making it to your server at all. Once you\u0026rsquo;re done with your testing, you can flip the rules from REJECT to DROP if you prefer.\nFor those folks working towards their RHCE, this is a huge help during the test. When you\u0026rsquo;re nervous and in a hurry, the immediate packet rejection is a welcomed sight.\nBe stringent with your rules\nTry to make your rules as specific as possible for your needs. For example, I like to allow ICMP pings on my servers so that I can run network tests against them. I could easily toss a rule into my INPUT chain that looks like this:\niptables -A INPUT -p icmp -m icmp -j ACCEPT However, I don\u0026rsquo;t want to simply allow all ICMP traffic. There have been some ICMP flaws from time to time and I\u0026rsquo;d rather keep as low of a profile as possible. There are many types of ICMP control messages, but I only want to allow echo requests:\niptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT This will allow echo requests (standard ICMP pings), but it won\u0026rsquo;t explicitly allow any other ICMP traffic to pass through the firewall.\nUse comments for obscure rules\nIf you have rules to cover edge cases that other administrators might not understand, consider using iptables comments by adding the following arguments to your rules:\n-m comment --comment \"limit ssh access\" The comments will appear in the iptables output if you list the current rules. They will also appear in your saved iptables rules.\nAlways save your rules\nMost distributions offer some way to save your iptables rules so that they persist through reboots. Red Hat-based distributions offer /etc/init.d/iptables save, but Debian and Ubuntu require some manual labor. 
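One common approach on Debian and Ubuntu (only a sketch, adjust the file location to taste) is to dump the rules to a file and restore them when the interface comes up:\niptables-save \u0026gt; /etc/iptables.rules\n# in /etc/network/interfaces, under the appropriate iface stanza:\npre-up iptables-restore \u0026lt; /etc/iptables.rules 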
An errant reboot would easily take out your unsaved rules, so save them often.\n","date":"12 April 2010","permalink":"/p/best-practices-iptables/","section":"Posts","summary":"Anyone who has used iptables before has locked themselves out of a remote server at least once.","title":"Best practices: iptables"},{"content":"Fedora 13 has quite a few changes related to upstart, and one of the biggest ones is how terminals are configured. Most distributions tuck the tty configuration away in /etc/inittab, /etc/event.d/ or /etc/init/. If you want to adjust the number of tty\u0026rsquo;s in Fedora 13, you\u0026rsquo;ll need to look in /etc/sysconfig/init:\nnew RH6.0 bootup # verbose =\u0026gt; old-style bootup # anything else =\u0026gt; new style bootup without ANSI colors or positioning BOOTUP=color # column to start \u0026#34;[ OK ]\u0026#34; label in RES_COL=60 # terminal sequence to move to that column. You could change this # to something like \u0026#34;tput hpa ${RES_COL}\u0026#34; if your terminal supports it MOVE_TO_COL=\u0026#34;echo -en \\\\033[${RES_COL}G\u0026#34; # terminal sequence to set color to a \u0026#39;success\u0026#39; color (currently: green) SETCOLOR_SUCCESS=\u0026#34;echo -en \\\\033[0;32m\u0026#34; # terminal sequence to set color to a \u0026#39;failure\u0026#39; color (currently: red) SETCOLOR_FAILURE=\u0026#34;echo -en \\\\033[0;31m\u0026#34; # terminal sequence to set color to a \u0026#39;warning\u0026#39; color (currently: yellow) SETCOLOR_WARNING=\u0026#34;echo -en \\\\033[0;33m\u0026#34; # terminal sequence to reset to the default color. SETCOLOR_NORMAL=\u0026#34;echo -en \\\\033[0;39m\u0026#34; # default kernel loglevel on boot (syslog will reset this) LOGLEVEL=3 # Set to anything other than \u0026#39;no\u0026#39; to allow hotkey interactive startup... PROMPT=yes # Set to \u0026#39;yes\u0026#39; to allow probing for devices with swap signatures AUTOSWAP=no # What ttys should gettys be started on? ACTIVE_CONSOLES=/dev/tty[1-6] The very last line controls the number of tty\u0026rsquo;s that are kept alive on your system. If you need more tty\u0026rsquo;s, simply increase the 6 to a higher number. If you only want one terminal (which is usually what I want in Xen), just make this adjustment:\n# What ttys should gettys be started on? ACTIVE_CONSOLES=/dev/tty1 A normal telinit q doesn\u0026rsquo;t seem to adjust the terminals on the fly as it did before upstart was involved. I\u0026rsquo;m not sure if this is a bug or an intended feature. Either way, a reboot solves the problem and you should see the changes afterwards.\n","date":"26 March 2010","permalink":"/p/adjusting-ttys-in-fedora-13-with-upstart/","section":"Posts","summary":"Fedora 13 has quite a few changes related to upstart, and one of the biggest ones is how terminals are configured.","title":"Adjusting tty’s in Fedora 13 with upstart"},{"content":"","date":null,"permalink":"/tags/tty/","section":"Tags","summary":"","title":"Tty"},{"content":"","date":null,"permalink":"/tags/upstart/","section":"Tags","summary":"","title":"Upstart"},{"content":"I normally try to keep my work-related items separate from this blog, but I felt that I needed to break tradition for a moment. The new Rackspace Talent site was released a few weeks ago and Michael Long asked me to write a blog post about what it means to be a Racker (that\u0026rsquo;s the term we use for employees of Rackspace).\nAfter the post went up, I received as much feedback from people outside of Rackspace as I received from Rackers. 
The negative feedback I received was centered around the assertion that the post\u0026rsquo;s content was \u0026ldquo;fluffed\u0026rdquo; to make the Rackspace experience seem better than it actually is. That couldn\u0026rsquo;t be further from the truth.\nIf you want to make comments on the post, or if you want to know more about working at Rackspace, let me know. Although I\u0026rsquo;m not in sales and I\u0026rsquo;m not in recruiting, I always enjoy talking to people about using Rackspace\u0026rsquo;s services or working for Rackspace.\nHere\u0026rsquo;s a link to the post: Rackspace Talent - Why I\u0026rsquo;m a Racker\n","date":"26 March 2010","permalink":"/p/why-im-a-racker/","section":"Posts","summary":"I normally try to keep my work-related items separate from this blog, but I felt that I needed to break tradition for a moment.","title":"Why I’m a Racker"},{"content":"When you need to measure network throughput and capacity, I haven\u0026rsquo;t found a simpler solution than iperf. There isn\u0026rsquo;t much to say about the operation of iperf — it\u0026rsquo;s a very simple application.\nIn short, iperf can be installed on two machines within your network. You\u0026rsquo;ll run one as a server, and one as a client. On the server side, simply run:\niperf -s On the client side, run:\niperf -c [server_ip] The client side will try to shove TCP packets through the network interface as quickly as possible for a period of 10 seconds by default. Once that\u0026rsquo;s complete, you\u0026rsquo;ll see a report on the server and client that will look like this:\n$ iperf -c 192.168.10.10 ------------------------------------------------------------ Client connecting to 192.168.10.10, TCP port 5001 TCP window size: 65.0 KByte (default) ------------------------------------------------------------ [ 3] local 192.168.10.30 port 53345 connected with 192.168.10.10 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 37.9 MBytes 31.8 Mbits/sec The previous test was run over an 802.11n network between a wired and wireless device. The typical downlink for an 802.11n network is about 40Mbit/s, so it\u0026rsquo;s obvious that my home network could use an adjustment.\nYou can also run bidirectional tests from the client either at the same time (-d flag) or one after the other (-r flag). The server side will keep running until you stop it, so you can leave it running and run tests from multiple locations over time. You can daemonize the server end if that makes things easier.\nFor the full list of options, refer to iperf\u0026rsquo;s man page.\n","date":"20 March 2010","permalink":"/p/testing-network-throughput-with-iperf/","section":"Posts","summary":"When you need to measure network throughput and capacity, I haven\u0026rsquo;t found a simpler solution than iperf.","title":"Testing network throughput with iperf"},{"content":"Sending signals to processes using kill on a Unix system is not a new topic for most systems administrators, but I\u0026rsquo;ve been asked many times about the difference between kill and kill -9.\nAnytime you use kill on a process, you\u0026rsquo;re actually sending the process a signal (in almost all situations – I\u0026rsquo;ll get into that soon). Standard C applications have a header file that contains the steps that the process should follow if it receives a particular signal. 
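You can watch this decision-making from a shell script, too. Here is a tiny sketch: the trap line tells bash to run the cleanup function when a SIGTERM arrives, which is the scripting equivalent of catching the signal in C:\n#!/bin/bash\ncleanup() { rm -f /tmp/example.lock; exit 0; }\n# run cleanup when SIGTERM arrives; a later kill -9 never reaches this handler\ntrap cleanup TERM\nwhile true; do sleep 1; done Send the script a plain kill and the lock file is removed before it exits; send it kill -9 and the handler never runs. 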
You can get an entire list of the available signals on your system by checking the man page for kill.\nConsider a command like this:\nkill 2563 This would send a signal called SIGTERM to the process. Once the process receives the notice, a few different things can happen:\nthe process may stop immediately the process may stop after a short delay after cleaning up resources the process may keep running indefinitely The application can determine what it wants to do once a SIGTERM is received. While most applications will clean up their resources and stop, some may not. An application may be configured to do something completely different when a SIGTERM is received. Also, if the application is in a bad state, such as waiting for disk I/O, it may not be able to act on the signal that was sent.\nMost system administrators will resort to the more abrupt signal when an application doesn\u0026rsquo;t respond to a SIGTERM:\nkill -9 2563 The -9 tells the kill command that you want to send signal #9, which is called SIGKILL. With a name like that, it\u0026rsquo;s obvious that this signal carries a little more weight.\nAlthough SIGKILL is defined in the same signal header file as SIGTERM, it cannot be ignored by the process. In fact, the process isn\u0026rsquo;t even made aware of the SIGKILL signal since the signal goes straight to the kernel. At that point, the kernel will stop the process. The process never gets the opportunity to catch the signal and act on it.\nHowever, the kernel may not be able to successfully kill the process in some situations. If the process is waiting for network or disk I/O, the kernel won\u0026rsquo;t be able to stop it. Zombie processes and processes caught in an uninterruptible sleep cannot be stopped by the kernel, either. A reboot is required to clear those processes from the system.\n","date":"18 March 2010","permalink":"/p/sigterm-vs-sigkill/","section":"Posts","summary":"Sending signals to processes using kill on a Unix system is not a new topic for most systems administrators, but I\u0026rsquo;ve been asked many times about the difference between kill and kill -9.","title":"SIGTERM vs. SIGKILL"},{"content":"","date":null,"permalink":"/tags/gdm/","section":"Tags","summary":"","title":"Gdm"},{"content":"My synergy setup at work is relatively simple. I have a MacBook Pro running Snow Leopard that acts as a synergy server and a desktop running Fedora 12 as a synergy client. On the Mac, I use SynergyKM to manage the synergy server. The Fedora box uses my gdm strategy for starting synergy at the login screen and in GNOME.\nI kept having an issue where the shift key would become stuck regardless of the settings I set for the client or server. The halfDuplexCapsLock configuration option had no effect. After installing xkeycaps, I found that both shift keys were getting stuck if I brought the mouse back and forth between Mac and Fedora twice.\nI decided to run a test. I started the client with the debug argument and moved the mouse to my Fedora box. At that point, I pressed the letter \u0026lsquo;a\u0026rsquo; and saw:\nDEBUG1: CXWindowsKeyState.cpp,195: 032 (00000000) up DEBUG1: CXWindowsKeyState.cpp,195: 03e (00000000) up DEBUG1: CXWindowsKeyState.cpp,195: 026 (00000000) down DEBUG1: CXWindowsKeyState.cpp,195: 032 (00000000) down DEBUG1: CXWindowsKeyState.cpp,195: 03e (00000000) down DEBUG1: CXWindowsKeyState.cpp,195: 026 (00000000) up I brought the mouse back to the Mac and then back to Fedora. 
I pressed \u0026lsquo;a\u0026rsquo; again and saw:\nDEBUG1: CXWindowsKeyState.cpp,195: 026 (00000000) down DEBUG1: CXWindowsKeyState.cpp,195: 026 (00000000) up DEBUG1: CXWindowsKeyState.cpp,195: 026 (00000000) down DEBUG1: CXWindowsKeyState.cpp,195: 026 (00000000) up After dumping the keyboard layout with xmodmap I found the keys that corresponded with the key numbers:\n032 - Left shift 03e - Right shift 026 - a If I tapped the left shift, I could clear the key press, but I couldn\u0026rsquo;t clear the right shift key (it was stuck down according to Fedora\u0026rsquo;s X server). When I hooked up a physical keyboard and mouse, I was able to use them normally without any keybinding problems.\nThe root cause: When synergy started in /etc/gdm/PreSession/Default after the gdm login, the keyboard layout wasn\u0026rsquo;t set up properly. The X server was setting up the keyboard layout later in the startup process and this confusion caused the shift keys to get stuck. Fedora 12 uses evdev to probe for keyboards during X\u0026rsquo;s startup and eventually settles on a default layout if none are explicitly defined.\nThe fix: I added the synergy startup to the GNOME startup items and it works flawlessly.\n","date":"4 March 2010","permalink":"/p/sticky-shift-key-with-synergy-in-fedora-12/","section":"Posts","summary":"My synergy setup at work is relatively simple.","title":"Sticky shift key with synergy in Fedora 12"},{"content":"","date":null,"permalink":"/tags/synergy/","section":"Tags","summary":"","title":"Synergy"},{"content":"Regardless of the type of hosting you\u0026rsquo;re using - dedicated or cloud - it\u0026rsquo;s important to take network interface security seriously. Most often, threats from the internet are the only ones mentioned. However, if you share a private network with other customers, you have just as much risk on that interface.\nMany cloud providers allow you access to a private network environment where you can exchange data with other instances or other services offered by the provider. The convenience of this access comes with a price: other instances can access your instance on the private network just as easily as they could on the public interface.\nHere are some security tips for your private interfaces:\nDisable the private interface\nThis one is pretty simple. If you have only one instance or server, and you don\u0026rsquo;t need to communicate privately with any other instances, just disable the interface. Remember to configure your networking scripts to leave the interface disabled after reboots.\nUse packet filtering\nThe actual mechanism will vary based on your operating system, but filtering packets is the one of the simplest ways to secure your private interface. You can take some different approaches with them, but I find the easiest method is to allow access from your other instances and reject all other traffic.\nFor additional security, you can limit access based on ports as well as source IP addresses. This could prevent an attacker from having easy access to your other instances if they\u0026rsquo;re able to break into one of them.\nConfigure your daemons to listen on the appropriate interfaces\nIf there are services that don\u0026rsquo;t need to be listening on the private network, don\u0026rsquo;t allow them to listen on your private interface. For example, MySQL might need to listen on the private interface so the web server can talk to it, but apache won\u0026rsquo;t need to listen on the private interface. 
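As a quick sketch of what that looks like (the addresses here are made up), MySQL can be pinned to the private address while apache listens only on the public one:\n# /etc/my.cnf\n[mysqld]\nbind-address = 10.0.0.5\n# /etc/httpd/conf/httpd.conf\nListen 203.0.113.10:80 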
This reduces the profile of your instance on the private network and makes it a less likely target for attack.\nUse hosts.allow and hosts.deny\nMany new systems administrators forget about how handy tcpwrappers can be for limiting access. If your firewall is down in error, hosts.allow and hosts.deny could be an extra layer of protection. It\u0026rsquo;s important to ensure that the daemons you are attempting to control are built with tcpwrappers support. Daemons like sshd support it, but apache and MySQL do not.\nEncrypt all traffic on the private network\nJust because it\u0026rsquo;s called a \u0026ldquo;private\u0026rdquo; network doesn\u0026rsquo;t mean that your traffic can traverse the network privately. You should always err on the side of caution and encrypt all traffic traversing the private network. You can use ssh tunnels, stunnel, or the built-in SSL features found in most daemons.\nThis also brings up an important point: you should know how your provider\u0026rsquo;s private network works. Are there safeguards to prevent sniffing? Could someone else possibly ARP spoof your instance\u0026rsquo;s private IP addresses? Is your private network\u0026rsquo;s subnet shared among many customers?\nWith all of that said, it\u0026rsquo;s also very important to have proper change control policies so that administrators working after you are fully aware of the security measures in place and why they are important. This will ensure that all of the administrators on your instances will understand the security of the system and they should be able to make sensible adjustments later for future functionality.\n","date":"2 March 2010","permalink":"/p/private-network-interfaces-the-forgotten-security-hole/","section":"Posts","summary":"Regardless of the type of hosting you\u0026rsquo;re using - dedicated or cloud - it\u0026rsquo;s important to take network interface security seriously.","title":"Private network interfaces: the forgotten security hole"},{"content":"","date":null,"permalink":"/tags/tcpwrappers/","section":"Tags","summary":"","title":"Tcpwrappers"},{"content":"","date":null,"permalink":"/tags/general-advice/","section":"Tags","summary":"","title":"General advice"},{"content":"Earlier this year, I started a series of posts to encourage systems administrators to refine their troubleshooting abilities. This is the second post in that series.\nAlmost every system administrator has found themselves in a situation where they\u0026rsquo;re confronted with a server which has a problem. However, if you\u0026rsquo;re not the primary administrator for the server, you may not always know what has changed recently or you may not be aware of changes in the server\u0026rsquo;s environment. In these situations, if the fix isn\u0026rsquo;t obvious, try going through these steps:\nLocalize the problem to a specific daemon or service\nIn the case of a problem where a website isn\u0026rsquo;t loading properly, is it a problem with the web server itself? Could something other than the actual web server daemon be having an issue?\nAs an example, consider a ruby on rails application which runs through apache\u0026rsquo;s mod_proxy_balancer and queries data from MySQL. If any of those individual puzzle pieces were not functioning correctly, you\u0026rsquo;d get a different result. A downed MySQL instance could make the application throw errors or appear to be unresponsive. If the mongrel cluster had failed, apache might be returning internal server errors. 
Your browser might return a connection refused if apache was down. These are all relatively easy to determine.\nWhat if you are unable to determine which daemon is causing the problem?\nIf it\u0026rsquo;s broken, break it a little more\nLet\u0026rsquo;s say that you\u0026rsquo;ve reviewed the process list and all of the appropriate daemons appear to be running. However, the website is still not loading properly. What do you do? Bring down a service and try again. Did something change? Did a new error appear? If not, bring that daemon back up and try taking down one of the other ones.\nI\u0026rsquo;ve also had some good results by making small adjustments in the web server\u0026rsquo;s configuration file. If you have a virtual host that isn\u0026rsquo;t returning the correct data, try commenting it out temporarily. For rewrite rules, try removing them temporarily or strip them down to a more basic form. Test again, and then begin adding lines back incrementally. As much as a single period or quotation mark can derail a perfectly good set of rewrite rules.\nIn short - try to think outside the box when you\u0026rsquo;re troubleshooting a difficult issue on an unfamiliar system. Always remember to back up your configurations before making changes and ensure your daemons will start properly if you bring them down.\n","date":"28 February 2010","permalink":"/p/system-administration-inspiration-if-its-broken-break-it-a-little-more/","section":"Posts","summary":"Earlier this year, I started a series of posts to encourage systems administrators to refine their troubleshooting abilities.","title":"System Administration Inspiration: If it’s broken, break it a little more"},{"content":"","date":null,"permalink":"/tags/configuration/","section":"Tags","summary":"","title":"Configuration"},{"content":"","date":null,"permalink":"/tags/innodb/","section":"Tags","summary":"","title":"Innodb"},{"content":"","date":null,"permalink":"/tags/memory/","section":"Tags","summary":"","title":"Memory"},{"content":"If you\u0026rsquo;re running an operation on a large number of rows within a table that uses the InnoDB storage engine, you might see this error:\nERROR 1206 (HY000): The total number of locks exceeds the lock table size\nMySQL is trying to tell you that it doesn\u0026rsquo;t have enough room to store all of the row locks that it would need to execute your query. The only way to fix it for sure is to adjust innodb_buffer_pool_size and restart MySQL. By default, this is set to only 8MB, which is too small for anyone who is using InnoDB to do anything.\nIf you need a temporary workaround, reduce the amount of rows you\u0026rsquo;re manipulating in one query. For example, if you need to delete a million rows from a table, try to delete the records in chunks of 50,000 or 100,000 rows. 
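As a rough sketch (the table, column, and database names are made up), a small shell loop can clear out the rows one chunk at a time until nothing is left to delete:\n# delete in 50,000-row chunks instead of one giant statement\nwhile true; do\nrows=$(mysql -N -B -e \u0026#39;DELETE FROM old_sessions WHERE expired = 1 LIMIT 50000; SELECT ROW_COUNT();\u0026#39; mydb)\nif [ $rows -eq 0 ]; then break; fi\ndone 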
If you\u0026rsquo;re inserting many rows, try to insert portions of the data at a single time.\nFurther reading:\nMySQL Bug #15667 - The total number of locks exceeds the lock table size MySQL Error 1206 » Mike R\u0026rsquo;s Blog ","date":"16 February 2010","permalink":"/p/mysql-the-total-number-of-locks-exceeds-the-lock-table-size-2/","section":"Posts","summary":"If you\u0026rsquo;re running an operation on a large number of rows within a table that uses the InnoDB storage engine, you might see this error:","title":"MySQL: The total number of locks exceeds the lock table size"},{"content":"","date":null,"permalink":"/tags/optimization/","section":"Tags","summary":"","title":"Optimization"},{"content":"This problem has cropped up for me a few times, but I\u0026rsquo;ve always forgotten to make a post about it. If you\u0026rsquo;re working with a large InnoDB table and you\u0026rsquo;re updating, inserting, or deleting a large volume of rows, you may stumble upon this error:\nERROR 1206 (HY000): The total number of locks exceeds the lock table size InnoDB stores its lock tables in the main buffer pool. This means that the number of locks you can have at the same time is limited by the innodb_buffer_pool_size variable that was set when MySQL was started. By default, MySQL leaves this at 8MB, which is pretty useless if you\u0026rsquo;re doing anything with InnoDB on your server.\nLuckily, the fix for this issue is very easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix does require a restart of the MySQL daemon. There\u0026rsquo;s simply no way to adjust this variable on the fly (with the current stable MySQL versions as of this post\u0026rsquo;s writing).\nBefore you adjust the variable, make sure that your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server wide variable, not a per-thread variable, so it\u0026rsquo;s shared between all of the connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won\u0026rsquo;t use all of that up front. As MySQL finds more things to put in the buffer, the memory usage will gradually increase until it reaches 1GB. At that point, the oldest and least used data begins to get pruned when new data needs to be present.\nSo, you need a workaround without a MySQL restart?\nIf you\u0026rsquo;re in a pinch, and you need a workaround, break up your statements into chunks. If you need to delete a million rows, try deleting 5-10% of those rows per transaction. This may allow you to sneak under the lock table size limitations and clear out some data without restarting MySQL.\nTo learn more about InnoDB\u0026rsquo;s parameters, visit the MySQL documentation.\n","date":"29 January 2010","permalink":"/p/mysql-the-total-number-of-locks-exceeds-the-lock-table-size/","section":"Posts","summary":"This problem has cropped up for me a few times, but I\u0026rsquo;ve always forgotten to make a post about it.","title":"MySQL: The total number of locks exceeds the lock table size"},{"content":"","date":null,"permalink":"/tags/screen/","section":"Tags","summary":"","title":"Screen"},{"content":"About a year ago, I was introduced to the joys of using irssi and screen to access irc servers. 
Before that time, I\u0026rsquo;d usually used graphical clients like Colloquy, and I always enjoyed getting Growl notifications when someone mentioned a word or string that I set up as a trigger.\nOnce I started using irssi in screen, I found that the visual bell in screen didn\u0026rsquo;t get my attention quickly. Luckily, someone in the #slicehost channel let me know about screen\u0026rsquo;s audible bell. You can flip between the visual and audible bell with CTRL-A and then CTRL-G. If you keep repeating that key combination, you\u0026rsquo;ll switch back and forth between the two (with a status update at the bottom left).\nYou can also set up your visual bell configuration in your .screenrc via some configuration parameters:\nvbell [on|off] vbell_msg [message] vbellwait sec ","date":"21 January 2010","permalink":"/p/switching-between-audible-and-visual-bells-in-screen/","section":"Posts","summary":"About a year ago, I was introduced to the joys of using irssi and screen to access irc servers.","title":"Switching between audible and visual bells in screen"},{"content":"Thanks to a recommendation from [Michael][1] and [Florian][2], I\u0026rsquo;ve been using [dsh][3] with a lot of success for quite some time. In short, dsh is a small application which will allow you to run commands across many servers via ssh very quickly.\nYou may be wondering: \u0026ldquo;Why not just use ssh in a for loop?\u0026rdquo; Sure, you could do something like this in bash:\nBut dsh allows you to do this: In addition, dsh allows you to run the commands concurrently (-c) or one after the other (-w). You can tell it to prepend each line with the machine\u0026rsquo;s name (-M) or it can omit the machine name from the output (-H). If you need to pass extra options, such as which ssh key to use, or an alternative port, you can do that as well (-o). All of these command line options can be tossed into a configuration file if you have a default set of options you prefer.\nAnother thing that makes dsh more powerful is the groups feature. Let\u0026rsquo;s say you have three groups of servers - some are in California, others in Texas, and still others in New York. You could make three files for the groups:\n~/.dsh/group/california ~/.dsh/group/texas ~/.dsh/group/newyork Inside each file, you just need to list the hosts one after the other. Here\u0026rsquo;s the ~/.dsh/group/texas group file:\ndb1.tx.mydomain.com db2.tx.mydomain.com web1.tx.mydomain.com web2.tx.mydomain.com #web3.tx.mydomain.com As you can see, dsh handles comments in the hosts file. In the above example, the web3 server will be skipped since it\u0026rsquo;s prepended with a comment. Let\u0026rsquo;s say you want to check the uptime on all of the Texas servers as fast as possible:\nThat will run the `uptime` command on all of the servers in the Texas group concurrently. If you need to run it on two groups at once, just pass another group (eg. `-g texas -g california`) as an argument. You can also run the commands against all of your groups (-a). The dsh command can really help you if you need to gather information or run simple commands on many remote servers. If you find yourself using it often for systems management, you may want to consider something like [puppet][4]. 
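The inline examples referenced above did not survive in this copy of the post, so here is a reconstruction of what they likely showed — the hostnames and the texas group come from the post itself, while the exact dsh invocations are my best guess based on the options described:\n# The plain-bash approach the post contrasts against
for host in web1.tx.mydomain.com web2.tx.mydomain.com; do
  ssh $host uptime
done
# The dsh equivalent for a couple of machines, and a concurrent run against the whole Texas group
dsh -m web1.tx.mydomain.com -m web2.tx.mydomain.com -M -- uptime
dsh -g texas -c -M -- uptime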
[1]: http://twitter.com/mshuler [2]: http://twitter.com/pandemicsyn [3]: http://www.netfort.gr.jp/~dancer/software/dsh.html.en [4]: http://reductivelabs.com/products/puppet/ ","date":"20 January 2010","permalink":"/p/crash-course-in-dsh/","section":"Posts","summary":"Thanks to a recommendation from [Michael][1] and [Florian][2], I\u0026rsquo;ve been using [dsh][3] with a lot of success for quite some time.","title":"Crash course in dsh"},{"content":"","date":null,"permalink":"/tags/dsh/","section":"Tags","summary":"","title":"Dsh"},{"content":"","date":null,"permalink":"/tags/puppet/","section":"Tags","summary":"","title":"Puppet"},{"content":"One of my favorite (and most used) applications on any Linux machine is screen. Once you fire up a screen session, you can start something and keep it running indefinitely. Even if your internet connection drops or you accidentally close your terminal window, the screen session will remain open on the remote server.\nDetaching from a screen session is done by pressing CTRL-A and then d (for detach). However, when I\u0026rsquo;m on my Mac, CTRL-A and CTRL-E send my cursor to the beginning and end of lines, respectively. Once I launch screen, I lose the CTRL-A functionality because screen thinks I\u0026rsquo;m trying to send it a command.\nLuckily, this can be changed in your ~/.screenrc:\nescape ^Ww With this change, you can press CTRL-W, then press d, and you\u0026rsquo;ll detach from the screen session. For all of the screen options, run man screen on your local machine or review the man page online.\n","date":"7 January 2010","permalink":"/p/change-the-escape-keystrokes-in-screen/","section":"Posts","summary":"One of my favorite (and most used) applications on any Linux machine is screen.","title":"Change the escape keystrokes in screen"},{"content":"Happy New Year! I certainly hope it\u0026rsquo;s a great one for you, your family, and your business. As the new year begins, I figured it would be a good time to sit down and answer a question that I hear very often:\nHow do I become a better systems administrator?\nThe best way to become a better systems administrator is to fully understand the theory of what\u0026rsquo;s happening in your server\u0026rsquo;s environment.\nWhat do I mean by that? Learn why things aren\u0026rsquo;t happening as you expected and think about all of the factors that could possibly be involved. Instead of thinking purely about cause and effect, you\u0026rsquo;ll find it much easier and more rewarding to consider everything inside and outside your environment before you make any changes.\nThis still may be a little difficult to fully understand, so here\u0026rsquo;s an example. Let\u0026rsquo;s say you\u0026rsquo;re handling an issue where a customer can\u0026rsquo;t reach a website hosted on their server. When you ask them for more details, they might give you the dreaded reply: \u0026ldquo;It\u0026rsquo;s not coming up.\u0026rdquo; Start by making a mental list of the problems that are easiest to check:\nIs the web server daemon running? If a database server is being used, is it running and accessible? Is there a software/hardware firewall blocking port 80? Is a script stuck on the server tying up resources? Could there be a DNS resolution problem? Is the server up? Did a switch fail? Is the server\u0026rsquo;s hard disk out of space? Can the customer reach other websites like Google or Yahoo? If SELinux is involved, have the appropriate contexts been set? Could the site be a target of a denial of service attack?
Has the server reached its connection tracking limit? Of course, this is a relatively short list, but these are all easy to check. If you\u0026rsquo;re thinking about cause and effect, you might only consider the web server daemon and some basic network issues. By considering all of the other factors that may be related, you\u0026rsquo;ve ensured that all of the basics are covered before you consider more complex problems.\nMost systems administrators have taken an error message and tossed it into Google en masse before. Occasionally, no results will appear for the search. If you find yourself in this situation, try to understand the individual parts of the error message. Work outward from what you know already. You should know which daemon said it, and you may have an idea of what the application was doing when the error occurred. Take time to consider what the daemon is trying to tell you within the context of what it was doing at the time.\nOne of the easiest ways to immerse yourself in this way of thinking is to host applications for non-technical people. You\u0026rsquo;ll find that many customers want things done differently, and they\u0026rsquo;re all at different levels of technical aptitude. Some may find it a frustrating experience at first, but you\u0026rsquo;ll thank yourself later. It will force you to consider all aspects of how a server operates since you might not always know what\u0026rsquo;s happening within a customer\u0026rsquo;s application.\nAs always, if you find yourself stumbling, remember to ask your peers and colleagues. Even if they haven\u0026rsquo;t seen the particular issue, they will probably be able to guide you closer to the solution you seek.\n","date":"4 January 2010","permalink":"/p/a-new-year-system-administrator-inspiration/","section":"Posts","summary":"Happy New Year!","title":"A New Year System Administrator Inspiration"},{"content":"","date":null,"permalink":"/tags/grep/","section":"Tags","summary":"","title":"Grep"},{"content":"","date":null,"permalink":"/tags/one-liner/","section":"Tags","summary":"","title":"One Liner"},{"content":"I try to keep up with the latest kernel update from kernel.org, but parsing through the output can be a pain if there are a lot of changes taking place. Here\u0026rsquo;s a handy one-liner to make it easier to read:\nwget --quiet -O - http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.31.8 | grep -A 4 ^commit | grep -B 1 \u0026#34;^--\u0026#34; | grep -v \u0026#34;^--\u0026#34; It should give you some output like this:\nLinux 2.6.31.8 ext4: Fix potential fiemap deadlock (mmap_sem vs. i_data_sem) signal: Fix alternate signal stack check SCSI: scsi_lib_dma: fix bug with dma maps on nested scsi objects SCSI: osd_protocol.h: Add missing #include SCSI: megaraid_sas: fix 64 bit sense pointer truncation .. ","date":"15 December 2009","permalink":"/p/parse-kernel-org-changelogs-with-wget-and-grep/","section":"Posts","summary":"I try to keep up with the latest kernel update from kernel.","title":"Parse kernel.org changelogs with wget and grep"},{"content":"","date":null,"permalink":"/tags/wget/","section":"Tags","summary":"","title":"Wget"},{"content":"","date":null,"permalink":"/tags/upgrade/","section":"Tags","summary":"","title":"Upgrade"},{"content":"As with the Fedora 10 to 11 upgrade, you can upgrade Fedora 11 to Fedora 12 using yum.
I find this to be the easiest and most reliable way to upgrade a Fedora installation whether you use it as a server or desktop.\nTo reduce the total data downloaded, I\u0026rsquo;d recommend installing the yum-presto package first. It downloads delta RPM\u0026rsquo;s and builds them on the fly, which allows you to upgrade packages without having to download the entire RPM\u0026rsquo;s.\nyum install yum-presto Now, upgrade your current system to the latest packages and clean up yum\u0026rsquo;s metadata:\nyum upgrade yum clean all Get the latest fedora-release package and install it (replace x86_64 with x86 if you\u0026rsquo;re using a 32-bit system):\nwget ftp://download.fedora.redhat.com/pub/fedora/linux/releases/12/Fedora/x86_64/os/Packages/fedora-release-*.noarch.rpm rpm -Uvh fedora-release-*.rpm Now, upgrade your system to Fedora 12:\nyum upgrade For detailed documentation on the entire process, refer to Fedora using yum on the FedoraProject Wiki.\n","date":"8 December 2009","permalink":"/p/upgrading-fedora-11-to-12-using-yum/","section":"Posts","summary":"As with the Fedora 10 to 11 upgrade, you can upgrade Fedora 11 to Fedora 12 using yum.","title":"Upgrading Fedora 11 to 12 using yum"},{"content":"Edit: After further research, I found that this fix only adjusts the speed at which your mouse moves. It doesn\u0026rsquo;t do anything for the acceleration curve.\nI recently picked up a Magic Mouse and discovered that I like almost all of its features. The biggest headache is the funky mouse acceleration curve that it applies by default. When you make small movements, they barely even register on the screen. When you make big movements and slow down a little mid-move, the pointer slows down much too rapidly.\nA quick Google search revealed a support discussion post where users were discussing possible solutions. Someone suggested running this in the terminal:\ndefaults write -g com.apple.mouse.scaling -1 That improved things a little for me, but it\u0026rsquo;s not perfect. If you adjust the tracking speed in System Preferences after running this command, the acceleration curve will be reset to the default.\nUpdate: After some tinkering (and further Googling), I found that `` or .1 seemed to work better for me than -1.\n","date":"3 December 2009","permalink":"/p/disable-acceleration-for-apples-magic-mouse/","section":"Posts","summary":"Edit: After further research, I found that this fix only adjusts the speed at which your mouse moves.","title":"Disable acceleration for Apple’s Magic Mouse"},{"content":"","date":null,"permalink":"/tags/magic-mouse/","section":"Tags","summary":"","title":"Magic Mouse"},{"content":"If you want your iptables rules automatically loaded every time your networking comes up on your Debian or Ubuntu server, you can follow these easy steps.\nFirst, get your iptables rules set up the way you like them. Once you\u0026rsquo;ve verified that everything works, save the rules:\niptables-save \u003e /etc/firewall.conf Next, open up /etc/network/if-up.d/iptables in your favorite text editor and add the following:\n#!/bin/sh iptables-restore \u0026lt; /etc/firewall.conf Once you save it, make it executable:\nchmod +x /etc/network/if-up.d/iptables Now, the rules will be restored each time your networking scripts start (or restart). 
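To sanity-check the whole setup, you can bounce the interface and confirm that the rules and their packet counters come back loaded. This quick test is my own addition rather than part of the original tip, and it assumes your interface is eth0:\nsudo ifdown eth0; sudo ifup eth0
sudo iptables -L -n -v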
If you need to save changes to your rules in the future, you can manually edit /etc/firewall.conf or you can adjust your rules live and run:\niptables-save \u003e /etc/firewall.conf Thanks to Ant for this handy tip.\n","date":"17 November 2009","permalink":"/p/automatically-loading-iptables-on-debianubuntu/","section":"Posts","summary":"If you want your iptables rules automatically loaded every time your networking comes up on your Debian or Ubuntu server, you can follow these easy steps.","title":"Automatically loading iptables rules on Debian/Ubuntu"},{"content":"I usually set the time zone on my servers to UTC, but that makes it a bit confusing for me when I use irssi. If you have perl support built into irssi, you can run these commands to alter your time zone within irssi only:\n/load perl /script exec $ENV{'TZ'}='(nameofyourtimezone)'; For example, I\u0026rsquo;m in Central Time, so I\u0026rsquo;d use:\n/script exec $ENV{'TZ'}='CST6CDT'; To update the time in your status bar, simply /whois yourself and you should see the updated time zone. If you want more handy irssi tips, look no further than irssi\u0026rsquo;s documentation.\n","date":"3 November 2009","permalink":"/p/changing-the-time-zone-in-irssi/","section":"Posts","summary":"I usually set the time zone on my servers to UTC, but that makes it a bit confusing for me when I use irssi.","title":"Changing the time zone in irssi"},{"content":"","date":null,"permalink":"/tags/time-zone/","section":"Tags","summary":"","title":"Time Zone"},{"content":"Running OS X 10.6.3? William Fennie found a fix on Google Groups.\nFirst off, credit for this fix on OS X 10.6.2 goes to Geoff Watts from his two tweets.\nIf you\u0026rsquo;re using Snow Leopard, you\u0026rsquo;ll find that the current version of MacFusion refuses to complete a connection to a remote server. You can fix this in three steps:\nFirst, quit MacFusion.\nSecond, open System Preferences and then open the MacFUSE pane. Check the “Show Beta Versions” box and click “Check For Updates”. Go ahead and update MacFUSE.\nThird, open up a terminal and do the following:\nrm /Applications/Macfusion.app/Contents/PlugIns/sshfs.mfplugin/Contents/Resources/sshnodelay.so Your MacFusion installation should now be working on Snow Leopard. I\u0026rsquo;ve tested SSH and FTP connectivity so far, and they both appear to be working. Thanks again to Geoff for the fix!\n","date":"28 August 2009","permalink":"/p/fix-macfusion-on-snow-leopard/","section":"Posts","summary":"Running OS X 10.","title":"Fix MacFusion on Snow Leopard"},{"content":"","date":null,"permalink":"/tags/ftp/","section":"Tags","summary":"","title":"Ftp"},{"content":"","date":null,"permalink":"/tags/macfusion/","section":"Tags","summary":"","title":"Macfusion"},{"content":"","date":null,"permalink":"/tags/snow-leopard/","section":"Tags","summary":"","title":"Snow Leopard"},{"content":"If you use Fedora 11 in a virtualized environment, you may have seen this error recently if you\u0026rsquo;ve updated to apr-1.3.8-1:\n[root@f11 ~]# /etc/init.d/httpd start Starting httpd: [Fri Aug 14 17:05:24 2009] [crit] (22)Invalid argument: alloc_listener: failed to get a socket for (null) Syntax error on line 134 of /etc/httpd/conf/httpd.conf: Listen setup failed [FAILED] The issue is related to three kernel calls that are used in apr-1.3.8-1: accept4(), dup3() and epoll_create1().
Without these calls, apache is unable to start.\nUpdate on August 17, 2009: the Fedora team has pushed apr-1.3.8-2 into the stable repositories for Fedora 11, which eliminates the need for the temporary fix shown below.\nDeprecated solution: There is a bug open with the Fedora team, and there is a temporary fix available:\nyum --enablerepo=updates-testing update apr ","date":"14 August 2009","permalink":"/p/fedora-11-httpd-alloc_listener-failed-to-get-a-socket-for-null/","section":"Posts","summary":"If you use Fedora 11 in a virtualized environment, you may have seen this error recently if you\u0026rsquo;ve updated to apr-1.","title":"Fedora 11 httpd: alloc_listener: failed to get a socket for (null)"},{"content":"On some systems, getting the mysql gem to build can be a little tricky. Fedora 11 x86_64 will require a bit of extra finesse to get the gem installed. First off, ensure that you\u0026rsquo;ve installed the mysql-devel package:\n# yum -y install mysql-devel I\u0026rsquo;ll assume that you already installed the rubygems package. You can install the mysql gem like this:\n# gem install mysql -- --with-mysql-config=/usr/bin/mysql_config Building native extensions. This could take a while... Successfully installed mysql-2.7 1 gem installed Installing ri documentation for mysql-2.7... Installing RDoc documentation for mysql-2.7...","date":"7 August 2009","permalink":"/p/installing-the-mysql-gem-in-fedora-11-64-bit/","section":"Posts","summary":"On some systems, getting the mysql gem to build can be a little tricky.","title":"Installing the mysql gem in Fedora 11 64-bit"},{"content":"If you haven\u0026rsquo;t checked out bgplay, it\u0026rsquo;s pretty handy.\n","date":"4 August 2009","permalink":"/p/graphical-representation-of-ciscos-bgp-issues-this-morning/","section":"Posts","summary":"If you haven\u0026rsquo;t checked out bgplay, it\u0026rsquo;s pretty handy.","title":"Graphical representation of Cisco’s BGP issues this morning"},{"content":"","date":null,"permalink":"/tags/curl/","section":"Tags","summary":"","title":"Curl"},{"content":"There are a ton of places on the internet where you can check the public-facing IP for the device you are using. I\u0026rsquo;ve used plenty of them, but I\u0026rsquo;ve always wanted one that just returned text. You can get pretty close with checkip.dyndns.org, but there is still HTML in the output:\n$ curl checkip.dyndns.org Current IP Address: 174.143.240.31 I wanted something simpler, so I set up icanhazip.com:\n$ curl icanhazip.com 174.143.240.31 ","date":"31 July 2009","permalink":"/p/get-the-public-facing-ip-for-any-server-with-icanhazip-com/","section":"Posts","summary":"There are a ton of places on the internet where you can check the public-facing IP for the device you are using.","title":"Get the public-facing IP for any server with icanhazip.com"},{"content":"","date":null,"permalink":"/tags/logrotate/","section":"Tags","summary":"","title":"Logrotate"},{"content":"","date":null,"permalink":"/tags/passenger/","section":"Tags","summary":"","title":"Passenger"},{"content":"","date":null,"permalink":"/tags/rails/","section":"Tags","summary":"","title":"Rails"},{"content":"I found a great post on Overstimulate about handling the rotation of rails logs when you use Phusion Passenger. 
Most of the data for your application should end up in the apache logs, but if your site is highly dynamic, you may end up with a giant production log if you\u0026rsquo;re not careful.\nToss this into /etc/logrotate.d/yourrailsapplication:\n/var/www/yourrailsapp/log/*.log { daily missingok rotate 30 compress delaycompress sharedscripts postrotate touch /var/www/yourrailsapp/tmp/restart.txt endscript } For a detailed explanation, see the post on Overstimulate.\n","date":"26 June 2009","permalink":"/p/rotating-rails-logs-when-using-phusion-passenger/","section":"Posts","summary":"I found a great post on Overstimulate about handling the rotation of rails logs when you use Phusion Passenger.","title":"Rotating rails logs when using Phusion Passenger"},{"content":"Occasionally, I\u0026rsquo;ll end up with a mailbox full of random data, alerts, or other useless things. If you have SSH access to the server, you can always clear out your mail spool, but if you connect to an IMAP server, you can use mutt to do the same thing.\nFirst, use mutt to connect to your server remotely (via IMAP over SSL in this example):\nmutt -f imaps://mail.yourdomain.com/ Once you\u0026rsquo;ve connected and logged in, press SHIFT-D (uppercase d). The status bar of mutt should show:\nDelete messages matching: Type in ~s .* so that the line looks like:\nDelete messages matching: ~s .* When you press enter, mutt will put a D next to all of the messages, which marks them for deletion. Press q to quit, and then y to confirm the deletion. After a brief moment, all of those messages will be deleted and mutt will exit.\nUpdate: There\u0026rsquo;s an even faster way to remove all of the messages in a mailbox with mutt. Just hold shift while pressing D, ~ (tilde), and A to select everything:\nD~A ","date":"19 June 2009","permalink":"/p/deleting-all-e-mail-messages-in-your-inbox-with-mutt/","section":"Posts","summary":"Occasionally, I\u0026rsquo;ll end up with a mailbox full of random data, alerts, or other useless things.","title":"Deleting all e-mail messages in your inbox with mutt"},{"content":"","date":null,"permalink":"/tags/imap/","section":"Tags","summary":"","title":"Imap"},{"content":"","date":null,"permalink":"/tags/processes/","section":"Tags","summary":"","title":"Processes"},{"content":"","date":null,"permalink":"/tags/sigcont/","section":"Tags","summary":"","title":"Sigcont"},{"content":"","date":null,"permalink":"/tags/signals/","section":"Tags","summary":"","title":"Signals"},{"content":"","date":null,"permalink":"/tags/sigstop/","section":"Tags","summary":"","title":"Sigstop"},{"content":"The best uses I\u0026rsquo;ve found for the SIGSTOP and SIGCONT signals are times when a process goes haywire, or when a script spawns too many processes at once.\nYou can issue the signals like this:\nkill -SIGSTOP [pid] kill -SIGCONT [pid] Wikipedia has great definitions for SIGSTOP:\nWhen SIGSTOP is sent to a process, the usual behaviour is to pause that process in its current state. The process will only resume execution if it is sent the SIGCONT signal. SIGSTOP and SIGCONT are used for job control in the Unix shell, among other purposes. SIGSTOP cannot be caught or ignored.\nand SIGCONT:\nWhen SIGSTOP or SIGTSTP is sent to a process, the usual behaviour is to pause that process in its current state. The process will only resume execution if it is sent the SIGCONT signal. 
SIGSTOP and SIGCONT are used for job control in the Unix shell, among other purposes.\nIn short, SIGSTOP tells a process to “hold on” and SIGCONT tells a process to “pick up where you left off”. This can work really well for rsync jobs since you can pause the job, clear up some space on the destination device, and then resume the job. The source rsync process just thinks that the destination rsync process is taking a long time to respond.\nIn the ps output, stopped processes will have a status containing T. Here\u0026rsquo;s an example with crond:\n# kill -SIGSTOP `pgrep crond` # ps aufx | grep crond root 3499 0.0 0.0 100328 1236 ? Ts Jun11 0:01 crond # kill -SIGCONT `pgrep crond` # ps aufx | grep crond root 3499 0.0 0.0 100328 1236 ? Ss Jun11 0:01 crond ","date":"15 June 2009","permalink":"/p/two-great-signals-sigstop-and-sigcont/","section":"Posts","summary":"The best uses I\u0026rsquo;ve found for the SIGSTOP and SIGCONT signals are times when a process goes haywire, or when a script spawns too many processes at once.","title":"Two great signals: SIGSTOP and SIGCONT"},{"content":"There are two main ways to upgrade Fedora 10 (Cambridge) to Fedora 11 (Leonidas):\n» What the Fedora developers suggest:\nyum -y upgrade yum -y install preupgrade yum clean all preupgrade-cli \"Fedora 11 (Leonidas)\" Of course, if you\u0026rsquo;re doing this on a Fedora desktop, you can use preupgrade (rather than preupgrade-cli) to upgrade with a GUI.\n» The method I prefer (and it works properly on Slicehost):\nyum -y upgrade yum clean all wget http://download.fedora.redhat.com/pub/fedora/linux/releases/11/Fedora/x86_64/os/Packages/fedora-release-11-1.noarch.rpm rpm -Uvh fedora-release-11-1.noarch.rpm At this point, you would normally just start upgrading packages, but the Fedora developers threw us a curveball. Since yum in Fedora 10 doesn\u0026rsquo;t support metalinks, your upgrades will fail with something like this:\n# yum -y upgrade YumRepo Error: All mirror URLs are not using ftp, http[s] or file. Eg. / removing mirrorlist with no valid mirrors: //var/cache/yum/updates/mirrorlist.txt Error: Cannot retrieve repository metadata (repomd.xml) for repository: updates. Please verify its path and try again It\u0026rsquo;s easily fixed, however. Open up /etc/yum.repos.d/fedora.repo and /etc/yum.repos.d/fedora-updates.repo in your favorite text editor and change the mirrorlist URL\u0026rsquo;s like so:\nFedora Repository\n#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever\u0026arch=$basearch mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-$releasever\u0026arch=$basearch Fedora Updates Repository\n#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever\u0026arch=$basearch mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-f$releasever\u0026arch=$basearch Once you make those changes, finish out the upgrade:\nyum -y upgrade This process will take a little while to complete, but there shouldn\u0026rsquo;t be any interaction required. 
Once it\u0026rsquo;s done, change the mirrorlist lines back to the original values so you can benefit from the speedups provided by the metalink format.\n","date":"11 June 2009","permalink":"/p/upgrading-from-fedora-10-cambridge-to-fedora-11-leonidas/","section":"Posts","summary":"There are two main ways to upgrade Fedora 10 (Cambridge) to Fedora 11 (Leonidas):","title":"Upgrading from Fedora 10 (Cambridge) to Fedora 11 (Leonidas)"},{"content":"","date":null,"permalink":"/tags/proxy/","section":"Tags","summary":"","title":"Proxy"},{"content":"Sometimes we find ourselves in places where we don\u0026rsquo;t trust the network that we\u0026rsquo;re using. I\u0026rsquo;ve found myself in quite a few situations where I know my data is being encrypted, but I want an additional layer of protection. Luckily, that protection is built into SSH if you\u0026rsquo;d like to use it.\nCreate a simple SOCKS proxy with SSH by using the -D flag:\nssh -D 2400 username@some.host.com That command will open up a SOCKS proxy on your workstation on port 2400. If you configure your application to use the local SOCKS proxy, any traffic using the proxy will be sent through an encrypted SSH connection to your remote server and out to the internet. Inbound traffic through the proxy is encrypted through the same connection.\nYou can pair that with autossh to keep your proxy connected at all times:\nautossh -f -M 20000 -D 2400 username@some.host.com -N","date":"26 May 2009","permalink":"/p/simple-socks-proxy-using-ssh/","section":"Posts","summary":"Sometimes we find ourselves in places where we don\u0026rsquo;t trust the network that we\u0026rsquo;re using.","title":"Simple SOCKS proxy using SSH"},{"content":"I found a really helpful tip on Xaprb for comparing result sets in MySQL:\nmysql\u003e pager md5sum - PAGER set to 'md5sum -' mysql\u003e select * from test; a09bc56ac9aa0cbcc659c3d566c2c7e4 - 4096 rows in set (0.00 sec) It\u0026rsquo;s a quick way to determine if you have two tables that are properly in sync. Although there are better ways to compare tables in replicated environments, this method can get it done pretty quickly.\n","date":"5 May 2009","permalink":"/p/comparing-mysql-result-sets-quickly/","section":"Posts","summary":"I found a really helpful tip on Xaprb for comparing result sets in MySQL:","title":"Comparing MySQL result sets quickly"},{"content":"","date":null,"permalink":"/tags/hard-disk/","section":"Tags","summary":"","title":"Hard Disk"},{"content":"Servers with hot swappable drive bays are always handy. However, things can turn ugly if the SCSI controller doesn\u0026rsquo;t like a new drive when it is inserted. You may end up with these errors in your dmesg output:\nkernel: sdb : READ CAPACITY failed. kernel: sdb : status=0, message=00, host=4, driver=00 kernel: sdb : sense not available. 
kernel: sdb: Write Protect is off kernel: sdb: Mode Sense: 00 00 00 00 kernel: sdb: asking for cache data failed kernel: sdb: assuming drive cache: write through kernel: sdb:\u0026lt;6\u003esd 1:0:0:0: SCSI error: return code = 0x00040000 kernel: end_request: I/O error, dev sdb, sector 0 kernel: Buffer I/O error on device sdb, logical block 0 kernel: sd 1:0:0:0: SCSI error: return code = 0x00040000 kernel: end_request: I/O error, dev sdb, sector 0 kernel: Buffer I/O error on device sdb, logical block 0 kernel: sd 1:0:0:0: SCSI error: return code = 0x00040000 kernel: end_request: I/O error, dev sdb, sector 0 The errors show that the SCSI bus is having issues bringing the new drive online, and it won\u0026rsquo;t be seen by the OS until the SCSI controller is pleased. You can force the controller to re-scan the drives attached to it, and this should correct the problem:\ncd /sys/class/scsi_host/hostX echo \"- - - \" \u003e scan Replace the X with the proper controller number of your SCSI controller. If you\u0026rsquo;re not sure which controller is which, try running:\n# cat /sys/class/scsi_host/host0/proc_name sata_nv Credit for this find goes to Tony Dolan\n","date":"23 April 2009","permalink":"/p/re-scan-the-scsi-bus-in-linux-after-hot-swapping-a-drive/","section":"Posts","summary":"Servers with hot swappable drive bays are always handy.","title":"Re-scan the SCSI bus in Linux after hot-swapping a drive"},{"content":"","date":null,"permalink":"/tags/scsi/","section":"Tags","summary":"","title":"Scsi"},{"content":"","date":null,"permalink":"/tags/logs/","section":"Tags","summary":"","title":"Logs"},{"content":"If you have a centralized syslog server, or you use Splunk for log tracking, you may find the need to get older log files into a syslog port on that server.\nEdit: Using logger (as suggested by David and Jerry below) will give you a more reliable way to send the data to a syslog server:\ncat some.log | logger -t UsefulLabel -n yoursyslogserver.com -p 514 You\u0026rsquo;ll also be able to set a label for the text before it\u0026rsquo;s piped into the syslog server, which would be handy if you\u0026rsquo;re sorting or parsing the data later on.\nAlso, you can send your data in the raw using netcat:\ncat some.log | nc -w 1 -u yoursyslogserver.com 514","date":"21 April 2009","permalink":"/p/piping-log-files-to-a-syslog-server/","section":"Posts","summary":"If you have a centralized syslog server, or you use Splunk for log tracking, you may find the need to get older log files into a syslog port on that server.","title":"Piping log files to a syslog server"},{"content":"","date":null,"permalink":"/tags/phpmyadmin/","section":"Tags","summary":"","title":"Phpmyadmin"},{"content":"Users of PHPMyAdmin 3.x may find that the table indexes are automatically hidden at the bottom of the page. I find this to be a huge annoyance since table indexes are tremendously important to the structure of the table.\nIf you don\u0026rsquo;t want to downgrade to PHPMyAdmin 2.x, just add the following line to the top of your config.inc.php file:\n$cfg['InitialSlidersState'] = 'open'; This will cause the indexes to be displayed when you click Structure for a certain table. By default, they are hidden.\nSidenote: Some of you might be thinking: “Hey, you\u0026rsquo;re a DBA, you should know MySQL queries without needing PHPMyAdmin.” You\u0026rsquo;re right. I do know how to get the job done without PHPMyAdmin, but I enjoy the way PHPMyAdmin allows me to visualize my table structures. 
Also, it\u0026rsquo;s a handy way to present data to others very quickly.\n","date":"4 April 2009","permalink":"/p/phpmyadmin-3x-hides-the-table-indexes/","section":"Posts","summary":"Users of PHPMyAdmin 3.","title":"PHPMyAdmin 3.x hides the table indexes"},{"content":"Mac users feel a little left out when it comes to VMWare Server clients. There\u0026rsquo;s one for Windows and Linux, but Mac users are out of luck. Sure, you can VNC into a Linux box, use X forwarding, or use RDC to access a Windows box, but a real Mac client would really be helpful.\nHowever, I stumbled upon some documentation that will allow you to VNC to a VMWare Server VM\u0026rsquo;s main screen. It\u0026rsquo;s equivalent to having a network KVM connected to the VM so you can have out-of-band management. With VMWare server 2.x, you can enable it by following these steps:\nStep 1. Create a new VM in VMWare Server, but don\u0026rsquo;t start the VM.\nStep 2. SSH to the server and find your VM\u0026rsquo;s .vmx file. Normally, you can find the file in a location like /var/lib/vmware/[vmname]/[vmname].vmx.\nStep 3. Add the following lines to the end of the .vmx file:\nRemoteDisplay.vnc.enabled = \"TRUE\" RemoteDisplay.vnc.password = \"vncpassword\" RemoteDisplay.vnc.port = \"5900\" Step 4. Change the VNC port and password to values that suit your environment and then start the VM.\nDUH! Don\u0026rsquo;t set two VM\u0026rsquo;s to use the same vnc port, but that should go without saying.\n","date":"25 March 2009","permalink":"/p/enabling-vnc-as-a-pseudo-kvm-with-vmware-server/","section":"Posts","summary":"Mac users feel a little left out when it comes to VMWare Server clients.","title":"Enabling VNC as a pseudo-KVM with VMWare Server"},{"content":"","date":null,"permalink":"/tags/vmware/","section":"Tags","summary":"","title":"Vmware"},{"content":"Setting up new servers can be a pain if you\u0026rsquo;re not able to clone them from a server that is known to be working. Many VPS providers, like Slicehost, allow you to clone a system to a new system. Without that option, you can pull a list of RPM\u0026rsquo;s without their version number for a fairly quick and basic comparison.\nFirst, pull a list of RPM package by name only:\nrpm -qa --queryformat='%{NAME}\\n' | sort | uniq \u003e server.txt Once you\u0026rsquo;ve done that on both servers, just use diff to compare the two files:\ndiff serverold.txt servernew.txt ","date":"10 March 2009","permalink":"/p/compare-the-rpm-packages-installed-on-two-different-servers/","section":"Posts","summary":"Setting up new servers can be a pain if you\u0026rsquo;re not able to clone them from a server that is known to be working.","title":"Compare the RPM packages installed on two different servers"},{"content":"","date":null,"permalink":"/tags/diff/","section":"Tags","summary":"","title":"Diff"},{"content":"","date":null,"permalink":"/tags/annoyances/","section":"Tags","summary":"","title":"Annoyances"},{"content":"","date":null,"permalink":"/tags/gnome-keyring/","section":"Tags","summary":"","title":"Gnome-Keyring"},{"content":"I recently tossed Ubuntu 8.10 on my Mac Mini at home to use it as a home theater PC (with Boxee). When I connected to my wireless network via NetworkManager, I entered my WPA2 passphrase, and then I was prompted to enter a password for gnome-keyring. I went back to the couch, SSH\u0026rsquo;ed in, and continued configuring it remotely. 
When it rebooted, it never came back online.\nOnce I switched the TV back over to the Mini, I saw that gnome-keyring had popped up and it was asking for my password. I entered it, and the Mini joined the wireless network. Each time I rebooted, I had to go through this procedure (which is annoying to do with an HTPC that is across the room). I found a pretty fancy solution, but it looked a little complicated for my setup.\nHere\u0026rsquo;s how I did it in a simpler way in Ubuntu 8.10:\nClick Applications \u0026gt; Accessories \u0026gt; Passwords and Encryption Keys Click Edit \u0026gt; Preferences Click your keyring name (usually default) Click Change Unlock Password Enter your current password in the top box, but leave the bottom two boxes blank Click OK Click Use unsafe storage when you are prompted Click Close If you reboot your machine, it should not ask for a password for your keyring any longer. This allowed my system to log into my wireless network automatically.\nWHOA THERE: Since the only password being stored on the device is my WPA2 password, I\u0026rsquo;m not concerned about the security of the keyring. If you\u0026rsquo;re doing this on a laptop or desktop that other people use, I would highly recommend not following these steps. All of your passwords and keys will be stored unencrypted.\n","date":"27 February 2009","permalink":"/p/prevent-gnome-keyring-from-asking-for-a-password-when-networkmanager-starts/","section":"Posts","summary":"I recently tossed Ubuntu 8.","title":"Prevent gnome-keyring from asking for a password when NetworkManager starts"},{"content":"I\u0026rsquo;ve tested this Debian etch to lenny upgrade process a few times so far, and it seems to be working well.\nsudo vim /etc/apt/sources.list [change 'etch' -\u003e 'lenny'] sudo aptitude update sudo aptitude install apt dpkg aptitude sudo aptitude full-upgrade","date":"18 February 2009","permalink":"/p/upgrade-debian-etch-to-lenny/","section":"Posts","summary":"I\u0026rsquo;ve tested this Debian etch to lenny upgrade process a few times so far, and it seems to be working well.","title":"Upgrade Debian etch to lenny"},{"content":"Most linux distributions use some type of mechanism to gracefully stop daemons and unmount storage volumes during a reboot or shutdown. It\u0026rsquo;s most commonly done via scripts that will wait for each daemon to shut down gracefully before proceeding to the next daemon.\nAs we know, sometimes servers misbehave due to the things we put them through, and you can quickly end up in a situation where things are going badly. I\u0026rsquo;m talking about the type of situation where you\u0026rsquo;re connected via SSH to a server that controls phone lines for five million people and it sits in a tiny building 400 miles away from the nearest human being. We\u0026rsquo;re talking bad. If you issue a plain reboot command, it might not even make it that far. Once SSH stops running, you\u0026rsquo;re going to be out of luck.\nIf you find yourself in this situation (and I hope you won\u0026rsquo;t!), you have some options to get your way with a misbehaving server remotely. You can force an immediate reboot with the following:\necho 1 \u003e /proc/sys/kernel/sysrq echo b \u003e /proc/sysrq-trigger WHOA THERE! This is pretty much the same as pressing the reset button on the server (if equipped). No daemons will be shut down gracefully, no filesystem sync will occur, and you may get the wrath of a fsck (or worse, a non-booting server) upon reboot.
To do things a little more carefully, read on.\nThese are called magic commands, and they\u0026rsquo;re pretty much synonymous with holding down Alt-SysRq and another key on older keyboards. Dropping 1 into /proc/sys/kernel/sysrq tells the kernel that you want to enable SysRq access (it\u0026rsquo;s usually disabled). The second command is equivalent to pressing Alt-SysRq-b on a QWERTY keyboard.\nThere\u0026rsquo;s a better way of rebooting a misbehaving server that Wikipedia shows with the mnemonic “Reboot Even If System Utterly Broken”:\nunRaw (take control of keyboard back from X), tErminate (send SIGTERM to all processes), kIll (send SIGKILL to all processes), Sync (flush data to disk), Unmount (remount all filesystems read-only), reBoot. I can\u0026rsquo;t vouch for this actually working, but I\u0026rsquo;m interested to try it. UPDATE: I\u0026rsquo;ve been told that doing this series of commands with ReiserFS is a very bad idea.\nIf you want to shut the machine down entirely (please think about it before using this on a remote system):\necho 1 \u003e /proc/sys/kernel/sysrq echo o \u003e /proc/sysrq-trigger If you want to keep SysRq enabled all the time, you can do that with an entry in your server\u0026rsquo;s sysctl.conf:\nkernel.sysrq = 1 ","date":"30 January 2009","permalink":"/p/linux-emergency-reboot-or-shutdown-with-magic-commands/","section":"Posts","summary":"Most linux distributions use some type of mechanism to gracefully stop daemons and unmount storage volumes during a reboot or shutdown.","title":"Linux: emergency reboot or shutdown with magic commands"},{"content":"","date":null,"permalink":"/tags/sysctl/","section":"Tags","summary":"","title":"Sysctl"},{"content":"","date":null,"permalink":"/tags/drivers/","section":"Tags","summary":"","title":"Drivers"},{"content":"I set up a system at home that has two SATA controllers: one is on the motherboard (nvidia chipset), while the other is on a Silicon Image SATA card that has three eSATA ports. Here is the relevant lspci output:\nroot@storageserver:~# lspci | grep ATA 00:08.0 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2) 00:08.1 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2) 03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01) There are two primary drives connected to the onboard controller and four connected to the controller card. 
One of the primary drives on the onboard controller contains the operating system (Ubuntu, in this case), while the other drive is blank.\nWhen the system booted, the sata_sil24 driver for the add-on card always loaded before the sata_nv drivers for the onboard storage controller:\nkernel: [ 4.125598] sata_sil24 0000:03:00.0: version 1.1 kernel: [ 4.126102] sata_sil24 0000:03:00.0: PCI INT A -\u003e Link[APC6] -\u003e GSI 16 (level, low) -\u003e IRQ 16 kernel: [ 4.126161] sata_sil24 0000:03:00.0: setting latency timer to 64 kernel: [ 4.129472] scsi0 : sata_sil24 kernel: [ 4.129635] scsi1 : sata_sil24 kernel: [ 8.293762] sata_nv 0000:00:08.0: version 3.5 kernel: [ 8.293779] sata_nv 0000:00:08.0: PCI INT A -\u003e Link[APSI] -\u003e GSI 20 (level, low) -\u003e IRQ 20 kernel: [ 8.293829] sata_nv 0000:00:08.0: setting latency timer to 64 kernel: [ 8.296764] scsi2 : sata_nv kernel: [ 8.296905] scsi3 : sata_nv kernel: [ 9.285034] sata_nv 0000:00:08.1: PCI INT B -\u003e Link[APSJ] -\u003e GSI 21 (level, low) -\u003e IRQ 21 kernel: [ 9.285074] sata_nv 0000:00:08.1: setting latency timer to 64 kernel: [ 9.285161] scsi4 : sata_nv kernel: [ 9.286015] scsi5 : sata_nv After specifying an explicit order in /etc/modules and /etc/modprobe.conf, I wasn\u0026rsquo;t able to see any changes. The sata_sil24 driver still loaded before the onboard sata_nv driver. Luckily, a very wise person on Twitter gave me a strategy that worked just fine.\nI added sata_sil24 to the bottom of my /etc/modprobe.d/blacklist file first. Then, in /etc/modules, I listed sata_nv first, followed by sata_sil24. When the system booted, I got the result that I wanted:\nkernel: [ 3.982909] sata_nv 0000:00:08.0: version 3.5 kernel: [ 3.982931] sata_nv 0000:00:08.0: PCI INT A -\u003e Link[APSI] -\u003e GSI 20 (level, low) -\u003e IRQ 20 kernel: [ 3.982993] sata_nv 0000:00:08.0: setting latency timer to 64 kernel: [ 3.984497] scsi0 : sata_nv kernel: [ 3.986013] scsi1 : sata_nv kernel: [ 4.971755] sata_nv 0000:00:08.1: PCI INT B -\u003e Link[APSJ] -\u003e GSI 21 (level, low) -\u003e IRQ 21 kernel: [ 4.971799] sata_nv 0000:00:08.1: setting latency timer to 64 kernel: [ 4.973153] scsi2 : sata_nv kernel: [ 4.974031] scsi3 : sata_nv kernel: [ 15.988862] sata_sil24 0000:03:00.0: version 1.1 kernel: [ 15.989454] sata_sil24 0000:03:00.0: PCI INT A -\u003e Link[APC6] -\u003e GSI 16 (level, low) -\u003e IRQ 16 kernel: [ 15.989511] sata_sil24 0000:03:00.0: setting latency timer to 64 kernel: [ 15.990201] scsi6 : sata_sil24 kernel: [ 15.991523] scsi7 : sata_sil24 The sata_nv driver is loading first, and Ubuntu boots off of it without an issue. The sata_sil24 driver loads next so that the drives connected to the card show up lower in the boot order.\nMany thanks to @Twirrim on Twitter for the suggestion!\n","date":"26 January 2009","permalink":"/p/linux-adjust-storage-kernel-module-load-order/","section":"Posts","summary":"I set up a system at home that has two SATA controllers: one is on the motherboard (nvidia chipset), while the other is on a Silicon Image SATA card that has three eSATA ports.","title":"Linux: Adjust storage kernel module load order"},{"content":"","date":null,"permalink":"/tags/ruby-on-rails/","section":"Tags","summary":"","title":"Ruby on Rails"},{"content":"Some of you may be wondering “why would you want to use Rails without a database?” There are several situations why a database would not be needed, and I\u0026rsquo;ve run into quite a few of them. 
One of the specific cases was when I wanted to write a web interface for an application that only had a REST interface available to the public.\nIf you find yourself needing to write a Rails application without a database, just do the following:\nFor Rails 1.0 and up:\nconfig/environment.rb:\nconfig.frameworks -= [ :active_record ] test/test_helper.rb\nclass Test::Unit::TestCase self.use_transactional_fixtures = false self.use_instantiated_fixtures = false def load_fixtures end end For Rails 2.1 and up: Comment out both of the lines that begin with ActiveRecord::Base in config/initializers/new_rails_defaults.rb:\nif defined?(ActiveRecord) # Include Active Record class name as root for JSON serialized output. # ActiveRecord::Base.include_root_in_json = true # Store the full class name (including module namespace) in STI type column. # ActiveRecord::Base.store_full_sti_class = true end For more details, review the full article on rubyonrails.org.\n","date":"9 January 2009","permalink":"/p/writing-a-ruby-on-rails-application-without-using-a-database/","section":"Posts","summary":"Some of you may be wondering “why would you want to use Rails without a database?","title":"Writing a Ruby on Rails application without using a database"},{"content":"I enjoy using CPAN because it installs Perl modules with a simple interface, fetches dependencies, and warns you when things are about to end badly. However, one of my biggest complaints is when it constantly confirms installing dependencies. While this is an annoyance if you have to install a module with many dependencies (or if you\u0026rsquo;re working with CPAN on a new server), you can tell CPAN to automatically confirm the installation of dependencies.\nTo do this, simply bring up a CPAN shell:\nperl -MCPAN -e shell Run these two commands in the CPAN shell:\no conf prerequisites_policy follow o conf commit Now, exit the CPAN shell, start the CPAN shell, and try to install a module that you need. All dependencies will be automatically confirmed, downloaded and installed.\nThe first line sets your dependency policy to follow rather than ask (the default). The second line tells CPAN to write the changes to your user\u0026rsquo;s CPAN configuration file to make them permanent.\nA big thanks goes out to Lee Hambley for the fix.\nWARNING: There are some occasions where you would not want to install dependencies from CPAN. Examples of these situations are when your operating system\u0026rsquo;s package manager (yum, up2date, apt-get, aptitude, etc) has installed Perl modules in an alternative location or when you have manually installed modules in a non-standard way. I\u0026rsquo;m a Red Hat guy, and these problems rarely arise on Red Hat/Fedora systems, but your mileage may vary.\n","date":"2 January 2009","permalink":"/p/cpan-automatically-install-dependencies-without-confirmation/","section":"Posts","summary":"I enjoy using CPAN because it installs Perl modules with a simple interface, fetches dependencies, and warns you when things are about to end badly.","title":"CPAN: Automatically install dependencies without confirmation"},{"content":"When it comes to frustrating parts of the Linux kernel, OOM killer takes the cake. If it finds that applications are using too much memory on the server, it will kill process abruptly to free up memory for the system to use. I spent much of this week wrestling with a server that was in the clutches of OOM killer.\nThere are a few processes on the server that keep it fairly busy. 
Two of the processes are vital to the server\u0026rsquo;s operation – if they are stopped, lots of work is required to get them running properly again. I found that a certain java process was being killed by OOM killer regularly, and another perl process was being killed occasionally.\nNaturally, my disdain for java made me think that the java process was the source of the issue. The process was configured to use a small amount of RAM, so it was ruled out. The other perl process used even less memory, so it was ruled out as well. When I checked the sysstat data with sar, I found that the server was only using about 2-3GB out of 4GB of physical memory at the time when OOM killer was started. At this point, I was utterly perplexed.\nI polled some folks around the office and gathered some ideas. After putting some ideas together, I found that the server was actually caching too much data in the ext3_inode_cache and dentry_cache. These caches hold recently accessed files and directories on the server, and they\u0026rsquo;re purged as the files and directories become stale. Since the operations on the server read and write large amounts of data locally and via NFS, I knew these caches had to be gigantic. If you want to check your own caches, you can use the slabtop command. For those who like things more difficult, you can also cat the contents of /proc/slabinfo and grep for the caches that are important to you.\nAn immense amount of Googling revealed very little, but I discovered a dirty hack to fix the issue (don\u0026rsquo;t run this yet):\necho 1 \u003e /proc/sys/vm/drop_caches # free pagecache [OR] echo 2 \u003e /proc/sys/vm/drop_caches # free dentries and inodes [OR] echo 3 \u003e /proc/sys/vm/drop_caches # free pagecache, dentries and inodes sync # forces the dump to be destructive There are huge consequences to dumping these caches and running sync. If you are writing data at the time you run these commands, you\u0026rsquo;ll actually be dumping the data out of the filesystem cache before it reaches the disk, which could lead to very bad things.\nWhile discussing the issue with a coworker, he found a different method for correcting the issue that was much safer. You can echo values into /proc/sys/vm/vfs_cache_pressure to tell the kernel what priority it should take when clearing out the inode/dentry caches. LinuxInsight explains the range of values well:\nAt the default value of vfs_cache_pressure = 100 the kernel will attempt to reclaim dentries and inodes at a “fair” rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.\nIn short, values less than 100 won\u0026rsquo;t reduce the caches very much as all. Values over 100 will signal to the kernel that you want to clear out the caches at a higher priority. I found that no matter what value you use, the kernel clears the caches at a slow rate. 
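For reference, a minimal sketch of checking those caches and raising the reclaim priority; the slabinfo grep mirrors what slabtop shows, and the persistent sysctl.conf entry is my addition rather than something from the original write-up:\n# See how large the dentry and ext3 inode caches have grown
grep -E 'dentry|ext3_inode_cache' /proc/slabinfo
# Tell the kernel to reclaim dentries and inodes much more aggressively
echo 10000 > /proc/sys/vm/vfs_cache_pressure
# To keep the setting across reboots, add this line to /etc/sysctl.conf:
#   vm.vfs_cache_pressure = 10000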
I\u0026rsquo;ve been using a value of 10000 on the server I talked about earlier in the article, and it has kept the caches down to a reasonable level.\n","date":"4 December 2008","permalink":"/p/reducing-inode-and-dentry-caches-to-keep-oom-killer-at-bay/","section":"Posts","summary":"When it comes to frustrating parts of the Linux kernel, OOM killer takes the cake.","title":"Reducing inode and dentry caches to keep OOM killer at bay"},{"content":"You can use the simple but powerful xinetd on your Linux server to monitor almost anything on the server. Since xinetd just holds open a port and waits for a connection, you can tell it to run a script and return the output directly to the network stream.\nTo start, you\u0026rsquo;ll need a script which will return data to stdout. In this example, I\u0026rsquo;ll use a very simple script like the following:\n#!/bin/bash echo `uptime | egrep -o \u0026#39;up ([0-9]+) days\u0026#39; | awk \u0026#39;{print $2}\u0026#39;` This script pulls the number of days that the server has been online. Make the script executable with a chmod +x.\nNow, you\u0026rsquo;ll need to choose a port on which to run the xinetd service. I normally find a service in /etc/services that I won\u0026rsquo;t be using on the server. In this example, I\u0026rsquo;ll use isdnlog, which runs on port 20011. Create a file called /etc/xinetd.d/myscript and include the following in the file:\nservice isdnlog { disable\t= no socket_type\t= stream protocol\t= tcp wait\t= no user\t= root server\t= /path/to/script.sh server_args\t= test } Depending on your xinetd version, you may need to enable your new configuration and restart xinetd:\nchkconfig myscript on /etc/init.d/xinetd restart You can test your new script using netcat:\n$ uptime 18:10:30 up 141 days, 19:17, 1 user, load average: 0.65, 1.47, 1.14 $ nc localhost 20011 141 If you need to pass arguments to your script, just adjust the server_args line in the xinetd configuration. Also, be sure that your script is set up to handle the arguments.\n","date":"3 December 2008","permalink":"/p/simple-server-monitoring-with-xinetd/","section":"Posts","summary":"You can use the simple but powerful xinetd on your Linux server to monitor almost anything on the server.","title":"Simple server monitoring with xinetd"},{"content":"","date":null,"permalink":"/tags/xinetd/","section":"Tags","summary":"","title":"Xinetd"},{"content":"If you have Excel files that need to be imported into MySQL, you can import them easily with PHP. First, you will need to download some prerequisites:\nPHPExcelReader - http://sourceforge.net/projects/phpexcelreader/\nSpreadsheet_Excel_Writer - http://pear.php.net/package/Spreadsheet_Excel_Writer\nOnce you\u0026rsquo;ve downloaded both items, upload them to your server. Your directory listing on your server should have two directories: Excel (from PHPExcelReader) and Spreadsheet_Excel_Writer-x.x.x (from Spreadsheet_Excel_Writer). To work around a bug in PHPExcelReader, copy oleread.inc from the Excel directory into a new path:\nSpreadsheet/Excel/Reader/OLERead.php\nThe PHPExcelReader code will expect OLERead.php to be in that specific location. Once that is complete, you\u0026rsquo;re ready to use the PHPExcelReader class. 
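In shell terms, starting from the directory where both packages were uploaded, that workaround copy looks roughly like this (paths follow the layout described above):\nmkdir -p Spreadsheet/Excel/Reader
cp Excel/oleread.inc Spreadsheet/Excel/Reader/OLERead.php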
I made an example Excel spreadsheet like this:\nName Extension Email ---------------------------------------------------- Jon Smith 2001 jsmith@domain.com Clint Jones 2002 cjones@domain.com Frank Peterson 2003 fpeterson@domain.com After that, I created a PHP script to pick up the data and insert it into the database, row by row:\nrequire_once \u0026#39;Excel/reader.php\u0026#39;; $data = new Spreadsheet_Excel_Reader(); $data-\u0026gt;setOutputEncoding(\u0026#39;CP1251\u0026#39;); $data-\u0026gt;read(\u0026#39;exceltestsheet.xls\u0026#39;); $conn = mysql_connect(\u0026#34;hostname\u0026#34;,\u0026#34;username\u0026#34;,\u0026#34;password\u0026#34;); mysql_select_db(\u0026#34;database\u0026#34;,$conn); for ($x = 2; $x \u0026lt; = count($data-\u0026gt;sheets[0][\u0026#34;cells\u0026#34;]); $x++) { $name = $data-\u0026gt;sheets[0][\u0026#34;cells\u0026#34;][$x][1]; $extension = $data-\u0026gt;sheets[0][\u0026#34;cells\u0026#34;][$x][2]; $email = $data-\u0026gt;sheets[0][\u0026#34;cells\u0026#34;][$x][3]; $sql = \u0026#34;INSERT INTO mytable (name,extension,email) VALUES (\u0026#39;$name\u0026#39;,$extension,\u0026#39;$email\u0026#39;)\u0026#34;; echo $sql.\u0026#34;\\n\u0026#34;; mysql_query($sql); } After the script ran, each row had been added to the database table successfully. If you have additional columns to insert, just repeat these lines, using an appropriate variable for each column:\nsheets[0][\u0026#34;cells\u0026#34;][$row_number][$column_number]; For more details, you can refer to a post in Zend\u0026rsquo;s Developer Zone.\n","date":"7 November 2008","permalink":"/p/importing-excel-files-into-mysql-with-php/","section":"Posts","summary":"If you have Excel files that need to be imported into MySQL, you can import them easily with PHP.","title":"Importing Excel files into MySQL with PHP"},{"content":"","date":null,"permalink":"/tags/plesk/","section":"Tags","summary":"","title":"Plesk"},{"content":"If you have a Plesk server where short mail names are enabled, upgrading to Plesk 8.4 can cause some issues. Valid logins may be rejected, and they\u0026rsquo;ll appear in your /usr/local/psa/var/log/maillog as \u0026ldquo;no such user\u0026rdquo;. 
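Before changing anything, a quick count of the rejections makes it easy to confirm you are hitting this particular problem; this check is my own and not part of the Parallels article:\ngrep -c 'no such user' /usr/local/psa/var/log/maillog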
You can correct the issue by switching to long mail names (click Server -\u0026gt; Mail in Plesk), or you can run a shell script provided by Parallels.\nFor further details, refer to the Plesk KB article \u0026ldquo;Mail users cannot get or send mail after upgrade to Plesk 8.4\u0026rdquo;\n","date":"6 November 2008","permalink":"/p/plesk-upgrade-to-84-causes-no-such-user-error-in-maillog/","section":"Posts","summary":"If you have a Plesk server where short mail names are enabled, upgrading to Plesk 8.","title":"Plesk: Upgrade to 8.4 causes “no such user” error in maillog"},{"content":"I stumbled into this four line ruby script that will serve up all of the rdoc documentation for your server\u0026rsquo;s currently installed gems:\n#!/usr/bin/env ruby require \u0026#34;rubygems/server\u0026#34; options = {:gemdir =\u0026gt; Gem.dir, :port =\u0026gt; 4242, :daemon =\u0026gt; true} Gem::Server::run(options) Thanks to Daniel for the ruby code!\n","date":"6 November 2008","permalink":"/p/viewing-documentation-for-your-ruby-gems/","section":"Posts","summary":"I stumbled into this four line ruby script that will serve up all of the rdoc documentation for your server\u0026rsquo;s currently installed gems:","title":"Viewing documentation for your ruby gems"},{"content":"","date":null,"permalink":"/tags/sar/","section":"Tags","summary":"","title":"Sar"},{"content":"After running sar on my new slice from SliceHost*, I noticed a new column called steal. It\u0026rsquo;s generally very low on my virtual machine, and I\u0026rsquo;ve never seen it creep over 1-2%.\nIBM\u0026rsquo;s definition of steal time is actually pretty good:\nSteal time is the percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor.\nSo, relatively speaking, what does this mean?\nA high steal percentage may mean that you may be outgrowing your virtual machine with your hosting company. Other virtual machines may have a larger slice of the CPU\u0026rsquo;s time and you may need to ask for an upgrade in order to compete. Also, a high steal percentage may mean that your hosting company is overselling virtual machines on your particular server. If you upgrade your virtual machine and your steal percentage doesn\u0026rsquo;t drop, you may want to seek another provider.\nA low steal percentage can mean that your applications are working well with your current virtual machine. Since your VM is not wrestling with other VM\u0026rsquo;s constantly for CPU time, your VM will be more responsive. This may also suggest that your hosting provider is underselling their servers, which is definitely a good thing.\nI\u0026rsquo;ve been a customer of SliceHost for a while (prior to Rackspace\u0026rsquo;s acquisition), and I recommend them to anyone who needs a solid VM solution. If you want to help out with my hosting costs, you\u0026rsquo;re welcome to use my SliceHost referral link. 
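If you want to keep an eye on the steal column yourself, sar can sample it on demand; the interval and count below are arbitrary:
# report CPU utilization, including %steal, every 5 seconds for 6 samples
sar -u 5 6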
","date":"4 November 2008","permalink":"/p/what-is-steal-time-in-my-sysstat-output/","section":"Posts","summary":"After running sar on my new slice from SliceHost*, I noticed a new column called steal.","title":"What is ‘steal time’ in my sysstat output?"},{"content":"","date":null,"permalink":"/tags/iphone/","section":"Tags","summary":"","title":"Iphone"},{"content":"","date":null,"permalink":"/tags/itunes/","section":"Tags","summary":"","title":"Itunes"},{"content":"I know I usually talk about Linux server related topics on this blog, but I\u0026rsquo;m pretty proud of what I\u0026rsquo;ve figured out this morning on my Mac. As you know, the iPhone can really only fully sync with one machine, and if you want to connect it to a new Mac that you\u0026rsquo;ve purchased, you have to fully erase the iPhone and start over. (Of course, if you used the Migration Assistant to set up your new Mac, this won\u0026rsquo;t be necessary.)\nHere are the steps to migrate your iTunes data from one Mac to another without having to erase and re-sync your iPhone:\nMake sure that iTunes is not running on both Macs. Disconnect your iPhone/iPod from both Macs. Copy your iTunes folder. /Users/username/Music/iTunes Copy your iPhone/iPod backups. /Users/username/Library/Application Support/MobileSync Copy your iTunes configuration files. /Users/username/Library/Preferences/com.apple.iTunes* Open iTunes on your new Mac and verify that Applications and Ringtones appear. Connect your iPhone/iPod to the new Mac and accept any new authorizations. Use iTunes on your old Mac to de-authorize the computer. If you choose to keep your MP3\u0026rsquo;s separate from iTunes (and not in the library), this will only copy over the references to the MP3 files themselves.\n","date":"2 November 2008","permalink":"/p/syncing-an-iphone-with-a-new-mac-without-hassles/","section":"Posts","summary":"I know I usually talk about Linux server related topics on this blog, but I\u0026rsquo;m pretty proud of what I\u0026rsquo;ve figured out this morning on my Mac.","title":"Syncing an iPhone with a new Mac without hassles"},{"content":"","date":null,"permalink":"/tags/fonts/","section":"Tags","summary":"","title":"Fonts"},{"content":"Although the idea of putting something from Microsoft on a Linux box might sound awful at first, you may find a reason to use Microsoft TrueType fonts on a Linux server. If you\u0026rsquo;re using GD to render an image, these fonts may come in handy.\nIf you have an RPM-based linux distribution, you can use a spec file that is available on SourceForge. 
You can follow the instructions on the project\u0026rsquo;s page, or you can follow these abbreviated instructions here:\nInstall some prerequisites:\n// RHEL 4 up2date -i rpm-build wget ttmkfdir // RHEL 5 yum install rpm-build wget ttmkfdir Install cabextract.\nBuild the RPM:\nwget -O /usr/src/redhat/SPECS/msttcorefonts-2.0-1.spec http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec rpmbuild -bb msttcorefonts-2.0-1.spec rpm -Uvh /usr/src/redhat/SPECS/msttcorefonts-2.0-1.spec Test it to be sure that they\u0026rsquo;re installed:\nxlsfonts | grep ^-microsoft rpm -ql msttcorefonts ","date":"24 October 2008","permalink":"/p/installing-microsofts-truetype-fonts-on-linux-servers/","section":"Posts","summary":"Although the idea of putting something from Microsoft on a Linux box might sound awful at first, you may find a reason to use Microsoft TrueType fonts on a Linux server.","title":"Installing Microsoft’s TrueType fonts on Linux servers"},{"content":"I found a server last week that was having severe issues with disk I/O to the point where most operations were taking many minutes to complete. The server wasn\u0026rsquo;t under much load, but a quick run of dmesg threw quite a bit of these lines out onto the screen:\nEXT3-fs warning (device sda5): ext3_dx_add_entry: Directory index full!\nAfter a thorough amount of searching, I couldn\u0026rsquo;t find out what the error actually meant. As with most errors starting with EXT3-fs warning, I figured that a fsck might be the best option.\nDuring the fsck, several inodes were repaired and the check completed after 10-15 minutes. I jotted down some notes about the directories that popped up on the screen during the fsck. The server rebooted it came up without any problems.\nI reviewed the directories that appeared during the fsck and they were full of files. Some of the directories contained upwards of 200,000 files. Many of the files were moved into lost+found after the fsck, so they had to be restored from their backups. I still don\u0026rsquo;t know what caused the original issue as the hardware checked out fine. If you run into this error, a fsck should help, but make sure that you have backups handy.\n","date":"13 October 2008","permalink":"/p/ext3_dx_add_entry-directory-index-full/","section":"Posts","summary":"I found a server last week that was having severe issues with disk I/O to the point where most operations were taking many minutes to complete.","title":"ext3_dx_add_entry: Directory index full!"},{"content":"","date":null,"permalink":"/tags/fsck/","section":"Tags","summary":"","title":"Fsck"},{"content":"After working with some RHEL 5 servers fairly regularly, I noticed a reduction in Apache 2.2 performance when many connections were made to the server. 
There were messages like these streaming into the access_log as well:\n127.0.0.1 - - [21/Aug/2008:12:00:10 -0400] \u0026quot;GET / HTTP/1.0\u0026quot; 200 2269 \u0026quot;-\u0026quot; \u0026quot;Apache/2.2.3 (Red Hat) (internal dummy connection)\u0026quot;\u0026lt;br /\u0026gt; 127.0.0.1 - - [21/Aug/2008:12:00:11 -0400] \u0026quot;GET / HTTP/1.0\u0026quot; 200 2269 \u0026quot;-\u0026quot; \u0026quot;Apache/2.2.3 (Red Hat) (internal dummy connection)\u0026quot;\u0026lt;br /\u0026gt; 127.0.0.1 - - [21/Aug/2008:12:00:13 -0400] \u0026quot;GET / HTTP/1.0\u0026quot; 200 2269 \u0026quot;-\u0026quot; \u0026quot;Apache/2.2.3 (Red Hat) (internal dummy connection)\u0026quot;\u0026lt;br /\u0026gt; 127.0.0.1 - - [21/Aug/2008:12:00:14 -0400] \u0026quot;GET / HTTP/1.0\u0026quot; 200 2269 \u0026quot;-\u0026quot; \u0026quot;Apache/2.2.3 (Red Hat) (internal dummy connection)\u0026quot;\u0026lt;br /\u0026gt; 127.0.0.1 - - [21/Aug/2008:12:00:15 -0400] \u0026quot;GET / HTTP/1.0\u0026quot; 200 2269 \u0026quot;-\u0026quot; \u0026quot;Apache/2.2.3 (Red Hat) (internal dummy connection)\u0026quot;\nOn servers with ipv6 enabled, you might see a line like this one:\n::1 - - [21/Aug/2008:12:00:15 -0400] \u0026quot;GET / HTTP/1.0\u0026quot; 200 2269 \u0026quot;-\u0026quot; \u0026quot;Apache/2.2.3 (Red Hat) (internal dummy connection)\u0026quot;\nI began to wonder why Apache was making these connections back onto itself and initiating a GET /. Apache\u0026rsquo;s documentation had the following:\nWhen the Apache HTTP Server manages its child processes, it needs a way to wake up processes that are listening for new connections. To do this, it sends a simple HTTP request back to itself. This request will appear in the access_log file with the remote address set to the loop-back interface (typically 127.0.0.1 or ::1 if IPv6 is configured). If you log the User-Agent string (as in the combined log format), you will see the server signature followed by \u0026ldquo;(internal dummy connection)\u0026rdquo; on non-SSL servers. During certain periods you may see up to one such request for each httpd child process.\nThese requests are perfectly normal and you do not, in general, need to worry about them. They can simply be ignored.\nSure, I could easily ignore the requests, but the requests were increasing the load on my server more than I liked. Apache\u0026rsquo;s documentation suggested omitting the lines from the logs by adding the following to the Apache configuration:\nSetEnvIf Remote_Addr \u0026quot;127\\.0\\.0\\.1\u0026quot; loopback\nAnd then adding env=!loopback to your CustomLog lines ensures that the data won\u0026rsquo;t show up in your access logs. However, you\u0026rsquo;ll still end up with Directory index forbidden by Options directive: /var/www/html/ filling up your error_logs. A quick search revealed a handy mod_rewrite rule to get rid of these requests as quickly as possible with the lowest effort required from Apache:\nRewriteCond %{HTTP_USER_AGENT} ^.*internal\\ dummy\\ connection.*$ [NC]\u0026lt;br /\u0026gt; RewriteRule .* - [F,L]\nAt this point, the requests to the localhost should receive a 403 immediately. 
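You can spot-check the rule from the server itself by sending a request with a matching User-Agent; the string below simply mirrors the log entries above and is only for testing:
# should print 403 once the rewrite rule is catching the dummy connections
curl -s -o /dev/null -w '%{http_code}\n' -A 'Apache/2.2.3 (Red Hat) (internal dummy connection)' http://127.0.0.1/
A 200 here means the User-Agent match is not firing and the rule is worth rechecking.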
Since you can\u0026rsquo;t keep Apache from sending all of these requests to itself, the best you can do is respond to them in a manner that requires the lowest possible resources.\n","date":"24 September 2008","permalink":"/p/apache-22-internal-dummy-connection/","section":"Posts","summary":"After working with some RHEL 5 servers fairly regularly, I noticed a reduction in Apache 2.","title":"Apache 2.2: internal dummy connection"},{"content":"Most web developers expend a lot of energy optimizing queries, reducing the overhead of functions, and streamlining their application\u0026rsquo;s overall flow. However, many forget that one of the simplest adjustments is the compression of data as it leaves the web server.\nLuckily, mod_deflate makes this easy, and the Apache documentation has a handy initial configuration available:\n\u0026lt;Location /\u0026gt; SetOutputFilter DEFLATE BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\\.0[678] no-gzip BrowserMatch \\bMSI[E] !no-gzip !gzip-only-text/html SetEnvIfNoCase Request_URI \\.(?:gif|jpe?g|png)$ no-gzip dont-vary Header append Vary User-Agent env=!dont-vary \u0026lt;/Location\u0026gt; This configuration will compress everything except for images. Of course, you can\u0026rsquo;t test this with curl, but you can test it with Firefox and LiveHTTPHeaders. If you don\u0026rsquo;t have Firefox handy, you can try a very handy web application that will give you the statistics about the compression of your site\u0026rsquo;s data.\n","date":"19 September 2008","permalink":"/p/compress-your-web-content-for-better-performance/","section":"Posts","summary":"Most web developers expend a lot of energy optimizing queries, reducing the overhead of functions, and streamlining their application\u0026rsquo;s overall flow.","title":"Compress your web content for better performance"},{"content":"","date":null,"permalink":"/tags/sendmail/","section":"Tags","summary":"","title":"Sendmail"},{"content":"","date":null,"permalink":"/tags/squirrelmail/","section":"Tags","summary":"","title":"Squirrelmail"},{"content":"I found a Plesk 8.3 server running RHEL 4 last month that was presenting errors when users attempted to send e-mail via SquirrelMail:\nERROR: Email delivery error Server replied: 127 Can\u0026#39;t execute command \u0026#39;/usr/sbin/sendmail -i -t -fsomeuser@somedomain.com\u0026#39;. The error was appearing because safe_mode was enabled and SquirrelMail was unable to drop e-mails into /usr/sbin/squirrelmail. After disabling safe_mode on the server, the users were able to send e-mails via SquirrelMail.\n","date":"8 September 2008","permalink":"/p/squirrelmail-127-cant-execute-command/","section":"Posts","summary":"I found a Plesk 8.","title":"SquirrelMail: 127 Can’t execute command"},{"content":"For a recent project, I needed to automatically provision VM\u0026rsquo;s for testing. I wanted to make the .vmx files on the fly with the exact configuration required, but I couldn\u0026rsquo;t find documentation for the options that are allowed in the .vmx files. 
Luckily, a fellow named Ulli Hankeln has made an impressive list available on his site.\nThe listings contain tons of options that I wasn\u0026rsquo;t aware of, and it also provides hints on which ones you shouldn\u0026rsquo;t adjust.\n","date":"5 September 2008","permalink":"/p/listing-of-vmware-configuration-parameters/","section":"Posts","summary":"For a recent project, I needed to automatically provision VM\u0026rsquo;s for testing.","title":"Listing of VMWare configuration parameters"},{"content":"I was working with a CentOS 5 x86_64 installation running VMWare server last week when I stumbled upon this error:\nUse of uninitialized value in string eq at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/VMware/VmPerl.pm line 114. You can run the vmware-cmd application with this error (it\u0026rsquo;s not a fatal error) and keep going with your normal business. However, if you want to remove the error, comment out lines 114 and 115 in the Perl module referenced by the error:\ndie \u0026quot;Perl API Version does not match dynamic library version.\u0026quot; unless (version() eq $VERSION); Commenting out these lines does not affect the VMWare server in any way.\n","date":"3 September 2008","permalink":"/p/centosrhel-x86_64-vmware-use-of-uninitialized-value-in-string/","section":"Posts","summary":"I was working with a CentOS 5 x86_64 installation running VMWare server last week when I stumbled upon this error:","title":"CentOS/RHEL x86_64 + VMWare: Use of uninitialized value in string"},{"content":"I spoke with a customer last week who was curious about enabling encrypted partitions on a DAS connected to their server. I wasn\u0026rsquo;t entirely sure if it was possible in RHEL 5 since I couldn\u0026rsquo;t remember if it was available in Fedora 6. According to Red Hat\u0026rsquo;s release notes, it is possible. Here\u0026rsquo;s an excerpt from their release notes: Encrypted Swap Partitions and Non-root File Systems\nRed Hat Enterprise Linux 5 now provides basic support for encrypted swap partitions and non-root file systems. To use these features, add the appropriate entries to /etc/crypttab and reference the created devices in /etc/fstab.\nBelow is a sample /etc/crypttab entry:\nmy_swap /dev/hdb1 /dev/urandom swap,cipher=aes-cbc-essiv:sha256\nThis creates the encrypted block device /dev/mapper/my_swap, which can be referenced in /etc/fstab.\nBelow is a sample /etc/crypttab entry for a file system volume:\nmy_volume /dev/hda5 /etc/volume_key cipher=aes-cbc-essiv:sha256\nThe /etc/volume_key file contains a plaintext encryption key. You can also specify none as the key file name; this configures the system to ask for the encryption key during boot instead.\nIt is recommended to use LUKS (Linux Unified Key Setup) for setting up file system volumes. To do this, follow these steps:\nCreate the encrypted volume using cryptsetup luksFormat.\nAdd the necessary entry to /etc/crypttab.\nSet up the volume manually using cryptsetup luksOpen (or reboot).\nCreate a file system on the encrypted volume.\nAdd the necessary entry to /etc/fstab.\nAfter scouring the Red Hat Enterprise Linux manuals and knowledge base, I couldn\u0026rsquo;t find specific instructions to set it up. 
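Strung together, the steps from the release notes look roughly like this. Treat it as a sketch only: the device, mapping name, and mount point are placeholders, and using none as the key file means the passphrase will be requested at boot:
# format the device with LUKS, open it, and build a filesystem on the mapping
cryptsetup luksFormat /dev/sdb1
cryptsetup luksOpen /dev/sdb1 my_volume
mkfs.ext3 /dev/mapper/my_volume
# reference the mapping in /etc/crypttab and /etc/fstab (placeholder mount point)
echo 'my_volume /dev/sdb1 none' >> /etc/crypttab
echo '/dev/mapper/my_volume /data ext3 defaults 0 0' >> /etc/fstab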
However, there was an article in the Red Hat Magazine that may help.\n","date":"2 September 2008","permalink":"/p/encrypted-filesystems-and-partitions-on-rhel-5/","section":"Posts","summary":"I spoke with a customer last week who was curious about enabling encrypted partitions on a DAS connected to their server.","title":"Encrypted filesystems and partitions on RHEL 5"},{"content":"I\u0026rsquo;ve used this extremely basic procmail configuration a million times, and it\u0026rsquo;s a great start for any server configuration. It passes e-mails through spamassassin (if they\u0026rsquo;re smaller than 256KB) and then filters any e-mail marked as spam to /dev/null:\nLOGFILE=/var/log/procmail.log DROPPRIVS=yes\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;:0fw | /usr/bin/spamc\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;:0 * ^X-Spam-Status: Yes /dev/null Of course, you can make this much more complicated with some additional customization.\n","date":"13 August 2008","permalink":"/p/basic-procmail-configuration-with-spamassassin-filtering/","section":"Posts","summary":"I\u0026rsquo;ve used this extremely basic procmail configuration a million times, and it\u0026rsquo;s a great start for any server configuration.","title":"Basic procmail configuration with spamassassin filtering"},{"content":"","date":null,"permalink":"/tags/procmail/","section":"Tags","summary":"","title":"Procmail"},{"content":"If you have Plesk 8.1 or later, you have support available for Ruby on Rails. Unfortunately, clicking the FastCGI checkbox in Plesk won\u0026rsquo;t get you all of the support you need (and expect). The folks over at Parallels created a relatively simple process to get Ruby on Rails working properly on your site:\nGo to your domain that you want to adjust, and click Setup. Make sure the CGI and FastCGI options are enabled. Pick a name for your application and make the directory for your application in the httpdocs directory. Upload your files to that directory.\nOnce you\u0026rsquo;ve done that, create an .htaccess file in the httpdocs directory with the following text inside:\nRewriteEngine On RewriteRule ^$ /public/index.html [L] RewriteCond % !^/railsapp/public RewriteRule ^(.*)$ /public/$1 [L] RewriteCond % !-f RewriteRule ^(.*)$ public/dispatch.fcgi/$1 [QSA,L] Remove the .htaccess file within the public directory of your application and add a file called dispatch.fcgi to that directory which contains:\n#!/usr/bin/ruby You should be able to access your application at http://domain.com/railsapp/.\n","date":"12 August 2008","permalink":"/p/enabling-ruby-on-rails-support-for-a-domain-in-plesk/","section":"Posts","summary":"If you have Plesk 8.","title":"Enabling Ruby on Rails support for a domain in Plesk"},{"content":"","date":null,"permalink":"/tags/iowait/","section":"Tags","summary":"","title":"Iowait"},{"content":"Many applications that are used on a standard server perform quite a few of small writes to the disk (like MySQL or Apache). These writes can pile up and limit the performance of your applications. If you have kernel 2.6.9 or later, you can adjust how these small writes are handled to allow for better performance.\nThere\u0026rsquo;s two main kernel variables to know:\nvm.dirty_ratio - The highest % of your memory that can be used to hold dirty data. If you set this to a low value, the kernel will flush small writes to the disk more often. Higher values allow the small writes to stack up in memory. 
They\u0026rsquo;ll go to the disk in bigger chunks.\nvm.dirty_background_ratio - The lowest % of your memory where pdflush is told to stop when it is writing dirty data. You\u0026rsquo;ll want to keep this set as low as possible.\nThese might confuse you. In short, when your memory begins filling with little pieces of data that needs to be written to the disk, it will keep filling until it reaches the dirty_ratio. At that point, pdflush will start up, and it will write data until it reduces the dirty data to the value set by dirty_background_ratio.\nStock 2.6.9 kernels have a dirty_background_ratio of 10% and a dirty_ratio of 40%. Some distributions tweak these defaults to something different, so you may want to review the settings on your system. On a system with heavy disk I/O, you can increase the dirty_ratio and reduce the dirty_background_ratio. A little experimentation may be necessary to find the perfect setting for your server.\nIf you want to play with the variables, just use your standard echo:\necho 5 \u0026gt; /proc/sys/vm/dirty_background_ratio echo 60 \u0026gt; /proc/sys/vm/dirty_ratio Once you\u0026rsquo;ve found the right setting, you can set it permanently by adding lines to your /etc/sysctl.conf:\nvm.dirty_background_ratio = 5 vm.dirty_ratio = 60 If you have a reliable server with a good RAID card and power supply, you could set the dirty_ratio to 100 and the dirty_background_ratio to 1. This was recommended by a buddy of mine who runs quite a few servers running virtual machines.\n","date":"7 August 2008","permalink":"/p/reduce-disk-io-for-small-reads-using-memory/","section":"Posts","summary":"Many applications that are used on a standard server perform quite a few of small writes to the disk (like MySQL or Apache).","title":"Reduce disk I/O for small reads using memory"},{"content":"Before you follow this guide, be sure to read about the issue I had in Fedora 12 with this strategy.\nAt work, I have a Mac Mini as my main workstation with one monitor. There\u0026rsquo;s another monitor to the right which is connected to my Linux box. I run a synergy server on the Mac, and I run a synergy client in Linux. However, I was getting pretty frustrated when I\u0026rsquo;d have to manually start the synergy client on the Linux box with another keyboard.\nAfter a bit of Google searching, I found a solution that will enable synergy at the GDM login as well as after the login (when the window manager starts). Here\u0026rsquo;s the process:\nOpen /etc/gdm/Init/Default in your editor of choice and go to the bottom of the file. Just before exit 0, add the following:\n/usr/bin/killall synergyc sleep 1 /usr/bin/synergyc 111.222.333.444 Next, you can create the /etc/gdm/PostLogin/Default file as an empty file, or you can copy over the template file from /etc/gdm/PostLogin/Default.sample to /etc/gdm/PostLogin/Default. Either way, add the following to that file:\n/usr/bin/killall synergyc sleep 1 Finally, edit the /etc/gdm/Presession/Default file and add in the following before exit 0:\n/usr/bin/killall synergyc sleep 1 /usr/bin/synergyc 111.222.333.444 Once that\u0026rsquo;s done, you can log out and log back in to see the changes. 
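To make sure the client actually started after you log in, a quick process check is enough (this assumes synergyc is on the default PATH):
# list any running synergy client processes (PID and name)
pgrep -l synergyc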
You can also reboot your Linux desktop or switch to runlevel 3 and back to 5 (if your OS supports runlevel changes).\n","date":"30 July 2008","permalink":"/p/automatically-starting-synergy-in-gdm-in-ubuntufedora/","section":"Posts","summary":"Before you follow this guide, be sure to read about the issue I had in Fedora 12 with this strategy.","title":"Automatically starting synergy in GDM in Ubuntu/Fedora"},{"content":"","date":null,"permalink":"/tags/courier-imap/","section":"Tags","summary":"","title":"Courier-Imap"},{"content":"If you recently upgraded to Plesk 8.4.0 with short names enabled, you may have found that it\u0026rsquo;s working with SMTP, but it doesn\u0026rsquo;t work with POP3 or IMAP. There\u0026rsquo;s a bug in the Plesk version that prevents the courier configuration from being updated.\nTo correct the issue, first make sure that Plesk has short names enabled (Server \u0026gt; Mail). Once you\u0026rsquo;ve confirmed that Plesk thinks it\u0026rsquo;s enabled, add SHORTNAMES=1 to the following configuration files:\n/etc/courier-imap/imapd /etc/courier-imap/imapd-ssl /etc/courier-imap/pop3d /etc/courier-imap/pop3d-ssl Restart courier-imap with /etc/init.d/courier-imap restart and you should be all set.\n","date":"28 July 2008","permalink":"/p/plesk-840-unable-to-use-short-names-for-pop3imap/","section":"Posts","summary":"If you recently upgraded to Plesk 8.","title":"Plesk 8.4.0: Unable to use short names for POP3/IMAP"},{"content":"","date":null,"permalink":"/tags/pop3/","section":"Tags","summary":"","title":"Pop3"},{"content":"","date":null,"permalink":"/tags/smtp/","section":"Tags","summary":"","title":"Smtp"},{"content":"If you run a fairly busy and/or badly configured MySQL server, you may receive something like this when attempting to connect:\n# mysql ERROR 1040: Too many connections MySQL is telling you that it is handling the maximum connections that you have configured it to handle. By default, MySQL will handle 100 connections simultaneously. This is very similar to the situation when Apache reaches the MaxClients setting. You won\u0026rsquo;t even be able to connect to MySQL to find out what is causing the connections to be used up, so you will be forced to restart the MySQL daemon to troubleshoot the issue.\nWhat causes MySQL to run out of connections? Here\u0026rsquo;s a list of reasons that may cause MySQL to run out of available connections, listed in order of what you should check:\nBad MySQL configuration\nVerify that you have set MySQL\u0026rsquo;s buffers and caches to appropriate levels for the type of data you\u0026rsquo;re storing and the types of queries that you are running. One quick way to check this information is via MySQLTuner. The script will tell you how well your server is performing along with the corrections you should make. Running the script only takes a few moments and it doesn\u0026rsquo;t require a DBA to decipher the results.\nData storage techniques\nRemember that MySQL works best when moving vertically, not horizontally. If you have a table with 20 columns, breaking it into two tables with 10 columns each will improve performance. Even if you need to join the two tables together to get your data, it will still perform at a higher level. Also, use the right data types for the right data. If you\u0026rsquo;re storing an integer only, don\u0026rsquo;t use a CHAR or VARCHAR data type. If your integer will be small, then use something like a TINYINT or SMALLINT rather than INT. 
This means MySQL will use less memory, pull less data from the disk, and have higher performing joins.\nSlow queries\nThese are generally pretty easy to fix. If you have queries that don\u0026rsquo;t use indexes, or if queries run slowly with indexes in place, you need to rethink how you\u0026rsquo;re pulling your data. Should your data be split into multiple tables? Are you pulling more data than you need? Keep these questions in mind, enable the slow query log, and re-work your queries to find where the bottlenecks occur.\nDivision of labor\nMost people who use MySQL have a dynamic site written in a scripting language, like PHP, Perl or Python. It\u0026rsquo;s obvious that your server will need to do some work to parse the scripts, send data back to the client, and communicate with MySQL. If you find that your server is overworked, consider moving MySQL to its own dedicated hardware. Among many other things, this will reduce your disk I/O, allow you to better utilize memory, and it will help you when you need to scale even further. Be sure to keep your MySQL server close to your web servers, however, as increased latency will only make your performance problem first.\nRight hardware\nDo you have the right hardware for the job? Depending on your budget, you may need to make the move for hardware that gives you better I/O throughput and more useable cores. MySQL is a multi-threaded application, so it can utilize multiple cores to serve data quickly. Also, writing logs, reading tables, and adjusting indexes are disk-intensive tasks that need fast drives to perform well. When you look for a dedicated server for MySQL, be sure to choose multiple-core machines with low latency RAM, fast drives (SCSI/SAS), and a reliable network interface.\nBy reviewing these bottlenecks, you can reduce the load on your MySQL server without increasing your maximum connections. Simply increasing the maximum connections is a very bad idea. This can cause MySQL to consume unnecessary resources on your server and it may lead to an unstable system (crash!).\n","date":"24 June 2008","permalink":"/p/mysql-error-1040-too-many-connections/","section":"Posts","summary":"If you run a fairly busy and/or badly configured MySQL server, you may receive something like this when attempting to connect:","title":"MySQL: ERROR 1040: Too many connections"},{"content":"","date":null,"permalink":"/tags/awstats/","section":"Tags","summary":"","title":"Awstats"},{"content":"There was a bug in versions of Plesk prior to 8.3 where the AWStats statistics for the previous months were unavailable. It was a bug within Plesk\u0026rsquo;s AWStat\u0026rsquo;s implementation, and it was fixed in Plesk 8.3.\nHowever, the fix only corrected the issue moving forward after the upgrade. There was no automated way to rebuild the previous months\u0026rsquo; statistics, even though the AWStats data was right there on the disk!\nI saw this blog post about the issue, and the fix is quite elegant:\nPlesk 8.3 AWStats on Linux - Rebuilding Previous Month Statistics\n","date":"20 June 2008","permalink":"/p/rebuilding-statistics-from-previous-months-on-plesk-83/","section":"Posts","summary":"There was a bug in versions of Plesk prior to 8.","title":"Rebuilding statistics from previous months on Plesk 8.3"},{"content":"As some of you might know, I interviewed for a position at Google in April of this year. It wasn\u0026rsquo;t a position that I sought out, but it all came about after I received an e-mail and phone call from a recruiter. 
Obviously, there\u0026rsquo;s some things I can\u0026rsquo;t talk about with regards to the interview process, but there\u0026rsquo;s quite a few things that can be said.\nHow it started\nThe initial recruiter that I spoke with was a very friendly fellow that didn\u0026rsquo;t seem too technical. He didn\u0026rsquo;t get into the job description much, but he was interested mostly in whether I wanted to relocate and what type of job I enjoy most. We ran through a few cursory technical questions and he tried to find out what my skill level was in certain areas. When it was all said and done, he said I\u0026rsquo;d be contacted from someone else at Google within a few weeks.\nTwo weeks later, I received some e-mails, went through [redacted] phone screens (with some pretty intelligent people), and learned more about the position. The folks from Google that I spoke with ranged from friendly and chatty to very direct and somewhat terse. Overall, I got the idea that they weren\u0026rsquo;t interested in running a quiz, but they wanted to know how deep my knowledge and understanding was with regards to critical topics relating to the position. I know this sounds vague, but it\u0026rsquo;s about as much as I can tell you.\nThe middle\nI received a few more e-mails after the phone screens and my recruiter wanted to bring me out to California. Travel arrangements were made, I flew out to San Jose, and then drove the short drive to Mountain View. The city and the surrounding areas were a little different than I was used to. Most of the buildings and structures look as if they were built between 1960 and 1980 and they had a peculiar architecture. I stayed in the Hotel Avante (which was quite comfortable) and made the short drive to the Googleplex in the morning.\nThis was about the point where I slapped myself and said \u0026ldquo;Holy crap, I\u0026rsquo;m interviewing at GOOGLE!\u0026rdquo;\nWhen I arrived, I went into the wrong buildings twice until I found the right one, but some Google employees finally pointed me in the right direction. I met with my recruiter, who was actually pretty entertaining, and he gave me a run down of how the day would go. I spent the morning interviewing, and then I joined a Google employee for lunch. He answered many of my questions about the cost of living, job benefits, and how he liked Google. When that was over, I went back to interviewing and was escorted out of the building at the end of the day.\nTowards the end\nI spoke with my recruiter a few more times after the interview for some basic paperwork-related issues, and he worked hard to keep me in the loop on my application status. There wasn\u0026rsquo;t much of a concern job-wise as I work for one of the best companies in my industry already. However, I was getting ready to move to a new home, so I let my recruiter know that I was in a bit of a time crunch.\nYou\u0026rsquo;ll probably want to know what happened next, but there\u0026rsquo;s not really anything that I\u0026rsquo;m allowed to say about it! What I can tell you is that I\u0026rsquo;m still with the best company in my industry, and I\u0026rsquo;m still enjoying it each day.\nSo I know what you\u0026rsquo;re probably thinking…\nWhy did you stay at Rackspace?\nIt\u0026rsquo;s easy to answer this question: I learn something new every day at Rackspace. Sometimes it\u0026rsquo;s something technical, and sometimes it\u0026rsquo;s something related to managing people or designing technology. 
The people that I share this learning opportunity with make it all worthwhile. I\u0026rsquo;ve never worked for a company where my managers cared so much about my personal and technical development. Also, I\u0026rsquo;ve never worked at a company where, as a manager, I\u0026rsquo;m encouraged to care for my own technicians\u0026rsquo; personal and technical development.\nIf you have any more questions about why I love working at Rackspace, please let me know. I\u0026rsquo;ll be happy to fill you in.\n","date":"19 June 2008","permalink":"/p/my-interview-experience-at-google/","section":"Posts","summary":"As some of you might know, I interviewed for a position at Google in April of this year.","title":"Why I interviewed at Google and stayed at Rackspace"},{"content":"I found myself in a peculiar situation last week. I\u0026rsquo;d been asked to downgrade a server from MySQL 4.1 to MySQL 3.23. Believe me, I tried to advise against the request, but I didn\u0026rsquo;t succeed.\nI made a MySQL 3.23 compatible dump with --compatible=mysql323, but the dump came out with backticks around the database names. This works with some 3.23 versions, but it doesn\u0026rsquo;t work with others. Apparently RHEL 3\u0026rsquo;s MySQL 3.23 is one of those versions where it simply won\u0026rsquo;t work.\nThis sed line came in handy to strip the backticks from the USE lines in the dump:\nsed -e \u0026#34;s/^USE \\`\\(.*\\)\\`/USE \\1/g\u0026#34; ","date":"18 June 2008","permalink":"/p/remove-backticks-from-mysql-dumps/","section":"Posts","summary":"I found myself in a peculiar situation last week.","title":"Remove backticks from MySQL dumps"},{"content":"","date":null,"permalink":"/tags/sed/","section":"Tags","summary":"","title":"Sed"},{"content":"One of the most frustrating aspects of CPAN is connecting to mirrors via FTP. Most of the time, the mirrors are extraordinarily slow when it comes to FTP logins, and they often fail. As we all know, RHEL enjoys pulling some shenanigans (Scalar::Util - enough said) when perl receives an upgrade, and when I need CPAN to work quickly, it often does the opposite.\nI was struggling to find a way to reconfigure CPAN to use HTTP mirrors rather than FTP, but I couldn\u0026rsquo;t figure out where CPAN was holding this data. It wasn\u0026rsquo;t in ~/.cpan and there was nothing in /etc for it. However, I found that you can reconfigure CPAN by running the following command:\n# perl -MCPAN -e shell CPAN: File::HomeDir loaded ok (v0.69) cpan shell -- CPAN exploration and modules installation (v1.9205) ReadLine support enabled cpan[1]\u0026gt; o conf init The configuration script will run again as if you had never configured CPAN. Best of all, if you need to stop mid-way through the reconfiguration, your original configuration is still there. 
If you\u0026rsquo;d rather just adjust your mirror list rather than starting over completely with the CPAN configuration, use the following:\nDisplay your current mirrors:\no conf urllist Delete the first mirror in your list:\no conf urllist shift Delete the last mirror in your list:\no conf urllist pop Add on a new mirror:\no conf urllist push http://cpan.mirror.facebook.com/ Save your mirror changes:\no conf urllist commit ","date":"16 June 2008","permalink":"/p/adjusting-cpan-mirror-list/","section":"Posts","summary":"One of the most frustrating aspects of CPAN is connecting to mirrors via FTP.","title":"Adjusting CPAN mirror list"},{"content":"","date":null,"permalink":"/tags/hp/","section":"Tags","summary":"","title":"Hp"},{"content":"Working with the RAID configurations on Linux can be a little involved if all you have is hpacucli. Luckily, the folks using HP\u0026rsquo;s OS distributions will get tools like hwraidinfo and hwraid status, but you can get these going in Linux as well.\nHere\u0026rsquo;s a bash script equivalent of hwraidinfo which will work in Linux:\n#!/bin/sh SLOTLIST=$(hpacucli ctrl all show | \\ grep Slot | sed -e \u0026#39;s/^.*Slot //g\u0026#39; -e \u0026#39;s/ .*$//g\u0026#39;) for i in $SLOTLIST do echo hpacucli ctrl slot=$i show | grep -v \u0026#34;^$\u0026#34; echo hpacucli ctrl slot=$i ld all show | grep -v \u0026#34;^$\u0026#34; hpacucli ctrl slot=$i pd all show | grep -v \u0026#34;^$\u0026#34; done echo And here is the script equivalent of hwraidstatus:\n#!/bin/sh SLOTLIST=$(hpacucli ctrl all show | \\ grep Slot | sed -e \u0026#39;s/^.*Slot //g\u0026#39; -e \u0026#39;s/ .*$//g\u0026#39;) for i in $SLOTLIST do echo hpacucli ctrl slot=$i show status | grep -v \u0026#34;^$\u0026#34; echo hpacucli ctrl slot=$i ld all show status | grep -v \u0026#34;^$\u0026#34; hpacucli ctrl slot=$i pd all show status | grep -v \u0026#34;^$\u0026#34; done echo Save these to the filesystem, run chmod +x and move them to /usr/sbin (or /usr/local/sbin) so that the root user can use them.\n","date":"13 June 2008","permalink":"/p/hp-servers-hwraidinfo-and-hwraidstatus-in-linux/","section":"Posts","summary":"Working with the RAID configurations on Linux can be a little involved if all you have is hpacucli.","title":"HP Servers: hwraidinfo and hwraidstatus in Linux"},{"content":"MySQL has quite a few cryptic error messages, and this one is one of the best:\nmysql\u0026gt; DROP USER \u0026#39;forums\u0026#39;@\u0026#39;db1.myserver.com\u0026#39;; ERROR 1268 (HY000): Can\u0026#39;t drop one or more of the requested users Naturally, I was quite interested to know why MySQL wasn\u0026rsquo;t going to allow me to remove this user. There was nothing special about the user, but then again, this wasn\u0026rsquo;t a server that I personally managed, so I wasn\u0026rsquo;t sure what kind of configuration was in place.\nIt\u0026rsquo;s always a good idea to get your bearings, so I checked the current grants:\nmysql\u0026gt; SHOW GRANTS FOR 'forums'@'db1.myserver.com'; +----------------------------------------------------------------------+ | Grants for forums@db1.myserver.com | +----------------------------------------------------------------------+ | GRANT USAGE ON *.* TO 'forums'@'db1.myserver.com' WITH GRANT OPTION | +----------------------------------------------------------------------+ 1 row in set (0.00 sec) The GRANT OPTION was causing my grief. It was the only privilege that the user had on the server. 
I revoked the privilege and attempted to drop the user yet again:\nmysql\u0026gt; REVOKE GRANT OPTION ON *.* FROM \u0026#39;forums\u0026#39;@\u0026#39;db1.myserver.com\u0026#39;; Query OK, 0 rows affected (0.00 sec) mysql\u0026gt; DROP USER \u0026#39;forums\u0026#39;@\u0026#39;db1.myserver.com\u0026#39;; Query OK, 0 rows affected (0.00 sec) It\u0026rsquo;s key to remember that revoking the GRANT OPTION is a completely separate process. Revoking ALL PRIVILEGES doesn\u0026rsquo;t include GRANT OPTION, so be sure to specify it separately:\nmysql\u0026gt; REVOKE ALL PRIVILEGES, GRANT OPTION ON *.* FROM \u0026#39;user\u0026#39;@\u0026#39;host\u0026#39;; ","date":"11 June 2008","permalink":"/p/mysql-cant-drop-one-or-more-of-the-requested-users/","section":"Posts","summary":"MySQL has quite a few cryptic error messages, and this one is one of the best:","title":"MySQL: Can’t drop one or more of the requested users"},{"content":"I received an e-mail from Tim Linden about a post he made in his blog about backing up MySQL data to Amazon\u0026rsquo;s S3.\nThe article goes over installing the Net::Amazon::S3 Perl module via WHM (which is handy for the cPanel users). However, if you\u0026rsquo;re not a cPanel user, you can install it via CPAN:\n# perl -MCPAN -e \u0026#39;install Net::Amazon::S3\u0026#39; If you\u0026rsquo;d rather install it through Webmin, go to the \u0026lsquo;Others\u0026rsquo; section, and click \u0026lsquo;Perl Modules\u0026rsquo;.\nAlso, Tim mentions configuring a Firefox extension for accessing S3 that works very well. However, I find myself using Safari most often, so I prefer to use Jungle Disk or Transmit on my Mac.\nOverall, it\u0026rsquo;s a great post, and I\u0026rsquo;m glad Tim told me about it!\n","date":"6 June 2008","permalink":"/p/backing-up-mysql-to-amazons-s3/","section":"Posts","summary":"I received an e-mail from Tim Linden about a post he made in his blog about backing up MySQL data to Amazon\u0026rsquo;s S3.","title":"Backing up MySQL to Amazon’s S3"},{"content":"Normally, qmail will be able to process the mail queue without any interaction from the system administrator, however, if you want to force it to process everything that is in the queue right now, you can do so:\nkill -ALRM `pgrep qmail-send` If for some peculiar reason you don\u0026rsquo;t have pgrep on your server, you can go about it a slightly different way:\nkill -ALRM `ps ax | grep qmail-send | grep -v grep | awk \u0026#39;{print $1}\u0026#39;` Your logs should begin filling up with data about e-mails rolling through the queue.\n","date":"2 May 2008","permalink":"/p/forcing-qmail-to-process-e-mail-in-the-queue/","section":"Posts","summary":"Normally, qmail will be able to process the mail queue without any interaction from the system administrator, however, if you want to force it to process everything that is in the queue right now, you can do so:","title":"Forcing qmail to process e-mail in the queue"},{"content":"Upgrading Plesk from 7.5.x to 8.x will change your Plesk-related MySQL tables from MyISAM to InnoDB. This allows for better concurrency in the Plesk panel when a lot of users are logged in simultaneously. However, some server administrators will disable InnoDB support in MySQL to save resources. This will cause problems after the upgrade.\nPlesk may display an error on a white page that looks something like:\nCannot initialize InnoDB\nThis could mean that InnoDB support was disabled when MySQL was started. 
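Before editing anything, it only takes a moment to see how MySQL currently reports InnoDB; this simply runs the SHOW ENGINES statement discussed below from the shell, so add whatever admin credentials your server requires:
# YES means enabled, DISABLED means switched off at startup, NO means not compiled in
mysql -e 'SHOW ENGINES' | grep -i innodb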
To correct this issue, search through the /etc/my.cnf for this line:\nskip-innodb\nIf you find it in your configuration, remove it, and then restart MySQL. To test that InnoDB is enabled, you can refresh the Plesk page, or you can log into MySQL and run SHOW ENGINES. The output from the SHOW ENGINES statement should show YES on the line with InnoDB.\nShould DISABLED appear instead, you may have an issue with your InnoDB configuration in your /etc/my.cnf. Be sure to check for innodb_data_file_path and make sure that it is set to an appropriate value.\nA value of NO is not a good sign. This means that your version of MySQL was compiled without InnoDB support. This means that it cannot be enabled at runtime because MySQL wasn\u0026rsquo;t built with any support for InnoDB. Be sure to recompile MySQL with --with-innodb or obtain a new package for your operating system which includes InnoDB support.\nIf you suspect that your MySQL InnoDB configuration is incorrect, you may want to review this documentation on MySQL\u0026rsquo;s site:\nFor MySQL 5: 13.2.3. InnoDB Configuration\nFor MySQL 4/3.23: 13.2.4. InnoDB Configuration\n","date":"1 May 2008","permalink":"/p/after-plesk-upgrade-cannot-initialize-innodb/","section":"Posts","summary":"Upgrading Plesk from 7.","title":"After Plesk upgrade, “Cannot initialize InnoDB”"},{"content":"I finally remembered this book when someone asked me about how to get started with PHP and MySQL development. If you get the chance, get a copy of this book:\nPHP and MySQL Web Development by Luke Welling, Laura Thomson\nBarnes \u0026amp; Noble: http://snurl.com/265xp\nWhy do I like this book so much?\nTeaching by application - The book teaches fundamentals by showing how to apply techniques to an active website. There\u0026rsquo;s not a ton of theory to wade through, and you feel like you\u0026rsquo;re learning the material faster. Intertwined strategies - You learn how to get PHP working with MySQL, and then you learn how to optimize your code. It\u0026rsquo;s important to know which work is best done by PHP and which is best done by MySQL. This book teaches both. Lots of examples - The CD-ROM comes with tons of code examples that actually relate to something you can use. I\u0026rsquo;d be happy to loan my copy, but I\u0026rsquo;ve loaned it out and it never returned.\n","date":"29 April 2008","permalink":"/p/best-php-and-mysql-development-book/","section":"Posts","summary":"I finally remembered this book when someone asked me about how to get started with PHP and MySQL development.","title":"Best PHP and MySQL development book"},{"content":"UPDATE: The TRACE/TRACK methods are disabled in Plesk 8.4 right out of the box!\nIt\u0026rsquo;s always been a bit of a challenge to disable TRACE and TRACK methods with Plesk. The only available options were to create a ton of vhost.conf files or adjust the httpd.include files and prevent modifications with chattr (which is a bad idea on many levels).\nLuckily, Parallels has made things easier with a new knowledge base article.\n","date":"23 April 2008","permalink":"/p/plesk-disabling-tracetrack-methods-globally/","section":"Posts","summary":"UPDATE: The TRACE/TRACK methods are disabled in Plesk 8.","title":"Plesk: Disabling TRACE/TRACK methods globally"},{"content":"Before getting started, it\u0026rsquo;s important to understand why MySQL uses locks. 
In short - MySQL uses locks to prevent multiple clients from corrupting data due to simultaneous writes while also protecting clients from reading partially-written data.\nSome of you may be thinking, \u0026ldquo;Okay, this makes sense.\u0026rdquo; If that\u0026rsquo;s you, skip the next two paragraphs. If not, keep reading.\nAnalogies can help understand topics like these. Here\u0026rsquo;s one that I came up with during a training class. Consider two people sitting in front of a notepad on a table. Let\u0026rsquo;s say that a sentence like \u0026ldquo;The quick brown fox jumps over the lazy dog\u0026rdquo; is already written on the notepad. If both people want to read the sentence simultaneously, they can do so without getting in each other\u0026rsquo;s way. A third or fourth person could show up and they could all read it at the same time.\nWell, let\u0026rsquo;s say one of the people at the table is writing a screenplay for Cujo, and they want to change \u0026ldquo;lazy\u0026rdquo; to \u0026ldquo;crazy\u0026rdquo;. That person erases the \u0026ldquo;l\u0026rdquo; in \u0026ldquo;lazy\u0026rdquo; and then adds a \u0026ldquo;cr\u0026rdquo; to the front to spell \u0026ldquo;crazy\u0026rdquo;. So if the other person is reading the sentence while the first person is writing, they will see \u0026ldquo;lazy\u0026rdquo; turn into \u0026ldquo;azy\u0026rdquo;, then \u0026ldquo;c_azy\u0026rdquo;, and then finally, \u0026ldquo;crazy\u0026rdquo;. This isn\u0026rsquo;t a big issue in real life, but on the database level, this could be dangerous. If the person who was reading the sentence showed up during the middle of the letter changes, they would think that the dog was \u0026ldquo;azy\u0026rdquo;, and they\u0026rsquo;d walk away wondering what the adjective \u0026ldquo;azy\u0026rdquo; means. To get around this, MySQL uses locking to block clients from reading data while it\u0026rsquo;s being written and it blocks clients from writing data simultaneously.\nNow that we\u0026rsquo;re all familiar with what locks are, and why MySQL uses them, let\u0026rsquo;s talk about some ways to reduce the delays caused by locking. Here\u0026rsquo;s some situations you might be running up against:\nWrites are delayed because reads have locked the tables\nThis is the most common occurrence from the servers that I have seen. When you run a SHOW PROCESSLIST, you may see a few reads at the top of the queue that are in the status of \u0026ldquo;Copying to tmp table\u0026rdquo; and/or \u0026ldquo;Sending data\u0026rdquo;. On optimized servers running optimized queries, these should clear out quickly. If you\u0026rsquo;re finding that they are not clearing out quickly, try the following:\nUse EXPLAIN on your queries to be sure that they are optimized Add indexes to tables that you query often Reduce the amount of rows that are being returned per query Upgrade the networking equipment between web and database servers (if applicable) Consider faster hardware with larger amounts of RAM Use MySQLTuner to check your current server\u0026rsquo;s configuration for issues Consider moving to InnoDB to utilize row-based locking Reads and writes are delayed because writes have locked the tables\nSituations like these are a little different. There\u0026rsquo;s two main factors to consider here: either MySQL cannot write data to the disk fast enough, or your write queries (or tables) are not optimized. If you suspect a hardware issue, check your iowait with sar and see if it stays at about 10-20% or higher during the day. 
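If sysstat has been collecting data via its cron job, running sar with no interval arguments replays the samples gathered so far today, which makes it easy to see whether iowait stays high for long stretches:
# review the CPU samples collected so far today and watch the %iowait column
sar -u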
If it does, slow hardware may be the culprit. Try moving to SCSI disks and be sure to use RAID 5 or 10 for additional reliability and speed. SAN or DAS units may also help due to higher throughput and more disk spindles.\nIf you already have state-of-the-art hardware, be sure that your tables and queries are optimized. Run OPTIMIZE TABLES regularly if your data changes often to defragment the tables and clear out any holes from removed or updated data. Slow UPDATE queries suggest that you are updating too many rows, or you may be using a column in the WHERE clause that is not indexed. If you do a large amount of INSERT queries, use this syntax to enter multiple rows simultaneously:\nINSERT INTO table (col1,col2) VALUES (\u0026#39;a\u0026#39;,\u0026#39;1\u0026#39;), (\u0026#39;b\u0026#39;,\u0026#39;2\u0026#39;), (\u0026#39;c\u0026#39;,\u0026#39;3\u0026#39;); This syntax tells MySQL to hold off on updating indexes until the entire query is complete. If you are updating a very large amount of rows, and you need to use multiple queries to avoid reaching the max_allowed_packet directive, you can do something like this:\nALTER TABLE table DISABLE KEYS; INSERT INTO table (col1,col2) VALUES (\u0026#39;a\u0026#39;,\u0026#39;1\u0026#39;), (\u0026#39;b\u0026#39;,\u0026#39;2\u0026#39;), (\u0026#39;c\u0026#39;,\u0026#39;3\u0026#39;); ~~~ many more inserts ~~~ ALTER TABLE table ENABLE KEYS; This forces MySQL to not calculate any new index information until you re-enable the keys or run OPTIMIZE TABLE. If all of this does not help, consider using InnoDB as your storage engine. You can benefit from the row-level locking, which reduces locking in mixed read/write scenarios. In addition, InnoDB is able to write data much more efficiently than MyISAM.\n","date":"16 April 2008","permalink":"/p/reducing-locking-delays-in-mysql/","section":"Posts","summary":"Before getting started, it\u0026rsquo;s important to understand why MySQL uses locks.","title":"Reducing locking delays in MySQL"},{"content":"If you\u0026rsquo;re working in Plesk and you receive this error:\nmchk: Unable to initialize quota settings for someuser@somedomain.com\nRun this command to fix the issue, but be patient:\nfind /var/qmail/mailnames -type d -name \u0026#39;.*\u0026#39; ! -name \u0026#39;.spamassassin\u0026#39; -ls -exec touch \u0026#39;{}\u0026#39;/maildirfolder \\; -exec chown popuser:popuser \u0026#39;{}\u0026#39;/maildirfolder \\; Thanks to Mike Jackson for this one.\n","date":"14 April 2008","permalink":"/p/mchk-unable-to-initialize-quota-settings-for-someusersomedomaincom/","section":"Posts","summary":"If you\u0026rsquo;re working in Plesk and you receive this error:","title":"mchk: Unable to initialize quota settings for someuser@somedomain.com"},{"content":"DISCLAIMER: Okay, technical folks - I\u0026rsquo;m doing this as a favor to the general community of people that aren\u0026rsquo;t very technical, but they need to know some tips for ridding themselves of a technical person that is harming their business. If you look at it this way, there\u0026rsquo;s a 50/50 chance that this article might get you hired instead of fired.\nNo one has every really asked me \u0026ldquo;hey, if I want to fire my technical guy and get a new one, how do I do it?\u0026rdquo; So, how can I answer this question with any authority? Simple. 
I used to run my own company doing technical work for homes and businesses, I was hired and fired by people (much more hiring than firing), and I\u0026rsquo;ve learned a lot from being \u0026ldquo;the tech guy\u0026rdquo;. Also, from working at Rackspace, and a previous job, I\u0026rsquo;ve seen many situations in which a company lets their technical person go without any plan in place. You never realize how valuable your IT staff are until they\u0026rsquo;re not in the office, and your e-mail server falls apart.\nFiring your technician\nI\u0026rsquo;ll start with how to fire your current technical person. It should go without saying, but be sure that you\u0026rsquo;re firing this person for a substantial and legal reason. If you\u0026rsquo;re firing this person for something trivial or petty, stop right here and re-evaluate. But, if this is the kind of person that ignores your phone calls, takes down services to increase job security, or prints pornography on the office laserjet, then it\u0026rsquo;s time for them to go.\nFirst, create a plan. How much does this technical person know about the company that could be detrimental if they were fired abruptly? You\u0026rsquo;ll need to consider things like their access to your buildings, computers, corporate credit cards, cars, and colocated/dedicated servers. Take an inventory of the access that they have, and also how they access these items. For example, if they have multiple user accounts on your computer network, then make sure that all of those accounts are accounted for. If you have secret passwords with any of your service providers, be sure those are documented as well.\nIf you don\u0026rsquo;t know some of these items, but your technical person does, you might want to get this information from them in a careful manner. I\u0026rsquo;d recommend against going in and saying something like \u0026ldquo;we need to inventory all of our user accounts before you\u0026rsquo;re canned\u0026rdquo;. You need to find a plausible excuse for you to have a list of this information, and it needs to be something that the technical person won\u0026rsquo;t argue with. Some good ones I\u0026rsquo;ve heard are PCI, SOX or SAS/70 compliance. Let your employee know that compliance with these standards requires that you keep all of the access to all of these services in a safe place.\nBy the time you reach this stage, you really should have a new technician in mind. Interview them after work, or at night time so that the current technical person doesn\u0026rsquo;t become suspicious. I\u0026rsquo;ll talk more about how to find a new technical person a little later.\nAt this point, if you can trust your technical person, you should have a proper list of their entry points into your infrastructure. It\u0026rsquo;s more likely that you don\u0026rsquo;t trust this person at this point since you\u0026rsquo;re firing them after all. Some might argue with me here, I\u0026rsquo;d recommend bringing in some other technical person that you undoubtedly trust. The reason for this is that your technical person may have given you a partial list, or they may have left \u0026ldquo;backdoors\u0026rdquo; so that they can access your infrastructure after they leave. A trusted tech can review your company for any possible issues and can give you a heads up if they find any red flags. Of course, if your current technical person has set up traps to know when someone logs in, then you may have blown your cover entirely. 
I would certainly hope that your situation wouldn\u0026rsquo;t end this badly.\nNow that you have a complete list of everything to which your former tech had access, you have an idea of what will be involved in changing everything over. Most people like to fire employees on Fridays to reduce the chance of violence or uncomfortable moments, so here\u0026rsquo;s my recommendation. If anything financial needs to be taken care of, get it done late on Thursday or early on Friday. Then, on Friday, set a time with your current technical person to have a short meeting. Coordinate this time with your trusted technical person so they can begin changing passwords on accounts which the current tech has access to. Change the most sensitive passwords first, like the passwords on database servers. Also, change the root passwords as a high priority, but make sure you eventually change them all since you can be bitten by sudo or SSH keys.\nWhen your current technical person exits the meeting, you\u0026rsquo;re covered. If the meeting goes well, and the current technical person is amicable, then you\u0026rsquo;re going to be covered since their access is revoked. If the meeting goes badly, your still covered in case they try to do something nasty. Their access to your network and corporate infrastructure should be eliminated or minimized before they can do anything destructive.\nHiring your technician\nLuckily, hiring a new technical person is a bit easier than firing one. However, if you do a bad job on the hiring, you\u0026rsquo;ll be referring to the beginning of this article fairly soon.\nThe best way to find a new technical person is via recommendations from another person. They\u0026rsquo;ve probably had interactions with the tech, and they can give you an idea of their technical prowess and social skills (yes, these are important). If you can\u0026rsquo;t find any techs through recommendations, you can always check big job sites like Monster, LinkedIn, or Dice. Whichever route you choose, be sure to meet the technician in person. Don\u0026rsquo;t hire someone based on the initials after their name, their previous job experience, or how they sound over the phone. Your technical person is like a central pillar in your organization, and this needs to be a responsible, sensible, and practical person.\nOnce you\u0026rsquo;ve found one or more technicians that you\u0026rsquo;d like to hire, you need to test them just a little. I\u0026rsquo;d recommend contacting them late in the evening (8-10PM) or early/late on the weekend. See how receptive and cordial they are at these times, because when something explodes later, you\u0026rsquo;ll probably be calling them after business hours. You don\u0026rsquo;t want to pick up the phone at 4AM on Saturday when your Exchange server dies only to hear your tech tell you that he\u0026rsquo;ll be in on Monday to fix it for you, and that it can wait until then. When you talk to the technician on the phone, ask them to do something that forces them to go use the computer. For example, send them an e-mail for something reasonable that they need to respond to. Or, tell them that you discovered some neat product or service, and you want to know if they could start working at your company and maintain that product or service. 
If they respond quickly and they don\u0026rsquo;t give you the vibe that you\u0026rsquo;ve just inconvenienced them horribly, then that\u0026rsquo;s a good sign that you\u0026rsquo;ve found a worthy technician.\nIt\u0026rsquo;s up to you when you bring them on at your company. Some people might want to hold off until the current technician is out of the way, but some might want to bring the technician in a little early to help with the cleanup of the last technician. Either way is good in my opinion. However, I would recommend against having both of them employed at your company simultaneously. If your old technician is upset about something, that could rub off on the new guy, and you may be returning to the top part of this article sooner rather than later.\nAlso, don\u0026rsquo;t expect the new technician to be knee-deep in your problems immediately. They will need some time to figure out your network, review your vital services, and get an idea how everything works together. If you have the giant list your previous tech made, be sure to furnish it to the new technician so they have an idea of where to go to fix a certain problem.\nI certainly hope this article helps! If you have any questions, drop me a comment and I\u0026rsquo;ll be happy to give additional recommendations.\n","date":"2 April 2008","permalink":"/p/small-companies-how-to-hire-and-fire-a-technical-person/","section":"Posts","summary":"DISCLAIMER: Okay, technical folks - I\u0026rsquo;m doing this as a favor to the general community of people that aren\u0026rsquo;t very technical, but they need to know some tips for ridding themselves of a technical person that is harming their business.","title":"Small Companies: How to hire and fire a technical person"},{"content":"","date":null,"permalink":"/tags/qmail/","section":"Tags","summary":"","title":"Qmail"},{"content":"On a Plesk server, the maximum size for an individual e-mail sent through qmail is unlimited. You can limit this size by adding a number to the /var/qmail/control/databytes file.\nIf you wanted to limit this to something like 10MB, you can just run the following command:\necho \u0026#34;10485760\u0026#34; \u0026gt; /var/qmail/control/databytes This will limit the size of messages (including attachments) to 10MB as a maximum.\n","date":"24 March 2008","permalink":"/p/setting-the-maximum-mail-size-in-qmail/","section":"Posts","summary":"On a Plesk server, the maximum size for an individual e-mail sent through qmail is unlimited.","title":"Setting the maximum mail size in qmail"},{"content":"It\u0026rsquo;s tough to find examples of dumps that can\u0026rsquo;t be properly reimported on other servers. However, if you have a 64-bit server, and you make a MySQL dump file from it, you may see this issue when importing the dump on a 32-bit MySQL server:\nERROR 1118 (42000) at line 1686: Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs You really don\u0026rsquo;t have any options in this situation. 
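Or rather, only one real option: convert the oversized VARCHAR columns to TEXT on the source so the fixed portion of the row drops back under the 65535 byte limit mentioned in the error. A rough sketch, with made-up table and column names:\nmysql\u0026gt; ALTER TABLE bigtable MODIFY longnotes TEXT;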
You\u0026rsquo;ll need to adjust your table on the 64-bit server for good and then make a new dump file, or you will just have to live with the fact that it can\u0026rsquo;t be imported into a 32-bit instance of MySQL.\n","date":"21 March 2008","permalink":"/p/importing-mysql-dumps-made-on-64-bit-servers/","section":"Posts","summary":"It\u0026rsquo;s tough to find examples of dumps that can\u0026rsquo;t be properly reimported on other servers.","title":"Importing MySQL dumps made on 64-bit servers"},{"content":"I really dislike qmail. But, since I use Plesk, I\u0026rsquo;m stuck with it. However, I found a way to improve it\u0026rsquo;s awful mail queue performance by putting the mail queue onto a ramdisk. This is actually pretty darned easy to do.\nFirst, toss a line like this into your /etc/fstab:\nnone /mailqueue tmpfs defaults,size=100m,nr_inodes=999k,mode=775 0 0 This will make a 100MB ramdisk on /mailqueue. Now, just symlink /var/qmail/mqueue to /mailqueue and move your e-mail over:\n# mount /mailqueue # chmod 750 /mailqueue # chown qmailq:qmail /mailqueue # mv /var/qmail/mqueue /var/qmail/mqueue-old # ln -s /mailqueue /var/qmail/mqueue # rsync -av /var/qmail/mqueue-old /mailqueue This has significantly cut the iowait on my server during heavy e-mail periods. In addition, tools like qmHandle now fly through my mail queue and give me reports very quickly.\n","date":"14 March 2008","permalink":"/p/reduce-iowait-in-plesk-put-qmails-queue-on-a-ramdisk/","section":"Posts","summary":"I really dislike qmail.","title":"Reduce iowait in Plesk: put qmail’s queue on a ramdisk"},{"content":"One of my biggest Plesk gripes is dealing with the Plesk Professional Website Editor. One of the most common occurrences with PPWSE is that it hangs when you attempt to log into the server. Normally, this happens when a server is behind a firewall, and it is using private IP\u0026rsquo;s.\nPlesk will actually query the DNS for the domain (rather than simply connecting to the localhost), try to reach the public IP, and the traffic will be blocked by the firewall. This creates a login session that appears to hang, and then it shows \u0026ldquo;FTP: not connected\u0026rdquo; in the interface.\nThe fix is actually quite easy:\nBe sure that the ftp.domain.com CNAME/A record exists\nAdd a line to /etc/hosts that forces ftp.domain.com to resolve to the proper private IP address.\nThe third item should be to stop using PPWSE, but that\u0026rsquo;s the hardest one to work out. I\u0026rsquo;d recommend using something like TextMate on a Mac along with Transmit, but you can get some good results out of Dreamweaver as well. Whatever you do, don\u0026rsquo;t use Contribute.\n","date":"13 March 2008","permalink":"/p/plesk-professional-website-editor-hangs-at-login/","section":"Posts","summary":"One of my biggest Plesk gripes is dealing with the Plesk Professional Website Editor.","title":"Plesk Professional Website Editor hangs at login"},{"content":"Just in case some of you out there enjoy nomenclature and theory behind Linux filesystems, here\u0026rsquo;s some things to keep in mind. The modification time (mtime) of a file describes when the actual data blocks that hold the file changed. The changed time (ctime) of a file describes when the metadata was last changed.\nAlso, metadata is stored within a different location than the data blocks. The metadata fits in the inode while the file\u0026rsquo;s data goes within data blocks. 
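You can watch the split in action with stat, since a metadata-only change such as chmod bumps the ctime but leaves the mtime alone (the file name here is just an example):\n$ stat testfile\n$ chmod 600 testfile\n$ stat testfile\nCompare the Modify and Change lines between the two runs of stat and only Change will have moved.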
The inode information contains the owner, owner\u0026rsquo;s group, time related data (atime, ctime, mtime), and the mode (permissions).\nThe name of the file itself is actually stored within the file that makes up the directory. And, the directory is simply a file that masquerades as a directory once the filesystem is mounted and read.\n","date":"12 March 2008","permalink":"/p/what-is-the-difference-between-file-data-and-metadata/","section":"Posts","summary":"Just in case some of you out there enjoy nomenclature and theory behind Linux filesystems, here\u0026rsquo;s some things to keep in mind.","title":"What is the difference between file data and metadata?"},{"content":"A question I\u0026rsquo;m asked daily is “How can I find out what is generating iowait on my server?” Sure, you can dig through pages of lsof output, restart services, or run strace, but it can be a frustrating process. I saw a process on this blog post, and I changed the regexes to fit Red Hat and CentOS systems a bit better:\n# /etc/init.d/syslog stop # echo 1 \u0026gt; /proc/sys/vm/block_dump # dmesg | egrep \u0026#34;READ|WRITE|dirtied\u0026#34; | egrep -o \u0026#39;([a-zA-Z]*)\u0026#39; | sort | uniq -c | sort -rn | head 1526 mysqld 819 httpd 429 kjournald 35 qmail 27 in 7 imapd 6 irqbalance 5 pop 4 pdflush 3 spamc In my specific situation, it looks like MySQL is the biggest abuser of my disk, followed by Apache and the filesystem journaling. As expected, qmail is a large contender, too.\nDon\u0026rsquo;t forget to set things back to their normal state when you\u0026rsquo;re done!\n# echo 0 \u0026gt; /proc/sys/vm/block_dump # /etc/init.d/syslog start ","date":"11 March 2008","permalink":"/p/hunting-down-elusive-sources-of-iowait/","section":"Posts","summary":"A question I\u0026rsquo;m asked daily is “How can I find out what is generating iowait on my server?","title":"Hunting down elusive sources of iowait"},{"content":"","date":null,"permalink":"/tags/horde/","section":"Tags","summary":"","title":"Horde"},{"content":"I saw a ticket the other day where a customer received this error from Horde when trying to expand items on the left pane of the interface:\nFatal error: Cannot use string offset as an array in /www/horde/lib/Horde/Block/Layout/Manager.php on line 389\nIt turns out that Plesk 8.1.1 bundles Horde 3.1.3 which has an occasional bug within the interface. Upgrading to Plesk 8.2.0 corrects the issue as Horde 3.1.4 is installed with the upgrade.\nSee Horde\u0026rsquo;s bug page for more information.\n","date":"11 March 2008","permalink":"/p/strange-error-with-horde-313-and-plesk-811/","section":"Posts","summary":"I saw a ticket the other day where a customer received this error from Horde when trying to expand items on the left pane of the interface:","title":"Strange error with Horde 3.1.3 and Plesk 8.1.1"},{"content":"","date":null,"permalink":"/tags/ntp/","section":"Tags","summary":"","title":"Ntp"},{"content":"I recently came across a server that was throwing this error into its message log:\nntpd_initres[2619]: ntpd returns a permission denied error!\nIt would only appear about every five minutes on the server, and restarting ntpd didn\u0026rsquo;t correct the issue. I stopped ntpd entirely, but the error still appeared a few minutes later.\nAfter examining the running processes, I found that there was a lonely ntpd process that was running using a non-standard method. 
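A quick look at the process list is enough to spot a daemon that was not started from the init scripts:\n# ps auxww | grep ntp\n# pgrep -l ntpd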
I killed that process, started the default instance of ntpd using the init scripts, and the issue went away.\nIt turns out that ntpd daemon that was started manually was unable to access some of the required paths and sockets that is necessary for ntpd to run properly. These configuration items are set up in the init scripts, but they\u0026rsquo;re not included when ntpd is running manually.\nThis was tested on Red Hat Enterprise Linux 4.\n","date":"20 February 2008","permalink":"/p/ntpd_initres-ntpd-returns-a-permission-denied-error/","section":"Posts","summary":"I recently came across a server that was throwing this error into its message log:","title":"ntpd_initres: ntpd returns a permission denied error"},{"content":"If you see a large mail queue and your system\u0026rsquo;s I/O is increasing, you may find messages like these in your syslog:\nLosing q5/qfg9N5EwE3004499: savemail panic\u0026lt;br /\u0026gt; SYSERR(root): savemail: cannot save rejected email anywhere\nIn this situation, there\u0026rsquo;s some reason why sendmail cannot deliver e-mail to the postmaster address. There\u0026rsquo;s a few issues that can create this problem:\nMissing postmaster alias in /etc/aliases Hard disk is full The mail spool for the postmaster has the wrong ownership The mbox file for the postmaster is over 2GB and procmail can\u0026rsquo;t deliver the e-mail First, correct the situation that is preventing sendmail from delivering the e-mail to the postmaster user. Then, stop sendmail, clear the e-mail queue, and start sendmail again.\nI found this issue on a Red Hat Enterprise Linux 4 server and then found the solution on Brandon\u0026rsquo;s site.\n","date":"18 February 2008","permalink":"/p/sendmail-savemail-panic/","section":"Posts","summary":"If you see a large mail queue and your system\u0026rsquo;s I/O is increasing, you may find messages like these in your syslog:","title":"sendmail: savemail panic"},{"content":"One of my biggest complaints on RHEL 4 is the large resource usage by the version of SpamAssassin that is installed. When it runs, it uses a ton of CPU time and causes a lot of disk I/O as well. When running top, you may see multiple spamd processes. For a high-volume e-mail server (like the one I administer), this is simply unacceptable.\nI decided to do something about it, and here are the steps:\nFirst, you will need two RPMs:\nLatest SpamAssassin RPM from Dag\nThe psa-spamassassin RPM from SWSoft/Parallels.\nOnce you have them both on the server, install the new SpamAssassin package from Dag:\n# rpm -Uvh spamassassin-(version).el4.rf.(arch).rpm\nAt this point, Plesk\u0026rsquo;s spamassassin scripts will be non-functional, but the next step will fix it:\n# rpm -Uvh --force psa-spamassassin-(version).(arch).rpm\nNOTE: DO NOT REMOVE the psa-spamassassin RPM. This will begin stripping your system of all SpamAssassin configurations and it cannot be reversed!\nPlesk\u0026rsquo;s SpamAssassin scripts have been restored at this point in the process. Now, we need to do the part that really makes SpamAssassin work efficiently:\n# sa-update; sa-compile;\nThis will update the SpamAssassin rules, and it will compile the rules with re2c (you may also need to get this RPM from Dag). 
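If re2c is not already installed, grab it from the same repository before running sa-compile; the package name below follows the usual Dag naming pattern:\n# rpm -Uvh re2c-(version).el4.rf.(arch).rpm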
This compilation means less disk access, and less CPU time being used to process e-mails.\nTo activate the compiled rules within SpamAssassin, uncomment the plugin line in /etc/mail/spamassassin/v320.pre:\n# Rule2XSBody - speedup by compilation of ruleset to native code\u0026lt;br /\u0026gt; #\u0026lt;br /\u0026gt; loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody\nPlease bear in mind that this process is done at your own risk. This may cause issues getting support from SWSoft or your hosting company. This has been tested on Red Hat Enterprise Linux 4 64-bit with Plesk 8.1.1, 8.2.0, and 8.2.1 with SpamAssassin 3.2.3 and 3.2.4.\n","date":"31 January 2008","permalink":"/p/high-iowait-on-rhel-4-with-plesk-and-spamassassin/","section":"Posts","summary":"One of my biggest complaints on RHEL 4 is the large resource usage by the version of SpamAssassin that is installed.","title":"High iowait on RHEL 4 with Plesk and SpamAssassin"},{"content":"One of the questions I receive the most is: \u0026ldquo;What version of Plesk works with MySQL 5?\u0026rdquo; The minimum version of Plesk for MySQL 5 is 8.1.0. If you install MySQL 5 on a version prior to 8.1.0, you may be able to access then panel in the other 8.x versions, but your upgrades will fail miserably.\nIn case you\u0026rsquo;re curious about a slightly older system, full MySQL 4 support was available in Plesk 7.5.3. However, MySQL 4 is supported on some distributions as far back as 7.1:\nFedora Core 2\nMandrake 10\nSuSE 9.0\nFreeBSD 5.2.1\nCheck out SWSoft/Parallel\u0026rsquo;s site for more information about MySQL 4 and 5 support.\n","date":"30 January 2008","permalink":"/p/plesk-and-mysql-5/","section":"Posts","summary":"One of the questions I receive the most is: \u0026ldquo;What version of Plesk works with MySQL 5?","title":"Plesk and MySQL 5"},{"content":"By setting a certain bash environment variable, you can limit which commands are kept in the .bash_history file. The following options can be passed to the HISTCONTROL environmental variable:\nignorespace - omits commands beginning with a space\nignoredups - omits commands that match the previously run command\nignoreboth - combines ignorespace and ignoredups\nerasedups - removes previous lines that match the line that was just run\nTo set it, simply run the following from the command line, or add it to the .bashrc or a single user\u0026rsquo;s .bash_profile:\nexport HISTCONTROL=ignorespace\nIf no value is set, then all commands will be saved regardless of their content.\n","date":"29 January 2008","permalink":"/p/limiting-which-commands-are-kept-in-the-bash-history-file/","section":"Posts","summary":"By setting a certain bash environment variable, you can limit which commands are kept in the .","title":"Limiting which commands are kept in the bash history file"},{"content":"Installing new hardware may mean that new kernel need to be loaded when your server boots up. 
There\u0026rsquo;s a two step process to making a new initrd file:\nFirst, add the appropriate line to your /etc/modules.conf or /etc/modprobe.conf which corresponds to your new kernel module.\nNext, rebuild the initial ram disk after making a backup of the current one:\n# cp /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img.bak # mkinitrd -f initrd-`uname -r`.img `uname -r` Reboot the server now and make sure the new driver is loaded properly.\n","date":"28 January 2008","permalink":"/p/rebuilding-the-initial-ram-disk-initrd/","section":"Posts","summary":"Installing new hardware may mean that new kernel need to be loaded when your server boots up.","title":"Rebuilding the initial ram disk (initrd)"},{"content":"If you have a new Plesk installation and the following option is greyed out in Server -\u0026gt; Mail:\nSwitch on spam protection based on DNS blackhole lists\nJust install the following RPM from Plesk:\npsa-qmail-rblsmtpd\n","date":"25 January 2008","permalink":"/p/cant-enable-dnsblrbl-in-plesk-because-its-greyed-out/","section":"Posts","summary":"If you have a new Plesk installation and the following option is greyed out in Server -\u0026gt; Mail:","title":"Can’t enable DNSBL/RBL in Plesk because it’s greyed out"},{"content":"","date":null,"permalink":"/tags/spam/","section":"Tags","summary":"","title":"Spam"},{"content":"Using Linux kernel 3.12 or later? See this updated post instead.\nLast week, I found myself with a server under low load, but it couldn\u0026rsquo;t make or receive network connections. When I ran dmesg, I found the following line repeating over and over:\nip_conntrack: table full, dropping packet I\u0026rsquo;d seen this message before, but I headed over to Red Hat\u0026rsquo;s site for more details. It turns out that the server was running iptables, but it was under a very heavy load and also handling a high volume of network connections. Generally, the ip_conntrack_max is set to the total MB of RAM installed multiplied by 16. However, this server had 4GB of RAM, but ip_conntrack_max was set to 65536:\n# cat /proc/sys/net/ipv4/ip_conntrack_max 65536 I logged into another server with 1GB of RAM (RHES 5, 32-bit) and another with 2GB of RAM (RHES 4, 64-bit), and both had ip_conntrack_max set to 65536. I\u0026rsquo;m not sure if this is a known Red Hat issue, or if it\u0026rsquo;s just set to a standard value out of the box.\nIf you want to check your server\u0026rsquo;s current tracked connections, just run the following:\n# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count If you want to adjust it (as I did), just run the following as root:\n# echo 131072 \u003e /proc/sys/net/ipv4/ip_conntrack_max ","date":"24 January 2008","permalink":"/p/ip_conntrack-table-full-dropping-packet/","section":"Posts","summary":"Using Linux kernel 3.","title":"ip_conntrack: table full, dropping packet"},{"content":"I stumbled upon this peculiar bounce message recently while working on a server:\nHi. This is the qmail-send program at yourmailserver.com. I\u0026#39;m afraid I wasn\u0026#39;t able to deliver your message to the following addresses. This is a permanent error; I\u0026#39;ve given up. Sorry it didn\u0026#39;t work out. \u0026lt;user1@domain.com\u0026gt;: This message is looping: it already has my Delivered-To line. 
(#5.4.6) --- Below this line is a copy of the message.\u0026lt;/p\u0026gt; Return-Path: \u0026lt;remoteuser@otherdomain.com\u0026gt; Received: (qmail 14418 invoked by uid 110); 9 Jan 2008 13:04:33 -0600 Delivered-To: 54-user2@domain.com Received: (qmail 14411 invoked by uid 110); 9 Jan 2008 13:04:33 -0600 Delivered-To: 53-user1@domain.com Received: (qmail 14404 invoked from network); 9 Jan 2008 13:04:33 -0600 Received: from otherdomain.com (HELO otherdomain.com) (11.22.33.44) by yourmailserver.com with SMTP; 9 Jan 2008 13:04:33 -0600 Basically, this is qmail\u0026rsquo;s way of letting you know that your e-mails are stuck in a mail loop. One e-mail user is redirecting to another e-mail user, and that e-mail user is redirecting back to the first one. If q-mail already has a delivered to line which matches one that it already added, it bounces the e-mail and halts delivery.\n","date":"23 January 2008","permalink":"/p/qmail-this-message-is-looping-it-already-has-my-delivered-to-line/","section":"Posts","summary":"I stumbled upon this peculiar bounce message recently while working on a server:","title":"qmail: This message is looping: it already has my Delivered-To line"},{"content":"","date":null,"permalink":"/tags/dovecot/","section":"Tags","summary":"","title":"Dovecot"},{"content":"You may catch this error when you attempt to start dovecot on a Red Hat Enterprise Linux 5.1 system with the 64-bit architecture:\ndovecot: imap-login: imap-login: error while loading shared libraries: libsepol.so.1: failed to map segment from shared object: Cannot allocate memory dovecot: pop3-login: pop3-login: error while loading shared libraries: libsepol.so.1: failed to map segment from shared object: Cannot allocate memory If you start dovecot, the main dovecot daemon will run with one auth child process, but there will be no POP/IMAP processes started. To fix the issue, open the /etc/dovecot.conf and adjust the following directive:\nlogin_process_size = 64 Restart dovecot after making the change:\n# /etc/init.d/dovecot restart This was tested on RHEL 5.1 x86_64.\n","date":"22 January 2008","permalink":"/p/dovecot-libsepolso1-failed-to-map-segment-from-shared-object-cannot-allocate-memory/","section":"Posts","summary":"You may catch this error when you attempt to start dovecot on a Red Hat Enterprise Linux 5.","title":"Dovecot: libsepol.so.1: failed to map segment from shared object: Cannot allocate memory"},{"content":"If you\u0026rsquo;ve used newer versions of Horde with Plesk, you have probably noticed the news feed that runs down the left side of the screen. Depending on the types of e-mails you receive, you may get some pretty odd news popping up on the screen.\nLuckily, you can remove the news feeds pretty easily. Open the following file in your favorite text editor:\n/usr/share/psa-horde/templates/portal/sidebar.inc\nOnce the file is open, drop down to line 102 and comment out the entire if() statement (lines 102-117).\nNOTE: If you upgrade Plesk, this change will most likely be reversed.\n","date":"21 January 2008","permalink":"/p/removing-news-feeds-in-horde/","section":"Posts","summary":"If you\u0026rsquo;ve used newer versions of Horde with Plesk, you have probably noticed the news feed that runs down the left side of the screen.","title":"Removing news feeds in Horde"},{"content":"After a couple of weeks, my MySQL replication series has come to a close. 
Here\u0026rsquo;s links to all of the topics that I covered:\nPerformance\nRedundancy\nBackups and Data Integrity\nHorizontal Data Partitioning\nDelayed Slaves\nBreakdown\nReplication Across an External Network\nUpgrading the MySQL Server\nSlave Performance\nIf there\u0026rsquo;s any other questions that you have, please let me know and I\u0026rsquo;ll be happy to add some extra posts on your topic!\n","date":"15 January 2008","permalink":"/p/mysql-replication-wrap-up/","section":"Posts","summary":"After a couple of weeks, my MySQL replication series has come to a close.","title":"MySQL Replication: Wrap-up"},{"content":"There\u0026rsquo;s a few final configuration options that may help the performance of your slave MySQL servers. If you\u0026rsquo;re not using certain storage engines, like InnoDB or Berkeley, then by all means, remove them from your configuration. For those two specifically, just add the following to your my.cnf on the slave server:\nskip-innodb\u0026lt;br /\u0026gt; skip-bdb\nTo reduce disk I/O on big MyISAM write operations, you can delay the flushing of indexes to the disk:\ndelay_key_write = ALL\nYou can also make all of your write queries take a backseat to any reads:\nlow-priority-updates\nKeep in mind, however, that the last two options will increase slave performance, but it may cause them to lag behind the master. Depending on your application, this may not be acceptable.\n","date":"14 January 2008","permalink":"/p/mysql-replication-slave-performance/","section":"Posts","summary":"There\u0026rsquo;s a few final configuration options that may help the performance of your slave MySQL servers.","title":"MySQL Replication: Slave Performance"},{"content":"If you want to make a DBA nervous, just let them know that they need to upgrade MySQL servers that are replicating in a production environment. There\u0026rsquo;s multiple ways to get the job done, but here is the safest route:\nFirst, make sure you have dumped all of your databases properly. Verify that your backups are correct and intact, and that you have multiple copies of them.\nNext, upgrade the slave servers individually to the newest version. After upgrading the first one, make sure the slave server is operating properly. If it is working properly, then you can continue to upgrade the other slaves.\nOnce all of the slaves have been upgraded, then you can upgrade the master. If a busy web application is sending write queries to the master, you may want to put up a temporary page that tells visitors that maintenance is being performed. Once all of the writes clear out, stop the master and upgrade it.\nAfter the master starts up, be sure that the slaves reconnect, and you might want to perform a test write query. Verify that the write is performed on the slaves as it was done on the master.\n","date":"11 January 2008","permalink":"/p/mysql-replication-upgrading-the-mysql-server/","section":"Posts","summary":"If you want to make a DBA nervous, just let them know that they need to upgrade MySQL servers that are replicating in a production environment.","title":"MySQL Replication: Upgrading the MySQL server"},{"content":"While many people might find replicating over an external network to be an odd concept, it does have some uses. For example, if you need to replicate data for local access at certain locations, it may be helpful. Also, if you have a dedicated server, you can replicate to your home to run backups.\nFirst off, you\u0026rsquo;re going to need security for the connection. This is easily done with SSL. 
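If you do not have certificates on hand, a self-signed set is fine for this purpose. Here is a rough sketch that produces file names matching the configuration below:\n$ openssl genrsa -out ca-key.pem 2048\n$ openssl req -new -x509 -nodes -days 365 -key ca-key.pem -out cacert.pem\n$ openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server-req.pem\n$ openssl x509 -req -in server-req.pem -days 365 -CA cacert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem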
On the master, simply add the following lines to the [mysqld] section and restart the master:\nssl-ca=cacert.pem\u0026lt;br /\u0026gt; ssl-cert=server-cert.pem\u0026lt;br /\u0026gt; ssl-key=server-key.pem\nTo have the slaves use SSL connections to the master server, simply add on MASTER_SSL=1 to the CHANGE MASTER statement on the slave.\nAnother aspect to consider is bandwidth usage. This may be a priority if your remote areas have slow downlinks, or if you are charged for your bandwidth usage. You can compress the MySQL traffic very easily. Simply add the following to the MySQL configuration file in the [mysqld] section:\nslave_compressed_protocol = 1\nWith both of these changes, keep in mind that there is a significant CPU overhead required to compress and/or encrypt data. Determine carefully what your application requires and test your configuration thoroughly.\n","date":"10 January 2008","permalink":"/p/mysql-replication-across-an-external-network/","section":"Posts","summary":"While many people might find replicating over an external network to be an odd concept, it does have some uses.","title":"MySQL Replication: Across an external network"},{"content":"On some occasions, MySQL replication can break down if an statement comes from the master that makes no sense to the slave. For example, if an UPDATE statement arrives from the master server, but the table referenced by the UPDATE no longer exists, then the slave will halt replication and throw an error when SHOW SLAVE STATUS; is run.\nThe obvious question here is: how can the master and the slave have different data after replication has started? After all, you make a dump file prior to starting replication, so both servers contain the same information. Stray updates can be thrown into the mix from application errors or plain user errors. These kinds of things happen, even though we all try to avoid it.\nDon\u0026rsquo;t worry - this is almost always an easy fix. You have two main options:\nFix the problem yourself. If the master sent a query that the slave can\u0026rsquo;t run, fix it manually. For example, if the master wants to run an INSERT on a table that doesn\u0026rsquo;t exist, run a quick SHOW CREATE TABLE; on the master and create the table manually on the slave. When the table is there, run a START SLAVE; on the slave and you should be all set.\nSkip an unnecessary query. Let\u0026rsquo;s say that the master sent over a DROP TABLE query but the table doesn\u0026rsquo;t exist on the master. It\u0026rsquo;s safe to say that the master won\u0026rsquo;t be sending any write queries to that table in the future, so the query can be skipped. To skip it, run the following statement:\nmysql\u0026gt; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;\u0026lt;br /\u0026gt; mysql\u0026gt; START SLAVE;\nIn short, you\u0026rsquo;re telling MySQL to skip that unnecessary query and keep going with the ones after that. Of course, if you need to skip multiple queries, change the 1 to whatever number you need and then run START SLAVE;.\n","date":"9 January 2008","permalink":"/p/mysql-replication-breakdown/","section":"Posts","summary":"On some occasions, MySQL replication can break down if an statement comes from the master that makes no sense to the slave.","title":"MySQL Replication: Breakdown"},{"content":"In a perfect world, slaves will contain the same data as the master at all times. The events should be picked up and executed by the slaves in milliseconds. However, in real world scenarios, replication will be held up for different reasons. 
Whether it\u0026rsquo;s table locks, disk I/O, network saturation, or CPU usage, slaves might become several seconds, minutes or even hours behind the master.\nIn some situations, delays of less than 30 seconds may not be a big issue. Some applications, like social networking applications, would need to have the data match at all times. Lags would not be acceptable.\nFor example, review this scenario. Let\u0026rsquo;s say you go to a site and create an account. That would send a write query to the master. Once you\u0026rsquo;ve finished the account creation, the page will depend on a read query. If the slave is behind the master, it won\u0026rsquo;t have any data about your new account, and the application will probably tell you that you don\u0026rsquo;t have an account. That would be pretty annoying for your application\u0026rsquo;s users.\nTo check your current lag, simply run SHOW SLAVE STATUS; in MySQL, and review the number following Seconds_Behind_Master. If everything is running well, it should be followed by 0. If NULL is shown, then there is most likely an issue with replication, and you might want to check Last_Error.\nSo, how can replication lags be corrected? Try these methods:\nReview your queries. When queries keep running in MySQL, the slave may be unable to keep up. Make sure that your read queries are as optimized as possible so they complete quickly.\nOptimize your MySQL server variables. Be sure to thoroughly review your MySQL configuration for any bottlenecks.\nChoose the right storage engines. If you\u0026rsquo;re making a lot of updates to a table, consider using InnoDB. If your tables are not updated often, consider using MyISAM tables (or even compressed MyISAM tables).\nUpgrade your hardware. Find your hardware bottleneck. If it\u0026rsquo;s the CPU, consider upgrading to a multi-core CPU, or a CPU with a higher clock speed. For I/O bottlenecks, consider a RAID solution with SAS drives. If you\u0026rsquo;re lucky enough to have a network bottleneck (lucky since it means you\u0026rsquo;re doing well with CPU and I/O), use a dedicated switch or upgrade to gigabit (or faster) hardware.\n","date":"9 January 2008","permalink":"/p/mysql-replication-delayed-slaves/","section":"Posts","summary":"In a perfect world, slaves will contain the same data as the master at all times.","title":"MySQL Replication: Delayed Slaves"},{"content":"If you have a master with multiple slaves, you can get some performance and save money on hardware by splitting data horizontally among your servers. For example, if you have one high traffic database and two lower traffic databases, you can selectively split them among the slaves. With five slaves, set up three of the slaves to replicate your high traffic database, and the two other slaves can handle one each out of the two low traffic databases.\nThis allows you to expand when you\u0026rsquo;re ready, and you can move your databases around to take advantage of idle servers. 
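The selective part is handled with replicate-do-db on each slave. As a rough sketch, a slave that should carry only the high traffic database would have a line like this in the [mysqld] section of its my.cnf (the database name is made up):\nreplicate-do-db = hightraffic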
MySQL AB already has documentation on how to make this possible.\n","date":"7 January 2008","permalink":"/p/mysql-replication-horizontal-data-partitioning/","section":"Posts","summary":"If you have a master with multiple slaves, you can get some performance and save money on hardware by splitting data horizontally among your servers.","title":"MySQL Replication: Horizontal Data Partitioning"},{"content":"An often overlooked benefit of MySQL replication is the ability to make reliable backups without affecting the integrity of the MySQL data.\nWith one MySQL server, backups have a huge impact on the server. If file-based backups are performed, you have to stop MySQL completely while the files are copied (unless you purchase expensive utilities that accomplish this while MySQL is running). If dumps are made with mysqldump, table locking and I/O operations will crush the performance of the server.\nYou can get around these performance hits by running dumps in single transaction mode, or by restricting mysqldump to locking one table at a time. The performance gain comes at a price, however, as your backups are not a perfect snapshot. After one table is locked for a period of table, previously locked tables are actively changing and some tables might not match up.\nBy having a slave available, you can perform a snapshot backup and lock all of the tables during the process. This provides an exact point-in-time backup with a very low effect on your MySQL servers\u0026rsquo; performance.\n","date":"4 January 2008","permalink":"/p/mysql-replication-backups-data-integrity/","section":"Posts","summary":"An often overlooked benefit of MySQL replication is the ability to make reliable backups without affecting the integrity of the MySQL data.","title":"MySQL Replication: Backups \u0026 Data Integrity"},{"content":"Although performance is a much larger benefit of replication, it provides some redundancy for your application as well. Adding a slave server to a master allows you to perform read operations on either server, but you\u0026rsquo;re still bound to the master server for writes. In a group of multiple slaves with one master, you have your data available and online in multiple locations, which means that certain servers can fall out of replication without a large disaster.\nWhen disaster does occur, use the following recommendations as a guide.\nIf the master fails in a two-server replication environment, you will be dead in the water with regards to write queries. You will need to convert the slave into a master. This can be done relatively quickly by following these steps:\nLog into MySQL on the slave and run STOP SLAVE; RESET SLAVE; Add log-bin to the slave\u0026rsquo;s /etc/my.cnf file and restart MySQL The slave server will now be running as a master Adjust your application to send reads and writes to the slave Once the original master comes back online, set it up just like a new slave. You can skip some steps, such as setting the server-id, since that still should correspond to your overall configuration.\nIf the master fails in a multiple-server replication environment, you\u0026rsquo;re still in bad shape for writes. Follow the steps shown above, and then adjust the other slaves (with CHANGE MASTER) so that they pull events from the new master instead.\nIf a slave fails in any replication environment, adjust your application so that it no longer attempts to send reads to the failed slave. 
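A simple health check makes that call easy to automate; a script or load balancer can pull a slave out of the read pool as soon as a command like this one (hostname and credentials are placeholders) stops showing two Yes values and a low Seconds_Behind_Master:\n# mysql -h slave2 -u monitor -ppassword -e \u0026#34;SHOW SLAVE STATUS\\G\u0026#34; | egrep \u0026#34;Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master\u0026#34;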
While you work to bring the failed slave back online, your queries will be distributed to the remaining servers.\nYou can automate many of these operations by using applications like heartbeat, or by using load balancers to automatically route database traffic.\n","date":"4 January 2008","permalink":"/p/mysql-replication-redundancy/","section":"Posts","summary":"Although performance is a much larger benefit of replication, it provides some redundancy for your application as well.","title":"MySQL Replication: Redundancy"},{"content":"MySQL replication can increase performance by allowing developers to spread queries over two servers. Queries that write data must be sent to the master at all times, but queries that read data can be sent to either server. This means that by adding a slave server to a database environment allows you to effectively double your read query performance.\nHowever, there are some large caveats to consider here. The actual web site code itself will need to be written in such a way that read and write queries can be diverted to different destinations. Depending on the size of the application and how it has been developed, the work requires to provide this functionality may be prohibitive for replication.\nSome load balancers can balance MySQL query traffic, and this can help if the code cannot balance the load internally. Open source applications like MySQL Proxy and pound can be used as well.\nAlso, if the queries are not optimized, and the correct storage engines are not used, replication will not work well. If queries take an extended time to execute, the performance gains will be almost non existent. Also, if the wrong storage engines are used, and much of the rows or tables are locked, performance gains will be greatly limited. Some situations may actually cause replication to halt due to locking. When this occurs, the data on the slave becomes stale and SELECTs run against the master and slave will return different results.\nIn short:\nReplication can increase read performance It cannot fix issues caused by bad queries/storage engines Write queries can only be sent to the master ","date":"2 January 2008","permalink":"/p/mysql-replication-performance/","section":"Posts","summary":"MySQL replication can increase performance by allowing developers to spread queries over two servers.","title":"MySQL Replication: Performance"},{"content":"MySQL replication may sound complicated, but it can be done easily. 
Here\u0026rsquo;s a quick 7-step guide:\nCreate a replication user on the master: GRANT REPLICATION SLAVE ON *.* TO \u0026#39;repl\u0026#39;@\u0026#39;%\u0026#39; IDENTIFIED BY \u0026#39;password\u0026#39;; On the master server, add the following to the [mysqld] section in my.cnf and restart MySQL: server-id = 1 relay_log=mysqldrelay log-bin expire_logs_days = 7 On the slave server, add the following to the [mysqld] sesion in my.cnf and restart MySQL: server-id = 2 Create a mysqldump file on the master server which includes a global lock: databases.sql Configure the slave: # mysql -u user -ppassword mysql\u0026gt; CHANGE MASTER TO MASTER_HOST=\u0026#39;master host name\u0026#39;, MASTER_USER=\u0026#39;repl\u0026#39;, MASTER_PASSWORD=\u0026#39;repl\u0026#39;; Move the dump to the slave server and import it: mysql -u user -ppassword \u0026lt; databases.sql Start the slave: mysql -u user -ppassword mysql\u0026gt; START SLAVE; ","date":"31 December 2007","permalink":"/p/seven-step-mysql-replication/","section":"Posts","summary":"MySQL replication may sound complicated, but it can be done easily.","title":"Seven Step MySQL Replication"},{"content":"I stumbled upon a server running Plesk 8.2.1 where a certain user could not receive e-mail. I sent an e-mail to the user from my mail client, and I never saw it enter the user\u0026rsquo;s mailbox. It didn\u0026rsquo;t even appear in the logs.\nAfter checking the usual suspects, like MX records, mail account configuration, and firewalls, I was unable to find out why it was occurring. Even after a run of mchk, the emails would not be delivered.\nI began testing with a telnet connection to the SMTP port:\n$ telnet 11.22.33.44 25 Trying 11.22.33.44... Connected to 11.22.33.44. Escape character is \u0026#39;^]\u0026#39;. 220 www.yourserver.com ESMTP HELO domain.com 250 www.yourserver.com MAIL FROM: test@test.com 250 ok RCPT TO: someuser@somedomain.com 421 temporary envelope failure (#4.3.0) QUIT 221 www.yourserver.com Connection closed by foreign host. Temporary envelope failure? I was still confused. After reviewing the logs, I found the following line whenever I tried to telnet to port 25 and send an e-mail:\nDec 2 00:15:49 www relaylock: /var/qmail/bin/relaylock: mail from 44.33.22.11:17249 (yourdesktop.com) It turns out that the customer was using greylisting in qmail with qmail-envelope-scanner. After a quick check of /tmp/greylist_dbg.txt, I found the entries from me (as well as a lot of other senders), and that ended up being the root of the problem.\n","date":"4 December 2007","permalink":"/p/plesk-and-qmail-421-temporary-envelope-failure-430/","section":"Posts","summary":"I stumbled upon a server running Plesk 8.","title":"Plesk and qmail: 421 temporary envelope failure (#4.3.0)"},{"content":"After I was asked to create a stored procedure on a MySQL 5.0.45 installation last week, I received the following error:\nERROR 1146 at line 24: Table 'mysql.proc' doesn't exist\nThe server had the default MySQL 4.1.20 that comes with Red Hat Enterprise Linux 4, and it was upgraded to MySQL 5.0.45. 
After the upgrade, the mysql_upgrade script wasn\u0026rsquo;t run, so the privilege tables were wrong, and the special tables for procedures and triggers did not exist.\nTo fix the problem, I ran:\n# /usr/bin/mysql_upgrade\nAfter about 20 seconds, the script completed and I was able to add a stored procedure without a problem.\n","date":"29 November 2007","permalink":"/p/table-mysqlproc-doesnt-exist/","section":"Posts","summary":"After I was asked to create a stored procedure on a MySQL 5.","title":"Table ‘mysql.proc’ doesn’t exist"},{"content":"There\u0026rsquo;s a few issues with PHP 5.2.5 and the version of Horde that is bundled with Plesk 8.1.x and 8.2.x. The PHP include paths that appear in the Apache configuration generated by Plesk conflict with the PHP installation, and that causes the Horde webmail interface to segmentation fault.\nTo fix the problem, create a file called /etc/httpd/conf.d/zz050a_horde_php_workaround.conf and put the following inside it:\n\u0026lt;DirectoryMatch /usr/share/psa-horde\u0026gt; php_admin_value include_path \u0026#34;/usr/share/psa-horde/lib:/usr/share/psa-horde:/usr/share/psa-horde/pear:.\u0026#34; \u0026lt;/DirectoryMatch\u0026gt; Reload the Apache configuration and your Horde installation should work properly with PHP 5.2.5.\nCredit for this fix goes to Kevin M.\n","date":"28 November 2007","permalink":"/p/fixing-horde-problems-in-plesk-81x82x-with-php-525/","section":"Posts","summary":"There\u0026rsquo;s a few issues with PHP 5.","title":"Fixing Horde problems in Plesk 8.1.x/8.2.x with PHP 5.2.5"},{"content":"One of my biggest beefs with Plesk\u0026rsquo;s e-mail handling is the lack of server-side filtering. Plesk will only allow you to throw away e-mails marked as spam, but this won\u0026rsquo;t work for me since SpamAssassin marks some mails as spam that actually aren\u0026rsquo;t. If you set up filters in SquirrelMail or Horde, the filters will only work if you always log into the webmail interface to snag your e-mail.\nLuckily, you can do some fancy work with procmail to have the filtering done server-side.\nFirst, make sure procmail is installed on your server, and change to this directory:\n/var/qmail/mailnames/yourdomain.com/yourusername/\nInside that directory, drop in a .procmailrc file which contains the following:\nMAILDIR=/var/qmail/mailnames/yourdomain.com/yourusername/Maildir DEFAULT=${MAILDIR}/ SPAMDIR=${MAILDIR}/.Junk/ :0 * ^X-Spam-Status: Yes.* ${SPAMDIR} Once that file is in place, move the .qmail file out of the way, and replace it with this:\n| /usr/local/psa/bin/psa-spamc accept |preline /usr/bin/procmail -m -o .procmailrc Please be aware that these changes will disappear if you make any adjustments to your mail configuration within Plesk. To get around this annoyance, just change the file attributes to immutable:\n# chattr +i .qmail .procmailrc Credit for this trick goes to Russ Wittmann.\n","date":"27 November 2007","permalink":"/p/sort-e-mail-in-plesk-with-procmail/","section":"Posts","summary":"One of my biggest beefs with Plesk\u0026rsquo;s e-mail handling is the lack of server-side filtering.","title":"Sort e-mail in Plesk with procmail"},{"content":"If your system abruptly loses power, or if a RAID card is beginning to fail, you might see an ominous message like this within your logs:\nEXT3-fs error (device hda3) in start_transaction: Journal has aborted Basically, the system is telling you that it\u0026rsquo;s detected a filesystem/journal mismatch, and it can\u0026rsquo;t utilize the journal any longer. 
When this situation pops up, the filesystem gets mounted read-only almost immediately. To fix the situation, you can remount the partition as ext2 (if it isn\u0026rsquo;t your active root partition), or you can commence the repair operations.\nIf you\u0026rsquo;re working with an active root partition, you will need to boot into some rescue media and perform these operations there. If this error occurs with an additional partition besides the root partition, simply unmount the broken filesystem and proceed with these operations.\nRemove the journal from the filesystem (effectively turning it into ext2):\n# tune2fs -O ^has_journal /dev/hda3 Now, you will need to fsck it to correct any possible problems (throw in a -y flag to say yes to all repairs, -C for a progress bar):\n# e2fsck /dev/hda3 Once that\u0026rsquo;s finished, make a new journal which effectively makes the partition an ext3 filesystem again:\n# tune2fs -j /dev/hda3 You should be able to mount the partition as an ext3 partition at this time:\n# mount -t ext3 /dev/hda3 /mnt/fixed Be sure to check your dmesg output for any additional errors after you\u0026rsquo;re finished!\n","date":"20 November 2007","permalink":"/p/ext3-fs-error-device-hda3-in-start_transaction-journal-has-aborted/","section":"Posts","summary":"If your system abruptly loses power, or if a RAID card is beginning to fail, you might see an ominous message like this within your logs:","title":"EXT3-fs error (device hda3) in start_transaction: Journal has aborted"},{"content":"Apparently, a recent Red Hat Enterprise Linux update for ES3, 4 and 5 caused some Perl applications to throw errors like these:\nunable to call function somefunction on undefined value Of course, replace somefunction with your function of choice. To correct the issue, you can force CPAN to bring back a more sane version of Scalar::Util:\n# perl -MCPAN -e shell cpan\u0026gt; force install Scalar::Util ","date":"19 November 2007","permalink":"/p/red-hat-perl-issues-unable-to-call-function-somefunction-on-undefined-value/","section":"Posts","summary":"Apparently, a recent Red Hat Enterprise Linux update for ES3, 4 and 5 caused some Perl applications to throw errors like these:","title":"Red Hat Perl Issues: unable to call function somefunction on undefined value"},{"content":"","date":null,"permalink":"/tags/clamd/","section":"Tags","summary":"","title":"Clamd"},{"content":"A few days ago, I stumbled upon a server running qmail with qmail-scanner. The server was throwing out this error when a user on the server attempted to send an e-mail to someone else:\n451 qq temporary problem (#4.3.0)\nThe one thing I love about qmail is its extremely descriptive error messages. Did I say descriptive? I meant cryptic.\nLuckily, clamdscan was a bit more chatty in the general system logs:\nNov 12 10:21:17 server X-Antivirus-MYDOMAIN-1.25-st-qms: server.somehost.com119488087677512190] clamdscan: corrupt or unknown clamd scanner error or memory/resource/perms problem - exit status 512/2\nOkay, that helps a bit, but this one from /var/log/clamd.log was the big help:\nMon Nov 12 12:20:29 2007 -\u0026gt; ERROR: Socket file /tmp/clamd.socket exists. Either remove it, or configure a different one.\nI removed the /tmp/clamd.socket file and clamd began operating properly after a quick restart of the clamd service. 
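For the record, the whole fix boiled down to two commands (assuming the stock clamd init script):\n# rm -f /tmp/clamd.socket\n# /etc/init.d/clamd restart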
This one was pretty easy, but it was not well documented (as I discovered from a little while of Google searching).\n","date":"16 November 2007","permalink":"/p/clamdscan-corrupt-or-unknown-clamd-scanner-error-or-memoryresourceperms-problem/","section":"Posts","summary":"A few days ago, I stumbled upon a server running qmail with qmail-scanner.","title":"clamdscan: corrupt or unknown clamd scanner error or memory/resource/perms problem"},{"content":"By default, Red Hat Enterprise Linux 4 sets the default character set in Apache to UTF-8. Your specific web application may need for the character set to be set to a different value, and the change can be made fairly easily. Here\u0026rsquo;s an example where the character set is changed to ISO-8859-1:\nFirst, adjust the AddDefaultCharset directive in /etc/httpd/conf/httpd.conf:\n#AddDefaultCharset UTF-8\u0026lt;br /\u0026gt; AddDefaultCharset ISO-8859-1 Then, reload Apache and check your headers:\n# /etc/init.d/httpd reload\u0026lt;br /\u0026gt; # curl -I localhost\u0026lt;br /\u0026gt; HTTP/1.1 403 Forbidden\u0026lt;br /\u0026gt; Date: Thu, 08 Nov 2007 22:18:14 GMT\u0026lt;br /\u0026gt; Server: Apache/2.0.52 (Red Hat)\u0026lt;br /\u0026gt; Accept-Ranges: bytes\u0026lt;br /\u0026gt; Content-Length: 3985\u0026lt;br /\u0026gt; Connection: close\u0026lt;br /\u0026gt; Content-Type: text/html; charset=ISO-8859-1 This was tested on Red Hat Enterprise Linux 4 Update 5\n","date":"15 November 2007","permalink":"/p/change-the-default-apache-character-set/","section":"Posts","summary":"By default, Red Hat Enterprise Linux 4 sets the default character set in Apache to UTF-8.","title":"Change the default Apache character set"},{"content":"I found myself wrestling with a server where the Plesk interface suddenly became unavailable without any user intervention. An attempt to start the service was less than fruitful:\n[root@server ~]# service psa start Key file: /opt/drweb/drweb32.key - Key file not found! A path to a valid license key file does not specified. Plesk authorization failed: HTTP request error [7] Error: Plesk Software not running. [FAILED] (Although I included the text from the drweb failure, I later found that it was not related to the issue. However, since it might appear in your logs prior to the HTTP request error, I included it anyways.)\nThis was a perfectly working server that had no other issues besides this peculiar Plesk issue. Another technician had upgraded the license a few weeks prior, and it was verified at the the time to be working properly. 
After a bit of Google searching, I found that the solution was to completely stop Plesk and its related services and then start it all up again.\n[root@server ~]# service psa stopall /usr/local/psa/admin/bin/httpsdctl stop: httpd stopped Stopping Plesk: [ OK ] Stopping named: [ OK ] service psa startStopping MySQL: [ OK ] Stopping : Stopping Courier-IMAP server: Stopping imap [ OK ] Stopping imap-ssl [ OK ] Stopping pop3 [ OK ] Stopping pop3-ssl [ OK ] Stopping postgresql service: [ OK ] Shutting down psa-spamassassin service: [ OK ] Stopping httpd: [ OK ] [root@server ~]# service psa start Starting named: [ OK ] Starting MySQL: [ OK ] Starting qmail: [ OK ] Starting Courier-IMAP server: Starting imapd [ OK ] Starting imap-ssl [ OK ] Starting pop3 [ OK ] Starting pop3-ssl [ OK ] Starting postgresql service: [ OK ] Starting psa-spamassassin service: [ OK ] Processing config directory: /usr/local/psa/admin/conf/httpsd.*.include /usr/local/psa/admin/bin/httpsdctl start: httpd started Starting Plesk: [ OK ] Starting up drwebd: [ OK ] I couldn\u0026rsquo;t nail down anything within the Plesk log files that would explain the cause of the problem, but this solution corrected the issue instantly.\nThis issue occurred with Plesk 8.1.1 on Red Hat Enterprise Linux 4 Update 5\n","date":"14 November 2007","permalink":"/p/plesk-authorization-failed-http-request-error-7/","section":"Posts","summary":"I found myself wrestling with a server where the Plesk interface suddenly became unavailable without any user intervention.","title":"Plesk authorization failed: HTTP request error [7]"},{"content":"Create a strong CSR and private key\nopenssl req -new -nodes -newkey rsa:2048 -out server.crt -keyout server.key\nParsing out the data within a certificate\nopenssl asn1parse -in server.crt\nChecking a certificate/key modulus to see if they correspond\nopenssl rsa -in server.key -modulus -noout | openssl md5\u0026lt;br /\u0026gt; openssl x509 -in server.crt -modulus -noout | openssl md5\nConvert a key from PEM -\u0026gt; DER\nopenssl rsa -inform PEM -in key.pem -outform DER -out keyout.der\nConvert a key from DER -\u0026gt; PEM\nopenssl rsa -inform DER -in key.der -outform PEM -out keyout.pem\nRemove the password from an encrypted private key\nopenssl rsa -in server.key -out server-nopass.key\nReviewing a detailed SSL connection\nopenssl s_client -connect 10.0.0.1:443\n","date":"7 November 2007","permalink":"/p/openssl-tricks/","section":"Posts","summary":"Create a strong CSR and private key","title":"OpenSSL Tricks"},{"content":"I\u0026rsquo;ve struggled at times to get a decent-looking terminal on my desktop, and I believe I\u0026rsquo;ve found a good one. 
Toss this into your ~/.Xdefaults:\naterm*loginShell:true aterm*transparent:true aterm*shading:40 aterm*background:Black aterm*foreground:White aterm*scrollBar:true aterm*scrollBar_right:true aterm*transpscrollbar:true aterm*saveLines:32767 aterm*font:*-*-fixed-medium-r-normal--*-110-*-*-*-*-iso8859-1 aterm*boldFont:*-*-fixed-bold-r-normal--*-*-110-*-*-*-*-iso8859-1 Then load up the changes and start aterm:\n$ xrdb -load .Xdefaults $ aterm Of course, if you like rxvt better for your Unicode needs, just use this configuration:\nrxvt*loginShell:true rxvt*transparent:true rxvt*shading:40 rxvt*background:Black rxvt*foreground:White rxvt*scrollBar:true rxvt*scrollBar_right:true rxvt*transpscrollbar:true rxvt*saveLines:32767 rxvt*font:*-*-fixed-medium-r-normal--*-110-*-*-*-*-iso8859-1 rxvt*boldFont:*-*-fixed-bold-r-normal--*-*-110-*-*-*-*-iso8859-1 ","date":"4 November 2007","permalink":"/p/attractive-atermrxvt-xdefaults-configuration/","section":"Posts","summary":"I\u0026rsquo;ve struggled at times to get a decent-looking terminal on my desktop, and I believe I\u0026rsquo;ve found a good one.","title":"Attractive aterm/rxvt .Xdefaults configuration"},{"content":"Here\u0026rsquo;s a pretty weird kernel panic that I came across the other day:\nEnforcing mode requested but no policy loaded. Halting now. Kernel panic - not syncing: Attempted to kill init! This usually means that you\u0026rsquo;ve set SELINUX in enforcing mode within /etc/sysconfig/selinux or /etc/selinux/selinux.conf but you don\u0026rsquo;t have the appropriate SELINUX packages installed. To fix the issue, boot the server into the Red Hat rescue environment and disable SELINUX until you can install the proper packages that contain the SELINUX targeted configuration.\nThis kernel panic appeared on a Red Hat Enterprise Linux 4 Update 5 server.\n","date":"17 October 2007","permalink":"/p/enforcing-mode-requested-but-no-policy-loaded-halting-now/","section":"Posts","summary":"Here\u0026rsquo;s a pretty weird kernel panic that I came across the other day:","title":"Enforcing mode requested but no policy loaded. Halting now."},{"content":"","date":null,"permalink":"/tags/kernel-panics/","section":"Tags","summary":"","title":"Kernel Panics"},{"content":"A few days ago, I began to install a group of packages with up2date, and the person next to me was surprised that up2date even had this functionality. I use it regularly, but I realized that many users might not be familiar with it.\nYou can install package groups using an at-sign (@) in front of the group name:\n# up2date -i \u0026#34;@X Window System\u0026#34; This will tell up2date to install all of the packages that are marked within the \u0026ldquo;X Window System\u0026rdquo; package group. That would include X drivers, the X libraries/binaries, and twm (among many other packages). 
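The same syntax works for any other group in the listing below; for example, pulling in the development toolchain looks like this:\n# up2date -i \u0026#34;@Development Tools\u0026#34;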
If you\u0026rsquo;re not sure which groups are available, just pass the --show-groups flag and review the list:\n# up2date --show-groups Administration Tools Arabic Support Assamese Support Authoring and Publishing Base Bengali Support Brazilian Portuguese Support British Support Bulgarian Support Catalan Support Chinese Support Compatibility Arch Development Support Compatibility Arch Support Core Cyrillic Support Czech Support DNS Name Server Danish Support Development Libraries Development Tools Dialup Networking Support Dutch Support Editors Emacs Engineering and Scientific Estonian Support FTP Server Finnish Support French Support GNOME GNOME Desktop Environment GNOME Software Development Games and Entertainment German Support Graphical Internet Graphics Greek Support Gujarati Support Hebrew Support Hindi Support Hungarian Support ISO8859-2 Support ISO8859-9 Support Icelandic Support Italian Support Japanese Support KDE KDE (K Desktop Environment) KDE Software Development Korean Support Legacy Network Server Legacy Software Development Mail Server Miscellaneous Included Packages MySQL Database Network Servers News Server Norwegian Support Office/Productivity Polish Support Portuguese Support PostgreSQL Database Printing Support Punjabi Support Romanian Support Ruby Russian Support Serbian Support Server Server Configuration Tools Slovak Support Slovenian Support Sound and Video Spanish Support Swedish Support System Tools Tamil Support Text-based Internet Turkish Support Ukrainian Support Web Server Welsh Support Windows File Server Workstation Common X Software Development X Window System XEmacs ","date":"17 October 2007","permalink":"/p/installing-package-groups-with-up2date/","section":"Posts","summary":"A few days ago, I began to install a group of packages with up2date, and the person next to me was surprised that up2date even had this functionality.","title":"Installing package groups with up2date"},{"content":"If you\u0026rsquo;re using Plesk 8.0 or later, you can set up Dr. Web to be enabled for all new mail accounts. To do this, you have to create an event handler.\nHere\u0026rsquo;s the steps you will need:\n» Log into Plesk\n» Click \u0026ldquo;Server\u0026rdquo;\n» Click \u0026ldquo;Event Manager\u0026rdquo;\n» Choose \u0026ldquo;Mail Name Created\u0026rdquo; next to \u0026ldquo;Event\u0026rdquo;\n» In the command area, enter /usr/local/psa/bin/mail.sh --update $NEW_MAILNAME -antivirus inout\n» Click \u0026ldquo;OK\u0026rdquo;\n","date":"12 October 2007","permalink":"/p/enabling-dr-web-virus-scanning-for-new-accounts/","section":"Posts","summary":"If you\u0026rsquo;re using Plesk 8.","title":"Enabling Dr. Web virus scanning for new accounts"},{"content":"When you dump table data from MySQL, you may end up pulling a large chunk of data and it may exceed the MySQL client\u0026rsquo;s max_allowed_packet variable. 
If that happens, you might catch an error like this:\nmysqldump: Error 2020: Got packet bigger than \u0026#39;max_allowed_packet\u0026#39; bytes when dumping table `tablename` at row: 1627 The default max_allowed_packet size is 25M, and you can adjust it for good within your my.cnf by setting the variable in a section for mysqldump:\n[mysqldump] max_allowed_packet = 500M ","date":"12 October 2007","permalink":"/p/mysqldump-got-packet-bigger-than-max_allowed_packet-bytes/","section":"Posts","summary":"When you dump table data from MySQL, you may end up pulling a large chunk of data and it may exceed the MySQL client\u0026rsquo;s max_allowed_packet variable.","title":"mysqldump: Got packet bigger than ‘max_allowed_packet’ bytes"},{"content":"","date":null,"permalink":"/tags/bind/","section":"Tags","summary":"","title":"Bind"},{"content":"I was recently working on a server where a user on the server was concerned with these log messages:\nOct 7 20:59:33 web named[13698]: client 111.222.333.444#50389: updating zone \u0026#39;domain.com/IN\u0026#39;: update failed: \u0026#39;RRset exists (value dependent)\u0026#39; prerequisite not satisfied (NXRRSET) Oct 7 20:59:34 web named[13698]: client 111.222.333.444#50392: update \u0026#39;domain.com/IN\u0026#39; denied Oct 7 21:59:35 web named[13698]: client 111.222.333.444#50422: updating zone \u0026#39;domain.com/IN\u0026#39;: update failed: \u0026#39;RRset exists (value dependent)\u0026#39; prerequisite not satisfied (NXRRSET) Oct 7 21:59:35 web named[13698]: client 111.222.333.444#50425: update \u0026#39;domain.com/IN\u0026#39; denied Oct 7 22:59:20 web named[13698]: client 111.222.333.444#50458: updating zone \u0026#39;domain.com/IN\u0026#39;: update failed: \u0026#39;RRset exists (value dependent)\u0026#39; prerequisite not satisfied (NXRRSET) The messages here are actually showing that named is doing its job well. Some user was attempting to dynamically update a DNS zone repeatedly, but named was rejecting the updates since they were not coming from a valid sources.\nFurther reading:\nZytrax.com: DNS BIND Zone Transfers and Updates\nInternet Systems Consortium: Dynamic Updates\n","date":"10 October 2007","permalink":"/p/bind-rrset-exists-value-dependent-prerequisite-not-satisfied-nxrrset/","section":"Posts","summary":"I was recently working on a server where a user on the server was concerned with these log messages:","title":"BIND: ‘RRset exists (value dependent)’ prerequisite not satisfied (NXRRSET)"},{"content":"In some situations with dovecot running on your server, you may receive a message from your e-mail client stating that the \u0026ldquo;connection was interrupted with your mail server\u0026rdquo; or the \u0026ldquo;login process failed\u0026rdquo;. This may happen even if you\u0026rsquo;ve created the e-mail account, created the mail spool, and set a password for the user.\nIf you check your /var/log/maillog, you will generally find errors like these:\nOct 7 09:37:45 mailserver pop3-login: Login: newuser [111.222.333.444]\u0026lt;br /\u0026gt; Oct 7 09:37:45 mailserver pop3(newuser): mbox: Can\u0026#39;t create root IMAP folder /home/newuser/mail: Permission denied\u0026lt;br /\u0026gt; Oct 7 09:37:45 mailserver pop3(newuser): Failed to create storage with data: mbox:/var/spool/mail/newuser Dovecot is telling you that it wants to store some mail-related data in the user\u0026rsquo;s home directory, but it can\u0026rsquo;t get access to the user\u0026rsquo;s home directory. 
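A quick look at what is actually on disk usually narrows it down:\n# ls -ld /home/newuser\nIf the directory exists but has the wrong owner or overly tight permissions, dovecot fails in exactly this way.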
If the home directory doesn\u0026rsquo;t exist, create it and set the permissions properly:\n# mkdir /home/newuser\u0026lt;br /\u0026gt; # chown newuser:newuser /home/newuser\u0026lt;br /\u0026gt; # chmod 755 /home/newuser If the directory is already there, double check the ownership and permissions on the directory. If filesystem acl\u0026rsquo;s or filesystem quotas might be in play, be sure to check those as well.\n","date":"10 October 2007","permalink":"/p/dovecot-mbox-cant-create-root-imap-folder/","section":"Posts","summary":"In some situations with dovecot running on your server, you may receive a message from your e-mail client stating that the \u0026ldquo;connection was interrupted with your mail server\u0026rdquo; or the \u0026ldquo;login process failed\u0026rdquo;.","title":"Dovecot: mbox: Can’t create root IMAP folder"},{"content":"On brand new Plesk 8.2.x installations or on servers that have been upgraded to Plesk 8.2.x, you might run into this error when you attempt to log into squirrelmail after it was installed via RPM:\nError opening /var/lib/squirrelmail/prefs/default_pref Could not create initial preference file! /var/lib/squirrelmail/prefs/ should be writable by user apache Please contact your system administrator and report this error. No matter what you do to the /var/lib/squirrelmail/prefs/default_pref file, even if you chmod 777 the file, you will still get the error. If you check the /etc/php.ini, you will normally find safe_mode set to on.\n; ; Safe Mode ; safe_mode = Off Simply change safe_mode to off and reload Apache. If you try to log into squirrelmail again, it should complete successfully. I\u0026rsquo;ve tested this on Red Hat Enterprise Linux 4:\n# rpm -q squirrelmail squirrelmail-1.4.8-4.0.1.el4 ","date":"9 October 2007","permalink":"/p/plesk-error-opening-varlibsquirrelmailprefsdefault_pref/","section":"Posts","summary":"On brand new Plesk 8.","title":"Plesk: Error opening /var/lib/squirrelmail/prefs/default_pref"},{"content":"I\u0026rsquo;ve seen quite a few situations where the Horde login process can take upwards of 45 minutes to log a user into the webmail interface. There\u0026rsquo;s a few issues that can cause these extended delays, and most of them can be fixed rather easily:\nToo many filters / Giant whitelists and blacklists\nThis is the biggest cause that I\u0026rsquo;ve seen. Some users will create gigantic white and black lists (upwards of 5,000 is my record that I\u0026rsquo;ve seen) and this makes Horde compare each and every message in the inbox against these lists upon login. This also applies to filters as Plesk does not use sieve/procmail for mail delivery. Horde is forced to do all of the filtering upon login (in some versions) and this can cause extreme delays.\nMailbox is gigantic\nI\u0026rsquo;ve seen Horde logins take quite a while in mailboxes that are over 500MB in size. If the size of your e-mails is large, and you have a large mailbox with fewer e-mails, Horde can normally work quickly. But, if your inbox is full of tiny e-mails, Horde takes a long time to fully index your mail and display the list (even though it only displays 25-30 at a time).\nToo many users logged into Horde simultaneously\nIn my opinion, Horde\u0026rsquo;s CPU and memory requirements are too large for a webmail application. I\u0026rsquo;ve seen 30-40 simultaneous Horde sessions bring a dual-core box with 2-4GB of RAM and SCSI disks to its knees. 
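To get a rough idea of how much webmail load a box is under, you can count the established IMAP connections, since Horde talks to the local IMAP service (this is only an approximation; regular mail clients are counted too):\n# netstat -plan | grep \u0026#39;:143 \u0026#39; | grep -c ESTABLISHED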
Consider installing squirrelmail or roundcube webmail for some of your users and urge them to use it instead.\nIOwait caused by something else\nSometimes the server can simply be bogged down with other requests from other daemons, and this slows Horde down. Make sure that your MySQL installation is tuned properly, and that users are not abusing scripts running through Apache.\n","date":"5 October 2007","permalink":"/p/slow-horde-login-process-with-plesk/","section":"Posts","summary":"I\u0026rsquo;ve seen quite a few situations where the Horde login process can take upwards of 45 minutes to log a user into the webmail interface.","title":"Slow Horde login process with Plesk"},{"content":"One of the most annoying (and explosive) changes in Plesk 8.2 is the automatic addition of up2date sources for its use. As of 8.2.0, the packages are not signed, and they generate errors with up2date. Also, Plesk often keeps adding the sources over and over to /etc/sysconfig/rhn/sources, and this causes additional errors and delays when you use up2date.\nYou can disable this behavior entirely by running the following:\n# echo ALLOW_TO_USE_UP2DATE=no \u0026gt; /root/.autoinstallerrc\nThis will instruct Plesk\u0026rsquo;s autoinstaller to not add any sources to the up2date sources list.\n","date":"4 October 2007","permalink":"/p/preventing-plesk-82x-from-adding-up2date-sources/","section":"Posts","summary":"One of the most annoying (and explosive) changes in Plesk 8.","title":"Preventing Plesk 8.2.x from adding up2date sources"},{"content":"If you want to convert a MyISAM table to InnoDB, the process is fairly easy, but you can do something extra to speed things up. Before converting the table, adjust its order so that the primary key column is in order (substitute your own table and primary key column names in these statements):\nALTER TABLE tablename ORDER BY id;\nThis will pre-arrange the table so that it can be converted quickly without a lot of re-arranging required in MySQL. Then, simply change the table engine:\nALTER TABLE tablename ENGINE = InnoDB;\nIf your table is large, then it may take a while to convert it over. There will probably be a fair amount of CPU usage and disk I/O in the process.\nThese statements are also safe in replicated environments. When you issue this statement to the master, it will begin the conversion process. Once it is complete on the master, the statement will roll down to the slaves, and they will begin the conversion as well. Keep in mind, however, that this can greatly reduce the performance of your configuration in the process.\nSpecial thanks to Matthew Montgomery for the ORDER BY recommendation.\n","date":"4 October 2007","permalink":"/p/convert-myisam-tables-to-innodb/","section":"Posts","summary":"If you want to convert a MyISAM table to InnoDB, the process is fairly easy, but you can do something extra to speed things up.","title":"Convert MyISAM tables to InnoDB"},{"content":"Yet another weird Plesk error with terrible grammar popped up on a server that I worked with this week:\nError: There is incorrect combination of resource records in the zone\nAs you can see, this error is not terribly informative. Here\u0026rsquo;s a little background on what I was doing before this alert appeared:\nOn Plesk 8.1.1, I needed to create an alias for a certain domain. Each time I\u0026rsquo;d try to create the alias, I\u0026rsquo;d receive the above error. I could even try junk domains like \u0026rsquo;test.com\u0026rsquo; and it would still fail with the error. I went to a different domain on the server, tried to add an alias there, and it failed as well. 
So, I went back to analyze the error further.\nThe only thing that tipped me off was the zone word, and I immediately began thinking of DNS. I checked the DNS configuration for a few of the domains, and they appeared to be pretty standard. There weren\u0026rsquo;t any wild DNS records, and there were no problems with the named configuration nor the zone files themselves. I crawled through the dns_recs table in the psa database, and everything appears to be normal.\nI admitted defeat and escalated the issue to SWSoft to get their help. The answer came back, and I was dumbfounded.\nApparently this record was present in the DNS configuration for all of the sites on the server:\nmail.domain.com. CNAME domain.com.\nThis DNS record prevented Plesk from making an alias. Just this DNS record. In short, Plesk was unable to make the alias because of this lonely CNAME. The SWSoft developers claimed that it is an \u0026lsquo;old-style\u0026rsquo; notation and that it \u0026lsquo;should not be used\u0026rsquo;. However, during upgrades from 7.x to 8.x, they never thought it\u0026rsquo;d be a good idea to check for this record and fix it accordingly.\nBasically, the SWSoft developers recommended changing the DNS record manually for each domain to something like this:\nmail.domain.com. A 111.222.333.444\nI did that, and it worked flawlessly. Even though this fixes the issue, I still think that they should have considered this issue during the upgrade routines.\n","date":"3 October 2007","permalink":"/p/plesk-there-is-incorrect-combination-of-resource-records-in-the-zone/","section":"Posts","summary":"Yet another weird Plesk error with terrible grammar popped up on a server that I worked with this week:","title":"Plesk: There is incorrect combination of resource records in the zone"},{"content":"Normally, this error will pop up when you attempt to restart a Plesk-related service, like httpsd, psa-spamassassin or qmail:\nError: HTTPD_INCLUDE_D not defined This basically means that Plesk is unable to get some required configuration directives from the /etc/psa/psa.conf file. If you can\u0026rsquo;t find the directive in the file that Plesk is complaining about, check your Plesk RPM\u0026rsquo;s with rpm:\n# rpm -q psa Most likely, you will find that there is a psa-7.5.4 RPM and a psa-8.1.0 or psa-8.1.1 RPM installed simultaneously. This generally appears because of a botched upgrade that was started within Plesk by the admin user.\nTo fix the issue, get the psa-7.5.4 RPM from autoinstall.plesk.com. Remove the psa-8.1.1 RPM and install the psa-7.5.4 RPM again rather forcefully:\n# rpm -ev psa-8.1.1... # rpm -Uvh --force --nodeps psa-7.5.4... # /etc/init.d/psa restart At this point, you can download the command line autoinstaller and try the Plesk upgrade again.\nFurther reading: http://forum.swsoft.com/showthread.php?threadid=32299\n","date":"2 October 2007","permalink":"/p/plesk-754-error-httpd_include_d-not-defined/","section":"Posts","summary":"Normally, this error will pop up when you attempt to restart a Plesk-related service, like httpsd, psa-spamassassin or qmail:","title":"Plesk 7.5.4: Error: HTTPD_INCLUDE_D not defined"},{"content":"Some users will want to parse HTML through the PHP parser because one of their applications requires it, or because they think it\u0026rsquo;s a good idea. 
Parsing regular static content through PHP is not recommended as it will cause a performance hit on the server each time a static page is loaded.\nUnfortunately, enabling this in conjunction with Plesk will cause problems with the Plesk web statistics. Since the PHP parsing is disabled for the /plesk-stat/ directories, Apache will mark the page as a PHP page and your browser will attempt to download it rather than display it.\nTo fix this issue, simply add the following LocationMatch to the bottom of your Apache configuration:\nAddType application/x-httpd-php .php .html \u0026lt;LocationMatch \u0026#34;/plesk-stat/(.*)\u0026#34;\u0026gt; AddType text/html .html \u0026lt;/LocationMatch\u0026gt; This will force Apache to serve HTML files from /plesk-stat/ as text/html rather than application/x-http-php. Your web statistics will display in the browser rather than downloading as a PHP file.\n","date":"28 September 2007","permalink":"/p/parsing-html-through-php-in-plesk/","section":"Posts","summary":"Some users will want to parse HTML through the PHP parser because one of their applications requires it, or because they think it\u0026rsquo;s a good idea.","title":"Parsing HTML through PHP in Plesk"},{"content":"Since AOL sends their users\u0026rsquo; traffic through proxy servers, this can cause problems with Horde\u0026rsquo;s session handling in Plesk. The problem arises when the user\u0026rsquo;s IP changes during the middle of the session.\nYou may see an error message in Horde that looks like this:\nYour Internet Address has changed since the beginning of your Mail session. To protect your security, you must login again.\nYou\u0026rsquo;ll normally have this variable in /etc/psa-horde/horde/conf.php:\n# $conf['auth']['checkip'] = true;\nYou can disable this ip check functionality which breaks sessions for AOL users by setting it to false:\n# $conf['auth']['checkip'] = false;\n","date":"28 September 2007","permalink":"/p/session-problems-with-horde-in-plesk-with-aol/","section":"Posts","summary":"Since AOL sends their users\u0026rsquo; traffic through proxy servers, this can cause problems with Horde\u0026rsquo;s session handling in Plesk.","title":"Session problems with Horde in Plesk with AOL"},{"content":"In the event that your system is running out of file descriptors, or you simply want to know what your users are doing, you can review their count of open files by running this command:\nlsof | grep ' root ' | awk '{print $NF}' | sort | wc -l\nOf course, if you want to drop the count and show the actual processes, you can run:\nlsof | grep ' root '\n","date":"26 September 2007","permalink":"/p/counting-open-files-per-user/","section":"Posts","summary":"In the event that your system is running out of file descriptors, or you simply want to know what your users are doing, you can review their count of open files by running this command:","title":"Counting open files per user"},{"content":"If you want to adjust how long postfix will hold a piece of undeliverable mail in its queue, just adjust bounce_queue_lifetime. This variable is normally set to five days by default, but you can adjust it to any amount that you wish. 
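For example, to hold undeliverable mail for two days instead of five, postconf can make the change and a reload will pick it up:\n# postconf -e \u0026#39;bounce_queue_lifetime = 2d\u0026#39;\n# postfix reload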
You can set the value to zero, but that will cause e-mails that cannot be immediately sent to be rejected to their senders.\nPostfix Configuration Parameters: bounce_queue_lifetime\n","date":"18 September 2007","permalink":"/p/adjusting-postfix-queue-time-lifetime/","section":"Posts","summary":"If you want to adjust how long postfix will hold a piece of undeliverable mail in its queue, just adjust bounce_queue_lifetime.","title":"Adjusting postfix queue time / lifetime"},{"content":"With RHEL 5 ditching up2date for yum, many Red Hat users might find themselves confused with the new command line flags. Red Hat has published a document detailing the new changes and their old counterparts.\nRed Hat Knowledgebase: What are the yum equivalents of former up2date common tasks?\n","date":"17 September 2007","permalink":"/p/yum-equivalents-of-up2date-arguments/","section":"Posts","summary":"With RHEL 5 ditching up2date for yum, many Red Hat users might find themselves confused with the new command line flags.","title":"Yum equivalents of up2date arguments"},{"content":"If you have SpamAssassin installed, but you want to make sure that it is marking or filtering your e-mails, simply send an e-mail which contains the special line provided here:\nhttp://spamassassin.apache.org/gtube/gtube.txt\nSpamAssassin will always mark e-mails that contain this special line as spam:\nXJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X\n","date":"15 September 2007","permalink":"/p/testing-spamassassin-with-gtube/","section":"Posts","summary":"If you have SpamAssassin installed, but you want to make sure that it is marking or filtering your e-mails, simply send an e-mail which contains the special line provided here:","title":"Testing SpamAssassin with GTUBE"},{"content":"When you create a CSR and private key to obtain an SSL certificate, the private key has some internal data called a modulus. This is integral to the security of your SSL encryption, but for this specific post, we will focus on one specific aspect.\nIf your private key and certificate do not contain the same modulus, then Apache will sometimes refuse to start or it may not respond properly to SSL requests. You can check the modulus of your private key and SSL certificate with these commands:\n# openssl rsa -noout -modulus -in server.key | openssl md5 # openssl x509 -noout -modulus -in server.crt | openssl md5 If the MD5 checksums match, then the certificate and key will work together. However, if they are different, then you cannot use them together. Generally, this means that you used the wrong CSR (that corresponded to some other private key) when you obtained/created your SSL certificate.\n","date":"14 September 2007","permalink":"/p/check-the-modulus-of-an-ssl-certificate-and-key-with-openssl/","section":"Posts","summary":"When you create a CSR and private key to obtain an SSL certificate, the private key has some internal data called a modulus.","title":"Check the modulus of an SSL certificate and key with openssl"},{"content":"By default, Red Hat Enterprise Linux 2.1 comes with UW-IMAP which runs from xinetd. This is fine for most users, but when mailbox sizes creep upwards of 500MB, you may notice odd performance degradations and undelivered mail.\nThis is because UW-IMAP only supports mbox files in RHEL 2.1. This means your e-mail ends up in one big file which has each e-mail listed one after another. 
This is a simple way to handle mail, but it scales in a horrible fashion.\nDaniel Bernstein, the creator of qmail, created maildir, and (as much as I hate anything relating to qmail) it\u0026rsquo;s the best method for storing mail that I\u0026rsquo;ve seen so far.\nMbox files are slower because the entire file must be scanned when the POP or IMAP daemon receive a request for an e-mail held within it. That means that the daemon must scan through all of the e-mails until the one that it wants is found. If sendmail wants to drop off e-mail for the user, it has to wait since the mail spool is locked. If it can\u0026rsquo;t deliver the e-mail, it may bounce it after a period of time.\nThis is especially awful if a user receives a fair amount of e-mail and checks their e-mail from a mobile device. This means that their computer and the mobile device are making the mail daemons scan the mbox file repeatedly when they check in. It causes sendmail to back up, disk I/O skyrockets, and the server performance as a whole can suffer.\nThe solution is to move to a newer version of RHEL, hopefully RHEL 4 or 5 where Postfix and maildir support are available. The only fix on RHEL 2.1 is to ask the user to clear out their mailbox to reduce the amount of disk I/O required to pick up e-mail.\n","date":"12 September 2007","permalink":"/p/slow-imap-and-pop3-performance-with-large-mailboxes-on-rhel-21/","section":"Posts","summary":"By default, Red Hat Enterprise Linux 2.","title":"Slow IMAP and POP3 performance with large mailboxes on RHEL 2.1"},{"content":"When you find yourself in a pinch, and you don\u0026rsquo;t know the limits of a certain Red Hat Enterprise Linux version, you can find this information in one place. Whether you want to know RHEL\u0026rsquo;s CPU or memory limitations, you can find them here:\nhttp://www.redhat.com/rhel/compare/\n","date":"12 September 2007","permalink":"/p/rhel-limitations-cheat-sheet/","section":"Posts","summary":"When you find yourself in a pinch, and you don\u0026rsquo;t know the limits of a certain Red Hat Enterprise Linux version, you can find this information in one place.","title":"RHEL limitations cheat sheet"},{"content":"So, this is not really related to the normal system administration topics discussed here, but it\u0026rsquo;s Sunday, so I feel like something different.\nI downloaded the new Growl 1.1 tonight and I wanted to install GrowlMail to get mail notifications from Apple Mail. I went through the package installer, started Mail, and nothing happened. The preference pane didn\u0026rsquo;t exist either. After doing a bit of forum digging, I found these two commands to run in the terminal:\ndefaults write com.apple.mail EnableBundles 1 defaults write com.apple.mail BundleCompatibilityVersion 2 It worked like a charm and I was all set. If you haven\u0026rsquo;t tried it out yet, download the new Growl 1.1 and install it. 
There\u0026rsquo;s a ton of new features, and it\u0026rsquo;s been worth the wait.\n","date":"10 September 2007","permalink":"/p/getting-growlmail-working-with-apple-mail-in-growl-11/","section":"Posts","summary":"So, this is not really related to the normal system administration topics discussed here, but it\u0026rsquo;s Sunday, so I feel like something different.","title":"Getting GrowlMail working with Apple Mail in Growl 1.1"},{"content":"We all enjoy having the GoogleBot and other search engine robots index our sites as it brings us higher on search engines, but it\u0026rsquo;s annoying when some user scrapes your site for their own benefit. This is especially bad on forum sites as they\u0026rsquo;re always a target, and it can severely impact server performance.\nTo hunt down these connections when the spidering is happening, simply run this command:\nnetstat -plan | grep :80 | awk '{print $5}' | sed 's/:.*$//' | sort | uniq -c | sort -rn\nThe IP\u0026rsquo;s that are making the most connections will appear at the top of the list, and from there, you can find out which unwelcome spider is scraping your site.\n","date":"8 September 2007","permalink":"/p/hunting-down-annoying-web-spiders/","section":"Posts","summary":"We all enjoy having the GoogleBot and other search engine robots index our sites as it brings us higher on search engines, but it\u0026rsquo;s annoying when some user scrapes your site for their own benefit.","title":"Hunting down annoying web spiders"},{"content":"If you\u0026rsquo;ve run MySQL in a replication environment, or if you\u0026rsquo;ve enabled binary logging for transactional integrity, you know that the binary logs can grow rather quickly. The only safe way to delete the logs is to use PURGE MASTER LOGS in MySQL, but if you want MySQL to automatically remove the logs after a certain period of time, add this in your my.cnf:\nexpire_logs_days = 14 5.11.3. The Binary Log\n","date":"7 September 2007","permalink":"/p/mysql-binary-log-rotation/","section":"Posts","summary":"If you\u0026rsquo;ve run MySQL in a replication environment, or if you\u0026rsquo;ve enabled binary logging for transactional integrity, you know that the binary logs can grow rather quickly.","title":"MySQL binary log rotation"},{"content":"I hear a lot of complaints about Plesk\u0026rsquo;s backup routines and how they can bring a server to its knees. You can reduce the load (except for mysqldumps) by renicing pleskbackup. If you want something really handy, use this Perl scriptlet that I wrote:\n#!/usr/bin/perl @domains = `ls /var/www/vhosts/ | egrep -v \u0026#39;^default\\$|^chroot\\$\u0026#39;`; $today = `date +%m%d%y`; foreach $domain (@domains) { chomp($domain); $cmd = \u0026#34;nice -n 19 /usr/local/psa/bin/pleskbackup -vv domains $domain --skip-logs - | ssh someuser\\@somehost -i /home/username/.ssh/id_rsa \\\u0026#34;dd of=/home/username/pleskbackups/$domain-$today.dump\\\u0026#34;\u0026#34;; `$cmd`; } It will transmit your backups to another server via SSH, and it will reduce the priority to the lowest available. 
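If you want the script to run unattended, a nightly cron entry along these lines works (the path is just an example; point it at wherever you saved the script):\n0 3 * * * /root/pleskbackup-offsite.pl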
This combination will reduce CPU usage and disk I/O throughout the backup.\n","date":"6 September 2007","permalink":"/p/low-priority-plesk-backups/","section":"Posts","summary":"I hear a lot of complaints about Plesk\u0026rsquo;s backup routines and how they can bring a server to its knees.","title":"Low priority Plesk backups"},{"content":"If an .frm file that corresponds to an InnoDB table gets deleted without using DROP TABLE, MySQL won\u0026rsquo;t let you create a new table with the same name. You\u0026rsquo;ll find this in the error log:\nInnoDB: Error: table test/parent already exists in InnoDB internal InnoDB: data dictionary. Have you deleted the .frm file InnoDB: and not used DROP TABLE? Have you used DROP DATABASE InnoDB: for InnoDB tables in MySQL version \u0026lt;= 3.23.43? InnoDB: See the Restrictions section of the InnoDB manual. InnoDB: You can drop the orphaned table inside InnoDB by InnoDB: creating an InnoDB table with the same name in another InnoDB: database and moving the .frm file to the current database. InnoDB: Then MySQL thinks the table exists, and DROP TABLE will InnoDB: succeed. Luckily, the error tells you exactly how to fix the problem! Simply make a new database and create a table that matches your old .frm file. Stop MySQL, move the .frm file from the new database\u0026rsquo;s directory back to the old database\u0026rsquo;s directory. Start MySQL, and then run DROP TABLE like normal.\nThis will remove the table from the ibdata tablespace file and allow you to create a new table with the same name.\nFurther reading:\n13.2.17.1. Troubleshooting InnoDB Data Dictionary Operations\n","date":"2 September 2007","permalink":"/p/mysql-and-innodb-orphaned-frm-files/","section":"Posts","summary":"If an .","title":"MySQL and InnoDB: Orphaned .frm files"},{"content":"Let\u0026rsquo;s say you have a user who can\u0026rsquo;t receive e-mail. Each time they send a message to the server, this pops up in the mail logs:\npostfix/smtpd[23897]: NOQUEUE: reject: RCPT from remotemailserver.com[10.0.0.2]: 554 \u0026lt;user@domain.com\u0026gt;: Relay access denied; from=\u0026lt;user@otherdomain.com\u0026gt; to=\u0026lt;user@domain.com\u0026gt; proto=ESMTP helo=\u0026lt;remotemailserver.com\u0026gt; This is happening because Postfix is receiving e-mail for a domain for which it doesn\u0026rsquo;t expect to handle mail. Add the domains to the mydestination parameter in /etc/postfix/main.cf:\nmydestination = domain.com, domain2.com, domain3.com If you have a lot of domains to add, create a mydomains hash file and change the mydestination parameter:\nmydestination = hash:/etc/postfix/mydomains Create /etc/postfix/mydomains:\nlocalhost OK localmailserver.com OK domain.com OK Then run:\n# postmap /etc/postfix/mydomains This will create the hash file (mydomains.db) within /etc/postfix. If you\u0026rsquo;ve just added the directive to the main.cf, run postfix reload. However, if the directive was already there, but you just adjusted the mydomains and ran postmap, then there is nothing left to do.\n","date":"31 August 2007","permalink":"/p/postfix-554-relay-access-denied/","section":"Posts","summary":"Let\u0026rsquo;s say you have a user who can\u0026rsquo;t receive e-mail.","title":"Postfix: 554 Relay access denied"},{"content":"Lots of PCI Compliance and vulnerability scan vendors will complain about TRACE and TRACK methods being enabled on your server. Since most providers run Nessus, you\u0026rsquo;ll see this fairly often. 
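A quick way to confirm whether TRACE is answered is to send one by hand, before and after adding the rules below:\n$ curl -v -X TRACE http://localhost/\nA 200 response that echoes the request back means TRACE is still enabled; once the rules are in place, you should see 403 Forbidden instead.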
Here\u0026rsquo;s the rewrite rules to add:\nRewriteEngine on RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] These directives will need to be added to each VirtualHost.\nFurther reading:\nApache Debugging Guide\n","date":"29 August 2007","permalink":"/p/apache-disable-trace-and-track-methods/","section":"Posts","summary":"Lots of PCI Compliance and vulnerability scan vendors will complain about TRACE and TRACK methods being enabled on your server.","title":"Apache: Disable TRACE and TRACK methods"},{"content":"If you find yourself in a pinch and you need a temporary fix when your primary IP is blacklisted, use the following iptables rule:\n/sbin/iptables -t nat -A POSTROUTING -p tcp --dport 25 -j SNAT --to-source [desired outgoing ip] Keep in mind, however, that you will need to adjust any applicable SPF records for your domains since your e-mail will appear to be leaving via one of the secondary IP\u0026rsquo;s on your server. Also, remember that this is only a temporary fix - you should find out why you were blacklisted and eliminate that problem as soon as possible. :-)\n","date":"28 August 2007","permalink":"/p/use-a-different-ip-for-sending-mail/","section":"Posts","summary":"If you find yourself in a pinch and you need a temporary fix when your primary IP is blacklisted, use the following iptables rule:","title":"Use a different IP for sending mail"},{"content":"This error completely stumped me a couple of weeks ago. Apparently someone was adjusting the Apache configuration, then they checked their syntax and attempted to restart Apache. It went down without a problem, but it refused to start properly, and didn\u0026rsquo;t bind to any ports.\nWithin the Apache error logs, this message appeared over and over:\n[emerg] (28)No space left on device: Couldn't create accept lock Apache is basically saying “I want to start, but I need to write some things down before I can start, and I have nowhere to write them!” If this happens to you, check these items in order:\n1. Check your disk space\nThis comes first because it\u0026rsquo;s the easiest to check, and sometimes the quickest to fix. If you\u0026rsquo;re out of disk space, then you need to fix that problem. :-)\n2. Review filesystem quotas\nIf your filesystem uses quotas, you might be reaching a quota limit rather than a disk space limit. Use repquota / to review your quotas on the root partition. If you\u0026rsquo;re at the limit, raise your quota or clear up some disk space. Apache logs are usually the culprit in these situations.\n3. Clear out your active semaphores\nSemaphores? What the heck is a semaphore? Well, it\u0026rsquo;s actually an apparatus for conveying information by means of visual signals. But, when it comes to programming, semaphores are used for communicating between the active processes of a certain application. In the case of Apache, they\u0026rsquo;re used to communicate between the parent and child processes. If Apache can\u0026rsquo;t write these things down, then it can\u0026rsquo;t communicate properly with all of the processes it starts.\nI\u0026rsquo;d assume if you\u0026rsquo;re reading this article, Apache has stopped running. Run this command as root:\n# ipcs -s If you see a list of semaphores, Apache has not cleaned up after itself, and some semaphores are stuck. Clear them out with this command:\n# for i in `ipcs -s | awk '/httpd/ {print $2}'`; do (ipcrm -s $i); done Now, in almost all cases, Apache should start properly. 
If it doesn\u0026rsquo;t, you may just be completely out of available semaphores. You may want to increase your available semaphores, and you\u0026rsquo;ll need to tickle your kernel to do so. Add this to /etc/sysctl.conf:\nkernel.msgmni = 1024 kernel.sem = 250 256000 32 1024 And then run sysctl -p to pick up the new changes.\nFurther reading:\nWikipedia: Semaphore (Programming)\nApache accept lock fix\n","date":"24 August 2007","permalink":"/p/apache-no-space-left-on-device-couldnt-create-accept-lock/","section":"Posts","summary":"This error completely stumped me a couple of weeks ago.","title":"Apache: No space left on device: Couldn’t create accept lock"},{"content":"","date":null,"permalink":"/tags/quotas/","section":"Tags","summary":"","title":"Quotas"},{"content":"","date":null,"permalink":"/tags/semaphore/","section":"Tags","summary":"","title":"Semaphore"},{"content":"This error will pop up when binary logging is enabled, and someone thought it was a good idea to remove binary logs from the filesystem:\n/usr/sbin/mysqld: File \u0026#39;./mysql_bin.000025\u0026#39; not found (Errcode: 2) [ERROR] Failed to open log (file \u0026#39;./9531_mysql_bin.000025\u0026#39;, errno 2) [ERROR] Could not open log file [ERROR] Can\u0026#39;t init tc log [ERROR] Aborting InnoDB: Starting shutdown... InnoDB: Shutdown completed; log sequence number 0 2423986213 [Note] /usr/sbin/mysqld: Shutdown complete Basically, MySQL is looking in the mysql-bin.index file and it cannot find the log files that are listed within the index. This will keep MySQL from starting, but the fix is quick and easy. You have two options:\nEdit the index file\nYou can edit the mysql-bin.index file in a text editor of your choice and remove the references to any logs which don\u0026rsquo;t exist on the filesystem any longer. Once you\u0026rsquo;re done, save the index file and start MySQL.\nTake away the index file\nMove or delete the index file and start MySQL. This will cause MySQL to reset its binary log numbering scheme, so if this is important to you, you may want to choose the previous option.\nSo how do you prevent this from happening? Use the PURGE MASTER LOGS statement and allow MySQL to delete its logs on its own terms. If you\u0026rsquo;re concerned about log files piling up, adjust the expire_logs_days variable in your /etc/my.cnf.\nFurther reading:\n12.6.1.1. PURGE MASTER LOGS Syntax\n5.2.3 System Variables\n","date":"24 August 2007","permalink":"/p/mysql-couldnt-find-log-file/","section":"Posts","summary":"This error will pop up when binary logging is enabled, and someone thought it was a good idea to remove binary logs from the filesystem:","title":"MySQL couldn’t find log file"},{"content":"When connecting to your server\u0026rsquo;s POP3 service, your client might provide this error just after authentication:\nThe connection to the server was interrupted.\nYour best bet is to check the mail log and see exactly what the problem is:\nweb pop3-login: Login: john [192.168.0.5] pop3(john): Invalid mbox file /var/spool/mail/john: No such file or directory pop3(john): Failed to create storage with data: mbox:/var/spool/mail/john dovecot: child 29864 (pop3) returned error 89 In this case, the mbox file has become corrupt (possible from malformed \u0026lsquo;From\u0026rsquo; headers). 
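Since mbox files separate messages with lines that begin with \u0026#39;From \u0026#39;, scanning for separators that look mangled can point you at the damaged spot:\n# grep -n \u0026#39;^From \u0026#39; /var/spool/mail/john | less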
You have the option of repairing the issues within the file, or you can simply create a new mail spool for the user.\n","date":"23 August 2007","permalink":"/p/pop3-server-disconnects-immediately-after-login/","section":"Posts","summary":"When connecting to your server\u0026rsquo;s POP3 service, your client might provide this error just after authentication:","title":"POP3 server disconnects immediately after login"},{"content":"It\u0026rsquo;s not abnormal for qmail act oddly at times with Plesk, and sometimes it can use 100% of the CPU. However, if you find qmail\u0026rsquo;s load to be higher than usual with a small volume of mail, there may be a fix that you need.\nFirst off, check for two files in /var/qmail/control called dh512.pem and dh1024.pem. If they are present, well, then this article won\u0026rsquo;t be able to help you. You have a different issue that is causing increased CPU load (check for swap usage and upgrade your disk\u0026rsquo;s speed).\nIf the files aren\u0026rsquo;t there, do the following:\n# cd /var/qmail/control # cp dhparam512.pem dh512.pem # cp dhparam1024.pem dh1024.pem # /etc/init.d/qmail restart # /etc/init.d/xinetd restart At this point, your CPU load should be reduced once the currently running processes for qmail clear out.\nSo why is this fix required? Without dh512.pem and dh1024.pem, qmail has to create certificate and key pairs when other mail servers or mail users connect to qmail via TLS. If qmail is forced to generate them on the fly, you will get a big performance hit, and your load will be much higher than it could be. By copying the dhparam files over, you will pre-populate the SSL key and certificate for qmail to use, and all it has to do is pick it up off the file system rather than regenerating it each time.\nFurther reading:\nSWsoft Forums: Qmail-smtpd spawning many processes, using full cpu\n","date":"22 August 2007","permalink":"/p/qmail-smtpd-spawns-many-processes-and-uses-100-of-cpu/","section":"Posts","summary":"It\u0026rsquo;s not abnormal for qmail act oddly at times with Plesk, and sometimes it can use 100% of the CPU.","title":"Qmail-smtpd spawns many processes and uses 100% of CPU"},{"content":"If you have to use short e-mail usernames in Plesk (which is a bad idea), and someone accidentally sets the server to use full usernames, you can force Plesk to go back. You can\u0026rsquo;t do this in the interface, however. Plesk realizes that duplicate mail names exist, and it wont allow the change.\nPlesk will say something like:\nUnable to allow the use of short mail names for POP3/IMAP accounts. 
There are mail names matching the encrypted passwords.\nForcing it back is easy with one SQL statement:\n# mysql -u admin -p`cat /etc/psa/.psa.shadow` psa mysql\u0026gt; UPDATE misc set val=\u0026#39;enabled\u0026#39; where param=\u0026#39;allow_short_pop3_names\u0026#39;; Keep in mind that users logging in with shortnames will get into the same mailbox if they have the same username and password.\nAdditional reading:\nHow can I change back the option \u0026ldquo;Use of short and full POP3/IMAP mail account names is allowed\u0026rdquo; forcedly?\n","date":"21 August 2007","permalink":"/p/change-plesk-back-to-short-mail-names/","section":"Posts","summary":"If you have to use short e-mail usernames in Plesk (which is a bad idea), and someone accidentally sets the server to use full usernames, you can force Plesk to go back.","title":"Change Plesk back to short mail names"},{"content":"Running into MySQL\u0026rsquo;s open files limit can manifest itself in various error messages, but this is the standard one that you\u0026rsquo;ll receive during a mysqldump:\nmysqldump: Got error: 29: File \u0026#39;./databasename/tablename.MYD\u0026#39; not found (Errcode: 24) when using LOCK TABLES\nThe best way to get to the bottom of the error is to find out what it means: $ perror 24 OS error code 24: Too many open files\nThere are two ways to fix the problem. First, if you find that you only hit the limit during mysqldumps and never during normal database operation, just add --single-transaction to your mysqldump command line options. This will cause mysql to keep only one table open at a time.\nHowever, if this happens while backups aren\u0026rsquo;t running, you may want to increase the open_files_limit in your MySQL configuration file. By default, the variable is set to 1,024 open files.\nFor further reading:\n5.2.3. System Variables\n7.13. mysqldump - A Database Backup Program\n","date":"20 August 2007","permalink":"/p/mysql-errcode-24-when-using-lock-tables/","section":"Posts","summary":"Running into MySQL\u0026rsquo;s open files limit can manifest itself in various error messages, but this is the standard one that you\u0026rsquo;ll receive during a mysqldump:","title":"MySQL: Errcode: 24 when using LOCK TABLES"},{"content":"By default, views in MySQL 5.x are created with a security definer set to the root user. However, Plesk drops the root user from MySQL and replaces it with the admin user. When this happens, your views cannot be dumped by mysqldump since the root user (the security definer for the view) doesn\u0026rsquo;t exist in the mysql.user table.\nYou receive an error similar to the following:\nmysqldump: Couldn\u0026#39;t execute \u0026#39;SHOW FIELDS FROM `some_tablename`\u0026#39;: There is no \u0026#39;root\u0026#39;@\u0026#39;localhost\u0026#39; registered (1449) Usually, if you run a SHOW CREATE VIEW tablename, you\u0026rsquo;ll see something like this:\nCREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `some_tablename` AS select distinct `some_database`.`some_tablename`.`some_column` AS `alias` from `some_tablename` You have two options in this situation:\n1. Change the security definer for each of your views to \u0026lsquo;admin\u0026rsquo;@\u0026rsquo;localhost\u0026rsquo;. Any new views you create will need to be adjusted as well.\n2. Create a root user in MySQL with the same privileges as the admin user and use the root user\u0026rsquo;s login to run mysqldump. 
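For the first option, the definer can be changed by re-creating the view with an explicit DEFINER clause (assuming a MySQL 5.0 release new enough to accept DEFINER on CREATE VIEW); reuse the SELECT portion from your SHOW CREATE VIEW output:\nmysql\u0026gt; CREATE OR REPLACE DEFINER=`admin`@`localhost` VIEW `some_tablename` AS select distinct `some_database`.`some_tablename`.`some_column` AS `alias` from `some_tablename`;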
","date":"18 August 2007","permalink":"/p/issues-with-mysqldump-and-views-in-plesk/","section":"Posts","summary":"By default, views in MySQL 5.","title":"Issues with mysqldump and views in Plesk"},{"content":"Sometimes MySQL\u0026rsquo;s process list will fill with unauthenticated login entries that look like this:\n| 971 | unauthenticated user | xxx.xxx.xxx.xxx:35406 | NULL | Connect | NULL | login | NULL | Generally, this means one of two things are happening. First, this could be a brute force attack against your server from an external attacker. Be sure to firewall off access to port 3306 from the outside world or run MySQL with skip-networking in the /etc/my.cnf file, and that should curtail those login attempts quickly.\nHowever, MySQL could be attempting to resolve the reverse DNS for each connection, and this definitely isn\u0026rsquo;t necessary if your grant statements refer to remote machines\u0026rsquo; IP addresses rather than hostnames (as they should). In this case, add skip-name-resolve to your /etc/my.cnf and restart MySQL. These connection attempts should authenticate much faster, and they shouldn\u0026rsquo;t pile up in the queue any longer.\nNote: Connections via sockets aren\u0026rsquo;t affected by DNS resolution since sockets don\u0026rsquo;t involve any networking access at all. If your web applications use \u0026rsquo;localhost\u0026rsquo; for their connection string, then MySQL won\u0026rsquo;t bring DNS resolution into play whatsoever.\nRecommended reading: 6.5.9. How MySQL Uses DNS\n","date":"16 August 2007","permalink":"/p/mysql-unauthenticated-login-pile-up/","section":"Posts","summary":"Sometimes MySQL\u0026rsquo;s process list will fill with unauthenticated login entries that look like this:","title":"MySQL unauthenticated login pile-up"},{"content":"Help me out! Digg my MySQLTuner script on digg.com!\nDon\u0026rsquo;t have the money or time for a DBA? Use this free MySQL tuning script to review your server\u0026rsquo;s variables and statistics. It will suggest specific variable changes and point to configuration errors that exist on your server.\nread more | digg story\n","date":"15 August 2007","permalink":"/p/automated-mysql-performance-tuning-script/","section":"Posts","summary":"Help me out!","title":"It’s on Digg: Automated MySQL Performance Tuning Script"},{"content":"I\u0026rsquo;ve been flooded with requests for MySQLTuner and I\u0026rsquo;ve answered them this weekend. Here\u0026rsquo;s the changes that were made:\nSpecific variable recommendations are made with suggested values as well Odd recommendations have been reduced Some math errors were corrected More configuration items are supported, like table locks, thread caching, table caching and open file limits. To find out more and to download the script, head on over to mysqltuner.com.\n","date":"12 August 2007","permalink":"/p/huge-mysqltuner-overhaul/","section":"Posts","summary":"I\u0026rsquo;ve been flooded with requests for MySQLTuner and I\u0026rsquo;ve answered them this weekend.","title":"Huge MySQLTuner overhaul"},{"content":"In some situations, you may want to have domain.com as well as *.domain.com point to the same site in Plesk. 
Plesk will automatically set up hosting for domain.com and www.domain.com within the Apache configuration, but you can direct all subdomains for a particular domain to a certain virtual host fairly easily.\nDNS\nAdd a CNAME or A record for *.domain.com which points to domain.com (for a CNAME), or the domain\u0026rsquo;s IP (for an A record.\nApache Configuration\nEdit the /var/www/vhosts/domain.com/conf/vhost.conf or /home/httpd/vhosts/domain.com/conf/vhost.conf file and enter this information:\nServerAlias *.domain.com\nIf the vhost.conf didn\u0026rsquo;t exist before, you will need to run:\n# /usr/local/psa/admin/bin/websrvmng -av\nWhether the vhost.conf was new or not, you will need to reload the Apache configuration:\n# /etc/init.d/httpd reload\nCredit for this fix goes to SWSoft\u0026rsquo;s KB #955\n","date":"11 August 2007","permalink":"/p/using-wildcard-subdomains-in-plesk/","section":"Posts","summary":"In some situations, you may want to have domain.","title":"Using wildcard subdomains in Plesk"},{"content":"With Plesk 7.5.x, a PHP upgrade to version 5 will cause some issues with Horde. These issues stem from problems with the pear scripts that Horde depends on.\nTo fix it, run these commands:\n# pear upgrade DB # cp -a /usr/share/pear/DB.php /usr/share/pear/DB/ /usr/share/psa-horde/pear/ Credit for this fix goes to Mike J.\n","date":"11 August 2007","permalink":"/p/correcting-horde-problems-after-upgrading-to-php-5-on-plesk-75x/","section":"Posts","summary":"With Plesk 7.","title":"Correcting Horde problems after upgrading to PHP 5 on Plesk 7.5.x"},{"content":"Urchin sometimes takes it upon itself to do some weird things, and this is one of those times. If Urchin has archived a month of data, and then you ask Urchin to parse a log that contains accesses from that archived month, you\u0026rsquo;ll receive this ugly error:\nUnable to open database for writing since it has been archived\nTo fix it, cd into /usr/local/urchin/data/reports/[profile name]/ and unzip the YYYYMM-archive.zip files, then move the zip files out of the way. Make sure that the unzipped files are owned by the Urchin user and group. You should then be able to re-run your stats without a problem.\nCredit for this fix goes to Google\n","date":"10 August 2007","permalink":"/p/urchin-unable-to-open-database-for-writing-since-it-has-been-archived/","section":"Posts","summary":"Urchin sometimes takes it upon itself to do some weird things, and this is one of those times.","title":"Urchin: Unable to open database for writing since it has been archived"},{"content":"An often misused and misunderstood aspect of MySQL is the query cache. I\u0026rsquo;ve seen blog post after blog post online talking about query caching as the most integral and important feature in MySQL. Many of these same posts advocate cranking the variables to the max to give you \u0026ldquo;ultimate performance.\u0026rdquo; One of the worst things you can do to a MySQL server is crank your variables up and hope for the best. I\u0026rsquo;ll try to clear some things up here.\nThe MySQL query cache is available in MySQL 4.0, 4.1, 5.0, 5.1, and 6.0 (3.23 has no query cache). The goal of the query cache is to hold result sets that are retrieved repeatedly. Since the data is held in memory, MySQL only feeds the data from memory (which is fast) into your application without digging into the tables themselves (which is slow). 
The result set from the query you\u0026rsquo;re running and the query in the query cache must be completely identical, or MySQL will pull the data as it traditionally does from the tables.\nQueries and result sets must meet certain criteria to make it into the query cache:\nMust not be prepared statements (See 12.7. SQL Syntax for Prepared Statements) Subqueries are not cached, only the outer query is cached Queries that are run from stored procedures, functions, or triggers are not cached (applies to versions 5.0+ only) The result set must be equal to or smaller than the query_cache_limit (more on this below) The query cannot refer to the mysql database Queries cannot use user variables, user-defined functions, temporary tables or tables with column-level privileges Besides these rules, all other queries are approved to enter the query cache. This includes wild things such as views, joins, and queries with subqueries.\nThe MySQL query cache is controlled by several variables:\nquery_alloc_block_size (defaults to 8192): the actual size of the memory blocks created for result sets in the query cache (don\u0026rsquo;t adjust) query_cache_limit (defaults to 1048576): queries with result sets larger than this won\u0026rsquo;t make it into the query cache query_cache_min_res_unit (defaults to 4096): the smallest size (in bytes) for blocks in the query cache (don\u0026rsquo;t adjust) query_cache_size (defaults to 0): the total size of the query cache (disables query cache if equal to 0) query_cache_type (defaults to 1): 0 means don\u0026rsquo;t cache, 1 means cache everything, 2 means only cache result sets on demand query_cache_wlock_invalidate (defaults to FALSE): allows SELECTS to run from query cache even though the MyISAM table is locked for writing Explaining the query_cache_type is a little rough. If the query_cache_type is 0:\nand the query_cache_size is 0: no memory is allocated and the cache is disabled and the query_cache_size is greater than 0: the memory is allocated but the cache is disabled If the query_cache_type is 1:\nand the query_cache_size is 0: no memory is allocated and the cache is disabled and the query_cache_size is greater than 0: the cache is enabled and all queries that don\u0026rsquo;t use SQL_NO_CACHE will be cached automatically If the query_cache_type is 2:\nand the query_cache_size is 0: no memory is allocated and the cache is disabled and the query_cache_size is greater than 0: the cache is enabled and queries must use SQL_CACHE to be cached Now that we have the variables behind us, how can we tell if we\u0026rsquo;re using the query cache appropriately? Each time a query runs against the query cache, the server will increment the Qcache_hits status variable instead of Com_select (which is incremented when a normal SELECT runs). If the table changes for any reason, its data is rendered invalid and is dropped from the query cache.\nIt\u0026rsquo;s vital to understand the performance implications of the query cache:\nPurging the cache\nIf the query cache fills completely, it will be flushed entirely - this is a significant performance hit as many memory addresses will have to be adjusted. 
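As a concrete starting point (the sizes here are only examples, not recommendations), enabling a modest cache in /etc/my.cnf looks like this:\n[mysqld] query_cache_type = 1 query_cache_size = 32M query_cache_limit = 1M\nYou can then watch the counters mentioned above from the MySQL client:\nmysql\u0026gt; SHOW GLOBAL STATUS LIKE \u0026#39;Qcache%\u0026#39;;\nOn MySQL 4.x, plain SHOW STATUS returns the same counters.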
Check your Qcache_lowmem_prunes in your status variables and increase the query_cache_size if you find yourself pruning the query cache more than a few times per hour.\nQuery cache utilization\nThere\u0026rsquo;s a simple formula to calculate your query cache efficiency in percentage form:\nQcache_hits / (Com_select + Qcache_hits) x 100\nA query cache efficiency percentage of 20% or less points to a performance problem. You may want to shrink your result sets by building more restrictive queries. If that isn\u0026rsquo;t possible, then you can increase your query_cache_limit so that more of your larger result sets actually make it into the cache. Keep in mind, however, that this will increase your prunes (see the previous paragraph) and can reduce performance. Increasing the query_cache_limit by small amounts and then recalculating your efficiency is a good idea.\nFighting fragmentation\nAs queries move in and out of the query cache, the memory may become fragmented. This is normally signified by an increase in slow queries, but your query cache efficiency percentage still remains high. In this situation, run FLUSH QUERY CACHE from the MySQL client and keep monitoring your efficiency. If this doesn\u0026rsquo;t help, you may be better off flushing the cache entirely with RESET QUERY CACHE.\nI\u0026rsquo;ve tried to piece quite a bit of documentation and DBA knowledge into this article, but you may benefit from reviewing the following documentation sections on MySQL.com: 5.2.3. System Variables, 5.2.5 Status Variables, and 6.5.4. The MySQL Query Cache.\n","date":"9 August 2007","permalink":"/p/mysqls-query-cache-explained/","section":"Posts","summary":"An often misused and misunderstood aspect of MySQL is the query cache.","title":"MySQL’s query cache explained"},{"content":"Should you find yourself in the situation where you\u0026rsquo;ve forgotten the Urchin admin password, don\u0026rsquo;t worry. It\u0026rsquo;s easily reset with the following command:\ncd util ./uconf-driver action=set_parameter table=user name=\u0026#34;(admin)\u0026#34; ct_password=urchin This will set the password to \u0026lsquo;urchin\u0026rsquo;, and then you can log into Urchin\u0026rsquo;s web interface and change it to a secure password. The credit for this fix goes to Urchin\u0026rsquo;s site.\n","date":"9 August 2007","permalink":"/p/reset-the-urchin-admin-password/","section":"Posts","summary":"Should you find yourself in the situation where you\u0026rsquo;ve forgotten the Urchin admin password, don\u0026rsquo;t worry.","title":"Reset the Urchin admin password"},{"content":"When Urchin\u0026rsquo;s task scheduler fails, you\u0026rsquo;ll notice big gaps in your data within Urchin. If your logs rotate out before someone catches the problem, then your data is gone, and unless you have it backed up, you\u0026rsquo;re out of luck. I\u0026rsquo;ve scoured the internet (and Urchin gurus) and I\u0026rsquo;ve yet to find a complete explanation for the occasional death of Urchin\u0026rsquo;s task scheduler.\nYou\u0026rsquo;ll see the \u0026ldquo;Warning! Task scheduler disabled.\u0026rdquo; error in bright red print in Urchin\u0026rsquo;s configuration menu when you click the \u0026ldquo;Run/Schedule\u0026rdquo; tab. It appears right below the gleaming \u0026ldquo;Run Now\u0026rdquo; button. 
If you click \u0026ldquo;Run Now\u0026rdquo;, Urchin will tell you again that the task scheduler is disabled.\nTo correct the problem, completely stop Urchin as root:\n# /etc/init.d/urchin stop -- OR -- # /usr/local/urchin/bin/urchinctl stop Now, change to the /usr/local/urchin/bin directory and run:\n# ./urchinctl status If the Urchin webserver is running, but the task scheduler isn\u0026rsquo;t (which is the most likely situation), run:\n# ./urchinctl -s start # ./urchinctl status Urchin webserver is running Urchin scheduler is running You should be all set. Credit for this fix goes to Urchin\u0026rsquo;s site.\n","date":"9 August 2007","permalink":"/p/urchin-warning-task-scheduler-disabled/","section":"Posts","summary":"When Urchin\u0026rsquo;s task scheduler fails, you\u0026rsquo;ll notice big gaps in your data within Urchin.","title":"Urchin: Warning! Task scheduler disabled."},{"content":"One question I hear quite often is \u0026ldquo;how do I add IP aliases in FreeBSD?\u0026rdquo; It\u0026rsquo;s not terribly intuitive, but you can follow these steps:\nExample:\nServer\u0026rsquo;s primary IP: 192.168.1.5\nAdditional IP\u0026rsquo;s to add: 192.168.1.10, 192.168.1.15, and 192.168.1.20\nBoot-time configuration:\nAdd it to /etc/rc.conf first (so you don\u0026rsquo;t forget). In this example, we have a Realtek card called rl0:\nifconfig_rl0=\u0026#34;inet 192.168.1.5 netmask 255.255.255.0\u0026#34; ifconfig_rl0_alias0=\u0026#34;inet 192.168.1.10 netmask 255.255.255.0\u0026#34; ifconfig_rl0_alias1=\u0026#34;inet 192.168.1.15 netmask 255.255.255.0\u0026#34; ifconfig_rl0_alias2=\u0026#34;inet 192.168.1.20 netmask 255.255.255.0\u0026#34; UBER-IMPORTANT NOTE: Start with the number 0 (zero) any time that you make IP alias configurations in /etc/rc.conf.\nThis is BAD form:\nifconfig_rl0=\u0026#34;inet 192.168.1.5 netmask 255.255.255.0\u0026#34; ifconfig_rl0_alias1=\u0026#34;inet 192.168.1.10 netmask 255.255.255.0\u0026#34; ifconfig_rl0_alias2=\u0026#34;inet 192.168.1.15 netmask 255.255.255.0\u0026#34; ifconfig_rl0_alias3=\u0026#34;inet 192.168.1.20 netmask 255.255.255.0\u0026#34; If you do it the wrong way (which means starting alias with anything but alias0), only the primary comes up. Keep that in mind.\nBringing up the new IP\u0026rsquo;s:\nYou can do things the extraordinarily dangerous way:\n# /etc/rc.network restart Or, you can follow the recommended steps:\n# ifconfig rl0 alias 192.168.1.10 netmask 255.255.255.0 # ifconfig rl0 alias 192.168.1.15 netmask 255.255.255.0 # ifconfig rl0 alias 192.168.1.20 netmask 255.255.255.0 Test your work:\nAny good system administrator knows to test things once their configured. Make sure to ping your new IP\u0026rsquo;s from a source on your network and outside your network (if possible/applicable).\n","date":"9 August 2007","permalink":"/p/adding-ip-aliases-in-freebsd/","section":"Posts","summary":"One question I hear quite often is \u0026ldquo;how do I add IP aliases in FreeBSD?","title":"Adding IP aliases in FreeBSD"},{"content":"Using the InnoDB engine can be tricky due to the ibdata files\u0026rsquo; rather untraditional behavior. Instead of storing data in MYI and MYD files for each table, InnoDB stores everything in one (or several) large files starting with ibdata1. 
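On a stock Red Hat style install you can see this at a glance in the data directory (listing trimmed, and the file sizes are just an example):\n# ls -lh /var/lib/mysql\n-rw-rw---- 1 mysql mysql 5.0M ib_logfile0\n-rw-rw---- 1 mysql mysql 5.0M ib_logfile1\n-rw-rw---- 1 mysql mysql 512M ibdata1\ndrwx------ 2 mysql mysql 4.0K somedatabase\nEvery InnoDB table on the server shares that single ibdata1 file, while each database directory only holds the .frm table definitions (plus any MyISAM data files).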
Of course, MySQL nerds know that you can adjust this behavior slightly with innodb_file_per_table, but you can read up on this at your leisure.\nIf you\u0026rsquo;ve restored the ibdata files from a previous backup, or if you just toss the .frm files into a database directory, you might find this when you start MySQL:\nERROR 1016 (HY000): Can\u0026#39;t open file: \u0026#39;files.ibd\u0026#39; (errno: 1) Any good MySQL DBA will find out what error #1 means:\n# perror 1 OS error code 1: Operation not permitted This error sure sounds like a permission error. Go ahead and check your permissions in /var/lib/mysql, but you\u0026rsquo;ll probably find that they\u0026rsquo;re properly set.\nSo, why is the operation not permitted?\nMySQL is actually hiding the actual problem behind an incorrect error. The actual issue is that the tables described in your .frm files are not present in the InnoDB tablespace (the ibdata files). This may occur if you restore the .frm files, but you don\u0026rsquo;t restore the correct ibdata files.\nWhat\u0026rsquo;s the solution?\nThe easiest fix is to obtain a mysqldump backup of your original data. When you import it, MySQL will create your .frm files and populate the ibdata files for you without any fuss. You\u0026rsquo;ll be up and running in no time.\nIf you don\u0026rsquo;t have mysqldump backups, then you\u0026rsquo;ve just realized how important it is to have a flatfile backup of your databases. :-) If you can restore your original ibdata file that you backed up with your .frm\u0026rsquo;s, you should be able to stop MySQL, put the old ibdata file and transaction logs back, and start MySQL. However, if multiple databases have InnoDB tables, you\u0026rsquo;re going to be reverting them to their previous state. This could cause BIG problems if you\u0026rsquo;re not careful. You will want to begin running this on a regular basis:\nmysqldump -Q --opt -A --single-transaction -u username -p \u0026gt; mysqldump.sql As a sidenote, this error utterly stumped this DBA. I\u0026rsquo;ve never run into this issue before, and I assumed that the server was supposed to have tablespaces per table, but I couldn\u0026rsquo;t find any mention in the /etc/my.cnf file. I found the solution on MySQL\u0026rsquo;s site after some intense Google action.\n","date":"9 August 2007","permalink":"/p/mysql-missing-ibd-files/","section":"Posts","summary":"Using the InnoDB engine can be tricky due to the ibdata files\u0026rsquo; rather untraditional behavior.","title":"MySQL: Missing *.ibd files"},{"content":"MySQL documentation can be awfully flaky - extremely verbose on issues that don\u0026rsquo;t require such verbosity, and then extremely terse on issues that need a lot of explanation. The documentation for max_seeks_for_key matches the latter.\nThis is MySQL\u0026rsquo;s own documentation:\n7.2.16. How to Avoid Table Scans\nStart mysqld with the -max-seeks-for-key=1000 option or use SET max_seeks_for_key=1000 to tell the optimizer to assume that no key scan causes more than 1,000 key seeks. See Section 5.2.3, “System Variables”.\n5.2.3. System Variables\nLimit the assumed maximum number of seeks when looking up rows based on a key. The MySQL optimizer assumes that no more than this number of key seeks are required when searching for matching rows in a table by scanning an index, regardless of the actual cardinality of the index (see Section 13.5.4.13, \u0026ldquo;SHOW INDEX Syntax\u0026rdquo;). 
By setting this to a low value (say, 100), you can force MySQL to prefer indexes instead of table scans.\nJust in case you need a quick refresher on cardinality, here you go:\n13.5.4.13. SHOW INDEX Syntax\nCardinality\nAn estimate of the number of unique values in the index. This is updated by running ANALYZE TABLE or myisamchk -a. Cardinality is counted based on statistics stored as integers, so the value is not necessarily exact even for small tables. The higher the cardinality, the greater the chance that MySQL uses the index when doing joins.\nAre you confused yet? If you\u0026rsquo;re not confused, you are a tremedously awesome DBA (or you\u0026rsquo;re a MySQL developer). Here\u0026rsquo;s the break down:\nCardinality is the count of how many items in the index are unique. So, if you have 10 values in an indexed column, and the same two values are reused throughout, then the cardinality would be relatively low. A good example of this would be if you have country or state names in a database table. You\u0026rsquo;re going to have repeats, so this means that your cardinality is low. A good example of high cardinality is when you have a column that is a primary key (or unique). In this case, every single row has a unique key in the column, and the cardinality should equal the number of rows.\nHow does this come into play with max_seeks_for_key? It\u0026rsquo;s higly confusing based on the documentation, but lowering this variable actually makes MySQL prefer to use indexes - even if your cardinality is low - rather than using table scans. This can reduce total query time, iowait, and CPU usage. I\u0026rsquo;m not completely sure why MySQL doesn\u0026rsquo;t default to this behavior since it\u0026rsquo;s easy to see the performance gains.\nBy default, this variable is set to the largest number your system can handle. On 32-bit systems, this is 4,294,967,296. On 64-bit systems, this is 18,446,744,073,709,551,616. Some linux variants, like Gentoo Linux, are setting this value to 1,000 in the default configuration files. Reducing max_seeks_for_key to 1,000 is like telling MySQL that you want it to use indexes when the cardinality of the index is over 1,000. I\u0026rsquo;ve seen this variable reduced to as low as 1 on some servers without any issues.\nI\u0026rsquo;m still utterly confused at why this variable is set so high by default. If anyone has any ideas, please send them my way!\n","date":"3 August 2007","permalink":"/p/obscure-mysql-variable-explained-max_seeks_for_key/","section":"Posts","summary":"MySQL documentation can be awfully flaky - extremely verbose on issues that don\u0026rsquo;t require such verbosity, and then extremely terse on issues that need a lot of explanation.","title":"Obscure MySQL variable explained: max_seeks_for_key"},{"content":"Plesk has a (somewhat annoying) default firewall configuration that you can adjust from within the Plesk interface. However, if you want to add additional rules, you may find that you can\u0026rsquo;t add the rules you want from the interface. If you add them from the command line, Plesk will overwrite them when it feels the urge, even if you run service iptables save as you\u0026rsquo;re supposed to.\nYou can override this by making /etc/sysconfig/iptables immutable with chattr. Just run the following:\n# chattr +i /etc/sysconfig/iptables\nNow, Plesk can\u0026rsquo;t adjust your iptables rules without your intervention. 
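When you need to make a legitimate change yourself later on, drop the immutable flag, make your edits, and then put the flag back when you\u0026rsquo;re finished (lsattr will show whether the flag is currently set):\n# chattr -i /etc/sysconfig/iptables # vi /etc/sysconfig/iptables # service iptables restart # chattr +i /etc/sysconfig/iptables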
Well, that is until SWSoft figures out how to run chattr when Plesk can\u0026rsquo;t edit certain configuration files. :-)\n","date":"3 August 2007","permalink":"/p/add-custom-rules-to-the-plesk-firewall/","section":"Posts","summary":"Plesk has a (somewhat annoying) default firewall configuration that you can adjust from within the Plesk interface.","title":"Add custom rules to the Plesk firewall"},{"content":"If you need a quick self-signed certificate, you can generate the key/certificate pair, then sign it, all with one openssl line:\nopenssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout server.key -out server.crt ","date":"3 August 2007","permalink":"/p/generate-self-signed-certificate-and-key-in-one-line/","section":"Posts","summary":"If you need a quick self-signed certificate, you can generate the key/certificate pair, then sign it, all with one openssl line:","title":"Generate self-signed certificate and key in one line"},{"content":"This error means that Plesk attempted to make a DNS change and reload named, but it failed. The problem generally lies within some seemingly innocent RPM\u0026rsquo;s that are causing problems with Plesk\u0026rsquo;s installation of bind.\nCheck your /var/log/messages for lines like these:\nnamed[xxx]: could not configure root hints from \u0026#39;named.root\u0026#39;: file not found named[xxx]: loading configuration: file not found named[xxx]: exiting (due to fatal error) named: named startup failed In this case, do a quick check for these RPM\u0026rsquo;s and remove them if they are on the system:\nbind-chroot caching-nameserver # rpm -ev bind-chroot # rpm -ev caching-nameserver ","date":"3 August 2007","permalink":"/p/plesk-unable-to-make-action-unable-to-manage-service-by-dnsmng-dnsmng-service-named-failed-to-start/","section":"Posts","summary":"This error means that Plesk attempted to make a DNS change and reload named, but it failed.","title":"Plesk: Unable to make action: Unable to manage service by dnsmng: dnsmng: Service named failed to start"},{"content":"If you\u0026rsquo;ve used Plesk with a large amount of domains, you know what a pain running out of file descriptors can be. Web pages begin acting oddly, Horde throws wild errors, and even squirrelmail rolls over onto itself. Luckily, Plesk introduced piped Apache logs (along with lots of bugs!) in Plesk 8.2, and you can enable piped logs with the following commands:\n# mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -e \u0026#34;replace into misc (param,val) values (\u0026#39;apache_pipelog\u0026#39;, \u0026#39;true\u0026#39;);\u0026#34; # /usr/local/psa/admin/sbin/websrvmng -v -a Technically, these changes will allow Plesk to host about 900 sites, but this is still a little extreme in my opinion, even on the best hardware money can buy. If you find yourself passing the 900 mark, then you should probably follow this SWSoft KB article, adjust your FD_SETSIZE and recompile.\nMore information about configuring piped logs can be found on SWSoft\u0026rsquo;s site. 
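If you are curious how many descriptors the main Apache process is holding open before and after the change, you can count them through /proc (the pid file path below is the usual Red Hat location and may differ on your system):\n# ls /proc/`cat /var/run/httpd.pid`/fd | wc -l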
Thanks, Jon!\n","date":"3 August 2007","permalink":"/p/freeing-up-file-descriptors-in-plesk-82-with-piped-apache-logs/","section":"Posts","summary":"If you\u0026rsquo;ve used Plesk with a large amount of domains, you know what a pain running out of file descriptors can be.","title":"Freeing up file descriptors in Plesk 8.2 with piped Apache logs"},{"content":"These two commands will enable SpamAssassin for all users on a Plesk 8 server:\n# mysql -u admin -p`cat /etc/psa/.psa.shadow` psa mysql\u0026gt; update mail set spamfilter = \u0026#39;true\u0026#39; where postbox = \u0026#39;true\u0026#39;; # /usr/local/psa/admin/bin/mchk --with-spam Thanks to Sean R. for this one!\n","date":"30 July 2007","permalink":"/p/add-spam-filtering-for-all-users-in-plesk/","section":"Posts","summary":"These two commands will enable SpamAssassin for all users on a Plesk 8 server:","title":"Add spam filtering for all users in Plesk"},{"content":"Add to /etc/make.conf:\nWITHOUT_X11=yes USE_NONDEFAULT_X11BASE=yes\n","date":"18 July 2007","permalink":"/p/disable-x-support-in-freebsd/","section":"Posts","summary":"Add to /etc/make.","title":"Disable X support in FreeBSD"},{"content":"With portinstall:\n# portinstall lighttpd fcgi php5 Without portinstall:\n# make -C /usr/ports/www/lighttpd all install clean # make -C /usr/ports/www/fcgi all install clean # make -C /usr/ports/lang/php5 all install clean Add lighttpd_enable=\u0026quot;YES\u0026quot; to /etc/rc.conf, and uncomment the usual items in /usr/local/etc/lighttpd.conf to enable fastcgi.\n","date":"18 July 2007","permalink":"/p/installing-lighttpd-php-fastcgi-on-freebsd/","section":"Posts","summary":"With portinstall:","title":"Installing Lighttpd + PHP + FastCGI on FreeBSD"},{"content":"It can be best to upgrade FreeBSD in an offline state, but if you do it online, you can do it like this:\n# csup -g -L 2 -h cvsup5.us.freebsd.org /usr/share/examples/cvsup/standard-supfile # cd /usr/src # make buildworld # make buildkernel # make installkernel # make installworld # shutdown -r now ","date":"18 July 2007","permalink":"/p/upgrading-freebsd-remotely/","section":"Posts","summary":"It can be best to upgrade FreeBSD in an offline state, but if you do it online, you can do it like this:","title":"Upgrading FreeBSD remotely"},{"content":"Making Java keystores at the same time as you create a CSR and key is pretty easy, but if you have a pre-made private key that you want to throw into a keystore, it can be difficult. However, follow these steps and you\u0026rsquo;ll be done quickly!\nSave the new certificate to server.crt and the new key to server.key. If intermediate certificates are necessary, then place all of the certificates into a file called cacert.crt. Now, you will have to make a PKCS #12 file:\nopenssl pkcs12 -export -inkey server.key -in server.crt \\ -name tomcat-domain.com -certfile cacert.crt -out domain.com.p12 To perform the rest of the work, you will need a copy of the KeyTool GUI. In the GUI, make a new keystore in JKS format. Import the PKCS #12 key pair, and save the keystore as a JKS. 
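If you have a JDK 6 or newer handy, you can also skip the GUI and convert the PKCS #12 file straight into a JKS keystore on the command line (the file and alias names below simply follow the example above):\nkeytool -importkeystore -srckeystore domain.com.p12 -srcstoretype PKCS12 -srcalias tomcat-domain.com -destkeystore keystore.jks -deststoretype JKS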
Upload the keystore to the server and then configure the keystore within Tomcat/JBoss.\n","date":"18 July 2007","permalink":"/p/importing-existing-keys-and-certificates-into-java-keystore-files/","section":"Posts","summary":"Making Java keystores at the same time as you create a CSR and key is pretty easy, but if you have a pre-made private key that you want to throw into a keystore, it can be difficult.","title":"Importing existing keys and certificates into java keystore files"},{"content":"If you find yourself stuck with over 30,000 files in a directory (text files in this example), packing them into a tar file can be tricky. You can get around it with this:\nfind . -name \u0026#39;*.txt\u0026#39; -print \u0026gt;/tmp/test.manifest tar -cvzf textfiles.tar.gz --files-from /tmp/test.manifest find . -name \u0026#39;*.txt\u0026#39; | xargs rm -v ","date":"6 July 2007","permalink":"/p/bintar-argument-list-too-long/","section":"Posts","summary":"If you find yourself stuck with over 30,000 files in a directory (text files in this example), packing them into a tar file can be tricky.","title":"/bin/tar: Argument list too long"},{"content":"If you want to make a quick bookmark that will automatically log yourself into Plesk, make this bookmark:\nhttps://yourserver.com:8443/login_up.php3?login_name=admin\u0026amp;passwd=yourpassword\n","date":"6 July 2007","permalink":"/p/automatic-plesk-login/","section":"Posts","summary":"If you want to make a quick bookmark that will automatically log yourself into Plesk, make this bookmark:","title":"Automatic Plesk login"},{"content":"Enabling submission port support for Postfix is really easy. To have postfix listen on both 25 and 587, be sure that the line starting with submission is uncommented in /etc/postfix/master.cf:\nsmtp inet n - n - - smtpd submission inet n - n - - smtpd ","date":"5 July 2007","permalink":"/p/enable-submission-port-587-in-postfix/","section":"Posts","summary":"Enabling submission port support for Postfix is really easy.","title":"Enable submission port 587 in Postfix"},{"content":"","date":null,"permalink":"/tags/submission/","section":"Tags","summary":"","title":"Submission"},{"content":"Sometimes servers just have the weirdest SSL problems ever. In some of these situations, the entropy has been drained. Entropy is the measure of the random numbers available from /dev/urandom, and if you run out, you can\u0026rsquo;t make SSL connections. To check the status of your server\u0026rsquo;s entropy, just run the following:\n# cat /proc/sys/kernel/random/entropy_avail If it returns anything less than 100-200, you have a problem. Try installing rng-tools, or generating I/O, like large find operations. Linux normally uses keyboard and mouse input to generate entropy on systems without random number generators, and this isn\u0026rsquo;t very handy for dedicated servers.\n","date":"1 July 2007","permalink":"/p/check-available-entropy-in-linux/","section":"Posts","summary":"Sometimes servers just have the weirdest SSL problems ever.","title":"Check available entropy in Linux"},{"content":"One of the main reasons people like passive FTP is that it\u0026rsquo;s easier to get through firewalls with it. However, some users might not know that they need to enable passive FTP, or they may have incapable clients. 
To get active FTP through firewalls, start by adding these rules:\nAllowing established and related connections is generally a good idea:\niptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT Inbound connections on port 21 are required:\niptables -A INPUT -p tcp --dport 21 -j ACCEPT Just to cover our bases, add in a rule to allow established and related traffic leaving port 20 on the client\u0026rsquo;s machine:\niptables -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT Now, you have everything you need to allow the connections, but iptables will need to be able to mark and track these connections to allow them to pass properly. That is done with the ip_conntrack_ftp kernel module. To test things out, run this:\nmodprobe ip_conntrack_ftp At this point, you should be able to connect without a problem. However, to keep this module loaded whenever iptables is running, you will need to add it to /etc/sysconfig/iptables-config:\nIPTABLES_MODULES=\u0026#34;ip_conntrack_ftp\u0026#34; ","date":"1 July 2007","permalink":"/p/active-ftp-connections-through-iptables/","section":"Posts","summary":"One of the main reasons people like passive FTP is that it\u0026rsquo;s easier to get through firewalls with it.","title":"Active FTP connections through iptables"},{"content":"Depending on your situation, it may be handy to redirect e-mails that have a certain subject line before it even reaches a user\u0026rsquo;s inbox. Let\u0026rsquo;s say you\u0026rsquo;re tired of getting e-mails that start with the word \u0026ldquo;Cialis\u0026rdquo;. Just follow these steps to redirect those e-mails.\nFirst, enable header checks in /etc/postfix/main.cf:\nheader_checks = regexp:/etc/postfix/header_checks Then, create /etc/postfix/header_checks and add the following:\n/^Subject: Cialis*/ REDIRECT someotheruser@domain.com For a lot more information about header checks in postfix, review the documentation here:\nhttp://www.postfix.org/header_checks.5.html ","date":"1 July 2007","permalink":"/p/redirect-e-mails-in-postfix-based-on-subject-line/","section":"Posts","summary":"Depending on your situation, it may be handy to redirect e-mails that have a certain subject line before it even reaches a user\u0026rsquo;s inbox.","title":"Redirect e-mails in postfix based on subject line"},{"content":"Table corruption in MySQL can often wreak havoc on the auto_increment fields. I\u0026rsquo;m still unsure why it happens, but if you find a table tries to count from 0 after a table corruption, just find the highest key in the column and add 1 to it (in this example, I\u0026rsquo;ll say the highest key is 9500).\nJust run this one SQL statement on the table:\nALTER TABLE brokentablename AUTO_INCREMENT=9501;\nIf you run a quick insert and then run SELECT last_insert_id(), the correct key number should be returned (9501 in this case).\n","date":"1 July 2007","permalink":"/p/repair-auto_increment-in-mysql/","section":"Posts","summary":"Table corruption in MySQL can often wreak havoc on the auto_increment fields.","title":"Repair auto_increment in MySQL"},{"content":"If you have postfix installed with OpenSSL support compiled in, you can enable SSL connections by editing two configuration files. 
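Before touching anything, you can double-check that your postfix build really was compiled against OpenSSL by seeing what the smtpd binary links to (the path below is typical for Red Hat style packages and may differ on your system):\n# ldd /usr/libexec/postfix/smtpd | grep -i ssl\nIf that returns nothing, none of the settings below will help until you install a TLS-enabled build.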
First, add the following to /etc/postfix/main.cf:\nsmtpd_use_tls = yes #smtpd_tls_auth_only = yes smtpd_tls_key_file = /etc/postfix/newkey.pem smtpd_tls_cert_file = /etc/postfix/newcert.pem smtpd_tls_CAfile = /etc/postfix/cacert.pem smtpd_tls_loglevel = 3 smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s tls_random_source = dev:/dev/urandom Then, simply uncomment this line in /etc/postfix/master.cf:\nsmtps inet n - n - - smtpd Make sure to keep tabs between the elements in the master.cf file.\n","date":"1 July 2007","permalink":"/p/enable-ssl-support-in-postfix/","section":"Posts","summary":"If you have postfix installed with OpenSSL support compiled in, you can enable SSL connections by editing two configuration files.","title":"Enable SSL support in Postfix"},{"content":"In some situations, the system time zone will be different than the one in MySQL, even though MySQL is set to use the system time zone. This normally means that a user has changed the system time zone, but they haven\u0026rsquo;t started MySQL to cause it to change as well.\n$ date Sun Jul 1 11:32:56 CDT 2007 mysql\u003e show variables like '%time_zone%'; +------------------+--------+ | Variable_name | Value | +------------------+--------+ | system_time_zone | PDT | | time_zone | SYSTEM | +------------------+--------+ 2 rows in set (0.00 sec) If you find yourself in this situation, just restart MySQL and the situation should be fixed:\nmysql\u003e show variables like '%time_zone%'; +------------------+--------+ | Variable_name | Value | +------------------+--------+ | system_time_zone | CDT | | time_zone | SYSTEM | +------------------+--------+ 2 rows in set (0.00 sec)","date":"1 July 2007","permalink":"/p/mysql-time-zone-different-from-system-time-zone/","section":"Posts","summary":"In some situations, the system time zone will be different than the one in MySQL, even though MySQL is set to use the system time zone.","title":"MySQL time zone different from system time zone"},{"content":"If this situation pops up in Plesk, it means that a user has changed their MySQL password outside of Plesk. The password in Plesk\u0026rsquo;s own database does not match, so the auto-creation of the phpMyAdmin settings fails. You\u0026rsquo;ll end up seeing this after clicking \u0026ldquo;DB WebAdmin\u0026rdquo;:\nMySQL said: Non-static method PMA_Config::isHttps() should not be called statically\nThe funny thing is that MySQL doesn\u0026rsquo;t actually say this. It\u0026rsquo;s a PHP error. 
To correct the problem, you can manually change the password within Plesk\u0026rsquo;s database, or you can follow an easier method:\nClick Databases\nClick Database Users\nClick the user whose password was changed\nIn the password fields, enter the new password that they\u0026rsquo;re using with MySQL\nThis will force Plesk to change its password in its own database, and it will run the query to change the password in MySQL (but since it\u0026rsquo;s the same password, no change will be made).\n","date":"1 July 2007","permalink":"/p/plesk-and-phpmyadmin-non-static-method-pma_configishttps-should-not-be-called-statically/","section":"Posts","summary":"If this situation pops up in Plesk, it means that a user has changed their MySQL password outside of Plesk.","title":"Plesk and PHPMyAdmin: Non-static method PMA_Config::isHttps() should not be called statically"},{"content":"If you want to remove all of the open_basedir restrictions for all sites in Plesk, simply create a file called /etc/httpd/conf.d/zzz_openbasedir_removal.conf and add this text within it:\n\u0026lt;DirectoryMatch /var/www/vhosts/(.*)/httpdocs/\u0026gt; php_admin_value open_basedir none \u0026lt;/DirectoryMatch\u0026gt; Just a note, this isn\u0026rsquo;t a terribly great idea from a security standpoint. :-)\n","date":"30 June 2007","permalink":"/p/remove-all-open_basedir-restrictions-in-plesk/","section":"Posts","summary":"If you want to remove all of the open_basedir restrictions for all sites in Plesk, simply create a file called /etc/httpd/conf.","title":"Remove all open_basedir restrictions in Plesk"},{"content":"If you want to get a really basic, wide-open for localhost setup for SNMP, just toss the following into /etc/snmp/snmpd.conf:\ncom2sec local 127.0.0.1/32 public group MyROGroup v1 local group MyROGroup v2c local group MyROGroup usm local view all included .1 80 access MyROGroup \u0026#34;\u0026#34; any noauth exact all none none syslocation MyLocation syscontact Me \u0026lt;me@somewhere.org\u0026gt; ","date":"27 June 2007","permalink":"/p/basic-snmp-configuration/","section":"Posts","summary":"If you want to get a really basic, wide-open for localhost setup for SNMP, just toss the following into /etc/snmp/snmpd.","title":"Basic SNMP Configuration"},{"content":"If you find that /dev/null is no longer a character device, and it causes issues during init on Red Hat boxes, you will need to follow these steps to return things to normal:\nReboot the server When grub appears, edit your kernel line to include init=/bin/bash at the end Allow the server to boot into the emergency shell Run the following three commands # rm -rf /dev/null # mknod /dev/null c 1 3 # chmod 666 /dev/null You should be back to normal. Make sure that the root users on your server don\u0026rsquo;t use cp or mv with /dev/null as this will cause some pretty ugly issues.\n","date":"19 June 2007","permalink":"/p/corrupt-devnull/","section":"Posts","summary":"If you find that /dev/null is no longer a character device, and it causes issues during init on Red Hat boxes, you will need to follow these steps to return things to normal:","title":"Corrupt /dev/null"},{"content":"If you find yourself with the ever-so-peculiar 500 OOPS error from vsftpd when you attempt to log in over FTP, there could be a few different things at play. 
Generally, this is the type of error you will get:\n500 OOPS: cannot change directory:/home/someuser 500 OOPS: child died You can search for a solution in this order:\nHome Directory\nDoes the user\u0026rsquo;s home directory even exist? Check /etc/passwd for the current home directory for the user and see what\u0026rsquo;s set:\n# grep someuser /etc/passwd someuser:x:10001:2524::/var/www/someuser:/bin/bash In this case, does /var/www/someuser exist? If it doesn\u0026rsquo;t, fix that and then move onto the next solution if you\u0026rsquo;re still having problems.\nFile/Directory Permissions\nBe sure that the user that you are logging in as actually has permissions to be in the directory. This affects users that have home directories of /var/www/html because the execute bit normally isn\u0026rsquo;t set for the world on /var/www or /var/www/html. Make sure that the appropriate permissions and ownerships are set, and this should help eliminate the issue.\nSELINUX\nIf SELINUX is rearing its ugly head on the server, this can be a problem. Check your current SELINUX status and disable it if necessary:\n# getenforce Enforcing # setenforce 0 Try to login over FTP again and you should have a success. If you want to turn off SELINUX entirely, adjust /etc/sysconfig/selinux (RHEL4) or /etc/selinux/config (RHEL5).\n","date":"14 June 2007","permalink":"/p/500-oops-error-from-vsftpd/","section":"Posts","summary":"If you find yourself with the ever-so-peculiar 500 OOPS error from vsftpd when you attempt to log in over FTP, there could be a few different things at play.","title":"500 OOPS error from vsftpd"},{"content":"If you want to adjust how long e-mails will spend in the qmail queue before they\u0026rsquo;re bounced, simply set the queuelifetime:\n# echo \u0026#34;432000\u0026#34; \u0026gt; /var/qmail/control/queuelifetime # /etc/init.d/qmail restart The above example is for 5 days (qmail needs the time length in seconds). Just take the days and multiply by 86,400 seconds to get your result.\n","date":"14 June 2007","permalink":"/p/adjusting-qmail-queue-time-lifetime/","section":"Posts","summary":"If you want to adjust how long e-mails will spend in the qmail queue before they\u0026rsquo;re bounced, simply set the queuelifetime:","title":"Adjusting qmail queue time / lifetime"},{"content":"By default, sendmail will keep items in the queue for up to 5 days. If you want to make this something shorter, like 3 days, you can adjust the following in /etc/mail/sendmail.mc:\ndefine(`confTO_QUEUERETURN\u0026#39;, `3d\u0026#39;)dnl If you want to get super fancy, you can adjust the queue lifetime for messages with certain priorities:\ndefine(`confTO_QUEUERETURN_NORMAL\u0026#39;, `3d\u0026#39;)dnl define(`confTO_QUEUERETURN_URGENT\u0026#39;, `5d\u0026#39;)dnl define(`confTO_QUEUERETURN_NONURGENT\u0026#39;, `1d\u0026#39;)dnl ","date":"14 June 2007","permalink":"/p/adjusting-sendmail-queue-time-lifetime/","section":"Posts","summary":"By default, sendmail will keep items in the queue for up to 5 days.","title":"Adjusting sendmail queue time / lifetime"},{"content":"If you find that memory limits differ between root and other users when PHP scripts are run from the command line, there may be an issue with your php.ini or your script. To verify that it isn\u0026rsquo;t your script, try this:\n$ echo \u0026#34;\u0026lt;? 
var_dump(ini_get(\u0026#39;memory_limit\u0026#39;)); ?\u0026gt;\u0026#34; \u0026gt;\u0026gt; memtest.php $ php -f memtest.php string(3) \u0026#34;8M\u0026#34; $ su - # php -f memtest.php string(3) \u0026#34;64M\u0026#34; If you get the same two values from both users, there\u0026rsquo;s probably a problem with your script. Make sure that there\u0026rsquo;s no ini_set() functions in your script that are overriding the php.ini file.\nHowever, if you get results like the ones above, check the permissions on /etc/php.ini:\n# ls -al /etc/php.ini -rw------- 1 root root 27 Jun 6 18:39 /etc/php.ini As you can see, php.ini is only readable to root, which prevents PHP\u0026rsquo;s command line parser from accessing your custom memory_limit directive in the php.ini. PHP\u0026rsquo;s general default is 8M for a memory limit if nothing is specified anywhere else, and that\u0026rsquo;s why normal users cannot get the higher memory limit that\u0026rsquo;s set in your php.ini file.\nSimply set the permissions on the file to 644 and you should be set to go:\n# chmod 644 /etc/php.ini # ls -al /etc/php.ini -rw-r--r-- 1 root root 45022 Jun 6 23:00 /etc/php.ini ","date":"14 June 2007","permalink":"/p/php-cli-memory-limit-is-different-between-users-and-root/","section":"Posts","summary":"If you find that memory limits differ between root and other users when PHP scripts are run from the command line, there may be an issue with your php.","title":"PHP CLI memory limit is different between users and root"},{"content":"Should you find yourself needing to send e-mail destined for a certain account to a blackhole or to /dev/null, you\u0026rsquo;ll find very little information from Google. The actual solution is not terribly intuitive, and not well documented:\nClick Domains Click the domain you want to modify Click Mail If the account hasn\u0026rsquo;t been created, click \u0026ldquo;Add New Mail Name\u0026rdquo; and create the account as usual. Then simply uncheck the mailbox option near the bottom. This will create a mail account, but any inbound e-mail for the user is thrown out.\nIf the e-mail account has already been created, but you want to blackhole any future e-mails, just click the Mailbox icon and uncheck the Mailbox checkbox on the next page. Click OK and any future e-mails are thrown out.\n","date":"14 June 2007","permalink":"/p/send-plesk-e-mail-to-devnull-or-blackhole/","section":"Posts","summary":"Should you find yourself needing to send e-mail destined for a certain account to a blackhole or to /dev/null, you\u0026rsquo;ll find very little information from Google.","title":"Send Plesk e-mail to /dev/null or blackhole"},{"content":"If you find that someone has done a recursive chmod or chown on a server, don\u0026rsquo;t fret. 
You can set almost everything back to its original permissions and ownership by doing the following:\nrpm -qa | xargs rpm --setperms --setugids Depending on how many packages are installed as well as the speed of your disk I/O, this may take a while to complete.\n","date":"14 June 2007","permalink":"/p/rebuild-rpm-file-permissions-and-ownerships/","section":"Posts","summary":"If you find that someone has done a recursive chmod or chown on a server, don\u0026rsquo;t fret.","title":"Rebuild RPM file permissions and ownerships"},{"content":"If something horrible happened to your Urchin license key or you need to replace it with something else, just run this command to change the key:\ncd /usr/local/urchin/util ./uconf-driver action=set_parameter recnum=1 ct_serial=[NEW SERIAL] uconf-driver action=set_parameter recnum=1 ct_license=0 For some reason, this blows up on some Urchin versions. If it doesn\u0026rsquo;t work, then the command will actually remove your license entirely. Don\u0026rsquo;t worry! You can log into Urchin\u0026rsquo;s web interface and put in the new key without a problem.\n","date":"7 June 2007","permalink":"/p/replace-urchin-license-key-serial-number/","section":"Posts","summary":"If something horrible happened to your Urchin license key or you need to replace it with something else, just run this command to change the key:","title":"Replace Urchin license key / serial number"},{"content":"If you\u0026rsquo;re used to SHOW PROCESSLIST; or mysqladmin processlist in MySQL, you might be searching for this same functionality in postgresql. Here\u0026rsquo;s the quick way to get a process list in postgresql:\nSwitch to the postgres user:\n# su - postgres\nGet into the postgres shell:\n# psql\nThen run a quick query:\nselect * from pg_stat_activity;\nNOTE: To actually see the queries being run, you will need logging enabled (it\u0026rsquo;s disabled by default). I don\u0026rsquo;t know how to turn it on yet, so this post will be left open until I find out!\n","date":"7 June 2007","permalink":"/p/postgres-process-listing/","section":"Posts","summary":"If you\u0026rsquo;re used to SHOW PROCESSLIST; or mysqladmin processlist in MySQL, you might be searching for this same functionality in postgresql.","title":"Postgres process listing"},{"content":"One of the nifty things about FreeBSD\u0026rsquo;s kernel is that it will limit closed port RST responses, which, in layman\u0026rsquo;s terms, just means that if someone repeatedly hits a port that\u0026rsquo;s closed, the kernel won\u0026rsquo;t respond to all of the requests.\nYou generally get something like this in the system log:\nkernel: Limiting closed port RST response from 211 to 200 packets/sec kernel: Limiting closed port RST response from203 to 200 packets/sec In certain situations, this functionality might be undesirable. For example, if you\u0026rsquo;re running an IDS like snort or a vulnerability scanner like nessus, these responses might be helpful. 
If you want to disable this functionality, just add the following to /etc/sysctl.conf:\nnet.inet.tcp.blackhole=2 net.inet.udp.blackhole=1 ","date":"7 June 2007","permalink":"/p/freebsd-limiting-closed-port-rst-response/","section":"Posts","summary":"One of the nifty things about FreeBSD\u0026rsquo;s kernel is that it will limit closed port RST responses, which, in layman\u0026rsquo;s terms, just means that if someone repeatedly hits a port that\u0026rsquo;s closed, the kernel won\u0026rsquo;t respond to all of the requests.","title":"FreeBSD: Limiting closed port RST response"},{"content":"I found myself pretty darned frustrated when my arrow keys didn\u0026rsquo;t work in iTerm in vi/vim or other ncurses-based applications. However, give this a shot in an iTerm if you find yourself in the same predicament:\nexport TERM=linux\nThen open something in vi/vim or run an ncurses application. It should let your arrow keys work normally now. To make the setting stick, just do this:\necho \u0026quot;TERM=linux\u0026quot; \u0026gt;\u0026gt; ~/.profile\n","date":"1 June 2007","permalink":"/p/arrow-keys-in-iterm-not-working-in-vivim/","section":"Posts","summary":"I found myself pretty darned frustrated when my arrow keys didn\u0026rsquo;t work in iTerm in vi/vim or other ncurses-based applications.","title":"Arrow keys in iTerm not working in vi/vim"},{"content":"If you receive the following error, your PIX does not have a key set up for use with SSH:\nType help or \u0026#39;?\u0026#39; for a list of available commands. pix\u0026gt; Cannot select private key Regenerating the key can be done by executing the following:\nconf t ca zeroize rsa ca generate rsa key 1024 ca save all write mem reload ","date":"28 May 2007","permalink":"/p/cisco-pix-cannot-select-private-key/","section":"Posts","summary":"If you receive the following error, your PIX does not have a key set up for use with SSH:","title":"Cisco PIX: Cannot select private key"},{"content":"Installing snort from ports on FreeBSD is pretty straightforward, but there are some \u0026lsquo;gotchas\u0026rsquo; that you need to be aware of. Here\u0026rsquo;s a step by step:\nCompile snort form the ports tree:\n# portinstall snort -- OR -- # make -C /usr/ports/security/snort install all You will be asked about which support you want to add to snort, so be sure to choose MySQL (unless you\u0026rsquo;re not going to use MySQL). When the build is complete, you\u0026rsquo;ll need oinkmaster as well to update your snort rules:\n# portinstall oinkmaster -- OR -- # make -C /usr/ports/security/oinkmaster install all Oinkmaster needs a snort download code/hash to be able to get your rules for you. Go to http://snort.org and register for an account. You\u0026rsquo;ll be given a hash (looks SHA-1-ish) at the bottom of your main account page. Copy /usr/local/etc/oinkmaster.conf.sample to /usr/local/etc/oinkmaster.conf:\n# cp /usr/local/etc/oinkmaster.conf.sample /usr/local/etc/oinkmaster.conf Replace with the hash you received from snort.org in /usr/local/etc/oinkmaster.conf and uncomment the line:\n# Example for Snort-current (\u0026#34;current\u0026#34; means cvs snapshots). url = http://www.snort.org/pub-bin/oinkmaster.cgi/\u0026lt;oinkcode\u0026gt;/snortrules-snapshot-CURRENT.tar.gz Now that oinkmaster is set up, you can update your snort rules using this command:\n# oinkmaster -o /usr/local/etc/snort/rules/ Loading /usr/local/etc/oinkmaster.conf Downloading file from http://www.snort.org/pub-bin/oinkmaster.cgi/*oinkcode*/snortrules-snapshot-CURRENT.tar.gz... 
done. Archive successfully downloaded, unpacking... done. Setting up rules structures... done. Processing downloaded rules... disabled 0, enabled 0, modified 0, total=9942 Setting up rules structures... done. Comparing new files to the old ones... done. Updating local rules files... done. Create the snort database and user:\n# mysql -u root -ppassword mysql\u0026gt; CREATE DATABASE `snort`; mysql\u0026gt; GRANT ALL PRIVILEGES ON snort.* TO \u0026#39;snort\u0026#39;@\u0026#39;localhost\u0026#39; IDENTIFIED BY \u0026#39;snortpassword\u0026#39;;` There\u0026rsquo;s a script that is pre-packaged with snort to set up the tables for you:\n# mysql -u snort -psnortpassword snort \u0026lt; /usr/local/share/examples/snort/create_mysql Now it\u0026rsquo;s time to make changes in the snort.conf:\n# nano -w /usr/local/etc/snort/snort.conf Uncomment and configure these lines:\n# config detection: search-method lowmem # output alert_syslog: LOG_AUTH LOG_ALERT # output database: log, mysql, user=root password=test dbname=db host=localhost Uncomment all of the include $RULE_PATH/*.rules lines except for this one:\n# include $RULE_PATH/local.rules [comment this line out] Now, enable snort in the /etc/rc.conf and start it up:\n# echo \u0026#34;snort_enable=\\\u0026#34;YES\\\u0026#34;\u0026#34; \u0026gt;\u0026gt; /etc/rc.conf # /usr/local/etc/rc.d/snort start Starting snort. If you run tail /var/log/messages, you should get some output like this:\nsnort[12558]: Initializing daemon mode kernel: fxp0: promiscuous mode enabled snort[12559]: PID path stat checked out ok, PID path set to /var/run/ snort[12559]: Writing PID \u0026#34;12559\u0026#34; to file \u0026#34;/var/run//snort_fxp0.pid\u0026#34; snort[12559]: Daemon initialized, signaled parent pid: 12558 snort[12558]: Daemon parent exiting snort[12559]: Snort initialization completed successfully (pid=12559) If you see an error like this, don\u0026rsquo;t worry, nothing\u0026rsquo;s wrong:\nsnort[12559]: Not Using PCAP_FRAMES To test snort, run a ping against your server from an outside source, and you should see something in your syslog like this:\nsnort[12559]: [1:368:6] ICMP PING BSDtype [Classification: Misc activity] [Priority: 3]: {ICMP} xxx.xxx.xxx.xxx -\u0026gt; xxx.xxx.xxx.xxx snort[12559]: [1:366:7] ICMP PING *NIX [Classification: Misc activity] [Priority: 3]: {ICMP} xxx.xxx.xxx.xxx -\u0026gt; xxx.xxx.xxx.xxx snort[12559]: [1:384:5] ICMP PING [Classification: Misc activity] [Priority: 3]: {ICMP} xxx.xxx.xxx.xxx -\u0026gt; xxx.xxx.xxx.xxx Installing BASE is pretty simple. You\u0026rsquo;ll need the adodb port plus the BASE tarball from SourceForge:\n# portinstall adodb -- OR -- # make -C /usr/ports/databases/adodb install clean After you expand the tarball, go to your BASE install\u0026rsquo;s URL in a browser. It will ask for the path to adodb, which is /usr/local/share/adodb. Provide the snort database information on the third screen and then just finish out the wizard. You will then be all set!\n","date":"27 May 2007","permalink":"/p/install-snort-and-base-on-freebsd/","section":"Posts","summary":"Installing snort from ports on FreeBSD is pretty straightforward, but there are some \u0026lsquo;gotchas\u0026rsquo; that you need to be aware of.","title":"Install snort and BASE on FreeBSD"},{"content":"Installing mysql on FreeBSD from ports is one of the oddest installations I\u0026rsquo;ve ever completed. 
Here\u0026rsquo;s the step by step:\nGet it compiled:\n# portinstall mysql50-server -- OR -- # make -C /usr/ports/databases/mysql50-server install clean Once it\u0026rsquo;s installed, copy my-small.cnf, my-medium.cnf or my-huge.cnf to /usr/local/etc/my.cnf:\n# cp /usr/local/share/mysql/my-small.cnf /usr/local/etc/my.cnf\nEnable mysql in the rc.conf:\n# echo \u0026quot;mysql_enable=\\\u0026quot;YES\\\u0026quot;\u0026quot; \u0026gt;\u0026gt; /etc/rc.conf\nInstall the authentication tables:\n# mysql_install_db\nLast, change the ownership on MySQL\u0026rsquo;s data directory:\n# chown -R mysql:mysql /var/db/mysql\nIf you miss the last step, you\u0026rsquo;ll get something ugly like this:\nmysqld started [ERROR] /usr/local/libexec/mysqld: Can\u0026#39;t find file: \u0026#39;./mysql/host.frm\u0026#39; (errno: 13) [ERROR] /usr/local/libexec/mysqld: Can\u0026#39;t find file: \u0026#39;./mysql/host.frm\u0026#39; (errno: 13) [ERROR] Fatal error: Can\u0026#39;t open and lock privilege tables: Can\u0026#39;t find file: \u0026#39;./mysql/host.frm\u0026#39; (errno: 13) mysqld ended ","date":"27 May 2007","permalink":"/p/install-mysql-server-from-ports-on-freebsd/","section":"Posts","summary":"Installing mysql on FreeBSD from ports is one of the oddest installations I\u0026rsquo;ve ever completed.","title":"Install mysql-server from ports on FreeBSD"},{"content":"If Redhat, CentOS, Fedora, or any other similar OS provides the following error:\n# ifup eth1 Device eth1 has different MAC address than expected, ignoring. Check that someone didn\u0026rsquo;t put an IP in as a hardware address:\nDEVICE=eth1 HWADDR=10.240.11.100 NETMASK=255.255.224.0 ONBOOT=yes TYPE=Ethernet If they did, then fix it with the correct configuration directive:\nDEVICE=eth1 IPADDR=10.240.11.100 NETMASK=255.255.224.0 ONBOOT=yes TYPE=Ethernet ","date":"27 May 2007","permalink":"/p/errors-with-ifup-regarding-mac-addresses/","section":"Posts","summary":"If Redhat, CentOS, Fedora, or any other similar OS provides the following error:","title":"Errors with ifup regarding MAC addresses"},{"content":"Normally, Postfix will reject e-mail sent to non-existent users if a catchall isn\u0026rsquo;t present for the specific domain that is receiving mail. 
However, you can make a super catchall to catch any and all e-mail that Postfix receives for the domains in its mydestination list:\nAdd the following to /etc/postfix/main.cf:\nluser_relay = root local_recipient_maps = Then reload the Postfix configuration:\n# postfix reload For more information: http://www.postfix.org/rewrite.html#luser_relay\n","date":"27 May 2007","permalink":"/p/forward-e-mail-sent-to-non-existent-users-in-postfix/","section":"Posts","summary":"Normally, Postfix will reject e-mail sent to non-existent users if a catchall isn\u0026rsquo;t present for the specific domain that is receiving mail.","title":"Forward e-mail sent to non-existent users in Postfix"},{"content":"","date":null,"permalink":"/tags/bdb/","section":"Tags","summary":"","title":"Bdb"},{"content":"If up2date throws some horrible Python errors and rpm says “rpmdb: Lock table is out of available locker entries”, you can restore your system to normality with the following:\nThe errors:\nrpmdb: Lock table is out of available locker entries error: db4 error(22) from db-\u0026gt;close: Invalid argument error: cannot open Packages index using db3 - Cannot allocate memory (12) error: cannot open Packages database in /var/lib/rpm Make a backup of /var/lib/rpm in case you break something:\ntar cvzf rpmdb-backup.tar.gz /var/lib/rpm Remove the Berkeley databases that rpm uses:\nrm /var/lib/rpm/__db.00* Make rpm rebuild the databases from scratch (may take a short while):\nrpm --rebuilddb Now, check rpm to make sure everything is okay:\nrpm -qa | sort Why does this happen?\nWhen rpm accesses the Berkeley database files, it makes temporary locker entries within the tables while it searches for data. If you control-c your rpm processes often, this issue will occur much sooner because the locks are never cleared.\n","date":"27 May 2007","permalink":"/p/rpmdb-lock-table-is-out-of-available-locker-entries/","section":"Posts","summary":"If up2date throws some horrible Python errors and rpm says “rpmdb: Lock table is out of available locker entries”, you can restore your system to normality with the following:","title":"rpmdb: Lock table is out of available locker entries"},{"content":"","date":null,"permalink":"/tags/up2date/","section":"Tags","summary":"","title":"Up2date"},{"content":"If you have an open_basedir restriction that is causing issues with a domain, you can remove the restriction easily. First, put the following text in /home/httpd/vhosts/[domain]/conf/vhost.conf:\n\u0026lt;Directory /home/httpd/vhosts/[domain]/httpdocs\u0026gt; php_admin_value open_basedir none \u0026lt;/Directory\u0026gt; If there was already a vhost.conf in the directory, then just reload Apache. Otherwise, run the magic wand:\n/usr/local/psa/admin/bin/websrvmng -av Then reload Apache:\n/etc/init.d/httpd reload ","date":"23 May 2007","permalink":"/p/remove-php-open_basedir-restriction-in-plesk/","section":"Posts","summary":"If you have an open_basedir restriction that is causing issues with a domain, you can remove the restriction easily.","title":"Remove PHP’s open_basedir restriction in Plesk"},{"content":"When Plesk is installed, the default certificate for the Plesk interface itself is a self-signed certificate that is generated during the installation. This can be easily changed within the Server options page.\nIf your SSL certificate is installed at the domain level:\nClick Domains \u0026gt; domain.com \u0026gt; Certificates \u0026gt; certificate name. 
Copy the CSR, key and CA certificates to a text application temporarily, and then click Server \u0026gt; Certificates. Once you\u0026rsquo;re there, click Add Certificate and paste in the CSR, key and CA certificate. You will need to select a new name for the certificate that is different from the one you use at the domain level. Once you\u0026rsquo;re done inserting that information, click OK and follow the instructions below.\nIf your SSL certificate is installed at the server level\nClick Server \u0026gt; Certificates. Click the checkbox next to the certificate which needs to be installed as the default, then click Setup just above the certificate listing. Plesk will install the certificate and reload itself (which generally takes 5-15 seconds). Depending on your browser, you may need to log out of Plesk and log back in to see the new certificate.\nWhen everything is complete, verify that the correct certificate is used when you access the Plesk interface, and also be sure that the intermediate certificates are installed correctly as well.\n","date":"22 May 2007","permalink":"/p/changing-the-default-ssl-certificate-in-plesk/","section":"Posts","summary":"When Plesk is installed, the default certificate for the Plesk interface itself is a self-signed certificate that is generated during the installation.","title":"Changing the default SSL certificate in Plesk"},{"content":"To enable submission access on port 587 in sendmail, add the following to the sendmail.mc:\nDAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl\nRebuild the sendmail.cf file and restart sendmail.\n","date":"21 May 2007","permalink":"/p/enable-submission-port-587-in-sendmail/","section":"Posts","summary":"To enable submission access on port 587 in sendmail, add the following to the sendmail.","title":"Enable submission port 587 in Sendmail"},{"content":"On some operating systems, postgresql is not configured to listen on the network. To enable the TCP/IP connections, edit the /var/lib/pgsql/data/postgresql.conf and change the following:\ntcpip_socket = true port = 5432 Restart postgresql and you should be all set:\n/etc/init.d/postgresql restart ","date":"21 May 2007","permalink":"/p/postgresql-not-listening-on-network/","section":"Posts","summary":"On some operating systems, postgresql is not configured to listen on the network.","title":"Postgresql not listening on network"},{"content":"If there\u0026rsquo;s one question I get a lot, it would be \u0026ldquo;Hey, how can I speed up MySQL?\u0026rdquo; There\u0026rsquo;s absolutely no end-all, be-all answer to this question. Instead a combination of many factors contribute to the overall performance of any SQL server. However, here\u0026rsquo;s a list of my recommendations for great MySQL performance. They\u0026rsquo;re arranged from the biggest gains to smallest gains:\nQuery Optimization\nI know - you were hoping I\u0026rsquo;d talk about hardware to start this thing off, but optimizing queries is the #1 way to get a MySQL server in gear. MySQL gives you great tools, like the slow query log, multiple status variables, and the EXPLAIN statement. Put these three things together and your queries will be on their way to a more optimized state. I\u0026rsquo;ll go into great detail about query optimization in a later post.\nMemory / System Architecture\nWe all know MySQL likes RAM, and the more you give it (to a point) the better the performance will be. 
If you consider the alternative to memory, which is swapping on disk, it\u0026rsquo;s obvious to see the gains.\nSo why did I add system architecture to this section? Well, if you have 32-bit Redhat, you can only allocate 2GB per process with the standard kernel. If you jump up to the SMP or hugemem kernel (in ES 2.1, you need the hugemem kernel for this to work), you can allocate 3GB per process. There is a caveat - MySQL can only use 2GB per buffer in 32-bit land. In a 64-bit OS with an appropriate Redhat kernel, you can allocate much larger buffers, and this can be tremendously helpful to tables which use the InnoDB engine. The memory allocation abilities are a great benefit, but also keep in mind that you will also get a boost in math performance within MySQL due to the 64-bit architecture. It\u0026rsquo;s a win-win!\nDisk Performance\nRunning a critical database on IDE or SATA drives just doesn\u0026rsquo;t cut it any more. A SCSI or SAS drive is required for the best performance. Although you hope that MySQL doesn\u0026rsquo;t touch the disk much, it\u0026rsquo;s important to remember that you need to make backups often, and you may need to restore data. Also, if your site is write-intensive, the disk performance is much more important than you think. It will reduce the time that tables are locked, and it will also reduce the time for backups and restores.\nCPU\nAlthough CPU comes last, don\u0026rsquo;t forget how important it can be. If you run a high number of complex queries and perform a lot of mathematical operations, you\u0026rsquo;re going to need a CPU that can handle this load. Dual CPU\u0026rsquo;s or dual core CPU\u0026rsquo;s will help out even more, since MySQL can use multiple CPU cores to perform simultaneous operations. Keep in mind that 64-bit will outperform 32-bit in MySQL, and also allow for greater memory allocations (look in the Memory section above).\nFinal Note:\nKeep in mind that these are general suggestions, and these suggestions may not apply to all users. For example, on sites that are heavily read-intensive, you may find that CPU speed is more important than disk speed. Also, if you\u0026rsquo;re not using all of the available memory on your server, but your performance is still sagging, adding more memory won\u0026rsquo;t help. Consult with a DBA and find out where your server\u0026rsquo;s slowdowns are, then make a change with your queries or with your hardware. Remember, throwing more hardware at the problem will not always solve it.\n","date":"21 May 2007","permalink":"/p/speeding-up-mysql/","section":"Posts","summary":"If there\u0026rsquo;s one question I get a lot, it would be \u0026ldquo;Hey, how can I speed up MySQL?","title":"Speeding up MySQL"},{"content":"On some servers, you may notice that MySQL is consuming CPU and memory resources when it\u0026rsquo;s not processing any queries. During these times, running a mysqladmin processlist will show many processes in the \u0026lsquo;sleep\u0026rsquo; state for many minutes.\nThese issues occur because of code that uses a persistent connection to the database. In PHP, this is done with mysql_pconnect. This causes PHP to connect to the database, execute queries, remove the authentication for the connection, and then leave the connection open. Any per-thread buffers will be kept in memory until the thread dies (which is 28,800 seconds in MySQL by default). There\u0026rsquo;s three ways to handle this type of issue:\nFix the code\nThis is the #1 most effective way to correct the problem. 
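If you are not sure where the persistent connections are coming from, a quick grep across the document roots usually turns them up (the path here is just the default Plesk vhost layout used as an example - adjust it for your server):
# grep -rl mysql_pconnect /home/httpd/vhosts/*/httpdocs
Each file that turns up only needs its mysql_pconnect calls swapped for plain mysql_connect.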
Persistent connections are rarely needed. The only time when they would be even mildly useful is if your MySQL server has a huge latency. For example, if your web server takes \u0026gt; 250ms to make contact with your MySQL server, this setting might save you fractions of a second. Then again, if your web server and MySQL server are so far apart to where latency is even a consideration, you have more problems than I can help you with.\nRestrict the connections\nIf push comes to shove, and you have users on a server who are abusing their MySQL privileges with mysql_pconnect, then you can pull the plug on their shenanigans with GRANT. You can reduce the maximum simultaneous connections for their database user, and they\u0026rsquo;ll find themselves wanting to make code changes pretty quickly. MySQL doesn\u0026rsquo;t queue extra connections for users who have passed their maximum, so they get a really nice error stating that they have exceeded their max connections. To set up this grant, just do something like the following:\nGRANT ALL PRIVILEGES ON database.* TO 'someuser'@'localhost' WITH MAX_USER_CONNECTIONS = 20;\nReduce the timeouts\nIf changing the code isn\u0026rsquo;t an option, and you don\u0026rsquo;t feel mean enough to restrict your users (however, if they were causing a denial of service on my MySQL server, I\u0026rsquo;d have no problem restricting them), you can reduce the wait_timeout and interactive_timeout variables. The wait_timeout affects non-interactive connections (like TCP/IP and Unix socket) and interactive_timeout affects interactive connections (if you don\u0026rsquo;t know what these are, you\u0026rsquo;re not alone). The defaults of these are fairly high (usually 480 minutes) and you can drop them to something more reasonable, like 30-60 seconds. Web visitors shouldn\u0026rsquo;t notice the difference - it will just cause the next page load to start a new connection to the database server.\n","date":"21 May 2007","permalink":"/p/mysql-connections-in-sleep-state/","section":"Posts","summary":"On some servers, you may notice that MySQL is consuming CPU and memory resources when it\u0026rsquo;s not processing any queries.","title":"MySQL connections in sleep state"},{"content":"Thanks to a highly awesome technician on my team, we\u0026rsquo;ve discovered the perfect permissions setup for Joomla and Plesk:\nChange the umask in \u0026lsquo;/etc/proftpd.conf\u0026rsquo; to \u0026lsquo;002\u0026rsquo; and add the \u0026lsquo;apache\u0026rsquo; user to the \u0026lsquo;psacln\u0026rsquo; group. Then, update the directory permissions:\ncd /home/httpd/vhosts/[domain.com] chown -R [username]:psacln httpdocs chmod -R g+w httpdocs find httpdocs -type d -exec chmod g+s {} \\; Joomla also complains about some PHP settings, sometimes including not being able to write to \u0026lsquo;/var/lib/php/session\u0026rsquo;. 
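Before touching the configuration, it is worth confirming that the session directory really is the problem by checking its ownership and permissions (path taken straight from the error message):
# ls -ld /var/lib/php/session
If the apache user cannot write there, the vhost.conf adjustment below works around it by moving sessions to /tmp.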
To fix the issues, make some adjustments to the vhost.conf for the domain:\n\u0026lt;Directory /home/httpd/vhosts/[domain]/httpdocs\u0026gt; php_admin_flag magic_quotes_gpc on php_admin_flag display_errors on php_admin_value session.save_path /tmp \u0026lt;/Directory\u0026gt; If the vhost.conf is brand new, then run:\n/usr/local/psa/admin/bin/websrvmng -av Make sure Apache runs with your new configuration:\n# httpd -t (check your work) # /etc/init.d/httpd reload Credit for this goes to Bryan T.\n","date":"21 May 2007","permalink":"/p/joomla-and-plesk-permissions/","section":"Posts","summary":"Thanks to a highly awesome technician on my team, we\u0026rsquo;ve discovered the perfect permissions setup for Joomla and Plesk:","title":"Joomla and Plesk permissions"},{"content":"If you need to strip query strings from a URL with mod_rewrite, you can use a rewrite syntax such as the following:\nRewriteEngine on RewriteCond %{QUERY_STRING} \u0026#34;action=register\u0026#34; [NC] RewriteRule ^/. http://www.domain.com/registerpage.html? [R,L] ","date":"18 May 2007","permalink":"/p/remove-query-strings-from-urls-with-mod_rewrite/","section":"Posts","summary":"If you need to strip query strings from a URL with mod_rewrite, you can use a rewrite syntax such as the following:","title":"Remove query strings from URL’s with mod_rewrite"},{"content":"If you\u0026rsquo;re checking through your mail logs, or you catch a bounced e-mail with \u0026ldquo;554 relay access denied\u0026rdquo; in the bounce, the issue can be related to a few different things:\nIf your server bounces with this message when people send e-mail to you:\nCheck to make sure that your mail server is configured to receive mail for your domain Postfix: /etc/postfix/mydomains (on some systems) Sendmail: /etc/mail/local-host-names Qmail: /var/qmail/control/rcpthosts Verify that your MX records are pointing to your server, and not someone else\u0026rsquo;s (very important during server migrations) If you recently made changes in Postfix, make sure to run postmap on your domains file and run postfix reload If you get this message when you try to send e-mail to other people through your server:\nEnable SMTP authentication in your e-mail client If SMTP authentication is on in your client, check your server\u0026rsquo;s authentication daemons to be sure they\u0026rsquo;re operating properly ","date":"18 May 2007","permalink":"/p/relay-access-denied/","section":"Posts","summary":"If you\u0026rsquo;re checking through your mail logs, or you catch a bounced e-mail with \u0026ldquo;554 relay access denied\u0026rdquo; in the bounce, the issue can be related to a few different things:","title":"Relay access denied"},{"content":"If you can\u0026rsquo;t see hidden files in proftpd (the files beginning with a dot, like .htaccess), you can enable the option in your client. 
However, you can force the files to be displayed in almost all clients with a server wide variable in your proftpd.conf:\nListOptions -a\nMake sure to restart proftpd afterwards and re-connect to the FTP server to see the changes.\n","date":"17 May 2007","permalink":"/p/show-hidden-dot-files-in-proftpd/","section":"Posts","summary":"If you can\u0026rsquo;t see hidden files in proftpd (the files beginning with a dot, like .","title":"Show hidden dot files in proftpd"},{"content":"To enable SSL/TLS support in proftpd, add the following to the proftpd.conf file:\n\u0026lt;IfModule mod_tls.c\u0026gt; TLSEngine on TLSLog /var/ftpd/tls.log TLSRequired off TLSRSACertificateFile /usr/share/ssl/certs/server.crt TLSRSACertificateKeyFile /usr/share/ssl/private/server.key TLSCACertificateFile /usr/share/ssl/certs/cacert.crt TLSVerifyClient off TLSRenegotiate required off \u0026lt;/IfModule\u0026gt; To require SSL/TLS on all connections, change TLSRequired to on. Of course, replace the certificate, key, and CA certificate (if applicable) to the correct files on your system.\nOnce you\u0026rsquo;re all done, close your FTP connection and make a new one. There is no need to restart xinetd.\n","date":"17 May 2007","permalink":"/p/add-ssltls-support-to-proftpd/","section":"Posts","summary":"To enable SSL/TLS support in proftpd, add the following to the proftpd.","title":"Add SSL/TLS support to proftpd"},{"content":"If you can\u0026rsquo;t send mail via port 25 due to blocks imposed by your ISP, you can enable the submission port within Plesk pretty easily. There\u0026rsquo;s two methods:\nThe iptables way:\niptables -t nat -A PREROUTING -p tcp --dport 587 -i eth0 -j REDIRECT --to-ports 25 The xinetd way (recommended):\n# cd /etc/xinetd.d # cp smtp_psa smtp_additional # vi smtp_additional Make the first line say \u0026ldquo;service submission\u0026rdquo; and save the file. Then restart xinetd:\n/etc/rc.d/init.d/xinetd restart This is no longer needed in Plesk 8.4. 
To enable the submission port in Plesk 8.4, log into the Plesk interface as the Administrator, click Server and click Mail.\n","date":"15 May 2007","permalink":"/p/plesk-submission-port-587-for-outbound-mail/","section":"Posts","summary":"If you can\u0026rsquo;t send mail via port 25 due to blocks imposed by your ISP, you can enable the submission port within Plesk pretty easily.","title":"Plesk submission port (587) for outbound mail"},{"content":"If you find that Horde (with Plesk) keeps refreshing when you attempt to log in, and there are no errors logged on the screen or in Apache\u0026rsquo;s logs, check the session.auto_start variable in /etc/php.ini.\nIf session.auto_start is set to 1, set it to 0 and Horde will miraculously start working again.\n","date":"7 May 2007","permalink":"/p/horde-refreshes-when-logging-in/","section":"Posts","summary":"If you find that Horde (with Plesk) keeps refreshing when you attempt to log in, and there are no errors logged on the screen or in Apache\u0026rsquo;s logs, check the session.","title":"Horde refreshes when logging in"},{"content":"When you need to find information about anything in Plesk, here\u0026rsquo;s some SQL statements that you can use:\nStart out with:\n# mysql -u admin -p`cat /etc/psa/.psa.shadow` mysql\u0026gt; use psa; Find all e-mail passwords:\nselect concat_ws(\u0026#39;@\u0026#39;,mail.mail_name,domains.name),accounts.password from domains,mail,accounts where domains.id=mail.dom_id and accounts.id=mail.account_id order by domains.name ASC,mail.mail_name ASC; Find e-mail passwords made out of only letters:\nselect concat_ws(\u0026#39;@\u0026#39;,mail.mail_name,domains.name),accounts.password from domains,mail,accounts where domains.id=mail.dom_id and accounts.id=mail.account_id and accounts.password rlike binary \u0026#39;^[a-z]+$\u0026#39;; Find e-mail passwords made out of only numbers:\nselect concat_ws(\u0026#39;@\u0026#39;,mail.mail_name,domains.name),accounts.password from domains,mail,accounts where domains.id=mail.dom_id and accounts.id=mail.account_id and accounts.password rlike \u0026#39;^[0-9]+$\u0026#39;; Find which domains aren\u0026rsquo;t bouncing/rejecting e-mails to unknown recipients:\nselect d.name as domain, p.value as catchall_address from Parameters p, DomainServices ds, domains d where d.id = ds.dom_id and ds.parameters_id = p.id and p.parameter = \u0026#39;catch_addr\u0026#39; order by d.name ","date":"27 April 2007","permalink":"/p/plesk-sql-statements/","section":"Posts","summary":"When you need to find information about anything in Plesk, here\u0026rsquo;s some SQL statements that you can use:","title":"Plesk SQL Statements"},{"content":"To add a chrooted FTP user outside of Plesk properly, you need to:\nCreate the user with the home directory as the root of what they can access Give the user a password Make their primary group psacln Add them to the psaserv group as well # useradd username -d /var/www/html/website/slideshow/ # echo \u0026#34;password\u0026#34; | passwd username --stdin Changing password for user username. passwd: all authentication tokens updated successfully. # usermod -g psacln username # usermod -G psaserv username # lftp username:password@localhost lftp username@localhost:/\u0026gt; cd .. 
lftp username@localhost:/\u0026gt; ","date":"27 April 2007","permalink":"/p/adding-chrooted-ftp-users-outside-of-plesk/","section":"Posts","summary":"To add a chrooted FTP user outside of Plesk properly, you need to:","title":"Adding chrooted FTP users outside of Plesk"},{"content":"To install PayFlowPro, you will need a few things:\nThe PHP source code for version of PHP installed (go here) The SDK from Verisign/PayPal (this comes from the portal, login required) The gcc and automake packages Take the Verisign SDK and copy the following:\nCopy pfpro.h to /usr/include Copy the .so file to /usr/lib Untar the PHP source code and cd into php-[version]/ext/pfpro. Run phpize and make sure it finishes successfully. Now run:\n./configure --prefix=/usr --enable-shared\nThen run make and make install. Now, go to the php.ini and add:\nextension=pfpro.so\nRun php -i | grep pfpro to make sure the module was successfully built. Restart Apache and you\u0026rsquo;re all set!\nThe pfpro module is now available via pecl in PHP 5.1+. Thanks to Chris R. for pointing that out.\n","date":"26 April 2007","permalink":"/p/install-payflowpro-for-php-on-rhel/","section":"Posts","summary":"To install PayFlowPro, you will need a few things:","title":"Install PayFlowPro for PHP on RHEL"},{"content":"If you find yourself in the situation where you need to bulk add SPF records to every domain in Plesk, you can use this huge one-liner:\nmysql -u admin -p`cat /etc/psa/.psa.shadow` psa -e \u0026#34;select dns_zone_id,displayHost from dns_recs GROUP BY dns_zone_id ORDER BY dns_zone_id ASC;\u0026#34; | awk \u0026#39;{print \u0026#34;INSERT INTO dns_recs (type,host,val,time_stamp,dns_zone_id,displayHost,displayVal) VALUES (\u0026#39;\\\u0026#39;\u0026#39;TXT\u0026#39;\\\u0026#39;\u0026#39;,\u0026#39;\\\u0026#39;\u0026#39;\u0026#34;$2\u0026#34;\u0026#39;\\\u0026#39;\u0026#39;,\u0026#39;\\\u0026#39;\u0026#39;v=spf1 a mx ~all\u0026#39;\\\u0026#39;\u0026#39;,NOW(),\u0026#34;$1\u0026#34;,\u0026#39;\\\u0026#39;\u0026#39;\u0026#34;$2\u0026#34;\u0026#39;\\\u0026#39;\u0026#39;,\u0026#39;\\\u0026#39;\u0026#39;v=spf1 a mx ~all\u0026#39;\\\u0026#39;\u0026#39;);\u0026#34;}\u0026#39; | mysql -u admin -p`cat /etc/psa/.psa.shadow` psa` Then you\u0026rsquo;ll need to make Plesk write these changes to the zone files:\n# mysql -Ns -uadmin -p`cat /etc/psa/.psa.shadow` -D psa -e \u0026#39;select name from domains\u0026#39; | awk \u0026#39;{print \u0026#34;/usr/local/psa/admin/sbin/dnsmng update \u0026#34; $1 }\u0026#39; | sh You can check your work by viewing the new entries you made:\nmysql -u admin -p`cat /etc/psa/.psa.shadow` psa -e \u0026#34;SELECT * FROM dns_recs WHERE type=\u0026#39;TXT\u0026#39;;\u0026#34; ","date":"24 April 2007","permalink":"/p/add-spf-records-to-all-domains-in-plesk/","section":"Posts","summary":"If you find yourself in the situation where you need to bulk add SPF records to every domain in Plesk, you can use this huge one-liner:","title":"Add SPF records to all domains in Plesk"},{"content":"If you get this error, you\u0026rsquo;ve most likely done a file-based MySQL backup restore, and the InnoDB files are hosed. The horde_sessionhandler table isn\u0026rsquo;t a MyISAM table at all - it\u0026rsquo;s actually an InnoDB table. 
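If you want to see the damage for yourself first, look at what is actually on disk for that table (assuming the stock datadir of /var/lib/mysql):
# ls -l /var/lib/mysql/horde/horde_sessionhandler.*
On a default setup you should only see the .frm file, since InnoDB keeps the data in its shared tablespace - that orphaned .frm is what gets removed in the next step.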
The easiest way to fix the issue is to stop MySQL and trash the .frm:\n# /etc/init.d/mysqld stop # rm /var/lib/mysql/horde/horde_sessionhandler.frm Now start MySQL and re-create the table:\n# /etc/init.d/mysqld start # mysql -u admin -p`cat /etc/psa/.psa.shadow` Here\u0026rsquo;s the SQL statements to run:\nCREATE TABLE horde_sessionhandler (session_id VARCHAR(32) NOT NULL, session_lastmodified INT NOT NULL, session_data LONGBLOB, PRIMARY KEY (session_id)) ENGINE = InnoDB; GRANT SELECT, INSERT, UPDATE, DELETE ON horde_sessionhandler TO horde@localhost; You\u0026rsquo;re good to go!\n","date":"19 April 2007","permalink":"/p/cant-find-file-horde_sessionhandlermyi/","section":"Posts","summary":"If you get this error, you\u0026rsquo;ve most likely done a file-based MySQL backup restore, and the InnoDB files are hosed.","title":"Can’t find file: ‘horde_sessionhandler.MYI’"},{"content":"If Plesk throws an error that it can\u0026rsquo;t upgrade your license key because of languages, you need to remove the extra locales:\n# rpm -qa | grep psa-locale | grep -v base psa-locale-el-GR-8.1-build81061127.19 psa-locale-fr-FR-8.1-build81061127.19 psa-locale-lt-LT-8.1-build81061127.19 psa-locale-pt-BR-8.1-build81061127.19 psa-locale-sv-SE-8.1-build81061127.19 psa-locale-ca-ES-8.1-build81061127.19 psa-locale-de-DE-8.1-build81061127.19 psa-locale-es-ES-8.1-build81061127.19 psa-locale-fi-FI-8.1-build81061127.19 psa-locale-hu-HU-8.1-build81061127.19 psa-locale-ja-JP-8.1-build81061127.19 psa-locale-nl-BE-8.1-build81061127.19 psa-locale-pl-PL-8.1-build81061127.19 psa-locale-pt-PT-8.1-build81061127.19 psa-locale-ru-RU-8.1-build81061127.19 psa-locale-tr-TR-8.1-build81061127.19 psa-locale-zh-TW-8.1-build81061127.19 psa-locale-cs-CZ-8.1-build81061127.19 psa-locale-es-MX-8.1-build81061127.19 psa-locale-it-IT-8.1-build81061127.19 psa-locale-nl-NL-8.1-build81061127.19 psa-locale-ro-RO-8.1-build81061127.19 psa-locale-zh-CN-8.1-build81061127.19 # rpm -ev `rpm -qa | grep psa-locale | grep -v base` ","date":"19 April 2007","permalink":"/p/too-many-languages-cant-upgrade-plesk-license/","section":"Posts","summary":"If Plesk throws an error that it can\u0026rsquo;t upgrade your license key because of languages, you need to remove the extra locales:","title":"Too many languages – can’t upgrade Plesk license"},{"content":"If you ever need to communicate with a POP3 server via telnet to test it, here\u0026rsquo;s some commands you can use:\nUSER userid PASS password STAT LIST RETR msg# TOP msg# #lines DELE msg# RSET QUIT ","date":"17 April 2007","permalink":"/p/telnet-pop3-commands/","section":"Posts","summary":"If you ever need to communicate with a POP3 server via telnet to test it, here\u0026rsquo;s some commands you can use:","title":"Telnet POP3 Commands"},{"content":"If you have weird SSL errors and this one appears, you are trying to speak SSL to a daemon that doesn\u0026rsquo;t understand it:\n$ openssl s_client -connect 222.222.222.222:443 CONNECTED(00000003) 5057:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:567: If you get this with Apache, be sure that you have SSLEngine On in the applicable VirtualHost and be sure that mod_ssl is being loaded.\n","date":"17 April 2007","permalink":"/p/ssl-connection-to-a-non-secure-port/","section":"Posts","summary":"If you have weird SSL errors and this one appears, you are trying to speak SSL to a daemon that doesn\u0026rsquo;t understand it:","title":"SSL connection to a non-secure port"},{"content":"To pretty much completely disable SSH 
timeouts, simply adjust the following directives in /etc/ssh/sshd_config:\nTCPKeepAlive yes ClientAliveInterval 30 ClientAliveCountMax 99999 EDIT: Once that\u0026rsquo;s changed, be sure to restart your ssh daemon.\nSECURITY WARNING: If you remove users from your system, but they\u0026rsquo;re still connected via ssh, their connection may remain open indefinitely. Be sure to check all active ssh sessions after adjusting a user\u0026rsquo;s access.\n","date":"12 April 2007","permalink":"/p/disable-ssh-timeouts/","section":"Posts","summary":"To pretty much completely disable SSH timeouts, simply adjust the following directives in /etc/ssh/sshd_config:","title":"Disable SSH timeouts"},{"content":"Before you upgrade Plesk, it\u0026rsquo;s always a good idea to make a backup and also make your ip and shell maps:\n/usr/local/psa/bin/psadump -f /path/to/psa.dump --nostop --nostop-domain /usr/local/psa/bin/psarestore -t -f /path/to/psa.dump -m ip_map -s shell_map If you need to restore data, just drop the -t on the psarestore command.\n","date":"10 April 2007","permalink":"/p/pre-upgrade-plesk-backup/","section":"Posts","summary":"Before you upgrade Plesk, it\u0026rsquo;s always a good idea to make a backup and also make your ip and shell maps:","title":"Pre-upgrade Plesk Backup"},{"content":"As with most things, turning off SSLv2 in Lighttpd is much easier than in Apache. Toss the following line in your lighttpd.conf and you\u0026rsquo;re good to go:\nssl.use-sslv2 = \u0026#34;disable\u0026#34; ","date":"8 April 2007","permalink":"/p/disable-sslv2-in-lighttpd/","section":"Posts","summary":"As with most things, turning off SSLv2 in Lighttpd is much easier than in Apache.","title":"Disable SSLv2 in Lighttpd"},{"content":"WordPress uses .htaccess files to process its permalinks structure, but Lighttpd won\u0026rsquo;t obey .htaccess files (yet). So, instead of banging your head against the wall, just use something like the following:\nserver.error-handler-404 = \u0026#34;/index.php?error=404\u0026#34; For example, the virtual host for this very website is:\n$HTTP[\u0026#34;host\u0026#34;] =~ \u0026#34;rackerhacker\\\\.com\u0026#34; { server.document-root = basedir+\u0026#34;rackerhacker.com/\u0026#34; server.error-handler-404 = \u0026#34;/index.php?error=404\u0026#34; } ","date":"8 April 2007","permalink":"/p/wordpress-permalinks-in-lighttpd/","section":"Posts","summary":"WordPress uses .","title":"WordPress permalinks in Lighttpd"},{"content":"It seems like lighttpd and Tomcat are at the forefront of what is ‘hot\u0026rsquo; these days. If you don\u0026rsquo;t need the completeness of Apache on your server, you can use lighttpd to proxy to Tomcat, and it\u0026rsquo;s pretty simple. This how-to will show you how to install lighttpd, Tomcat, and the Java JRE. Once they\u0026rsquo;re installed it will also show you how to get lighttpd to use mod_proxy to connect to your Tomcat installation.\nFirst, some downloading has to be done. Grab the latest lighttpd RPM from rpmfind.net for your distribution. You will also need to pick up the latest version of Tomcat and the Java JRE.\nOnce all three of those are on the server, get them installed:\n# rpm -Uvh lighttpd-1.3.16-1.2.el4.rf.i386.rpm # tar xvzf apache-tomcat-6.0.10.tar.gz # mv apache-tomcat-6.0.10 /usr/local/ # chmod +x jre-6u1-linux-i586.bin # ./jre-6u1-linux-i586.bin # mv jre1.6.0_01 /usr/local/ Before you can do much else, you will need to set up your JAVA_HOME and add JAVA_HOME/bin to your path. 
Open up /etc/profile and add the following before the export statement:\nJAVA_HOME=\"/usr/local/jre1.6.0_01/\" export JAVA_HOME PATH=$JAVA_HOME/bin:$PATH To make this change actually take effect, you will need to log out and become root again. Now, check that your JAVA_HOME is set:\n# echo $JAVA_HOME /usr/local/jre1.6.0_01/ If the JAVA_HOME is not set up, check your /etc/profile again. If it\u0026rsquo;s set up, try starting Tomcat – there\u0026rsquo;s no need to set the $CATALINA_HOME, because Tomcat can figure it out on its own:\n# /usr/local/apache-tomcat-6.0.10/bin/startup.sh Using CATALINA_BASE: /usr/local/apache-tomcat-6.0.10 Using CATALINA_HOME: /usr/local/apache-tomcat-6.0.10 Using CATALINA_TMPDIR: /usr/local/apache-tomcat-6.0.10/temp Using JRE_HOME: /usr/local/jre1.6.0_01/ Try to connect to the server now on port 8080 and you should see a Tomcat default page. Now, go add a manager user to the $CATALINA_HOME/conf/tomcat-users.xml:\n\u0026lt;role rolename=\"manager\"/\u003e \u0026lt;user username=\"tomcat\" password=\"password\" roles=\"manager\"/\u003e Restart Tomcat for the changes to take effect:\n# /usr/local/apache-tomcat-6.0.10/bin/startup.sh # /usr/local/apache-tomcat-6.0.10/bin/shutdown.sh Tomcat is ready to go, so it\u0026rsquo;s time to configure lighttpd. Open the /etc/lighttpd/lighttpd.conf and activate mod_proxy by uncommenting it:\nserver.modules = ( # \"mod_rewrite\", # \"mod_redirect\", # \"mod_alias\", \"mod_access\", # \"mod_cml\", # \"mod_trigger_b4_dl\", # \"mod_auth\", # \"mod_status\", # \"mod_setenv\", # \"mod_fastcgi\", \"mod_proxy\", # \"mod_simple_vhost\", # \"mod_evhost\", # \"mod_userdir\", # \"mod_cgi\", # \"mod_compress\", # \"mod_ssi\", # \"mod_usertrack\", # \"mod_expire\", # \"mod_secdownload\", # \"mod_rrdtool\", \"mod_accesslog\" ) Drop to the bottom of the configuration file and add something like this, replacing your information as necessary:\n$HTTP[\"host\"] =~ \"10.10.10.56\" { proxy.server = ( \"\" =\u003e ( \"tomcat\" =\u003e ( \"host\" =\u003e \"127.0.0.1\", \"port\" =\u003e 8080, \"fix-redirects\" =\u003e 1 ) ) ) } Replace the IP address with a hostname or the correct IP for your server. This proxy directive makes lighttpd connect to Tomcat on the localhost on port 8080 whenever a request comes in on port 80 to lighttpd on the IP 10.10.10.56. Start lighttpd now and try it yourself!\n# /etc/init.d/lighttpd start ","date":"6 April 2007","permalink":"/p/lighttpd-proxy-to-tomcat/","section":"Posts","summary":"It seems like lighttpd and Tomcat are at the forefront of what is ‘hot\u0026rsquo; these days.","title":"Lighttpd proxy to Tomcat"},{"content":"To disable reverse lookups in qmail with Plesk, simply add -Rt0 to the server_args line in /etc/xinetd.d/smtp_psa\nservice smtp { socket_type = stream protocol = tcp wait = no disable = no user = root instances = UNLIMITED server = /var/qmail/bin/tcp-env server_args = \u0026lt;strong\u0026gt;-Rt0\u0026lt;/strong\u0026gt; /usr/sbin/rblsmtpd -r sbl-xbl.spamhaus.org /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true } Once that\u0026rsquo;s been saved, simply restart xinetd:\n# /etc/init.d/xinetd restart WATCH OUT! 
This change will be overwritten if you change certain mail settings in Plesk, like MAPS protection.\n","date":"5 April 2007","permalink":"/p/disable-reverse-lookups-with-qmail-in-plesk/","section":"Posts","summary":"To disable reverse lookups in qmail with Plesk, simply add -Rt0 to the server_args line in /etc/xinetd.","title":"Disable reverse lookups with qmail in Plesk"},{"content":"This is one of Exim\u0026rsquo;s more cryptic errors:\nMar 29 11:22:52 114075-web1 postfix/smtp[20589]: 9E0142FC589: to=\u0026lt;orders@somehost.com\u0026gt;, relay=somehost.com[11.11.11.11], delay=147966, status=deferred (host somehost.com[11.11.11.11] said: 451 Could not complete sender verify callout (in reply to RCPT TO command)) When you send e-mail to an Exim server with a sender verify callout enabled, the Exim server will connect back into your server and verify that your server accepts mail for the sender\u0026rsquo;s e-mail address. For example, if you send e-mail from orders@somehost.com, the Exim server will connect back into your server and do this:\nHELO someotherhost.com 250 somehost.com MAIL FROM: test@someotherhost.com 250 2.1.0 Ok RCPT TO: orders@somehost.com 250 2.1.5 Ok Exim will make sure that it gets a 250 success code before it will allow the e-mail to come into its server. Some situations that cause problems with this process are:\nPort 25 is blocked inbound on the sender\u0026rsquo;s server Something else is filtering port 25 inbound on the sender\u0026rsquo;s server The sender\u0026rsquo;s server uses blacklists which delay the responses to Exim\u0026rsquo;s commands ","date":"29 March 2007","permalink":"/p/451-could-not-complete-sender-verify-callout/","section":"Posts","summary":"This is one of Exim\u0026rsquo;s more cryptic errors:","title":"451 Could not complete sender verify callout"},{"content":"If you need to change the hostname that Sendmail announces itself as, just add the following to sendmail.mc:\ndefine(`confDOMAIN_NAME', `mail.yourdomain.com')dnl\nAnd, to add additional stuff onto the end of the line:\ndefine(`confSMTP_LOGIN_MSG',`mailer ready')dnl\n","date":"27 March 2007","permalink":"/p/setting-the-hostname-in-sendmail/","section":"Posts","summary":"If you need to change the hostname that Sendmail announces itself as, just add the following to sendmail.","title":"Setting the hostname in Sendmail"},{"content":"If you have too many files to remove, try this trick:\nfind . -name \u0026#39;*\u0026#39; | xargs rm -v ","date":"26 March 2007","permalink":"/p/binrm-argument-list-too-long/","section":"Posts","summary":"If you have too many files to remove, try this trick:","title":"/bin/rm: Argument list too long"},{"content":"If you\u0026rsquo;ve forgotten the root password for a MySQL server, but you know the system root, you can reset the MySQL root password pretty easily. Just remember to work quickly since the server is wide open until you finish working.\nFirst, add skip-grant-tables to the [mysqld] section of /etc/my.cnf and restart the MySQL server.\nNext, run mysql from the command line and use the following SQL statement:\nUPDATE mysql.user SET Password=PASSWORD(\u0026#39;newpwd\u0026#39;) WHERE User=\u0026#39;root\u0026#39;;\u0026lt;br /\u0026gt; FLUSH PRIVILEGES; Remove the skip-grant-tables from /etc/my.cnf and leave the server running. 
There\u0026rsquo;s no need to restart it.\n","date":"26 March 2007","permalink":"/p/reset-mysql-root-password/","section":"Posts","summary":"If you\u0026rsquo;ve forgotten the root password for a MySQL server, but you know the system root, you can reset the MySQL root password pretty easily.","title":"Reset MySQL root password"},{"content":"You may find that some sites do not work well if you omit a trailing slash on the URL. For example, if you have a directory on domain.com called \u0026ldquo;news\u0026rdquo;, the following two URL\u0026rsquo;s should take you to the same place:\nhttp://domain.com/news\nhttp://domain.com/news/\nIf you find that they do not take you to the same place, be sure that the mod_dir (Apache 1 or Apache 2) module is being loaded in Apache. If that module is being loaded, and you\u0026rsquo;re still having problems, make sure mod_rewrite is loaded as well.\nIf none of that works, make sure that there is no ErrorDocument 301 or ErrorDocument 302. Should either of those exist, promptly slap the developer/sysadmin that enabled those options. Apache will do a 301 redirect when the trailing slash is missing so that the user will be directed to the correct location, and if there is an ErrorDocument 301, this error document will always be presented rather than the proper redirection to the directory on your site.\n","date":"23 March 2007","permalink":"/p/apaches-mysterious-trailing-slash/","section":"Posts","summary":"You may find that some sites do not work well if you omit a trailing slash on the URL.","title":"Apache’s mysterious trailing slash"},{"content":"Often times, the wonderful webmail application known as Horde will spin out of control and cause unnecessary resource usage and often cause defunct Apache processes to appear. You may wonder how this can happen, especially if you set the max_execution_time variable in php.ini. Well, the Horde developers took it upon themselves to overwrite your settings in their own configuration file in /usr/share/psa-horde/config/conf.xml:\n\u0026lt;configinteger name=\u0026#34;max_exec_time\u0026#34; desc=\u0026#34;If we need to perform a long operation, what should we set max_execution_time to (in seconds)? 0 means no limit; however, a value of 0 will cause a warning if you are running in safe mode. See http://www.php.net/manual/function.set-time-limit.php for more information.\u0026#34;\u0026gt;0\u0026lt;/configinteger\u0026gt; It\u0026rsquo;s set to forever by default in Horde. However, if you do turn on safe_mode, Horde will have some problems setting its time limit variable. You can change the zero to something more reasonable, such as 30 or 60 by editing the conf.xml and reloading Apache.\n","date":"23 March 2007","permalink":"/p/adjust-max_execution_time-for-horde-in-plesk/","section":"Posts","summary":"Often times, the wonderful webmail application known as Horde will spin out of control and cause unnecessary resource usage and often cause defunct Apache processes to appear.","title":"Adjust max_execution_time for Horde in Plesk"},{"content":"First, you have to get the certificate and key out of Windows in a pfx (PKCS #12) format.\nClick Start, Run, then type \u0026ldquo;mmc\u0026rdquo; and hit enter. In the leftmost menu, choose \u0026ldquo;Add/Remove Snap In\u0026rdquo;. Click \u0026ldquo;Add\u0026rdquo;, then click \u0026ldquo;Certificates\u0026rdquo;, then OK. When the wizard starts, choose \u0026ldquo;Computer Account\u0026rdquo;, \u0026ldquo;Local Computer\u0026rdquo; and finish out the wizard. 
Once you\u0026rsquo;re finished, get back to the MMC and expand the \u0026ldquo;Certificates\u0026rdquo; node, then the \u0026ldquo;Personal\u0026rdquo; node. Click on the \u0026ldquo;Certificates\u0026rdquo; node under \u0026ldquo;Personal\u0026rdquo; and find your certificate in the right pane. Right click on the certificate and choose \u0026ldquo;All Tasks\u0026rdquo;, then \u0026ldquo;Export\u0026rdquo;. When the wizard starts, choose \u0026ldquo;Yes\u0026rdquo; for exporting the private key, then select ONLY \u0026ldquo;Strong Private Key Protection\u0026rdquo; from the PFX section. You will also need to set a password and specify a location for the PFX file. Once the PFX file has been saved, close out the MMC (don\u0026rsquo;t save the snap-in if it asks). Get the PFX over to the Linux server somehow. Once the PFX makes it over to the Linux server, you have to decrypt the PFX into a plaintext PEM file (PFX\u0026rsquo;s are binary files, and can\u0026rsquo;t be viewed in a text editor):\nopenssl pkcs12 -in file.pfx -out file.pem You will be asked for the password for the PFX (which is the one you set in the Windows wizard). Once you enter that, you will be asked for a new password. This new password is used to encrypt the private key. You cannot proceed until you enter a password that is 4 characters or longer. REMEMBER this password!\nWhen this step is complete, you should have a PEM file that you can read in a text editor. Open the file in a text editor and copy the private key and certificate to different files. Remember to keep the dashed lines intact when you copy the certificates - this is important. There is some additional text above the key, and also between the key and certificate - this text should be ignored and should not be included in the certificate and key files.\nNow that you have the key and certificate separated, you need to decrypt the private key (or face the wrath of Apache every time you restart the server). You can decrypt the private key like this:\nopenssl rsa -in file.key -out file.key Yes, provide the same file name twice and it will decrypt the key onto itself, keeping everything in one file. OpenSSL will ask for a password to decrypt the key, and this is the password you set when you decrypted the PFX. If you forgot the password, you will need to start over from when you brought it over from the Windows box.\nAfter this entire process, you will have four files, a PFX, PEM, KEY, and CRT. Throw away the PFX and PEM, and you can use the key and certificate files to install into Apache. In case you forget the syntax, here\u0026rsquo;s what goes in the Apache configuration:\nSSLEngine On SSLCertificateFile /path/to/your/certificate SSLCertificateKeyFile /path/to/your/privatekey ","date":"23 March 2007","permalink":"/p/exporting-ssl-certificates-from-windows-to-linux/","section":"Posts","summary":"First, you have to get the certificate and key out of Windows in a pfx (PKCS #12) format.","title":"Exporting SSL certificates from Windows to Linux"},{"content":"If you can\u0026rsquo;t use PHP to force HTTPS, you can use mod_rewrite instead. 
Toss this in an .htaccess file in the web root of your site:\nRewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteRule ^(.*)$ https://www.domain.com/$1 [R,L] Or, if it needs to be forced only for a certain folder:\nRewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteCond %{REQUEST_URI} somefolder RewriteRule ^(.*)$ https://www.domain.com/somefolder/$1 [R,L] ","date":"21 March 2007","permalink":"/p/forcing-https-with-mod_rewrite/","section":"Posts","summary":"If you can\u0026rsquo;t use PHP to force HTTPS, you can use mod_rewrite instead.","title":"Forcing HTTPS (SSL) with mod_rewrite"},{"content":"To force HTTPS with a PHP script, just put this snippet near the top:\nif ($_SERVER[\u0026#39;SERVER_PORT\u0026#39;] != 443) { header(\u0026#34;Location: https://\u0026#34;.$_SERVER[\u0026#39;HTTP_HOST\u0026#39;].$_SERVER[\u0026#39;REQUEST_URI\u0026#39;]); } ","date":"21 March 2007","permalink":"/p/forcing-https-with-php/","section":"Posts","summary":"To force HTTPS with a PHP script, just put this snippet near the top:","title":"Forcing HTTPS with PHP"},{"content":"The AWStats package in RHEL4/Centos4 and Plesk 8.1 uses an alias directory for the icons called /awstats-icon, but when the AWStats contents is generated, the icon directory is different (/icon). To fix this issue, change this file:\n/usr/share/awstats/awstats_buildstaticpages.pl:\nmy $DirIcons=\u0026#39;/awstats-icon\u0026#39;; ","date":"18 March 2007","permalink":"/p/awstats-icons-dont-appear-in-plesk-81/","section":"Posts","summary":"The AWStats package in RHEL4/Centos4 and Plesk 8.","title":"AWStats icons don’t appear in Plesk 8.1"},{"content":"I rarely try to toot my own horn, but I\u0026rsquo;ve created a pretty handy site. Check out Blacklist Watch if you get the chance. You can immediately test a server against the most commonly used spam blacklists available online. Soon enough, I\u0026rsquo;ll have an automated notification service so that you can be notified when your IP\u0026rsquo;s appear on a blacklist.\nHere\u0026rsquo;s a quick example of the blacklist checking.\n","date":"12 March 2007","permalink":"/p/quick-and-fancy-mail-blacklist-checking/","section":"Posts","summary":"I rarely try to toot my own horn, but I\u0026rsquo;ve created a pretty handy site.","title":"Quick and fancy mail blacklist checking"},{"content":"To stop those evil double bounce e-mails in Plesk, just do:\necho \u0026quot;#\u0026quot; \u0026gt; /var/qmail/control/doublebounceto\n","date":"5 March 2007","permalink":"/p/stopping-double-bounces-in-plesk/","section":"Posts","summary":"To stop those evil double bounce e-mails in Plesk, just do:","title":"Stopping Double Bounces in Plesk"},{"content":"If you need to change to a different primary IP in Plesk, here\u0026rsquo;s the easiest way:\nIn Plesk 7 there is no concept of the Primary IP address for the server. From the Control panels point of view all IP addresses are equal. The only difference between the main IP address and aliases is that the main IP address can not be deleted from the control panel.\nTo change the main IP address you need to first remove this address from all IP pools. Then stop Plesk and manually change the IP address on the server from the backend as root. 
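On a typical RHEL-style box that middle step looks roughly like this (the interface name and paths are assumptions - substitute your own):
# /etc/init.d/psa stop
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# /etc/init.d/network restart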
Then start Plesk again and restore the list of IP addresses through SERVER -\u0026gt; IP Aliasing and click on Re-read button.\n","date":"28 February 2007","permalink":"/p/change-primary-ip-address-in-plesk/","section":"Posts","summary":"If you need to change to a different primary IP in Plesk, here\u0026rsquo;s the easiest way:","title":"Change Primary IP Address in Plesk"},{"content":"To disable SSLv2 server-wide on a Plesk server, add this in your /etc/httpd/conf.d/ssl.conf:\nSSLCipherSuite ALL:!ADH:!LOW:!SSLv2:!EXP:+HIGH:+MEDIUM SSLProtocol all -SSLv2 Put the directive very high in the file, outside the VirtualHost directive, preferably right below the Listen directive. This will work for all SSL VirtualHosts.\nHow can I ensure that Apache does not allow SSL 2.0 protocol that has known weaknesses?\n","date":"27 February 2007","permalink":"/p/disabling-sslv2-in-plesk/","section":"Posts","summary":"To disable SSLv2 server-wide on a Plesk server, add this in your /etc/httpd/conf.","title":"Disabling SSLv2 in Plesk"},{"content":"You can edit /etc/drweb/drweb_qmail.conf to eliminate receiving notification messages when Dr. Web has an issue:\n[VirusNotifications] SenderNotify = no AdminNotify = no RcptsNotify = no Then just restart Dr. Web with:\n/etc/init.d/drwebd restart Plesk has a KB article about this issue as well.\n","date":"27 February 2007","permalink":"/p/disable-dr-web-notifications-plesk/","section":"Posts","summary":"You can edit /etc/drweb/drweb_qmail.","title":"Disable Dr. Web Notifications in Plesk"},{"content":"If you want to hide the current version of Apache and your OS, just replace\nServerTokens OS\nwith\nServerTokens Prod\nand restart Apache.\n","date":"23 February 2007","permalink":"/p/hide-apache-version/","section":"Posts","summary":"If you want to hide the current version of Apache and your OS, just replace","title":"Hide Apache Version"},{"content":"A really really strange issue randomly appears with ProFTPD and Plesk occasionally. On the filesystem, a file will have a correct creation/modification date, but then when you view it over FTP, it\u0026rsquo;s always off by the amount of hours you differ from GMT.\nFor example, if the server is on Central Time, all of the files will seem to be created 6 hours after they were really created. The filesystem will show something like 10AM, but the FTP client will say 4PM. Luckily, there is a fix!\nAdd the following to your /etc/proftpd.conf file and you should be good to go:\nTimesGMT off SetEnv TZ :/etc/localtime ","date":"21 February 2007","permalink":"/p/gmt-ftp-timestamps-in-plesk/","section":"Posts","summary":"A really really strange issue randomly appears with ProFTPD and Plesk occasionally.","title":"ProFTPD shows incorrect GMT time with Plesk"},{"content":"If you\u0026rsquo;ve ever worked on a linux system, chances are that you\u0026rsquo;ve used chmod many times. However, the quickest way to stump many linux users is to ask how many octets a full permissions set has. 
Many people think of this and say three:\nchmod 777 file\nBut what you\u0026rsquo;re actually saying:\nchmod 0777 file\nThe first octet works the same way as the other three as it has 3 possible values that add to make the octet (for the letter-lovers, i\u0026rsquo;ve included those too):\n4 - setuid (letter-style: s)\u0026lt;br /\u0026gt; 2 - setgid (letter-style: s)\u0026lt;br /\u0026gt; 1 - sticky bit (letter-style: t)\nRemember - your first octet will always be reset to 0 when using chown or chgrp on files.\nSetuid\nIf you setuid on a binary, you\u0026rsquo;re telling the operating system that you want this binary to always be executed as the user owner of the binary. So, let\u0026rsquo;s say the permissions on a binary are set like so:\n`# chmod 4750 some_binary\nls -al some_binary\n#-rwsr-x\u0026mdash; 1 root users 0 Feb 13 21:43 some_binary`\nYou\u0026rsquo;ll notice the small \u0026rsquo;s\u0026rsquo; in the user permissions blocks - this means that if a user on the system executes this binary, it will run as root with full root permissions. Obviously, anyone in the users group can run this binary since the execute bit is set for the group, but when the binary runs, it will run with root permissions. Be smart with setuid! Anything higher than 4750 can be very dangerous as it allows the world to run the binary as the root user. Also, if you allow full access plus setgid, you will be opening yourself up for something mighty nasty:\n`# chmod 4777 some_binary\nls -al some_binary\n#-rwsrwxrwx 1 root users 0 Feb 13 21:43 some_binary`\nNot only can every user on the system execute this binary, but they can edit it before it runs at root! It goes without saying, but this could be used to beat up your system pretty badly. If you neglect to allow enough user permissions for execution, linux laughs at you by throwing the uppercase \u0026lsquo;S\u0026rsquo; into your terminal:\n`# chmod 4400 some_binary\nls -al some_binary\n#-r-S\u0026mdash;\u0026mdash; 1 root users 0 Feb 13 21:43 some_binary`\nSince no one can execute this script anyways (except root), you get the big capital \u0026lsquo;S\u0026rsquo; for \u0026lsquo;Silly\u0026rsquo;. (It probably doesn\u0026rsquo;t stand for silly, but whatever.)\nSetgid\nSetgid is pretty much the exact same as setuid, but the binary runs with the privileges of the owner group rather than the user\u0026rsquo;s primary group privileges. This isn\u0026rsquo;t quite so useful in my opinion, but in case you need it, here\u0026rsquo;s how the permissions come out:\n`# chmod 2750 some_binary\nls -al some_binary\n#-rwxr-s\u0026mdash; 1 root users 0 Feb 13 21:43 some_binary`\nAnd if you enjoy being made fun of:\n`# chmod 2400 some_binary\nls -al some_binary\n#-r\u0026mdash;-S\u0026mdash; 1 root users 0 Feb 13 21:43 some_binary`\nSticky Bit\nThis is such a giggly term for a linux file permission, but it\u0026rsquo;s rather important, and it best applies to your tmp directory (or any other world writable location). Since world writable locations allow users to go hog-wild with creating, editing, appending, and deleting files, this can get annoying if certain users share a common directory.\nLet\u0026rsquo;s say users work in an office and they work on files in a world writeable directory. One user gets mad because another user got a raise, so they trash all of the files that belong to that recently promoted user. Obviously, this could lead to a touchy situation. 
If you apply the sticky bit on the directory, the users can do anything they want to files they create, but they can\u0026rsquo;t write to or delete files which they didn\u0026rsquo;t create. Pretty slick, er, sticky, right? Here\u0026rsquo;s how to set the sticky bit:\n`#chmod 1777 /tmp\nls -ld /tmp\n#drwxrwxrwt 3 root root 4096 Feb 13 21:57 /tmp`\nAnd again, linux will laugh at you for setting sticky bits on non-world writable directories, but this time it does so with a capital \u0026lsquo;T\u0026rsquo;:\n`#chmod 1744 /tmp\nls -ld /tmp\n#drw-r\u0026ndash;r-T 3 root root 4096 Feb 13 21:57 /tmp`\nSetuid + Setgid on Directories\nSetting the setgid bit on a directory means any files created in that directory will be owned by the group who owns the directory. No matter what your primary group is, any files you make will be owned by the group who owns the directory.\nSetting the setuid bit on a directory has no effect in almost all Linux variants. However, in FreeBSD, it acts the same as the setgid (except it changes the ownership of new files as the user who owns the folder).\n","date":"14 February 2007","permalink":"/p/chmod-and-the-mysterious-first-octet/","section":"Posts","summary":"If you\u0026rsquo;ve ever worked on a linux system, chances are that you\u0026rsquo;ve used chmod many times.","title":"Chmod and the mysterious first octet"},{"content":"LVM is handy when you want additional flexibility to grow or shrink your storage space safely without impacting filesystems negatively. It\u0026rsquo;s key to remember that LVM provides flexibility - not redundancy. The best way to understand LVM is to understand four terms: physical volumes, physical extents, volume groups and logical volumes.\nPhysical volumes are probably the easiest to understand for most users. The stuff you deal with all day, /dev/hda2, /dev/sd3 - these are physical volumes. They\u0026rsquo;re real hard drive partitions which are finitely defined. LVM comes along and chops those physical volumes up into little pieces called physical extents. Extents are simply just pieces of a regular system partition, and the size of the extent is determined by the OS.\nSo what happens with these extents? You can pool a group of extents together to form a volume group. From there, you can carve out chunks of the extents from the volume group to make logical volumes.\nConfused? You should be! Let\u0026rsquo;s try an example:\nYou have two system partitions: /dev/sda2 and /dev/sda3. Let\u0026rsquo;s say that /dev/sda2 has 1,000 extents and /dev/sda3 has 2,000 extents. The first thing you\u0026rsquo;ll want to do is initialize the physical volumes, which basically tells LVM you want to chop them up into pieces so you can use them later:\n# pvcreate /dev/sda2 # pvcreate /dev/sda3` Graphically, here\u0026rsquo;s what\u0026rsquo;s happened so far:\n+-----[ Physical Volume ]------+ | PE | PE | PE | PE | PE | PE | +------------------------------+ Now, LVM has split these physical volumes (partitions) into small pieces called extents. So, we should have 3,000 extents total once we create the physical volumes with LVM (1,000 for sda2 and 2,000 for sda3). 
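A quick pvdisplay confirms that both volumes were initialized (the per-volume extent counts only show up once the volumes join a volume group in the next step):
# pvdisplay /dev/sda2 /dev/sda3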
Now, we need to take all of these extents and put them into a group, called the volume group:\nvgcreate test /dev/sda2 /dev/sda3 Again, here\u0026rsquo;s what we\u0026rsquo;ve done:\n+------[ Volume Group ]-----------------+ | +--[PV]--------+ +--[PV]---------+ | | | PE | PE | PE | | PE | PE | PE | | | +--------------+ +---------------+ | +---------------------------------------+ So what\u0026rsquo;s happened so far? The physical volumes (partitions) are unchanged, but LVM has split them into extents, and we\u0026rsquo;ve now told LVM that we want to include the extents from both physical volumes in a volume group called test. The volume group test is basically a big bucket holding all of our extents from both physical volumes. To move on, you need to find out how many extents we have in our volume group now:\nvgdisplay -v test We should see that Total PE in the output shows 3,000, with a Free PE of 3,000 since we haven\u0026rsquo;t done anything with our extents yet. Now we can take all these extents in the volume group and lump them together into a 1,500 extent partition:\nlvcreate -l 1500 -n FIRST test What did we just do? We made a real linux volume called /dev/test/FIRST that has 1,500 extents. Toss a filesystem onto that new volume and you\u0026rsquo;re good to go:\nmke2fs -j /dev/test/FIRST So, this new logical volume contains 1,500 extents, which means we have 1,500 left over. Might as well make a second volume out of the remaining extents in our volume group:\nlvcreate -l 1500 -n SECOND test mke2fs -j /dev/test/SECOND Now you have two equal sized logical volumes whereas you had one small one (sda2) and one large one (sda3) before. The two logical volumes use extents from both physical volumes that are both held within the same volume group. You end up with something like this:\n+------[ Volume Group ]-----------------+ | +--[PV]--------+ +--[PV]---------+ | | | PE | PE | PE | | PE | PE | PE | | | +--+---+---+---+ +-+----+----+---+ | | | | | +-----/ | | | | | | | | | | | | +-+---+---+-+ +----+----+--+ | | | Logical | | Logical | | | | Volume | | Volume | | | | | | | | | | /FIRST | | /SECOND | | | +-----------+ +------------+ | +---------------------------------------+ ","date":"14 February 2007","permalink":"/p/understanding-lvm/","section":"Posts","summary":"LVM is handy when you want additional flexibility to grow or shrink your storage space safely without impacting filesystems negatively.","title":"Understanding LVM"},{"content":"Okay, so we know it\u0026rsquo;s easy to measure web, ftp and mail traffic, right? You can just parse the logs, sum it all up, and move on with your day. However, what do you do about users with SFTP or RSYNC privileges? This can create a problem when the bandwidth on your server keeps cranking up, but your web/ftp/mail traffic stats don\u0026rsquo;t show an increase.\nNeed a solution? Enjoy:\nFirst, create an OUTPUT rule for your user, which in this case will be root. Why no INPUT rule? Many hosts don\u0026rsquo;t charge for incoming bandwidth, so why bother?\n# iptables -A INPUT -j ACCEPT -m owner --uid-owner=root Now check this out:\n# /sbin/iptables -v -xL -Z Chain OUTPUT (policy ACCEPT 1287 packets, 221983 bytes) pkts bytes target prot opt in out source destination 437 59684 ACCEPT all -- any any anywhere anywhere OWNER UID match root` The number in the \u0026lsquo;bytes\u0026rsquo; column is the count of bytes that this user sent out of your server since the last time you ran that iptables command. 
If you don\u0026rsquo;t want to zero out the bytes each time you run the command, just drop the Z flag from the iptables command.\nYou can go wild with awk if you desire:\n# /sbin/iptables -v -xL | grep root | awk \u0026#39;{ print $2 }\u0026#39; 59684 ","date":"12 February 2007","permalink":"/p/measuring-raw-shell-bandwidth/","section":"Posts","summary":"Okay, so we know it\u0026rsquo;s easy to measure web, ftp and mail traffic, right?","title":"Measuring raw shell bandwidth"},{"content":"There\u0026rsquo;s lots of situations where you\u0026rsquo;d want to use a bulk IP change in Plesk:\nServer is moving and needs to change IP\u0026rsquo;s An IP is the destination for some type of DDOS attack An IP needs to be removed from the server So how do you shift tons of domains from one IP to another without spending hours in Plesk clicking and clicking? Do the following instead:\nGet into MySQL and find out which IP you\u0026rsquo;re moving from and to:\nmysql -u admin -p`cat /etc/psa/.psa.shadow` mysql\u0026gt; select * from IP_Addresses; You should see a printout of all of the available IP\u0026rsquo;s on the server. Make a note of the \u0026ldquo;id\u0026rdquo; of the IP you\u0026rsquo;re moving from and to. In this example, here\u0026rsquo;s what we\u0026rsquo;re doing:\nMoving FROM \u0026ldquo;192.168.1.192\u0026rdquo; (id = 2) Moving TO \u0026ldquo;192.168.1.209\u0026rdquo; (id =3) Now we can start shifting the physically hosted domains over in the database:\nmysql\u0026gt; update hosting set ip_address_id=3 where ip_address_id=2; We also need to change the domains that are set up for standard or frame forwarding:\nmysql\u0026gt; update forwarding set ip_address_id=3 where ip_address_id=2; Now we\u0026rsquo;re stuck with the arduous task of updating DNS records. Plesk is kind enough to store this data in four different ways:\nmysql\u0026gt; update dns_recs set displayHost=\u0026#39;192.168.1.209\u0026#39; where displayHost=\u0026#39;192.168.1.192\u0026#39;; mysql\u0026gt; update dns_recs set host=\u0026#39;192.168.1.209\u0026#39; where host=\u0026#39;192.168.1.192\u0026#39;; mysql\u0026gt; update dns_recs set displayVal=\u0026#39;192.168.1.209\u0026#39; where displayVal=\u0026#39;192.168.1.192\u0026#39;; mysql\u0026gt; update dns_recs set val=\u0026#39;192.168.1.209\u0026#39; where val=\u0026#39;192.168.1.192\u0026#39;; Everything domain related is now moved, but the clients that the domains belong to might not have this new IP address in their IP pool. First, we need to find out our component ID\u0026rsquo;s from the repository table (which generally should be the same as the IP_Addresses.id column, but not always)\nmysql\u0026gt; SELECT clients.login, IP_Addresses.ip_address,Repository.* FROM clients LEFT JOIN Repository ON clients.pool_id = Repository.rep_id LEFT JOIN IP_Addresses ON Repository.component_id = IP_Addresses.id; For this example, we\u0026rsquo;ll pretend that the output consists of 2\u0026rsquo;s for these clients. We can flip the IP\u0026rsquo;s in the clients\u0026rsquo; IP pools by running the following:\nmysql\u0026gt; update Repository set component_id=3 where component_id=2; Now that everything is changed in Plesk\u0026rsquo;s database, it\u0026rsquo;s time to change up the Apache and BIND configuration files. 
Luckily, this can be done pretty easily with Plesk\u0026rsquo;s command line tools:\n# /usr/local/psa/admin/bin/websrvmng -av # mysql -Ns -uadmin -p`cat /etc/psa/.psa.shadow` -D psa -e \u0026#39;select name from domains\u0026#39; | awk \u0026#39;{print \u0026#34;/usr/local/psa/admin/sbin/dnsmng update \u0026#34; $1 }\u0026#39; | sh All that is left is to force Apache and BIND to pick up the new configuration:\n# /etc/init.d/httpd reload # /etc/init.d/named reload Just wait for the DNS records to propagate and you should be all set! The instructions are cumbersome, I know, but it\u0026rsquo;s easier than clicking for-ev-er.\n","date":"12 February 2007","permalink":"/p/bulk-ip-update-in-plesk/","section":"Posts","summary":"There\u0026rsquo;s lots of situations where you\u0026rsquo;d want to use a bulk IP change in Plesk:","title":"Bulk IP update in Plesk"},{"content":"Moving domains from client to client in Plesk is pretty quick from the command line. Just replace DOMAIN with the domain name you want to move and CLIENTLOGIN with the client\u0026rsquo;s username:\n/usr/local/psa/bin/domain.sh --update DOMAIN -clogin CLIENTLOGIN\n","date":"12 February 2007","permalink":"/p/move-domain-between-clients-in-plesk/","section":"Posts","summary":"Moving domains from client to client in Plesk is pretty quick from the command line.","title":"Move domain between clients in Plesk"},{"content":"If odd bounced e-mails are coming back to the server or the server is listed in a blacklist, some accounts may be compromised on the server. Here\u0026rsquo;s how to diagnose the issue:\nRead the queue and look for messages with funky senders or lots of recipients.\n10 Feb 2007 07:31:08 GMT #476884 10716 \u0026lt;service@paypal.com\u0026gt; remote debbarger@earthlink.net remote debbiabbis@hotmail.com remote debbiak@aol.com *** lots more recicpients below *** This is a phishing e-mail being sent out to imitate PayPal. Now you need to find which IP is sending this e-mail, so grab the message ID and pass it to qmHandle:\n# qmHandle -m476884 | less Received: (qmail 20390 invoked from network); 10 Feb 2007 07:31:08 -0600 Received: from unknown (HELO User) (207.219.92.194) In this case, the offender is from 207.219.92.194. Now we can dig for the login in /var/log/messages:\n# grep -i 207.219.92.194 /var/log/messages Feb 10 10:19:33 s60418 smtp_auth: SMTP connect from unknown@ [207.219.92.194] Feb 10 10:19:33 s60418 smtp_auth: smtp_auth: SMTP user [USER] : /var/qmail/mailnames/[DOMAIN]/[USER] logged in from unknown@ [207.219.92.194] Just for giggles, let\u0026rsquo;s find out what their password is:\n# mysql -u admin -p`cat /etc/psa/.psa.shadow` mysql\u0026gt; use psa; mysql\u0026gt; select CONCAT(mail_name,\u0026#34;@\u0026#34;,name) as email_address,accounts.password from mail left join domains on domains.id=mail.dom_id left join accounts on accounts.id=mail.account_id where mail_name like \u0026#39;[USER]\u0026#39;; +---------------------------+----------+ | email_address | password | +---------------------------+----------+ | [USER]@[DOMAIN] | password | +---------------------------+----------+ 1 row in set (0.00 sec) Well, \u0026lsquo;password\u0026rsquo; isn\u0026rsquo;t a great password. Log into Plesk and change this password ASAP. 
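While you are in the database, it is worth checking whether any other mailboxes share an equally weak password (a hedged sketch against the same tables used above; extend the list to taste):\nmysql\u0026gt; select CONCAT(mail_name,\u0026#34;@\u0026#34;,name) as email_address,accounts.password from mail left join domains on domains.id=mail.dom_id left join accounts on accounts.id=mail.account_id where accounts.password in (\u0026#39;password\u0026#39;,\u0026#39;123456\u0026#39;,\u0026#39;qwerty\u0026#39;);\nAny rows that come back deserve the same treatment.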
To verify your work, tail /var/log/messages and you should see this:\n# tail -f /var/log/messages Feb 10 10:27:08 s60418 smtp_auth: SMTP connect from unknown@ [207.219.92.194] Feb 10 10:27:08 s60418 smtp_auth: smtp_auth: FAILED: [USER] - password incorrect from unknown@ [207.219.92.194] Big thanks goes to Jon B. and Mike J. for this.\n","date":"10 February 2007","permalink":"/p/finding-compromised-mail-accounts-in-plesk/","section":"Posts","summary":"If odd bounced e-mails are coming back to the server or the server is listed in a blacklist, some accounts may be compromised on the server.","title":"Finding compromised mail accounts in Plesk"},{"content":"You can delete them based on what they\u0026rsquo;re doing:\niptables -D INPUT -s 127.0.0.1 -p tcp --dport 111 -j ACCEPT Or you can delete them based on their number and chain name:\niptables -D INPUT 4 ","date":"9 February 2007","permalink":"/p/delete-single-iptables-rules/","section":"Posts","summary":"You can delete them based on what they\u0026rsquo;re doing:","title":"Delete single iptables rules"},{"content":"If you need to enable SSL in ProFTPD, try this out:\n\u0026lt;IfModule mod_tls.c\u0026gt; TLSEngine on TLSRequired off TLSRSACertificateFile /etc/httpd/conf/ssl.crt/server.crt TLSRSACertificateKeyFile /etc/httpd/conf/ssl.key/server.key TLSVerifyClient off \u0026lt;/IfModule\u0026gt; ","date":"8 February 2007","permalink":"/p/enabling-ssl-in-proftpd/","section":"Posts","summary":"If you need to enable SSL in ProFTPD, try this out:","title":"Enabling SSL in ProFTPD"},{"content":"Need to redirect all users except for yourself to another site until yours is live?\nRewriteCond %{REMOTE_ADDR} !^64\\.39\\.0\\.38 RewriteRule .* http://othersite.com/ ","date":"8 February 2007","permalink":"/p/rewrite-for-certain-ip-addresses/","section":"Posts","summary":"Need to redirect all users except for yourself to another site until yours is live?","title":"Rewrite for certain IP addresses"},{"content":"Check for a SYN flood:\n# netstat -alnp | grep :80 | grep SYN_RECV -c 1024 Adjust network variables accordingly:\necho 1 \u0026gt; /proc/sys/net/ipv4/tcp_syncookies echo 30 \u0026gt; /proc/sys/net/ipv4/tcp_fin_timeout echo 1800 \u0026gt;/proc/sys/net/ipv4/tcp_keepalive_time echo 0 \u0026gt;/proc/sys/net/ipv4/tcp_window_scaling echo 0 \u0026gt;/proc/sys/net/ipv4/tcp_sack echo 0 \u0026gt;/proc/sys/net/ipv4/tcp_timestamps ","date":"7 February 2007","permalink":"/p/fighting-ddos-attacks-in-linux/","section":"Posts","summary":"Check for a SYN flood:","title":"Fighting DDOS attacks in Linux"},{"content":"If you think an e-mail account has been hacked in Plesk, use this to hunt down which one it could be:\ncat /var/log/messages | grep -i smtp_auth | grep \u0026#34;logged in\u0026#34; | awk {\u0026#39; print $11 \u0026#39;} | awk -F / {\u0026#39; print $6\u0026#34;@\u0026#34;$5 \u0026#39;} | sort | uniq -c | sort -n | tail ","date":"7 February 2007","permalink":"/p/getting-the-smtp-auth-id-with-plesk/","section":"Posts","summary":"If you think an e-mail account has been hacked in Plesk, use this to hunt down which one it could be:","title":"Getting the SMTP Auth ID with Plesk"},{"content":"If you have a Cisco device logging to RHEL, here\u0026rsquo;s all that\u0026rsquo;s necessary:\n# vi /etc/sysconfig/syslog SYSLOGD_OPTIONS=\u0026#34;-m 0 -r\u0026#34; Check the facility listed in the Cisco configuration, and convert it into the linux syslog facility levels found on Cisco\u0026rsquo;s syslog configuration documentation:\nFor example, 
Cisco\u0026rsquo;s facility 19 is the same as linux\u0026rsquo;s facility 3.\n# vi /etc/syslog.conf *.info;mail.none;authpriv.none;cron.none;local3.none; /var/log/messages local3.* /var/log/cisco.log Add local3.none; to the /var/log/messages line and add the local3.* line at the bottom of the file.\nRestart syslog with /etc/init.d/syslog restart. Verify that the syslog server is listening on port 514 and then tail your new /var/log/cisco.log:\n# netstat -plan | grep 514 udp 0 0 0.0.0.0:514 0.0.0.0:* 3770/syslogd ","date":"6 February 2007","permalink":"/p/cisco-logging-to-rhel/","section":"Posts","summary":"If you have a Cisco device logging to RHEL, here\u0026rsquo;s all that\u0026rsquo;s necessary:","title":"Cisco Logging to RHEL"},{"content":"Need a handy way to list all the email accounts and their passwords?\nselect CONCAT(mail_name,\u0026#34;@\u0026#34;,name) as email_address,accounts.password from mail left join domains on domains.id=mail.dom_id left join accounts on accounts.id=mail.account_id; ","date":"1 February 2007","permalink":"/p/get-plesk-e-mail-addresses-and-passwords/","section":"Posts","summary":"Need a handy way to list all the email accounts and their passwords?","title":"Get Plesk e-mail addresses and passwords"},{"content":"TCP: Treason uncloaked! Peer 203.12.220.221:59131/80 shrinks window 76154906:76154907. Repaired. TCP: Treason uncloaked! Peer 203.12.220.227:39670/443 shrinks window 280180313:280180314. Repaired. TCP: Treason uncloaked! Peer 203.12.220.227:39670/443 shrinks window 280180313:280180314. Repaired. TCP: Treason uncloaked! Peer 203.12.220.227:39670/443 shrinks window 280180313:280180314. Repaired. TCP: Treason uncloaked! Peer 203.12.220.237:53759/80 shrinks window 283676616:283676617. Repaired. TCP: Treason uncloaked! Peer 203.12.220.237:36407/80 shrinks window 352393585:352393586. Repaired. TCP: Treason uncloaked! Peer 203.12.220.237:38616/443 shrinks window 529411143:529411144. Repaired. TCP: Treason uncloaked! Peer 58.139.248.9:7611/443 shrinks window 2279076446:2279076447. Repaired. If this is caused by sending strange packets that consume kernel memory, perhaps adding some of these attacker IP addresses to an iptables rule to drop the packets would help. 
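A rough, hedged way to do that by hand, assuming the messages land in the kernel ring buffer in the same format as the samples above:\n# dmesg | grep \u0026#34;Treason uncloaked\u0026#34; | awk \u0026#39;{ print $5 }\u0026#39; | cut -d: -f1 | sort -u\n# iptables -I INPUT -s 203.12.220.221 -j DROP\nThe first command lists the peer addresses and the second blocks one of them.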
The attacker(s) will probably keep moving to another IP address, so you have get a script to read the logs (\u0026ldquo;grep Treason\u0026rdquo;) and add new blocking rules to iptables (maybe your old system uses \u0026lsquo;ipchains\u0026rsquo; instead).\n","date":"31 January 2007","permalink":"/p/treason-uncloaked/","section":"Posts","summary":"TCP: Treason uncloaked!","title":"Treason Uncloaked"},{"content":"If Plesk ever appears to be out of sync with the configuration files, or if there\u0026rsquo;s a Plesk issue that\u0026rsquo;s occurring that makes no sense at all, just stand back and wave the Plesk magic wand:\n/usr/local/psa/admin/bin/websrvmng -av\nThen restart whatever service was acting up, and things should be sorted out.\n","date":"31 January 2007","permalink":"/p/wave-the-plesk-magic-wand/","section":"Posts","summary":"If Plesk ever appears to be out of sync with the configuration files, or if there\u0026rsquo;s a Plesk issue that\u0026rsquo;s occurring that makes no sense at all, just stand back and wave the Plesk magic wand:","title":"Wave the Plesk magic wand"},{"content":"To make Apache write logs similar to IIS, toss this into your Apache configuration:\nLogFormat \u0026#34;%{%Y-%m-%d %H:%M:%S}t %h %u %m %U %q %\u0026gt;s %b %T %H %{Host}i %{User-Agent}i %{Cookie}i %{Referer}i\u0026#34; iis ","date":"29 January 2007","permalink":"/p/make-apache-logs-mimic-iis/","section":"Posts","summary":"To make Apache write logs similar to IIS, toss this into your Apache configuration:","title":"Make Apache logs mimic IIS"},{"content":"If you\u0026rsquo;re migrating a domain, sometimes their mail will go to the old server for a while after you\u0026rsquo;ve changed the DNS. You can move their mail to the new server by following these steps:\nGo to the user\u0026rsquo;s Maildir directory cd /var/qmail/mailnames/\u0026lt;domain\u0026gt;/\u0026lt;user\u0026gt;/Maildir\nTar their mail directories tar cvzf \u0026lt;user\u0026gt;.tar.gz cur new tmp\nMove to a web accessible location mv \u0026lt;user\u0026gt;.tar.gz /home/httpd/vhosts/\u0026lt;web-accessible-domain\u0026gt;/httpdocs/\nLog onto the second server and go to the user\u0026rsquo;s Maildir directory cd /var/qmail/mailnames/\u0026lt;domain\u0026gt;/\u0026lt;user\u0026gt;/Maildir\nRetrieve the user\u0026rsquo;s mail tar file that you created wget http://\u0026lt;web-accessible-domain\u0026gt;/\u0026lt;user\u0026gt;.tar.gz\nUn-tar the files to their correct locations tar xvzf \u0026lt;user\u0026gt;.tar.gz\nRemove the tar file rm \u0026lt;user\u0026gt;.tar.gz\nGo to the original server and remove the tar file rm /home/httpd/vhosts/\u0026lt;web-accessible-domain\u0026gt;/httpdocs/\u0026lt;user\u0026gt;.tar.gz\n","date":"27 January 2007","permalink":"/p/moving-mail-between-some-plesk-servers/","section":"Posts","summary":"If you\u0026rsquo;re migrating a domain, sometimes their mail will go to the old server for a while after you\u0026rsquo;ve changed the DNS.","title":"Moving mail between some Plesk servers"},{"content":"Add this to the Apache configuration:\nScriptAlias /cgi-bin/ \u0026#34;/var/www/html/cgi-bin/\u0026#34; \u0026lt;Directory \u0026#34;/var/www/html/cgi-bin\u0026#34;\u0026gt; Options +ExecCGI AddHandler cgi-script .cgi \u0026lt;/Directory\u0026gt; Reload Apache and throw this in as test.cgi into your cgi-bin directory:\n#!/usr/bin/perl print \u0026#34;Content-type: text/html\\n\\n\u0026#34;; print \u0026#34;Hello, World.\u0026#34;; Do not omit the content-type on your perl scripts. 
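Once Apache has been reloaded, a quick hedged check from the shell (assuming the site answers on localhost):\n# curl -i http://localhost/cgi-bin/test.cgi\nYou should get a 200 response with a Content-type header and the Hello, World. body.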
If you omit it, Apache will throw a random 500 Internal Server Error and it won\u0026rsquo;t log anything about it.\n","date":"26 January 2007","permalink":"/p/enabling-cgi-in-apache-virtual-hosts/","section":"Posts","summary":"Add this to the Apache configuration:","title":"Enabling CGI in Apache virtual hosts"},{"content":"Need a username and password from the Plesk DB? Use this one-liner:\nselect REPLACE(sys_users.home,\u0026#39;/home/httpd/vhosts/\u0026#39;,\u0026#39;\u0026#39;) AS domain,sys_users.login,accounts.password from sys_users LEFT JOIN accounts on sys_users.account_id=accounts.id; ","date":"26 January 2007","permalink":"/p/finding-usernames-and-passwords-in-plesk-db/","section":"Posts","summary":"Need a username and password from the Plesk DB?","title":"Finding usernames and passwords in Plesk DB"},{"content":"MySQL\u0026rsquo;s default configuration sets the maximum simultaneous connections to 100. If you need to increase it, you can do it fairly easily:\nFor MySQL 3.x:\n# vi /etc/my.cnf set-variable = max_connections = 250 For MySQL 4.x and 5.x:\n# vi /etc/my.cnf max_connections = 250 Restart MySQL once you\u0026rsquo;ve made the changes and verify with:\necho \u0026#34;show variables like \u0026#39;max_connections\u0026#39;;\u0026#34; | mysql WHOA THERE: Before increasing MySQL\u0026rsquo;s connection limit, you really owe it to yourself (and your server) to find out why you\u0026rsquo;re reaching the maximum number of connections. Over 90% of the MySQL servers that are hitting the maximum connection limit have a performance limiting issue that needs to be corrected instead.\n","date":"24 January 2007","permalink":"/p/increase-mysql-connection-limit/","section":"Posts","summary":"MySQL\u0026rsquo;s default configuration sets the maximum simultaneous connections to 100.","title":"Increase MySQL connection limit"},{"content":"If you\u0026rsquo;re looking to get PCI/CISP compliance, or you just like better security, disable SSL version 2. Here\u0026rsquo;s how to check if it\u0026rsquo;s enabled on your server:\nTesting a web server:\nopenssl s_client -connect hostname:443 -ssl2 Testing an SMTP server:\nopenssl s_client -connect hostname:25 -starttls smtp -ssl2 If you get lines like these, SSLv2 is disabled:\n419:error:1407F0E5:SSL routines:SSL2_WRITE:ssl handshake failure:s2_pkt.c:428: 420:error:1406D0B8:SSL routines:GET_SERVER_HELLO:no cipher list:s2_clnt.c:450: If it shows the actual certificate installed, SSLv2 is enabled!\n","date":"24 January 2007","permalink":"/p/verify-that-sslv2-is-disabled/","section":"Posts","summary":"If you\u0026rsquo;re looking to get PCI/CISP compliance, or you just like better security, disable SSL version 2.","title":"Verify that SSLv2 is disabled"},{"content":"Okay, so you\u0026rsquo;ve verified that the correct admin password is being used, but you still can\u0026rsquo;t login? Most likely the account has been locked out. You can reset the account by running the following SQL statement:\necho \u0026#34;use psa; truncate lockout;\u0026#34; | mysql -u admin -p`cat /etc/psa/.psa.shadow` ","date":"24 January 2007","permalink":"/p/plesk-admin-user-cant-login/","section":"Posts","summary":"Okay, so you\u0026rsquo;ve verified that the correct admin password is being used, but you still can\u0026rsquo;t login?","title":"Plesk admin user can’t login"},{"content":"First, check upload_max_filesize in php.ini, but if that doesn\u0026rsquo;t work, look for LimitRequestBody in /etc/httpd/conf.d/php.conf and comment it out. 
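For reference, the php.ini values that usually govern upload size look like this (the sizes here are only examples):\nupload_max_filesize = 64M\npost_max_size = 64M\nYou can check what the server is actually using with something like:\n# php -i | grep -E \u0026#39;upload_max_filesize|post_max_size\u0026#39;\nKeep in mind that the command line and Apache may read different php.ini files, so a phpinfo() page is the surest check.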
Restart apache and you\u0026rsquo;re all set.\n","date":"24 January 2007","permalink":"/p/cant-upload-large-files-in-php/","section":"Posts","summary":"First, check max_upload_size in php.","title":"Can’t upload large files in PHP"},{"content":"If you have a ton of files in a directory and you need to remove them, but rm says that the \u0026ldquo;argument list [is] too long\u0026rdquo;, just use find and xargs:\nfind . -name \u0026#39;filename*\u0026#39; | xargs rm -vf ","date":"24 January 2007","permalink":"/p/argument-list-too-long/","section":"Posts","summary":"If you have a ton of files in a directory and you need to remove them, but rm says that the \u0026ldquo;argument list [is] too long\u0026rdquo;, just use find and xargs:","title":"Argument list too long"},{"content":"If you need to remove subdomains from the URL that users enter to visit your website, toss this into your VirtualHost directive:\nRewriteEngine On RewriteCond %{HTTP_HOST} ^www.domain.com$ [NC] RewriteRule ^(.*)$ http://domain.com/$1 [R=301,L] Of course, you can tack on a subdomain too, if that\u0026rsquo;s what you need:\nRewriteEngine On RewriteCond %{HTTP_HOST} ^domain.com$ [NC] RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L] ","date":"15 January 2007","permalink":"/p/strip-off-www-from-urls-with-mod_rewrite/","section":"Posts","summary":"If you need to remove subdomains from the URL that users enter to visit your website, toss this into your VirtualHost directive:","title":"Strip off www from URLs with mod_rewrite"},{"content":"If you\u0026rsquo;re not a fan of scientific notation, use this to calculate the apache bandwidth used from log files in MB:\ncat /var/log/httpd/access_log | awk \u0026#39;{ SUM += $5} END { print SUM/1024/1024 }\u0026#39; ","date":"15 January 2007","permalink":"/p/sum-apache-bandwidth-from-logs/","section":"Posts","summary":"If you\u0026rsquo;re not a fan of scientific notation, use this to calculate the apache bandwidth used from log files in MB:","title":"Sum Apache Bandwidth From Logs"},{"content":"There are three main things to remember when it comes to the qmail queue:\nDon\u0026rsquo;t mess with the qmail queue while qmail is running. Don\u0026rsquo;t mess with the qmail queue while qmail is stopped. Don\u0026rsquo;t mess with the qmail queue ever. The qmail application keeps a database (sort of) of the pieces of mail it expects to be in the queue (and on the filesystem). Many python scripts (like mailRemove.py) claim they will speed up your qmail queue by removing failure notices and tidying up the queue files. Most of the time, these scripts work just fine, but sometimes they remove something they shouldn\u0026rsquo;t and then qmail can\u0026rsquo;t find the file.\nWhat does qmail do when it can\u0026rsquo;t find the file that corresponds to an item in the queue? It stops delivering mail, eats the CPU, and cranks the load average up. 
Impressive, isn\u0026rsquo;t it?\nShould you find yourself with an impressively hosed qmail queue, do the following (and say goodbye to every e-mail in your queue):\n/etc/init.d/qmail stop cd /var/qmail/queue rm -rf info intd local mess remote todo mkdir mess for i in `seq 0 22`; do mkdir mess/$i done cp -r mess info cp -r mess intd cp -r mess local cp -r mess remote cp -r mess todo chmod -R 750 mess todo chown -R qmailq:qmail mess todo chmod -R 700 info intd local remote chown -R qmailq:qmail intd chown -R qmails:qmail info local remote /etc/init.d/qmail start Just in case you missed it, this will delete all mail messages that exist in your queue. But, then again, you\u0026rsquo;re not going to get those messages anyways (thanks qmail!), so repairing the queue is your only option.\n","date":"11 January 2007","permalink":"/p/rebuilding-the-qmail-queue/","section":"Posts","summary":"There are three main things to remember when it comes to the qmail queue:","title":"Repairing the qmail queue"},{"content":"If you work on enough servers, you discover that a lot of people put the security of their MySQL server on the back burner. With the importance of databases for dynamic sites, MySQL\u0026rsquo;s security is arguably more important than anything else on the server. If someone were able to shut off the server, or worse, steal sensitive data, the entire server - and possibly the owner - could be in jeopardy.\nHere are some basic tips to secure a MySQL server on any distribution:\nCreate a strong root password\nBy default on almost all distributions, MySQL comes with an empty root password. Sometimes the root logins are restricted to the localhost only, which will help somewhat, but anyone with shell access or a knack for writing PHP scripts can do anything to the MySQL server. However you set the root password, set it and make it strong.\nCut off network access\nAs with any daemon, the more exposure it has to the internet, the higher the chance of it being hacked and brute forced. If your users need network access to MySQL, then restrict it by at least altering the MySQL permissions to their IP only. The better solution would be to restrict it via a firewall and permissions. If your users don\u0026rsquo;t need any network access to MySQL, add the following to your my.cnf:\nbind-address = 127.0.0.1 Restart MySQL and it shouldn\u0026rsquo;t be listening on any network addresses except the localhost. This won\u0026rsquo;t affect any PHP scripts on your server.\nForce the use of the local socket\nRemoving MySQL\u0026rsquo;s ability to even bind to the network (add skip-networking to the [mysqld] section of my.cnf) is a great security measure. All access to MySQL will be done through a filesystem socket, which is /var/lib/mysql/mysql.sock on most systems. This will require your PHP scripts to refer to your host as \u0026ldquo;localhost\u0026rdquo; and not \u0026ldquo;127.0.0.1\u0026rdquo;.\nReview your user list often\nEvery once in a while, check the list of users authorized to log into your MySQL server and be sure that when the list changes, the changes are valid. Be careful when allowing GRANT access to certain users.\nBackup often\nHow often should you backup your MySQL databases? Well, ask yourself how important your data is to you. If your MySQL server is generally busy all of the time, you may want to run a slave server and do backups from that server to reduce the amount of table-locking that mysqldump requires. 
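Either way, the dump itself can be as simple as this hedged sketch (the backup path is only an example, and for InnoDB tables adding --single-transaction avoids most of the locking):\n# mysqldump -u root -p --all-databases | gzip \u0026gt; /var/backups/mysql-$(date +%F).sql.gz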
If your MySQL server is not terribly busy, then you can run mysqldumps pretty often on the server.\n","date":"5 January 2007","permalink":"/p/securing-mysql/","section":"Posts","summary":"If you work on enough servers, you discover that a lot of people put the security of their MySQL server on the back burner.","title":"Securing MySQL"},{"content":"As most folks know, by default, MySQL limits the size of a MyISAM table at 4GB. Where does this limit come from? It\u0026rsquo;s the maximum of a 32-bit address:\n2^32 = 4,294,967,296 bytes = 4GB\nHow is this 4GB allocated? Well here\u0026rsquo;s the math:\nrow count X row length = 4GB max\nBasically, if your rows don\u0026rsquo;t contain much information, you can cram a lot of rows into a table. On the flip side, if you don\u0026rsquo;t plan on having too many rows, you can cram a lot of information in each row.\nHere\u0026rsquo;s where things get ugly. If you have a MyISAM table and you exceed the maximum data length for the table, it may or may not tell you that you\u0026rsquo;ve exceeded the limit (depending on the version). If it doesn\u0026rsquo;t tell you, your data will actually become corrupt.\nSo, how can you find out what a table\u0026rsquo;s limit is? Run show table status like \u0026#39;tablename\u0026#39; and check the value for Max_data_length. The default, of course, is 4294967295.\nHow can the Max_data_length be increased? Just run something like alter table tablename max_rows = 200000000000 avg_row_length = 50. This example would increase your Max_data_length to 1,099,511,627,775.\n","date":"5 January 2007","permalink":"/p/mysql-row-data-limits/","section":"Posts","summary":"As most folks know, by default, MySQL limits the size of a MyISAM table at 4GB.","title":"MySQL Row \u0026 Data Limits"},{"content":"Sticky bits help you take file permissions to the next level. Here\u0026rsquo;s an example of a situation where sticky bits help:\nLet\u0026rsquo;s say you have a directory on a server called \u0026ldquo;share\u0026rdquo;. For this directory, you have 3 users: adam, bill, and carl. You are the administrator, so you want to create a directory where all three users can manage files in the share directory. That\u0026rsquo;s easily done: put all three users in the same group, set the permissions as 775, set the group of the directory to the group that all three users are in, and you\u0026rsquo;re done.\nHold on - adam is going to be upset if bill or carl changes or removes adam\u0026rsquo;s files. How can you let all three users manage files in the same directory but not let them alter each other\u0026rsquo;s files? Sticky bits!\nAfter a chmod 775, and a chown user:group to fix the group, the directory looks like this:\ndrwxrwxr-x 1 admin sharegroup 18367 Dec 30 22:05 shared Now, run a chmod 1775 on the directory:\ndrwxrwxr-t 1 admin sharegroup 18367 Dec 30 22:05 shared What\u0026rsquo;s the t all about? That\u0026rsquo;s your sticky bit! Whenever adam creates a file, bill and carl can\u0026rsquo;t delete it or rename it. They can read it all they want, but adam is the only one who can remove or replace it, because those rights are \u0026ldquo;stuck\u0026rdquo; to the file\u0026rsquo;s owner (even though the folder is writable to the group).\nOkay, so why do you need sticky bits? This all sounds like fun and games for shared folders, but how can you use this in the real world? Well, think about your /tmp directory. Users write to the directory all the time whether they know it or not, but what if one user trashed another user\u0026rsquo;s temporary files? 
Or what if a user hosed out the whole directory? That\u0026rsquo;s where sticky bits can save the day. Always chmod 1777 your /tmp directory for good security on a shared temporary directory.\n","date":"31 December 2006","permalink":"/p/about-sticky-bits/","section":"Posts","summary":"Sticky bits help you take file permissions to the next level.","title":"About Sticky Bits"},{"content":"If you find yourself in the sticky situation where kill -9 still won\u0026rsquo;t kill a sendmail process, check the process list. If ps fax returns a \u0026ldquo;D\u0026rdquo; status code, you won\u0026rsquo;t be able to stop the process. It\u0026rsquo;s in an \u0026ldquo;uninterruptable sleep\u0026rdquo; state which cannot be killed.\nWhat can you do to fix this? Check for file locking. Are files in the mail queue directory locked? Are the files in the mail queue mounted over NFS (by an idiotic administrator)? If so, the only fix is to set sendmail to not start on reboot, then reboot the box.\n","date":"29 December 2006","permalink":"/p/cant-kill-sendmail-processes/","section":"Posts","summary":"If you find yourself in the sticky situation where kill -9 still won\u0026rsquo;t kill a sendmail process, check the process list.","title":"Can’t Kill Sendmail Processes"},{"content":"Add this to the virtual host configuration if PHPLive says it has no session.save_path:\nphp_admin_flag safe_mode off php_admin_flag register_globals off PHPLive cannot operate with safe_mode enabled.\n","date":"27 December 2006","permalink":"/p/phplive-has-no-sessionsave_path/","section":"Posts","summary":"Add this to the virtual host configuration if PHPLive says it has no session.","title":"PHPLive Has No session.save_path"},{"content":"Remember, if you raise MaxClients for an MPM in Apache, you must raise the ServerLimit directive, which is normally set to 256 on most servers. The ServerLimit maximum is always obeyed, no matter what MaxClients says. For example, if MaxClients is set to 500 and ServerLimit is 256 (or it is unspecified), then Apache can only serve 256 clients at a time.\nImportant items to remember:\nOnly add ServerLimit in the actual MPM configuration section itself. Increase the MaxClients/ServerLimit in a sane manner - make small increments and test. Keep in mind that 500 concurrent requests can use 75% or more of modern CPU\u0026rsquo;s and upwards of 1.5GB of RAM, depending on the content. ","date":"27 December 2006","permalink":"/p/raising-maxclients-change-serverlimit/","section":"Posts","summary":"Remember, if you raise MaxClients for an MPM in Apache, you must raise the ServerLimit directive, which is normally set to 256 on most servers.","title":"Raising MaxClients? Change ServerLimit."},{"content":"If you think you have a rooted RHEL box, you\u0026rsquo;ll want to run the usual rkhunter, chkrootkit, and you will want to inspect for rogue processes. However, if the rootkit hasn\u0026rsquo;t exposed its malfeasance yet, how do you know it\u0026rsquo;s there?\nrpm -Va RPM\u0026rsquo;s verify functionality can tell you what\u0026rsquo;s happened to files installed by an RPM since they were installed. Changes in permissions, file sizes, locations, and ownership can all be detected. Here\u0026rsquo;s some example output:\n.M....... /etc/cups S.5....TC c /etc/cups/cupsd.conf .......TC c /etc/cups/printers.conf .M....... /var/spool/cups/tmp S.5....T. c /etc/sysconfig/system-config-securitylevel S.5....T. c /etc/xml/catalog S.5....T. c /usr/share/sgml/docbook/xmlcatalog ........C /var/lib/scrollkeeper S.?...... 
/usr/lib/libcurl.so.3.0.0 So what do the letters mean?\nS file Size differs M Mode differs (includes permissions and file type) 5 MD5 sum differs D Device major/minor number mismatch L readLink(2) path mismatch U User ownership differs G Group ownership differs T mTime differs c %config configuration file. d %doc documentation file. g %ghost file (i.e. the file contents are not included in the package payload). l %license license file. r %readme readme file. Lots of MD5\u0026rsquo;s and ownerships will change from time to time, but watch out for any action in important executables, such as /bin/ls or /bin/passwd - if these have changed, you may be rooted.\n","date":"27 December 2006","permalink":"/p/rootkit-checks-on-rhel/","section":"Posts","summary":"If you think you have a rooted RHEL box, you\u0026rsquo;ll want to run the usual rkhunter, chkrootkit, and you will want to inspect for rogue processes.","title":"Rootkit Checks on RHEL"},{"content":"So you have multiple users that need to read and write to certain files on the filesystem? This can be done with vsftpd or proftpd quite easily. Let\u0026rsquo;s say you have users called ann, bill and carl and they need to manage files in /var/www/html. Here are the steps:\nFor vsftpd, change the umask for files created by FTP users. Open the vsftpd.conf file and edit the following:\nlocal_umask=077 \u0026lt;-- old local_umask=002 \u0026lt;-- new For proftpd, change the umask for files created by FTP users. Open the proftpd.conf file and edit the following:\nUmask 002 This makes sure that new files are chmodded as 664 and new directories as 775 (full read/write for the owner and group, but only read for everyone else).\nNext, create a new group. We will call ours \u0026ldquo;sharedweb\u0026rdquo;:\ngroupadd sharedweb Now, put the users into that group by adding them in /etc/group:\nsharedweb:*:##:ann,bill,carl Modify the users so that their primary group is sharedweb. If you forget this step, when they make new FTP files, they will be owned by each user\u0026rsquo;s primary group (sometimes named the same as the user on some systems) and the permissions will be completely hosed.\nusermod -g sharedweb ann usermod -g sharedweb bill usermod -g sharedweb carl Restart vsftpd (or proftpd) to pick up the new configuration and your users should be able to upload, delete, and edit each other\u0026rsquo;s files.\n","date":"27 December 2006","permalink":"/p/group-editing-with-vsftpd/","section":"Posts","summary":"So you have multiple users that need to read and write to certain files on the filesystem?","title":"Group Editing With FTP"},{"content":"If your server is spewing an invalid HELO, you could be blacklisted pretty quickly. The Spamhaus SBL-XBL list and CBL list work together to find servers announcing themselves improperly.\nThe common reasons why mail servers are blocked for bad HELO\u0026rsquo;s are:\nServer is announcing itself as “localhost”. Server is announcing itself as an IP address. Server is announcing itself as a hostname that does not exist. Are you unsure what your server\u0026rsquo;s announcing itself as? Try these:\nSend an e-mail to helocheck@cbl.abuseat.org. You will get an immediate response with exactly what your HELO contains. Telnet to port 25 on your mailserver. Run telnet mail.yourdomain.com 25 and wait a few seconds. Your server\u0026rsquo;s HELO message should appear. So your server is announcing itself as the wrong thing? 
Well, fix it!\nManaging HELO with QMail\nIf /var/qmail/control/me exists, edit it so that it matches your reverse DNS record for your server\u0026rsquo;s primary IP address. If the file doesn\u0026rsquo;t exist, you can create the file and add the correct hostname to it, or adjust your hostname on your operating system. Try running hostname mail.yourdomain.com to fix things immediately, and edit the proper configuration files to correct your hostname at boot time.\nManaging HELO with Postfix\nThe default value for Postfix\u0026rsquo;s HELO is the value of $myhostname. If that variable is defined in the main.cf, adjust it so that it matches the reverse DNS record of your server. If it isn\u0026rsquo;t defined in main.cf, then adjust the hostname on your operating system. Try running hostname mail.yourdomain.com to fix things immediately, and edit the proper configuration files to correct your hostname at boot time. Should neither of those methods suffice on your server, simply adjust the smtp_helo_name variable to match the reverse DNS record of your server. For example:\nsmtp_helo_name = mail.yourdomain.com Managing HELO with Sendmail\nAdjust the hostname on your operating system. Try running hostname mail.yourdomain.com to fix things immediately, and edit the proper configuration files to correct your hostname at boot time.\n","date":"27 December 2006","permalink":"/p/fixing-invalid-helos/","section":"Posts","summary":"If your server is spewing an invalid HELO, you could be blacklisted pretty quickly.","title":"Fixing Invalid HELO’s"},{"content":"Setting up Postfix to handle mail for a virtual domain and forward it to external mailboxes is pretty easy. Here\u0026rsquo;s an example for a few domains:\n/etc/postfix/main.cf\nvirtual_alias_domains = hash:/etc/postfix/mydomains virtual_alias_maps = hash:/etc/postfix/virtual /etc/postfix/mydomains\nfoo.com OK foo1.com OK foo2.com OK /etc/postfix/virtual\nfrank@foo.com frank@gmail.com jane@foo.com jane@earthlink.net jim@foo1.com jimmy@yahoo.com peter@foo2.com pete@hotmail.com Remember, each time you edit /etc/postfix/virtual, do the following:\npostmap /etc/postfix/virtual /etc/postfix/mydomains postfix reload ","date":"27 December 2006","permalink":"/p/postfix-virtual-mailboxes-forwarding-externally/","section":"Posts","summary":"Setting up Postfix to handle mail for a virtual domain and forward it to external mailboxes is pretty easy.","title":"Postfix – Forwarding Virtual Mailboxes"},{"content":"","date":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories"}]