My Immutable Desktop Workflow with Fedora Silverblue and Distrobox


Author: Christoph Stoettner

Photo: Balance, by Michaela J | Unsplash

In late 2023, I started exploring immutable operating systems, specifically Fedora Silverblue and Bluefin. If you’re curious why I made this switch and want more technical details, check out my FrOSCon talk on ‘Next-Gen Desktops’ or watch the video presentation below.

What’s an Immutable OS?

Unlike traditional Linux distributions, Silverblue has an immutable core system. This means when you install new software with rpm-ostree, it creates a new layer on top of the base system. These changes require a reboot to activate - a small price to pay for the stability and reliability benefits.

I’ve personalized my setup by creating my own operating system image using scripts from Universal Blue. If you’re interested, you can find all the details in my repository and the corresponding GitHub Action. All my essential packages are included in this custom OS image.

My Daily Workflow with Distrobox

While my core OS contains the essentials, I install most applications via Flathub/Flatpak. For testing packages and doing my daily work, I rely heavily on Distrobox, which lets me run different Linux distributions in containers.

My Main Containers

I primarily use three containers:

  1. Fedora-based container for daily work with tools like:

    • NeoVim
    • Neomutt
    • Difftastic
    • mbsync
    • khal
    • vdirsyncer
  2. Kali-based container for basic penetration testing and security research

  3. Ubuntu-based container for tools not available in Fedora:

    • Swissbit software for security keys
    • Davmail for fetching emails from Office 365

Some of these tools are exported back to the main OS and can run as systemd services or scheduled tasks (like mail and calendar synchronization).

Creating My Containers

I build my container images using GitHub Actions in my toolbox repository. Usually, I just need to add package names to the appropriate packages.<os> file (like toolboxes/fedora-toolbox/packages.fedora).
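As a sketch (the repository layout is taken from the path above, and htop stands in for any package), adding a tool is just appending a line to the package list:

```shell
# Append a package to the Fedora toolbox package list; the next image
# build picks it up. "htop" is only an example package, and mkdir is
# here just to make this snippet self-contained outside a real checkout.
mkdir -p toolboxes/fedora-toolbox
echo "htop" >> toolboxes/fedora-toolbox/packages.fedora
cat toolboxes/fedora-toolbox/packages.fedora
```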

I define my containers in a distrobox.ini file and create them with:

distrobox-assemble create --file ~/distrobox.ini -R --name ubuntu
distrobox-assemble create --file ~/distrobox.ini -R --name fedora
distrobox-assemble create --file ~/distrobox.ini -R --name kali

The Ubuntu and Fedora containers are rootless, so they can’t modify the host OS. The Kali container runs as root, which gives it access to physical hardware (necessary for network traces and similar tasks).
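For illustration, a distrobox.ini along these lines might look as follows (the image tags are assumptions; root=true mirrors the rootful Kali container described above):

```ini
[fedora]
image=registry.fedoraproject.org/fedora-toolbox:latest
pull=true
replace=true

[ubuntu]
image=docker.io/library/ubuntu:24.04
pull=true
replace=true

[kali]
image=docker.io/kalilinux/kali-rolling:latest
pull=true
replace=true
root=true
```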

To make accessing my containers easier, I’ve set up these aliases in my .bashrc:

alias fedora='distrobox enter fedora'
alias kali='distrobox enter --root kali'
alias ubuntu='distrobox enter ubuntu'

My typical workflow starts with tmux, where I create additional panes and windows for different shells as needed.

Installing New Software

Since installing packages in the core Silverblue image always requires a reboot, I normally install software in one of my Distrobox containers using dnf or apt.

If I find myself using a tool regularly, I’ll update the toolbox image (by editing the containerfile or packages.<os> file). Tools I only need temporarily simply disappear the next time I run distrobox-assemble. This approach keeps my environment much cleaner than my previous “normal” desktop installations.

Running Container Software with Systemd

These containers work just like normal shells - no need to know anything about container technology. I can export binaries from the containers for use in the host system.

In my distrobox.ini, I define which programs to export:

[fedora]
...
exported_bins="/usr/bin/nvim /usr/bin/mbsync /usr/bin/difft /usr/bin/vdirsyncer /usr/local/bin/maestral /usr/bin/flameshot /usr/bin/syncthing"
exported_bins_path="/var/home/stoeps/.local/bin"
...

This creates executable shell scripts in ~/.local/bin that I can run from my default Silverblue shell (and in systemd units) or from another container. Here’s an example of such a script for Syncthing:

#!/bin/sh
# distrobox_binary
# name: fedora
if [ -z "${CONTAINER_ID}" ]; then
	exec "/usr/bin/distrobox-enter"  -n fedora  --  '/usr/bin/syncthing'  "$@"
elif [ -n "${CONTAINER_ID}" ] && [ "${CONTAINER_ID}" != "fedora" ]; then
	exec distrobox-host-exec '/var/home/stoeps/.local/bin/syncthing' "$@"
else
	exec '/usr/bin/syncthing' "$@"
fi

Since ~/.local/bin is in my $PATH, I can run all these scripts directly from my shell. The script checks whether I’m running it from inside the Fedora container, from another container, or from the host system, and routes the execution accordingly.
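The routing can be sketched in isolation. This is a hypothetical re-implementation of the wrapper’s three-way check, with the exec calls replaced by echo so the branches can be exercised without any containers present:

```shell
# Same three-way check as the generated wrapper, with exec replaced by
# echo so the routing decision is visible on its own.
route() {
  if [ -z "${CONTAINER_ID:-}" ]; then
    echo "on host: enter the fedora container first"
  elif [ "${CONTAINER_ID}" != "fedora" ]; then
    echo "in another container: hand off to the host"
  else
    echo "already in fedora: run the binary directly"
  fi
}

CONTAINER_ID=""       route   # host branch
CONTAINER_ID="kali"   route   # foreign-container branch
CONTAINER_ID="fedora" route   # in-place branch
```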

I can even run these exported applications as systemd services. Here’s my systemd unit for Syncthing in ~/.config/systemd/user/syncthing.service:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=man:syncthing(1)
StartLimitIntervalSec=60
StartLimitBurst=4

[Service]
ExecStart=/var/home/stoeps/.local/bin/syncthing serve --no-browser --no-restart --logflags=0
Restart=on-failure
RestartSec=1
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

# Hardening
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
NoNewPrivileges=true

[Install]
WantedBy=default.target

This way, Syncthing runs automatically with my login, despite being installed inside a container!
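To activate the unit for login sessions, the standard systemctl calls apply (a sketch; it assumes a systemd user session and the unit file in ~/.config/systemd/user/, where systemd looks for user units):

```shell
# Reload user units, then enable and start the service immediately.
systemctl --user daemon-reload
systemctl --user enable --now syncthing.service
systemctl --user status syncthing.service   # verify it came up
```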

Creating Desktop Entries for Container Applications

Another neat trick I use is creating desktop files for GUI applications running in my containers. This lets me launch these applications directly from my desktop environment (GNOME or KDE), just like regular applications.

I create .desktop files in ~/.local/share/applications/. Here’s an example for Davmail from my Ubuntu container:

[Desktop Entry]
Comment=Access m365 mail and calendar
Exec=distrobox-enter -n ubuntu -- bash -cl "/home/stoeps/.local/bin/davmail"
GenericName=Davmail
Name[en_US]=Davmail
Name=Davmail
Icon=/var/home/stoeps/Pictures/davmail.png
Path=/var/home/stoeps
StartupNotify=true
Terminal=false
Type=Application
Version=1.0

The key part is the Exec= line, which uses distrobox-enter to run the application inside the appropriate container.

After adding or modifying desktop files, I update the application database with:

update-desktop-database -v ${HOME}/.local/share/applications/

This ensures the new application appears in my desktop environment’s application menu. Now I can launch container-based applications with a simple click, just like any native application!

Managing Services and Desktop Files

To reuse these service and desktop definitions, I added them to my chezmoi dotfile repository. At Chemnitzer Linuxtage 2025, I gave a talk on getting started with chezmoi.
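As a sketch, capturing the definitions looks like this (the unit path assumes the standard user-unit directory, and the desktop-file name is an assumption):

```shell
# Put the user unit and the desktop entry under chezmoi management;
# commit and push then happen from the chezmoi source directory.
chezmoi add ~/.config/systemd/user/syncthing.service
chezmoi add ~/.local/share/applications/davmail.desktop
```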

Final Thoughts

Switching to an immutable OS with containers has made my computing environment cleaner and more reliable. The separation between the core system and applications helps prevent dependency conflicts, and the container approach lets me experiment freely without worrying about breaking my system. Plus, rebuilding containers regularly avoids the cruft that tends to accumulate in traditional installations.

The combination of exported binaries, systemd services, and desktop entries makes the containerized applications feel completely native - I get all the benefits of isolation without sacrificing convenience.

If you’re considering a similar setup or have questions about my workflow, feel free to reach out!

