What are your ‘defaults’ for your desktop Linux installations, especially when they deviate from your distro’s defaults? What are your reasons for these deviations?
To give you an example of what I am asking for, here is my list with reasons (funnily enough, I use these settings on Debian, and they are AFAIK the defaults for Fedora):
- Btrfs: I use Btrfs for transparent compression, which is a game changer for my use cases, and even using it without RAID I have never had trouble with corrupt data on power failures, unlike with ext4.
- ZRAM: I wrote about it somewhere else, but ZRAM transformed even my totally under-powered HP Stream 11" with 4GB RAM into a usable machine. Nowadays I don’t have swap partitions anymore and use ZRAM everywhere, and it just works ™.
- ufw: I cannot fathom why firewalls with all ports but SSH closed by default are not the default. Especially on Debian, where unconfigured services are started by default after installation, it does not make sense to me. (A rough sketch of these three settings follows after this list.)
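For concreteness, here is a minimal sketch of how these three settings can be applied, assuming zram-generator and ufw are installed; the fstab line, sizes and compression level are purely illustrative:

```
# Btrfs transparent compression: add compress=zstd to the root entry in /etc/fstab, e.g.
#   UUID=<root-uuid>  /  btrfs  defaults,compress=zstd:1  0 0
# apply to the running system without a reboot (only newly written data gets compressed):
sudo mount -o remount,compress=zstd:1 /

# ZRAM swap via zram-generator (the mechanism Fedora uses by default):
sudo tee /etc/systemd/zram-generator.conf >/dev/null <<'EOF'
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
sudo systemctl daemon-reload && sudo systemctl start systemd-zram-setup@zram0.service

# ufw: close everything incoming except SSH
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
```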
My next project is to slim down my Gnome desktop installation, but I guess this is quite common in the Debian community.
Before you ask: why not Fedora? - I love Fedora, but I need something stable for work, and Fedora’s recent kernels break virtual machines for me.
Edit: Forgot to mention ufw
KDE, just because it’s a good balance of usability and customisability.
I don’t think I will ever go back to a filesystem without snapshot support. BTRFS with Snapper is just so damn cool. It’s an absolute lifesaver when working with Nvidia drivers because if you breathe on your system wrong it will fail to boot. Kernel updates and driver updates are a harrowing experience with Nvidia, but snapper is like an IRL cheat code.
OpenSuse has this by default, but I’m back to good ol’ Debian now. This and PipeWire are the main reasons I installed Debian via Spiral Linux instead of the stock Debian installer. Every time I install a new package with apt, it automatically creates pre and post snapshots. Absolutely thrilled with the results so far. Saved me a few hours already, after yet another failed Nvidia installation attempt.
Nice use case for snapshots! :-) I’ll put it in my backlog, perhaps it is a nice insurance for my crash-prone machines.
Please tell me more about Spiral Linux. I’m not a huge Debian fan personally (at least for desktop), but I often install Linux on other people’s machines, and Mint/Debian is great for them.
How does it differ from stock?
Details on the Spiral Linux web site: https://spirallinux.github.io/
Key points are BTRFS with Snapper, PipeWire, newer kernels and some other niceties from backports, proprietary drivers/codecs by default, VirtualBox support (which I’ve personally had huge problems with in the past on multiple distros). They also mention font tweaks, but I haven’t done side-by-side comparisons, so I’m not sure exactly what that means.
Edit: shoutout to Spiral Linux creator @sb56637@lemmy.ca , who posted a few illuminating comments on this older thread: https://lemmy.ca/post/6855079 (if there’s a way to link to posts in an instance-agnostic way on Lemmy, please let me know!)
deleted by creator
How does it differ from stock?
Well for one thing their driver support is apparently “harrowing”. 😊
I will never understand why people choose distributions that will brick themselves when the wind blows, so they add snapshot support as a band-aid, and then they celebrate “woo hoo, it takes pre and post snapshots after every package install!”
How about using a distro where you never have to restore a snapshot…
To clarify, this is my first time using Spiral Linux. My experience regarding Nvidia drivers is across several different distros (most recently Ubuntu LTS and OpenSuse Tumbleweed). I have never had a seamless experience. Often the initial driver installation works, but CUDA and related tools are finicky. Sometimes a kernel update breaks everything. Sometimes it doesn’t play nice with other kernel extensions.
The Debian version of the drivers didn’t set up Secure Boot properly. Instead, I rolled back and used the generic Nvidia .run installer, which worked fine. Not seamless, obviously, but not really worse than my experience on other distros. In the future I will always just use the generic installers from Nvidia.
Point is, with BTRFS you can just try anything without fear. I’m not going to worry about installing kernel updates from now on, or driver updates, or anything, because if anything goes wrong, it’s no big deal.
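As a hedged illustration (exact snapshot numbers and the config name will differ on your system), recovering from a bad driver install with Snapper looks roughly like this:

```
# list existing snapshots for the root config; apt's pre/post pairs show up here
sudo snapper -c root list

# revert the file changes made between the pre (e.g. 41) and post (e.g. 42) snapshot
sudo snapper -c root undochange 41..42

# or, on a Snapper/Btrfs layout that supports it (openSUSE-style), boot an older
# snapshot from the GRUB menu and make it the new default:
sudo snapper rollback 41
```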
And my point is that it’s not normal to fear updates. Any updates, but especially updates to essential packages like the kernel or graphics driver.
If you’re using the experimental branch of a distro or experimental versions of packages on purpose then snapshots are a good tool. But if you’re using a normal distro and its normal packages you should not have to resort to such measures.
Nvidia just sucks across every distro I’ve used. Have you had good experience running CUDA, cuDNN, and cuBLAS? If so, which distro?
And have you run it alongside other things that require kernel modules, like ZFS and VirtualBox?
- NixOS
- disko + nixos-anywhere (automatic partitioning & remote installation of new systems)
- stylix (system-wide theming)
- agenix (secret management)
- impermanence (managing persistent data)
- nixos containers for sandboxing applications & services (using systemd-nspawn)
- TMPFS as /
- LUKS
- BTRFS as /nix (might try bcachefs)
- SWAP partition (= RAM size, to suspend to disk)
- Greetd with TUIgreet (DM)
- SwayFX (WM)
- Kitty & foot (term)
- Nushell (shell)
- Helix (editor)
- Firefox (browser)
- slackhq/nebula (c.f. self-hosted tailscale, connecting my systems beyond double NATs)
EDIT1: fix “DE” -> “DM”
Now that’s quite an interesting NixOS setup, I’m especially intrigued by the tmpfs root portion. The link you provided was a great read, and I’ll keep this and honestly most of what you’ve described in mind for when I mess with NixOS again.
There are also these two blog posts by elis on setting up tmpfs specifically. Though these posts are more setup guides than discussions of the philosophy of systems design.
Much appreciated, I’ll definitely take a look!
This is a very interesting setup; would you mind providing more explanation / documentation? Also, would you mind sharing your NixOS config? I would love to try it.
My system configuration can be found on git.sr.ht/~sntx/flake. I’ve linked the file tree pinned to version 0.1.1 of my config, since I’m currently restructuring the entire config[1], as the current tree is non-optimal[2].
The documentation in the README in combination with the files should cover most of what I’ve described, with the following exception: disko is not present in the repo yet, since I’ve set it up in a forked version of my config and the merge depends on finishing the restructuring of my system configuration.
- You can take a look at these (non-declarative) installation steps to get an idea of how TMPFS as root can be set up
- If you’re interested, I can also DM you the disko expression for it
The goal is to provide definitions for desktops, user-packages, system-packages, themes and users. Each system can then enable a set of users, which in turn have their own desktop, user-packages and theme. A system can also enable system-packages for itself, independent of users. If a user is enabled that has a desktop set, the system will need to have a display-manager set as well, which should launch the user’s configured desktop. ↩︎
The current config assumes a primary user, and can only configure a single DE and apply the application/service configs only to that user. ↩︎
This looks like a whole project. What is the overall goal of this build?
I am very new to NixOS and am interested in it. Specifically for Ansible scripts to build out easily replicable Docker hosts for my lab. I have also considered it for switching my primary desktop and laptops, as being able to have the same OS with everything the way I like it is also intriguing.
Sorry for the late response. P.S. I love your wallpaper.
What is the overall goal of this build?
There’s no overall goal to the project. It’s just the result of me tinkering with my systems from time to time (I’m allocating a bit less than three hours each day to coding on personal projects to improve my skills; some of that time flows into my NixOS config).
I am very new to NixOS and am interested in it. Specifically for Ansible scripts to build out easily replicable Docker hosts for my lab.
I extensively used docker/compose before I switched my systems to NixOS; since then I’ve barely touched it.
The thing with Ansible and Docker is that you mostly define the steps you want your systems to automatically go through to reach a specific state.
Nix[1] approaches the problem the other way around. You define the state you want to have, and Nix solves for the steps that need to be taken to reach that state.
If you want to try your hand at that concept, I recommend installing just Nix on one of your test machines and trying out development shells (devShells) with it. For example, the SwayFX repo contains a flake.nix providing a devShell. This allows everyone working on the project to just run nix develop in the cloned repo, or nix develop github:WillPower3309/swayfx without cloning the repo, to enter the development environment. This can be combined with tools like direnv to automatically set up development environments based on the current directory.
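As a tiny, hypothetical example of what such a flake can look like (package names and the nixpkgs branch are just placeholders, not the SwayFX one):

```
# run in an empty directory; the flakes/nix-command experimental features must be enabled
cat > flake.nix <<'EOF'
{
  description = "toy devShell example";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.hello pkgs.jq ];   # tools available inside the shell
      };
    };
}
EOF
nix develop                          # drops you into a shell with hello and jq on PATH
# with direnv + nix-direnv installed, this does the same thing automatically per directory:
echo "use flake" > .envrc && direnv allow
```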
If you want a more encompassing example of what Nix can provide, take a look at:
- nixified.ai
- This presentation by Matthew Croughan on Nix-Flakes and Dockerfiles.
I have also considered it for switching my primary desktop and laptops, as being able to have the same OS with everything the way I like it is also intriguing.
While I personally think NixOS is one of the most potent pieces of software in existence, and a computer without it feels less capable to me, I do not recommend it lightly.
Just take a look at hlissner’s FAQ on his system config (which I greatly agree with).
That said, I initially tried NixOS on my PC and pushed the config to a git-forge. I then installed the base NixOS ISO on my laptop and told it to build the config from git. And that worked flawlessly.
After leaving the PC unattended for about 20 minutes, it went from a full Gnome desktop to my Sway setup.
That’s the point when I was sold.
Sorry for the late response. P.S. I love your wallpaper.
Don’t worry about the late response ^^
The wallpaper can be built with nix build sourcehut:~sntx/nix-bg#abstract-liquid btw.
The “package manager” that NixOS is built around. Though I think of it more as a “build system” - not to be confused with Nix, the language the build “scripts” are written in. ↩︎
Hmm, good points in the articles. I think my goal of building Docker hosts makes more sense. It is interesting how they took the declarative concepts of something like Terraform and Kubernetes and built them into an OS. It’s kind of like Fedora Silverblue, but the two took different approaches. Perhaps Fedora makes more sense on a desktop. I have a dev and DevOps background and like the idea of being able to more deeply learn Linux without having to rebuild my system from scratch when I bust it.
Can you explain home-manager? What about things to consider when installing the Nix package manager on another distro?
Perhaps figuring out how to get the wallpaper out of a nix distrobox would be a good learning experience.
- NixOS
- LUKS
- Btrfs
- sway
Nobara KDE user here. One of the reasons why I chose it is because it comes with many of the customisations that I’d normally do (such as using an optimized kernel). But in addition, I use:
- Opal instead of LUKS
- KDE configured with a more GNOME/macOS like layout (top panel+side dock)
- GDM instead of SDDM, for fingerprint login
- Fingerprint authentication for sudo (see the sketch after this list)
- TLP instead of power-profiles-daemon for better power saving (AMD P-State EPP control, charging thresholds etc)
- Yakuake terminal (and Kitty for ad-hoc stuff)
- fish shell instead of bash
- mosh instead of ssh
- btop instead of top/htop
- gdu instead of du/ncdu
- bat instead of cat
- eza instead of ls
- fd instead of find
- ripgrep instead of grep
- broot instead of tree
- skim instead of fzf
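For what it’s worth, here is a rough sketch of the fingerprint-for-sudo and TLP bits; the PAM file location and TLP option names are the common ones on recent distros, and the threshold values are just examples (charge thresholds also need hardware support):

```
# fingerprint authentication for sudo: put pam_fprintd before the password modules
# (enroll a finger first with: fprintd-enroll)
# /etc/pam.d/sudo - add as the first auth line:
#   auth    sufficient    pam_fprintd.so

# TLP: AMD P-State EPP hints and battery charge thresholds
sudo tee /etc/tlp.d/50-custom.conf >/dev/null <<'EOF'
CPU_ENERGY_PERF_POLICY_ON_AC=balance_performance
CPU_ENERGY_PERF_POLICY_ON_BAT=power
START_CHARGE_THRESH_BAT0=75
STOP_CHARGE_THRESH_BAT0=80
EOF
sudo systemctl enable --now tlp.service
```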
Impressive list! What is the benefit of using Opal compared to LUKS?
Opal drives are self-encrypting, so encryption is done transparently by the disk’s own controller. The main advantage is that there’s almost no performance overhead because the encryption is fully hardware-backed. The second advantage is that the encryption is transparent to the OS - so you could have a multi-boot setup (Windows, FreeBSD etc.) all on the same encrypted drive, with no need to bother with Bitlocker, Veracrypt etc. to secure your other OSes. This also means you no longer have the bootloader limitation of not being able to boot from an encrypted boot partition, as is the case with certain filesystems. And because your entire disk is encrypted (including the ESP), it’s more secure.
Thank you very much for your explanation.
I still feel skeptical about using a chip’s own controller for encryption. AFAIK there have been multiple problems in the past:
- Errors in the implementation which weaken the encryption considerably
- I think I even read about ways to extract the key from the hardware (TPM based encryption)
Do you provide a password, and are there ‘hooks’ in the boot process for you to enter the password on boot?
I think it is nice to have full disk encryption, but usually we are speaking about evil-maid attacks (?), and IMHO it is mostly game over when an attacker has physical access to your device.
Yes, I do provide a password on boot. As you said, keys can be extracted from the hardware, so that’s not secure, which is why I don’t use the TPM to store the keys.
There are no hooks necessary in the bootloader, as it’s the BIOS which prompts you for the password and unlocks the drive.
And yes, there have been implementation problems in the past, but that’s why the Opal 2.0 standard exists - don’t just buy any random self-encrypting drive, do your research on past vulnerabilities for that manufacturer, and check if there are any firmware updates for the drive (don’t just rely on LVFS).
Also, the common hardware attacks rely on either a SATA interface (to unplug the drive while it still has power) or older external ports vulnerable to DMA attacks such as PCMCIA or Thunderbolt 3.x or below, so those attacks only affect older laptops. Of course, someone could theoretically install a hardware keylogger or something, but this is also why you have chassis intrusion detection, and why you should secure and check any external ports and peripherals connected to your machine. Overall physical security is just as important these days.
But ultimately, as always, it comes down to your personal threat model and inconvenience tolerance levels. In my case, I think the measures I’ve taken are reasonably secure, but mostly, I’ve chosen Opal for performance and convenience reasons.
Thank you very much for elaborating. :-)
My next project is to slim down my Gnome desktop installation, but I guess this is quite common in the Debian community.
This is pretty easy on Debian.
- Uncheck all tasksel entries during initial installation
- Reboot
sudo apt install gnome-shell gnome-terminal nautilus
- Reboot again.
It’ll boot right into a fully functional Gnome desktop and hardly anything else. The only extra software this installs is yelp, gnome-shell-extension-prefs and network-manager-gnome. Uninstall them with sudo apt purge and sudo apt autoremove --purge if you don’t need them. sudo apt install cups if you need printing, and remove your wifi device from /etc/network/interfaces to let network-manager-gnome handle wifi if you use it.
Your system will require 2.8GB of disk space.
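Consolidated into a hedged sketch (package names as described above, for current Debian; adjust to taste):

```
# after a tasksel-free base install and reboot:
sudo apt install gnome-shell gnome-terminal nautilus

# optional extras pulled in above, removable if you don't need them:
sudo apt purge yelp gnome-shell-extension-prefs network-manager-gnome
sudo apt autoremove --purge

sudo apt install cups            # only if you need printing
# if you keep network-manager-gnome for wifi, remove the wifi stanza
# from /etc/network/interfaces so NetworkManager can manage the device
```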
Yes, Debian, then use Flatpak to get all the latest desktop software and enjoy.
Yep, that’s exactly the purpose of this.
Thanks for the list.
The way I set up my minimal systems is to uncheck everything during tasksel, then switch to another virtual console, chroot into /target and install what I need. Saves one reboot and some hassle when installing via thumb drive. (Did this for Xfce in the past.)
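Roughly like this (the console key combo and /target mount point are the Debian installer’s; the package set is just an example):

```
# during the install, after the base system step, switch to a spare console (Ctrl+Alt+F2 or F3)
chroot /target /bin/bash
apt install xfce4 lightdm network-manager-gnome   # whatever minimal set you want
exit
# then switch back to the installer and let it finish
```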
What about fedora silverblue? Would it have saved you?
I totally love the idea of Fedora Silverblue and UBlue. I played around with Silverblue and perhaps it will replace my Debian installation on my multimedia laptop. Still, it’s no substitute for Debian since the kernel is too new/fast-changing (problems with VMs, and I don’t want to pin an old kernel w/o security updates forever) and I have a very custom (but fully automated) setup via Ansible, which wouldn’t work like this on Silverblue. (I would have to use Ansible for the host and then create a lot of custom containers, to the best of my understanding.)
I’ve never had a problem with ext4 after power failure.
Zram is not a substitute for swap. Your system is less optimal by not having at least a small swap.
Firewalls should never default to on. It’s an advanced tool and it should be left to advanced users.
Not to mention how much grief it would cause distro maintainers. If they don’t auto configure the firewall they get blasted by people who don’t know why their stuff isn’t working. If they auto configure they get blasted by people upset that the auto configurator dared change their precious firewall rules. You just can’t win.
Honestly, firewalls should be enabled by default, especially on laptops connecting to public places.
A good default should be chosen by the distro maintainer. A default should not overwrite your own config (like any config, really), so no upset folks who like to change the firewall. Also, if you don’t block much outgoing traffic you are not likely to run into problems. And if you like to poke holes for incoming traffic, you’re an “advanced” user anyway.
So what should happen when the user installs a service that needs an open port in order to work? Presumably the whole point of installing it being to, you know, use it.
There are not many programs that require open ports for incoming traffic. Things like SSH or a web server do, but then again those are services you would manually want to open anyway.
What is the difference between physical swap and having swap on ZRAM, especially for the kernel? To the best of my knowledge, nearly no Linux distribution supports suspend-to-disk any more, and ZRAM swap looks to the kernel like … swap, thanks to the virtual memory subsystem. Further, I have high trust in the Fedora community, which decided to use ZRAM.
We can agree to disagree about the firewalls; especially for people who don’t know why their stuff isn’t working, it protects them and is much better than having unconfigured services with open ports on a laptop in a public network, IMHO.
Why does not having swap make the system less optimal? Considering obviously it has more than enough ram available.
Swap holds memory pages which are not currently used. Putting them out of the way will optimize the main RAM for normal operations.
It’s not a huge difference on a modern fast system with lots of actual RAM but it can be felt on older systems and/or less RAM.
So it’s not not having swap that makes the system “less optimal” but not having enough RAM if I understand correctly?
They go hand in hand. Given enough RAM you can keep the swap in RAM rather than on disk to make it faster, but you still need swap.
I’m confused, so if there’s no swap, what is the system doing given enough RAM? What’s the impact?
Perhaps this can help: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
I have a question about swap.
My current rig has 64 gb, and I opted to not create a swap partition. My logic being I have more than enough.
The question is does swap ever get used for non-overflow reasons? I would have expected 64 GB to be more than enough to keep most applications in memory. (including whatever the kernel wants to cache)
I believe so, though I went without swap for a while myself and never noticed any issues. When in doubt a 1gb swap partition can’t hurt.
Start with a small swap file (100 MB) and see how much gets used, no need to waste 1 GB.
I also have 64 GB and yes, it gets used. For very low quantities, mind you, we’re talking couple hundred KB at most, and only if you don’t reboot for extended periods of time (including suspend time).
Creating a big swap is not needed, but if you add one that’s a couple hundred MB you will see it gets used eventually.
You don’t have to create a swap partition, you can create a swap file (with dd, mkswap, swapon and /etc/fstab). You can also look into zswap.
Swap is not meant as overflow “disk RAM”, it’s meant as a particular type of data cache. It can be used when you run out of RAM but the system will be extremely slow when that happens and most users would just reboot.
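A minimal sketch of the swap-file approach mentioned a couple of comments up (size illustrative; note that on Btrfs a swap file needs NOCOW/no compression, which newer btrfs-progs handles via btrfs filesystem mkswapfile):

```
# create and enable a small swap file (ext4/xfs; 1 GB here, pick your own size)
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024 status=progress
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# check what actually gets used over time
swapon --show
free -h
```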
- /boot and root partition: I don’t use swap (I don’t need it, I have plenty of RAM) and I usually encrypt the root partition with LUKS
- ext4: ppl keep telling me btrfs is better and all that but idk shit about filesystems and ext4 just works
- any X11 WM: currently I’m using qtile and I’ve used a bunch of WMs in the past
- alacritty: it’s fast and it has an easy config with great docs
- firefox with arkenfox userjs, ublock and tor proxy configuration
- (neo)vim
- qemu/kvm/virt-manager
- doas
- fish shell
Well, almost the opposite of you: I currently use Fedora Silverblue (including BTRFS, which I very much appreciate for versioned backups), except that I override GNOME Software (never got it to work properly for me) and Fedora’s Firefox (I use the Firefox from Flathub instead of Fedora’s).
I feel envious - I would love to run Silverblue like you do! :-)
deleted by creator
Gnome with Wayland: I am just too used to the touchpad gestures and sleek-looking apps to go back. Even Windows looks and behaves janky in comparison.
Firefox: plain better than the alternatives, the scrolling is so much better under Wayland too
The auto dark mode GNOME extension: it switches between dark and light mode depending on the time of day
Rounded window corners GNOME extension: forces all 4 corners of applications to have rounded corners
Separate /home partition: very useful for distro hopping, or in case going the nuclear option and just reinstalling everything is the easiest way to deal with a breakage
XFS filesystem and a kernel with the BORE scheduler, which are the defaults on CachyOS, for a faster and snappier system.
Once, some years back, I posted a topic on how could I slim down my Gnome DE.
It sparked a rather long and complex discussion, and the bottom line was that Gnome integration was already at a point where so many parts depended on so many others that it was not an easy task.
I opted to move to a GTK compatible DE. Currently I use XFCE but spent years with Mate.
I still use Mate, but switched the window manager to i3.
Xfce / Mate are great (and lightweight) options!
I used Mate for years, but at some point it became unstable for me. I need Wayland, though, so I have to hold my breath until Xfce supports it in the future.
Fedora Workstation (i.e. the Gnome one)
Separate /home partition
Then the only other changes are to a few keyboard shortcuts, icons, and changing Firefox to a GTK4-style theme.
I’m coming back to Linux as a main desktop, finally ditching Windows (again). I tried out Fedora Workstation and the Fedora KDE spin. KDE looks so good now; before, I thought of it as a Windows wanna-be knock-off, but that was back in the Windows XP days… now it looks so polished. I probably prefer it to Gnome because I’ve been a Windows user for so long, but Gnome is nice with its minimal approach: it looks nice and clean. Can’t get away from how nice KDE looks though, I’m going to stick with that I think.