Apparently, Steam Deck’s underlying technology owes a debt to NieR: Automata, if the interviews in Proton and NieR: Automata – the unique story behind what makes Steam Deck tick are to be believed. Which really doesn’t surprise me.

In my opinion, Steam Deck’s graphics and battery life aren’t as impressive as the fact that they were achieved in such a small, portable package. You get roughly Xbox One grade graphics from roughly Xbox One grade hardware, and x86 will never offer great battery life under heavy load. But it’s got one thing I love most of all.

Video games work on it. There are a fair number of games on Steam that actually have a native Linux version, and unlike macOS support, it’s not quite a joke. But the vast majority are Direct3D-based games written for Windows. That’s how video games are written in this world.

Yet Steam Deck runs them about as well as the hardware is capable of. In ways I was never able to achieve back in the day, now more than a decade in the past, using purely Wine and derivative solutions. So I find myself very glad that folks made a video game with 2B and 9S 🙂

Actually, that reminds me: I’ve been debating picking up a copy of the game on Steam one of these sales. Haven’t played it since I was active on console, and I haven’t even bothered to hook up Deathstar One since moving thanks to getting Rimuru operational and Steam Deck largely taking over for both the ol’ Steam Link and Deathstar One.

Network device names are meaningless

Annoying factoid: the modern naming of Linux network interfaces provides a consistent way of naming the devices, but not a permanent one.

Taking advantage of Rimuru’s old dual-port 10G USB-C card, which I had replaced with a Thunderbolt 4 card, happily works. But as a side effect, enp3s0 and wlp4s0 are now enp4s0 and wlp5s0, which as you might expect breaks the networking configuration for both interfaces.

Because why the fuck would you expect devices to keep their place in the topology just because they happen to be soldered to the board? On the upside, at least it became obvious what was going on when I inspected the files in /etc/NetworkManager/system-connections and noticed that the digits had changed.
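
For posterity, the fix is a one-liner per connection. This is a sketch, assuming the profiles are still named after the old device names:

nmcli connection modify enp3s0 connection.interface-name enp4s0
nmcli connection modify wlp4s0 connection.interface-name wlp5s0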

I’m guessing that since Zeta’s lone PCI-E slot is an x16, it ends up numero uno. It just so happens that I have a PCI-E x4 card with a USB controller plugged in instead of a GPU, because the machine’s jobs are all server related. Although, I bet she would make a dandy little gaming box, in as much as a 2-slot-wide GPU and an SFX PSU can actually handle anything modern and juicy. Especially if Valve was to, you know, release SteamOS 3.x for PCs instead of Steam Deck only 🙂

lspci -t -v
-[0000:00]-+-00.0  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne Root Complex
           +-00.2  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne IOMMU
           +-01.0  Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
           +-01.1-[01]----00.0  ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
           +-02.0  Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
           +-02.1-[02-05]--+-00.0  Advanced Micro Devices, Inc. [AMD] 500 Series Chipset USB 3.1 XHCI Controller
           |               +-00.1  Advanced Micro Devices, Inc. [AMD] 500 Series Chipset SATA Controller
           |               \-00.2-[03-05]--+-00.0-[04]----00.0  Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
           |                               \-01.0-[05]----00.0  Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak]
           +-08.0  Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
           +-08.1-[06]--+-00.0  Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series]
           |            +-00.1  Advanced Micro Devices, Inc. [AMD/ATI] Renoir Radeon High Definition Audio Controller
           |            +-00.2  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
           |            +-00.3  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1
           |            +-00.4  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1
           |            \-00.6  Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller
           +-14.0  Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller
           +-14.3  Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
           +-18.0  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 0
           +-18.1  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 1
           +-18.2  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 2
           +-18.3  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 3
           +-18.4  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 4
           +-18.5  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 5
           +-18.6  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 6
           \-18.7  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 7

Backups, backups, backups

Now that Zeta is effectively operational, I’ve turned my master plan to its next stage: not losing data.

Cream’s drives had a very simple arrangement. One drive, designated “Master”, or M:, was the base of all the file shares kept there. A second drive, “Backups”, or Z:, was kept next to it, and a scheduled task would run a robocopy in mirror mode three times a week to sync the drives up. Nice, simple, and cheesy, and for bonus points the mirrored drive racked up about half the power-on hours over the past few years. For monitoring purposes, the log was saved to one of Cream’s internal drives, which was periodically imaged to the host-specific backups area on the master drive.

Zeta on the other hand doesn’t have the curse of NT, but I did kind of like this simplicity. It fits my recovery model where having an easily recovered copy of data is desirable, but changes are infrequent enough that rolling up the backups every 2 or 3 days is probably okay. At first, I decided to create one disk as ext4 to function as the backups, because dependable and trusted, while making the other xfs to function as the master, because that’s the default in AlmaLinux 9.

This created one small problem, however, in that getting rsync to play nice with SELinux, POSIX ACLs, and a few extended attributes proved to be a pain in my ass! For SELinux, you can just relabel the drive after. Not something I want to scale up to 8 TB, but not too bad for the actual storage in use (2 TB) today. But then we’ve got the issue of the POSIX ACLs and extended attributes used on my file share infrastructure.
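
The relabel-after-the-fact bit is at least a one-liner; a sketch, with the mount point made up for illustration:

restorecon -Rv /srv/shares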

Turns out that rsync’s --archive flag effectively breaks the flags you would want for synchronizing these, and then leaves you to go fiddle around with permission masks. So, I said fuck that. I was rather disappointed in rsync at that, but let’s face it, ACLs and xattrs aren’t that popular when 1970s Unix permissions are an 80% solution.

After taking suitable backups (one local, one remote) of the critical files, I set about turning to tools that I know how to fuck with. The backup drive was sacrificed to create one disk of a RAID1 mirror, and since mdadm allows specifying the drives like missing /dev/sdwhatever or vice versa, it was easy to spawn the array in degraded form. Then I synced the data to the array from the master drive, before wiping it and adding it to the mirror’s missing slot. About 10 or 11 hours of syncing at max speed later, everything’s all riled up and has gone through the reboot test.
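
Roughly, the degraded-mirror dance looks like this; a sketch with hypothetical device names (/dev/sdb standing in for the old backup drive, /dev/sda for the old master) rather than my actual ones:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb
mkfs.xfs /dev/md0                        # filesystem choice here is my pick, not gospel
# ...copy everything over from the old master, then feed it to the empty slot:
mdadm --manage /dev/md0 --add /dev/sda
cat /proc/mdstat                         # watch the resync crawl along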

How did I migrate the data if rsync was being a bugger, you ask? Well, it’s slow as hell, but cp --archive and tar --acls --selinux --xattrs really do what you want when you’re Rooty Tooty and want a lossless copy :P.
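
In other words, something along these lines, run as root; the mount points are made up for illustration:

(cd /mnt/master && tar --acls --selinux --xattrs -cf - .) \
    | (cd /mnt/array && tar --acls --selinux --xattrs -xpf -)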

In the past, I would typically have used LVM2 pools to manage this sort of operation. It’s overly complicated command-line administrata, but hey, it works well and it has features I like, such as snapshots and storage pools. The advantage of mdadm for me is that it is very simple to manage, thanks to fewer moving parts.

Having been “that guy” at some point in my career who ended up writing the management software my old job used for mdadm software RAID in their audio IRDs, later extended to custom hardware built on top of firmware RAID, I know how to use mdadm and, more importantly, how reliable it is, and how easy it is to recover a mirror without fucking up. Which, you know, is like the number one way your data goes bye-bye when recovering, right next to “oh shit, the drive died before it was synced.” As much as I appreciate LVM2, it’s got enough moving parts that I’m more leery about the failure scenarios. More importantly, I have more experience with mdadm failure and recovery than I do with LVM.

Of course, this does create a new problem and its own solution. Since my backup drive is now in hot sync with the master drive, it is no longer uber idle enough to be considered a ‘backup’. No, it’s redundancy to buy time to replace drives before the entire array goes to the scrap yard.

This doesn’t really change my original recovery scenario: which is “Go buy two drives if one fails,” it just means that there is a higher probability that both drives will actually fail closer together when that happens. What’s the solution to this? Why, my favorite rule of data storage: ALWAYS HAVE A BACKUP! Thus, a third drive will be entering the picture upon which to do periodic backups of the array, and be kept separate and offline when not being refreshed.

In practice though, this will be more like a fourth drive; in the sense of ‘smaller disk, most important data’ and ‘big ass disk, all the data’. My spare archive drives are large enough to easily do the former, and one can basically contain the entire ‘in use’ storage or close to it, but none of my spares for sporadic backup has the capacity to handle the entire array.

Networks and Pizza

Having finally merged some code that’s been stuck in my craw, I decided on a mini-celebration: pizza and eggplant parmigiana, although sadly I forgot about the beer in the fridge. Oh, well; it’ll be there to go with the leftovers 😋.

On the flip side, I think it’s almost time to declare Zeta an operational battle station.

The first problem was I/O performance. Her predecessor, Cream, had been pressed into sharing its Wi-Fi with Rimuru, leaving the SMB shares on Cream only accessible via wireless clients. Having fished out the aerials that came with Rimuru’s Motherboard 2.0, that solved that connectivity gotcha. But not the simple fact that the file server and the clients are within a meter or two of each other, while the access point is across the house! As much as I suspect a mesh system will be the upgrade path for my network, I’m not replacing that router until it dies or Wi-Fi 7 is ready to rock.

Thus, my shiny new file server was only achieving about 5 MB/s with my Mac and PC on the other side of the L-shaped monster. Now, I’ve never expected big things of Samba compared to NT’s SMB stack, but Samba’s got waaaay better performance than that, and so do Zeta’s hand-me-down platter drives. My solution to this problem? Gigabit!

At first, I attempted to solve this problem using the combination of libvirt and pfSense. But I didn’t have much luck getting the bridging to work so that a VM on the host could act as the router while the host itself functioned as a client of the physical interface. In the end, I discarded this idea and configured Zeta to function as the router for my little local IPv6 network. Yeah, that’s right: I said IPv6, baby! Since this is a local network intended to join Zeta (server), Shion (Mac), Rimuru (PC), and the occasional other machine, I opted to set this up as IPv6 only. There’s no real need for IPv4 in my desk’s wired LAN. Maybe I’ll enable IPv4 someday, so I can jack the old PowerBook G3 into the switch, since Mac OS 9.x probably lacks IPv6 support the way Sonoma lacks AppleTalk support 🤣.

Configuring things was pretty easy. A little bit of radvd to handle Router Advertisements and Router Solicitations, DHCPv6 set up for good measure as an insurance policy, and the Ethernet port configured with the desired address and itself as the gateway. In the future, I may try setting up BIND, so I can have DNS A records map to Zeta’s IPv4 address on the household Wi-Fi and AAAA records map to Zeta’s IPv6 address on the desk’s Ethernet, or perhaps even separate domains. But I’m a little hesitant about taking out DNS whenever I reboot the server.
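
The whole thing boils down to something like this; a sketch assuming the wired port’s connection profile is named enp4s0, a made-up ULA prefix, and that radvd is already installed:

nmcli connection modify enp4s0 ipv6.method manual \
    ipv6.addresses 'fd00:1234:5678::1/64'
cat <<'EOF' > /etc/radvd.conf
interface enp4s0 {
    AdvSendAdvert on;
    prefix fd00:1234:5678::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
EOF
systemctl enable --now radvd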

On the flip side, thanks to the lack of fuckwittery, Samba and the SMB stacks on Mac and NT just handle this case fine. Navigating to \\ZETA or smb://ZETA while jacked into the local Ethernet switch nets me about 80 to 115 MB/s, or roughly how fast you can spew data over a Gigabit link to SATA-powered things. Seems the SMB stacks are smart enough to prefer the local Ethernet, but something more DNS-aware will be needed to fix cases like SSH.
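
In the meantime, pointing ssh straight at the desk-side address does the trick, just without a pretty name (using the made-up ULA address from the sketch above):

ssh terrypoulin@fd00:1234:5678::1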

The next phase has been setting up the virtual machine environment, which will probably replace the Parallels VMs I sometimes spin up on my Mac and the WSL2 environments on my PC. This basically amounted to setting up a bridge interface with the same IP information, using Zeta’s Ethernet port as its bridge port, and then setting each virtual machine’s second interface to bridge to the LAN so that it can be routable over the local switch.
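
With NetworkManager, that’s roughly the following; same hypothetical names as before, with the static address living on the bridge instead of the port itself:

nmcli connection add type bridge ifname br0 con-name br0 \
    ipv6.method manual ipv6.addresses 'fd00:1234:5678::1/64'
nmcli connection add type bridge-slave ifname enp4s0 master br0
nmcli connection up br0
# Guests then get a second NIC bridged to br0 (e.g. <interface type='bridge'> in libvirt).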

Thus, Shion, Rimuru -> Zeta works. Shion, Rimuru, Zeta -> some VM on Zeta works. Muhuahuaha!

Decommissioning Cream

As the process of migrating files from Cream to Zeta continues, and rather devolves into something more like 1983 than 2023, I am reminded of how much I despise using Windows machines in important roles on my network.

Yes, the whole experiment of using Windows 10 for my home file server worked out pretty well relative to what I expected. But also yes, it has pissed me off a lot over the years.

More than a few times in the last 6 to 8 years, or however long it has been, I’ve thought to myself, “Gee, if I had just loaded Debian or FreeBSD a few months later like I had planned…” it would have been cheaper in the long run. To be fair, there have also been times that I found it rather neat, but most of those involved things like ssh/scp becoming (mostly) first-class citizens in the land of NT.

I am sure that, whether or not Zeta proves closer to the “ten year server” plan than Cream did, AlmaLinux will at least be less of a pain in my ass than NT was.

RAM versus I/O

And this my friends, is why I love having extra memory!

[terrypoulin@zeta ~]$ dd if=/dev/zero of=./dd.test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 0.38554 s, 5.4 GB/s
[terrypoulin@zeta ~]$ dd if=/dev/zero of=./dd.test2 bs=1M count=2000 oflag=direct
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 4.01215 s, 523 MB/s
[terrypoulin@zeta ~]$

This machine has a cheapo 1 TB Inland Professional 2.5″ SATA SSD to serve as its system disk. But she’s got 64 freaking gigs of RAM. Yes, that’s right – sixty-four freaking gigs!

[terrypoulin@zeta ~]$ free -h
               total        used        free      shared  buff/cache   available
Mem:            61Gi       966Mi        57Gi       8.0Mi       4.1Gi        60Gi
Swap:           31Gi          0B        31Gi

The first dd command writes 2 GB of zeros to a file, one MB at a time, as fast as the system can go. Thanks to the OS being able to say, “Hey, I’ve got memory to buffer that; carry on, wayward son,” it completes Really Damn Fast. This buffering isn’t great for the case of a slow removable disk (or, IMHO, “oh shit, the battery died” moments), but it is very effective when doing a lot of file I/O, such as compiling software or working on large projects with many files. By contrast, if the system had little available memory, it wouldn’t go so fast.
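
If you want a more honest wall-clock figure for the buffered case, chase the write with a sync so the timing includes the flush to disk (dd.test3 is just a scratch name):

time sh -c 'dd if=/dev/zero of=./dd.test3 bs=1M count=2000 && sync'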

The second command effectively says the same thing, but uses Direct I/O to ensure the data is spewed to the disk quickly and immediately, meaning that we get the speed a decent SATA SSD can achieve when combined with its own little bit of internal buffering. But we don’t experience the crazy speeeeeeeed that is RAM.

Why is this important for Zeta? Well, Zeta is replacing Cream, which means she has an 8 TB storage array to take care of, spends most of her life dealing with networked file transfers and streaming media to client machines, and unlike Cream will end up running several virtual machines thanks to having enough extra memory. Did I mention that fire sales on DDR4 have already begun to make way for DDR5? 😁

In my opinion, this video should be titled “On why user space Linux sucks”.

In terms of what most users think of as the desktop, this video has jack shit to do with you. Rather, the video mainly focuses on the concerns of packaging your binaries and expecting them to run on Joe Random Linux Distribution.

I kind of applaud Torvalds for his long-fought religious mantra of Don’t Break User Space. When you’re working with Linux itself, out-of-tree drivers breaking or needing pieces rewritten isn’t that unusual. Don’t maintain your driver, and you’re liable to go “oh snap, they replaced an entire subsystem” or find a deprecated API removed after a comical number of years. But compatibility between the Linux kernel and user space software is pretty superb.

One of the reasons why MS-DOS PCs took off, and CP/M before them, is the drive towards binary compatibility between customer machines. As much as Windows has often deserved its hate, backwards compatibility and stable ABIs (note, I said ABI, not API) have generally been pretty good.

Binary compatibility between Linux distributions has improved from the days when source-based systems were the best way to make shit work. But just the same, I did have to snicker at Torvalds’ comments about the GNU C library (glibc), which has often pissed me off over the years with its concept of compatibility for such a core piece of user space.

As someone quite fond of desktop Linux, I can’t say that binary compatibility of large applications between distributions is an especially fun thing. Not because it’s impossible, but because most of us involved just don’t care. I assume most, like me, learned Unix systems in an environment where API compatibility was the only path to victory, or they simply don’t care about the zillion other Linux distributions.

Thinking over my experiment, I think I can principally call it a success. Over the past ~three weeks, I’ve managed to not go out of my cotton-pickin’ mind and want to flip my NT partition out a window.

As someone who’s principally had a FreeBSD or Debian setup on hand for the past fifteen years, I’m going to call that a sign of progress on Microsoft’s part. Because while there are warts here and there using an X session, traditionally they piss me off less than Windows does.

Since the experiment began I’ve made numerous changes, but mostly small ones.

The freaky SD-card-freezes-Explorer thing hasn’t happened in ages, so I can probably thank updates for that. Likewise, I find turning my keyboard off before powering down seems to result in less need to repair the damned thing. Pretty stable for the most part.

IIS is definitely slow as dog poop compared to lighttpd, but given how easy it is to lock it down to a specific network, I’ll forgive that.

The thing I miss the most is how copy/paste works in X. The whole thing of ^C/^V versus selecting with the mouse and clicking the middle mouse button is really helpful when you’re copying between a terminal and a note editor. The select/right-click thing with modern Windows terminal emulators and console things, not so much. But that was also something I disliked about using Android with a monitor anyway.

Since modern Linux tends to treat otherwise available memory like a ramdisk, and buffers the crap out of it for faster file I/O, I’ve made a small tweak to my WSL2 setup:

PS > type .\.wslconfig
[wsl2]
# Default is 80%, or about 12.8 GB if you’ve got 16GB.
memory=8GB
# Default is 25%, or about 4GB if you’ve got 16GB
# It’s a VHDX file used as swap: not system virtual memory.
#swap=?

Effectively, this limits WSL2 to half my system’s physical memory. Otherwise I’ll find myself with 300~400 MB free in Task Manager. Some of the projects I work on generate ~20 GB of files and generally nut-punch memory.
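
One gotcha, as far as I know: .wslconfig is only read when the WSL utility VM starts, so the new limit doesn’t kick in until you bounce it:

PS > wsl --shutdown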

Given that my desktop sessions on W10 usually burn 3.5 to 6.5 GB according to Task Manager, this further increases my desire that my next machine offer at least 32 GB of RAM. Because 640 KB isn’t enough for anything anymore 🤣.

As most of my interests revolve around an xterm session, WSL works pretty damned well for me. Enough so that using Xfce4 and Thunar versus NT’s DWM and Explorer isn’t as large an impact as the change from Davmail + Thunderbird to ActiveSync and Windows Mail.

Much to my amusement if I do the old “Send to” -> “Mail recipient” trick, I get an error message that there is no program for that. Actually, I can’t remember what decade I last actually tried that. But I’m pretty sure that XP or ’98 was still sexy at the time.

I’ve been happy to see that Windows Terminal has come along nicely. When I tried it very early on, I found it annoying because I abuse bash’s classic line editing instead of using notepad-like shortcuts. No frustrations or interference with a release version of Windows Terminal; in fact, my only negative comment is that the fancy pane-split stuff isn’t the same as tmux or screen, hehe.

Spending more time on Windows 10 than normal, I find myself reminded that it’s been about a decade since I figured I should learn how to use PowerShell. While I mostly skipped having to care about batch files in command.com, I have had no such luck avoiding cmd.exe in my life. And PowerShell frequently reminds me how much nicer it is, if only I’d have the time to learn it as well as I know bash and general Bourne shell voodoo.

My little experiment of using Windows 10 Pro instead of Debian Buster was meant to answer the question “Can I” use NT as my main development and work platform. The question of “Should I” however is a different one. That I haven’t gone insane is a positive. That I’ve tried at all of course indicates that I may be a tad more insane than normal.

But hey, I spent many years using Android as a desktop replacement, so I’m obviously crazy to begin with ^_^.

When I think back over the past fifteen years and the various systems I’ve used, I think a table of Bluetooth problems would look like this:

  • GNU/Linux on PC: 100% success, or no BT driver.
  • FreeBSD on PC: 100% success, or no BT driver.
  • Android on phone, tablet, television, and laptop: 90% success.
  • Windows NT on XP, 7, and 10: 25% success.
  • iOS on tablet: 100% success.
Actually, come to think of it, aside from the ruckus with Google changing BT stacks and a painful Nexus 7, I’m pretty sure the only operating systems that ever give me grief about Bluetooth are all ones bearing the name NT!!!
I’d like to think it would suck less on a device with an integrated Wi-Fi/BT card. But adding USB Bluetooth to laptops and desktops has definitely been a painful experience over the past ~15 years.