Books are like a queue

Remind me never to go looking at the suggested reading, especially when I’ve worked through most of my immediate reading set :-/.

Perhaps it’s actually worse with the Kindle Rewards Beta program. In the sense that I had enough rewards that one of the books I’d added to my queue was almost free, and the others, well, just half refilled my rewards points ^_^.

One of my little side projects has been building a spreadsheet of books that I’ve bought, read, or started this year. Reading Insights shows I’m about 25 pages away from having read 60 books this year, which is one off from my spreadsheet. Somewhat scarier may be how fast my queue drains, especially when stumbling onto a series that I enjoy, since books are rarely one-offs.

I’m not sure how much detail I’ll add to my journal when I get to the year-end version of my spreadsheet, but so far I find it interesting. For every 3 books that I’ve bought this year (including pre-orders from last year that released this year), on average I’ve read 2 of them. Of those I haven’t finished, half I at least started. Most of those unfinished books are entries in long-running series that I will likely cycle back to between now and this coming summer, and a few are more specialized: epics you don’t read quickly and informational books you read most of but don’t always care to finish.

In the long run though, I want to do a year-end review of my reading for 2023. Both to see how my goal of reading something every day has affected my habits, and because I’m curious to see what effect the higher influx of serialized fiction has had. The thing that I refuse to put in the spreadsheet, however, is how much I’ve spent on books this year…lol

Normalization ftw

There are several upsides to standardizing on cables and devices when possible. In my case, that’s been braided (i.e., tangle-free) USB-C cables rated for 100W charging when the cables are long, and comparable 10 Gbit/s or faster rated cables when they’re short.

One of these upsides is “Ahh, it’ll charge a laptop!” when pairing a suitable charger with any of my longer cables. These cables are usually poor on data speed but superb at power delivery, which is often what I want when the desired cable is measured in meters, which is also when I really want tangle free….lol.

Another is knowing that when I grab a smaller cable, it’s going to be good enough to feed I/O devices like an NVMe-based SSD or any SATA thing I’ve still got handy. Aptly, most of these short cables either came with NVMe enclosures rated for 10 Gbit/s USB connectivity or are in fact Thunderbolt 3/4 cables rated for both 40 Gbit/s connectivity and 100W charging.

Increasingly, when the cables are short I’m aiming for 40 Gbit/s + 100W unless they’re packaged with something. The downside is that Thunderbolt cables are costly and come in limited lengths, but they’re generally sufficient for ‘all the USB things’ once you’ve groaned at the bill. If I find myself buying a short cable these days, I’ll save up for a Thunderbolt one as future-proofing, because more and more of my devices support either Thunderbolt or USB at 40 Gbit/s.

For devices in general, I’ve been swinging for USB-C 10 Gbit/s for a while now. Things like motherboards, drive enclosures and external drives, USB hubs, and PCI-E expansion cards are chosen based on this. This choice was made based on the rise of the NVMe external drive, and the fact that such a cable will be no problemo when paired with my older gear that maxes out at USB 3.0 or SATA speeds.

Similarly, the rare time that I buy a charger, I’ve generally aimed for the 90~100W scenario. In the sense that most of my devices will happily charge from a 45W or 65W charger, and the hungriest ship with a 90W charger.

Is this excessive? Not really. Why? Well, let’s see… my primary machine has 40G ports, my gaming machine has 10G ports and a card with 40G ports, my Steam Deck has a 10G port, and my file server has an expansion card with 10G ports.

Much like USB-A and MicroUSB-B have become relegated to specialized and rare things around here over the past decade, so has 5 Gbit/s connectivity begun to age out of the herd ;).

A most satisfying conclusion

Last night, I almost finished reading The Dark Ones and was very tempted to just skip sleeping in order to finish it in one sitting. This afternoon/evening, I managed to finish it.

The conclusion to The Vixen War Bride series is a very satisfying one, and I almost busted a gut laughing my ass off in the middle of the finale’s finale. Incidentally, book two in the series is one of the best books I’ve read all year, but that’s the subject of a later journal entry.

Over the course of the series, it’s increasingly hinted that the humans are not the “Dark Ones” that the Va’Shen believed them to be, and in the final entry, of course, the actual Dark Ones show up! The prologue with the Neil Armstrong was superb, but much of the novel deals with the resulting fallout as the Dark Ones make landfall. As human forces gather to counter an unknown enemy that’s been going through them like a hot knife through butter, our hero Ben is effectively left with his finger in the dam when his Rangers are tasked with channeling the ancient Spartans at Thermopylae to buy the combined joint task force the hours needed to gather their forces.

But far, far better than this is the aftermath of it all. See, our poor hero Ben was supposed to be separating from the army as part of a Reduction In Force, i.e., too many bodies, war is over, you’re done pal. When the Dark Ones show up and refugees start streaming into the village, that goes out the window, since no one is going anywhere until the Over the Rainbow arrives. After waking up in the hospital, Ben finds himself in the unique position of having somehow survived but still getting crapped on by red tape. The situation was so dire that Rangers and Va’Shen commandos ended up fighting side by side, and our hero may have managed to experience what it’s like to be fed through an alien nutcracker and bombed off the map, but there is always red tape.

Fortuitously, Alacea, his native wife and our heroine, has her own role in the finale. Seriously, part of the woman’s job is to argue her community’s case before the Va’Shen’s gods; the Va’Sh imperial court and the CJTF’s general ain’t gonna win that argument (^_^).

The imperial official’s internal thoughts are especially hilarious during the meeting between the emperor’s representative and the human general, and it’s a beautiful twisting of Va’Shen honor and face-saving that causes the emperor to declare Ben a Va’Sh citizen, among other honors, for having Just Saved All Their Asses. Which leads to Ben also having to extort a certain general officer who May Have Fucked Up Big Time ™ into letting him be out-processed there on Va’Sh, saving the U.S. government the few billion dollars it would take to ship him home for the rubber stamping, only for Ben to have to fight his way back to Va’Sh and Alacea.

Sho’Nan, the sassy chef, “The one who feeds,” continues to be her awesome self when Ben Gibson returns to the village and needs to speak to the chieftain Kasshas and the Na’Sha Alacea about joining the community, and Sho’Nan introduces him to the whole council as some vagrant who can’t even speak properly 🤣. Without a doubt, Sho’Nan is my favorite character throughout the series along with John Ramirez; perhaps the two most entertaining goons, I mean, supporting characters, in the entire series!

Needless to say, things get crazy when Ben comes before the council and Alacea loses her shit in excitement at her husband’s return, but we are treated to a superb finish as the two are finally reunited. It’s one of the more satisfying endings I’ve read to a sci-fi series.

Network device names are meaningless

Annoying factoid: the modern naming scheme for Linux network interfaces provides a consistent way of naming the devices, but not a permanent one.

Taking advantage of Rimuru’s old dual-port 10G USB-C card, which I had replaced with a Thunderbolt 4 card, happily works. But as a side effect, enp3s0 and wlp4s0 are now enp4s0 and wlp5s0, which, as you might expect, breaks the networking configuration for both interfaces.

Because why the fuck would you expect devices to retain their topology just because they happened to be soldered to the board? On the upside at least it became obvious what was going on when I inspected the files in /etc/NetworkManager/system-connections and noticed that the digits had changed.
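
If I get tired of playing whack-a-mole with those digits, there are a couple of escape hatches. A minimal sketch, with the MAC address and name made up, so substitute your own: a systemd .link file can pin a stable name to the onboard NIC by its MAC address, so PCI renumbering can’t move it.

# /etc/systemd/network/10-lan0.link: pin a name by MAC address
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0

Or, lazier, just tell the existing profile in /etc/NetworkManager/system-connections about the new name and move on (the profile name here is the stock default; yours may differ):

nmcli connection modify "Wired connection 1" connection.interface-name enp4s0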

I’m guessing that since Zeta’s lone PCI-E slot is an x16, it ends up numero uno. It just so happens that I have a PCI-E x4 card with a USB controller plugged in instead of a GPU, because the machine’s jobs are all server related. Although, I bet she would make a dandy little gaming box, inasmuch as a 2-slot-wide GPU and an SFX PSU can actually handle anything modern and juicy. Especially if Valve was to, you know, release SteamOS 3.x for PC instead of Steam Deck only 🙂

lspci -t -v
-[0000:00]-+-00.0  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne Root Complex
           +-00.2  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne IOMMU
           +-01.0  Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
           +-01.1-[01]----00.0  ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
           +-02.0  Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
           +-02.1-[02-05]--+-00.0  Advanced Micro Devices, Inc. [AMD] 500 Series Chipset USB 3.1 XHCI Controller
           |               +-00.1  Advanced Micro Devices, Inc. [AMD] 500 Series Chipset SATA Controller
           |               \-00.2-[03-05]--+-00.0-[04]----00.0  Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
           |                               \-01.0-[05]----00.0  Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak]
           +-08.0  Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
           +-08.1-[06]--+-00.0  Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series]
           |            +-00.1  Advanced Micro Devices, Inc. [AMD/ATI] Renoir Radeon High Definition Audio Controller
           |            +-00.2  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
           |            +-00.3  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1
           |            +-00.4  Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1
           |            \-00.6  Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller
           +-14.0  Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller
           +-14.3  Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
           +-18.0  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 0
           +-18.1  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 1
           +-18.2  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 2
           +-18.3  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 3
           +-18.4  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 4
           +-18.5  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 5
           +-18.6  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 6
           \-18.7  Advanced Micro Devices, Inc. [AMD] Cezanne Data Fabric; Function 7

Oh, Christmas Tree!

When I was moving, I had decided to toss the small tree that I typically set up on my kitchen counter; apartment space being at a premium, and dogs being mischievous, that worked well. But for the past couple of years, I’ve had it on my list to replace, since it was wearing out from over a decade of use.

It being my first Christmas here, I opted to go with a more normal-sized tree. And seriously, I forgot how much work it is to fluff up a full-sized tree.

Given the relatively safe environment, I decided to use some of my mother’s nicer Disney ornaments, which haven’t been put up since she was alive, for fear they would get broken. In the same vein, I incorporated my father’s Christmas balls as well (damn, that just doesn’t sound right 😅). They haven’t been put up in at least thirty years, and I have no recollection of them being put up since I was very young. Rather, so many of dad’s balls were broken (oi, oi) in the 1990s and 2000s that we spent most of the past few decades trying to keep them from being further destroyed in storage. Much to my surprise, only one ball was broken when I inspected the box earlier this year.

An open question is what to do about the star. It fits this size tree much better than the old one-meter-tall tree, but the connector for the lights isn’t the old-style plug. Therefore, my options are to leave the star unlit or run an extension cord halfway down the tree.

Ahh, it’s been a decent day

Saturday’s walk rather wiped me out, to the point that I could barely sleep from the pain in my feet. It wasn’t so bad right afterwards, but by the end of the day, it wasn’t pretty. About three weeks ago, I noticed that my boots are worn enough that the right outsole has cracked all the way through, such that you can flex it enough to stick fingers through to the sock if you try 😲. For me, that’s actually not so bad, given my history with footwear from before I started to wear boots, but it still means new ones are overdue.

In retrospect, going for a 2.5 km walk in the park was probably not the brightest idea, even if my feet haven’t been paining me day to day. But just the same, after spending Sunday actively trying to stay off my feet to recover, I think buying new boots has gone from “Yeah, I should plan on that” status to “Do I want to do that over vacation” status. Soaking my feet also made a good opportunity to catch up on my reading for the weekend.

Today, on the positive side, I’ve felt well enough to be mostly unencumbered. Sore enough that I wouldn’t be inclined to go for a long walk, but normal enough not to be bothered. To the point that, farting around at the computer, I didn’t have any problems making routine trips downstairs to refill my water rather than keeping a canteen handy.

Taking advantage of the day off, I decided to get an early start on the dinner plans I drafted yesterday: mirepoix (carrots, celery, onions), a few leftover mushrooms, and some ground sausage, made in the fashion of a beef stew using stock and seasonings. I had bought the celery planning on such a meal but had yet to go for it. Figured I’d best do it while the carrots and onions were still good.

While such a stew can be accelerated by preparing the vegetables the night ahead, simmering soups and stews aren’t an expeditious cooking experience. Which means more time spent standing in the kitchen, lol.

Varying structure of MGS

A passing thought from revisiting Metal Gear Solid after twenty years.

MGS2: Sons of Liberty was pretty much an epic stretch of gameplay punctuated by boss fights to drive the convoluted plot forward.

MGS3: Snake Eater was pretty much made with boss fights serving to section off the various areas of the game as the plot moves forward.

MGS4 I sadly didn’t get to play, because it was a PlayStation exclusive and I haven’t owned one since the PS2. But I’m pretty sure it must’ve had no shortage of annoying boss fights if Kojima was involved (^_^).

MGS5: The Phantom Pain took more of a “What the fuck is this!?” approach, leaping out of the closet and tossing an unexpected boss fight at you.

And then there’s the original Metal Gear Solid: a series of boss fights, punctuated by the rest of the game.

Metal Gear is kind of like the James Bond movies in its use of unique villains, except, being a video game, they’re far more annoying IMHO. But I can’t help but feel that the original Solid plays a lot more like a marathon of boss fights compared to its sequels. Like SOL didn’t just add the features they didn’t have time to ship; it also brought a much-needed focus on the core gameplay loop.

On the flip side, in MGS1 we also get Kojima’s story at some of its finer moments amid the Metal Gear boss-battle mania. Sniper Wolf’s and Psycho Mantis’s boss battles aren’t very satisfying fights in themselves, but they have well-written finishes for Metal Gear villains. The difficulty is often skewed like mad, e.g., fighting Grey Fox is “Huh, is it broken?” kind of easy compared to Psycho Mantis zipping around the commander’s office, despite their being very similar fights. You hit the Ninja as he lumbers towards you and he’s stunned for ages. You hit Mantis, and you may have had to spray and pray to hit the bastard before he flies off again. Some are more strategic, such as going round two with Vulcan Raven, where you can use claymores to counter the shaman’s mini-gun of death. And some are just kind of absurd but surprisingly well balanced, like fighting Vulcan in the M1 tank and Liquid showing up in a Hind-D. But if nothing else, the original game offers a lot of boss battles. And then to bracket it out some more, we get odds and ends like the elevator incidents :D.

Ahh, and next comes REX!

French Onion Soup Gratinée

Standing in Publix, noticing that, surprisingly, there is unsalted broth and stock available as opposed to the usual lethal dose of sodium, I decided to take a shot at a recipe I came across a few days ago.

As a young boy, there were two staples of my weekends. One was watching Looney Tunes in the afternoon, because that’s all there was for cartoons. The other was laying out on the living room floor playing with my toys as my mother watched her cooking shows on TV. There were usually three different programs on that she would watch, featuring different chefs.

One of the chefs in those cooking shows was none other than Jacques Pepin. Imagine my great surprise, scrolling across YouTube, when one of the suggested videos just happened to feature him! He’s a much older man now (hey, even I have some grey hairs ^_^), but it seems that he is still cooking :).

Truth be told, I’ve never had much talent for soups and stews. My mother could make a good stew that would coat your insides. Mine, on the other hand, often fall flat, and I rarely make soups. Well, thanks to Jacques Pepin, I can now say that I’ve cooked at least one soup that I thoroughly enjoyed 🤤.

Backups, backups, backups

Now that Zeta is effectively operational, I’ve turned my master plan to its next stage: not losing data.

Cream’s drives had a very simple arrangement. One drive, designated “Master”, or M:, was the base of all the file shares kept there. A second drive, “Backups”, or Z:, was kept next to it, and a scheduled task would run robocopy in mirror mode three times a week to sync the drives up. Nice and simple and cheesy, and for bonus points the mirrored drive racked up about half the power-on hours over the past few years. For monitoring purposes, the log was saved to one of Cream’s internal drives, which was periodically imaged to the host-specific backups area on the master drive.
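
For the curious, that scheduled task boiled down to a one-liner along these lines; the log path is invented for illustration, and the retry flags are just the ones I’d reach for:

rem Mirror M: onto Z:, deleting anything on Z: that no longer exists on M:
robocopy M:\ Z:\ /MIR /R:2 /W:5 /LOG:C:\Logs\backup-mirror.log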

Zeta, on the other hand, doesn’t have the curse of NT, but I did kind of like this simplicity. It fits my recovery model, where having an easily recovered copy of data is desirable, but changes are infrequent enough that rolling up the backups every 2 or 3 days is probably okay. At first, I decided to format one disk as ext4 to function as the backups, because dependable and trusted, while making the other xfs to function as the master, because that’s the default in AlmaLinux 9.

This created one small problem, however, in that getting rsync to play nice with SELinux, POSIX ACLs, and a few extended attributes proved to be a pain in my ass! For SELinux, you can just relabel the drive after. Not something I want to scale up to 8 TB, but not too bad for the actual storage in use (2 TB) today. But then we’ve got the issue of the POSIX ACLs and extended attributes used on my file share infrastructure.
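
Relabeling, for reference, is just a matter of pointing restorecon at the mount point; the path here is made up:

restorecon -R -v /srv/backups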

Turns out that rsync’s --archive flag doesn’t cover any of these; you have to bolt on --acls and --xattrs yourself, and even then it left me fiddling with permission masks. So, I said fuck that. I was rather disappointed in rsync over it, but let’s face it, ACLs and xattrs aren’t that popular when 1970s unix permissions are an 80% solution.

After taking suitable backups (one local, one remote) of the critical files, I set about turning to tools that I know how to fuck with. The backup drive was sacrificed to create one disk of a RAID1 mirror, and since mdadm allows specifying the drives like missing /dev/sdwhatever or vice versa, it was easy to spawn the array in degraded form. Then sync the data to the array from the master drive, before wiping the master and adding it to the mirror’s missing slot. About 10 or 11 hours of syncing at max speed later, everything’s all riled up and has passed the reboot test.
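
In case that’s clear as mud, the dance looks roughly like this. The device names and filesystem are placeholders, so triple-check which drive is which before wiping anything:

# Spawn a degraded RAID1: the backup drive plus a hole where the master will go
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md0

# ...copy everything from the master drive onto the array...

# Then wipe the old master and let it rebuild into the missing slot
mdadm --manage /dev/md0 --add /dev/sda1
cat /proc/mdstat    # watch the resync crawl along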

How did I migrate the data if rsync was being a bugger, you ask? Well, it’s slow as hell, but cp --archive and tar --acls --selinux --xattrs really do do what you want when you’re Rooty Tooty and want a lossless copy :P.
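
Specifically, something along the lines of this tar-to-tar pipe, with mount points invented for illustration; note that both ends need the flags:

# Preserve ACLs, SELinux labels, and xattrs on both the create and extract sides
tar --acls --selinux --xattrs -C /mnt/master -cf - . \
  | tar --acls --selinux --xattrs -C /mnt/array -xpf -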

In the past, I would typically have used LVM2 pools to manage this sort of operation. It’s overly complicated command line administrata, but hey, it works well and it has features I like, such as snapshots and storage pools. The advantage for me of mdadm is that it is very simple to manage thanks to fewer moving parts.

Having been “that guy” at some point in my career, who ended up writing the management software my old job used for mdadm software raid in their audio IRDs (later extended to custom hardware built on top of firmware raid), I know how to use mdadm and, more importantly, how reliable it is and how easy it is to recover a mirror without fucking up. Which, you know, is like the number one way your data goes bye-bye when recovering, right next to “oh shit, the drive died before it was synced.” As much as I appreciate LVM2, it’s got enough moving parts that I’m more leery about the failure scenarios. More importantly, I have more experience with mdadm failure and recovery than I do with LVM.

Of course, this does create a new problem and its own solution. Since my backup drive is now in hot sync with the master drive, it is no longer uber idle enough to be considered a ‘backup’. No, it’s redundancy to buy time to replace drives before the entire array goes to the scrap yard.

This doesn’t really change my original recovery scenario, which is “Go buy two drives if one fails”; it just means that there is a higher probability that both drives will actually fail close together when that happens. What’s the solution to this? Why, my favorite rule of data storage: ALWAYS HAVE A BACKUP! Thus, a third drive will be entering the picture, upon which to take periodic backups of the array, kept separate and offline when not being refreshed.

In practice though, this will be more like a fourth drive, in the sense of ‘smaller disk, most important data’ and ‘big-ass disk, all the data’. My spare archive drives are large enough to easily do the former, and one can basically contain the entire ‘in use’ storage or close to it, but none of my spares for sporadic backups has the capacity to handle the entire array.

Networks and Pizza

Having finally merged some code that’s been stuck in my craw, I decided on a mini-celebration: pizza and eggplant parmigiana, although sadly I forgot about the beer in the fridge. Oh, well; it’ll be there to go with the leftovers 😋.

On the flip side, I think it’s almost time to declare Zeta an operational battle station.

The first problem was I/O performance. Her predecessor, Cream, had been pressed into sharing its Wi-Fi with Rimuru, leaving the SMB shares on Cream accessible only via wireless clients. Fishing out the aerials that came with Rimuru’s Motherboard 2.0 solved that connectivity gotcha, but not the simple fact that the file server and the clients are within a meter or two of each other while the access point is across the house! As much as I suspect a mesh system will be the upgrade path for my network, I’m not replacing that router until it dies or Wi-Fi 7 is ready to rock.

Thus, my shiny new file server was only achieving about 5 MB/s with my Mac and PC on the other side of the L-shaped monster. Now, I’ve never expected big things of Samba compared to NT’s SMB stack, but Samba’s got waaaay better performance than that, and so do Zeta’s hand-me-down platter drives. My solution to this problem? Gigabit!

At first, I attempted to solve this using the combination of libvirt and pfSense, but I didn’t have much luck getting the bridging to work so that a VM on the host could be the router for the physical clients. In the end, I discarded this idea and configured Zeta herself to function as the router for my little local IPv6 network. Yeah, that’s right: I said IPv6, baby! Since this is a local network intended to join Zeta (server), Shion (Mac), and Rimuru (PC) and the occasional other machine, I opted to set it up as IPv6; there’s no real need for IPv4 in my desk’s wired LAN. Maybe I’ll enable IPv4 someday so I can jack my old PowerBook G3 into the switch, since MacOS 9.x probably lacks IPv6 support the way Sonoma lacks AppleTalk support 🤣.

Configuring things was pretty easy: a little bit of radvd to handle Router Advertisements and Router Solicitations, DHCPv6 set up as an insurance policy for good measure, and the Ethernet port configured with the desired address and itself as the gateway. In the future, I may try setting up BIND so I can have DNS A records map to Zeta’s IPv4 address on the household Wi-Fi and AAAA records map to Zeta’s IPv6 on the desk’s Ethernet, or perhaps even separate domains. But I’m a little hesitant about taking out DNS whenever I reboot the server.
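
For the curious, the radvd side is only a few lines. The interface name and prefix below are placeholders (a made-up ULA prefix for illustration), not my actual config:

# /etc/radvd.conf: advertise a prefix on the desk LAN so clients can SLAAC themselves addresses
interface enp4s0
{
    AdvSendAdvert on;
    prefix fd00:dead:beef::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};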

On the flip side, thanks to the lack of fuckwittery, Samba and the SMB stacks on Mac and NT just handle this case fine. Navigating to \\ZETA or smb://ZETA while jacked into the local Ethernet switch nets me about 80 to 115 MB/s, or roughly how fast you can spew data over a Gigabit link to SATA-powered things. Seems that the SMB stacks are smart enough to prefer the local Ethernet, but something more DNS-aware will be needed to fix cases like SSH.

The next phase has been setting up the virtual machine environment, which will probably replace the Parallels VMs I sometimes spin up on my Mac and the WSL2 environments on my PC. This basically amounted to setting up a bridge interface with the same IP information, using Zeta’s Ethernet port as its bridge port, and then setting each virtual machine’s second interface to bridge onto that LAN so it’s routable over the local switch.
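
In nmcli terms, that amounted to roughly the following; the connection names, interface, and prefix are invented for the example:

# Create the bridge and move the static IPv6 config onto it
nmcli con add type bridge ifname br0 con-name br0 ipv6.method manual ipv6.addresses fd00:dead:beef::1/64
# Enslave the physical Ethernet port to the bridge
nmcli con add type ethernet ifname enp4s0 con-name br0-port master br0
nmcli con up br0

After that, pointing each VM’s second interface at br0 in libvirt does the rest.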

Thus, Shion, Rimuru -> Zeta works. Shion, Rimuru, Zeta -> some VM on Zeta works. Muhuahuaha!