Backups, backups, backups

Now that Zeta is effectively operational, I’ve turned my master plan to its next stage: not losing data.

Cream’s drives had a very simple arrangement. One drive, designated “Master”, or M:, was the base of all the file shares kept there. A second drive, “Backups”, or Z:, was kept next to it, and a scheduled task would run a robocopy in mirror mode three times a week to sync the drives up. Nice and simple and cheesy, and for bonus points the mirrored drive racked up about half the power-on hours over the past few years. For monitoring purposes, the log was saved to one of Cream’s internal drives, which was periodically imaged to the host-specific backups area on the master drive.
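
For reference, the heart of that scheduled task is a single robocopy invocation along these lines; the drive letters match the setup above, but the log path is just a placeholder:

robocopy M:\ Z:\ /MIR /R:2 /W:5 /NP /LOG:C:\Logs\mirror.log

/MIR mirrors the source onto the destination, purging anything on Z: that no longer exists on M:, which is exactly the cheap-and-cheesy behavior described above.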

Zeta, on the other hand, doesn’t have the curse of NT, but I did kind of like this simplicity. It fits my recovery model, where having an easily recovered copy of data is desirable, but changes are infrequent enough that rolling up the backups every 2 or 3 days is probably okay. At first, I decided to create one disk as ext4 to function as the backups, because it’s dependable and trusted, while making the other XFS to function as the master, because that’s the default in AlmaLinux 9.

This created one small problem, however, in that getting rsync to play nice with SELinux, POSIX ACLs, and a few extended attributes proved to be a pain in my ass! For SELinux, you can just relabel the drive afterward. Not something I want to scale up to 8 TB, but not too bad for the actual storage in use (2 TB) today. But then we’ve got the issue of the POSIX ACLs and extended attributes used on my file share infrastructure.
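
The relabel itself is a one-liner; the path below is just a placeholder for wherever the shares actually live:

restorecon -R -v /srv/shares

That recursively resets every file’s SELinux context to whatever the loaded policy says it should be, and -v prints each change so you can watch it grind through the tree.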

Turns out that rsync’s --archive flag effectively breaks the flags you would want for synchronizing these, and then leaves you to go fiddle around with permission masks. So, I said fuck that. I was rather disappointed in rsync over that, but let’s face it, ACLs and xattrs aren’t that popular when 1970s Unix permissions are an 80% solution.

After taking suitable backups (one local, one remote) of the critical files, I set about turning to tools that I know how to fuck with. The backup drive was sacrificed to create one disk of a RAID1 mirror, and since mdadm allows specifying the drives like missing /dev/sdwhatever or vice versa, it was easy to spawn the array in degraded form. Then sync the data to the array from the master drive, before wiping the master and adding it to the mirror’s missing slot. About 10 or 11 hours of syncing at max speed later, everything was all riled up and had gone through the reboot test.
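
The rough shape of that dance, with placeholder device names rather than my actual ones, goes something like this:

# spawn the mirror with one slot deliberately left empty
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# ...make a filesystem, copy everything over from the old master...
# then wipe the old master and hand its partition to the empty slot
mdadm --add /dev/md0 /dev/sdc1
# and watch the rebuild chug along
cat /proc/mdstat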

How did I migrate the data if rsync was being a bugger, you ask? Well, it’s slow as hell, but cp --archive and tar --acls --selinux --xattrs really do do what you want when you’re Rooty Tooty and want a lossless copy :P.
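
In tar form, the lossless copy is just a pipe between two invocations; the mount points here are placeholders:

(cd /srv/master && tar --acls --selinux --xattrs -cf - .) | (cd /mnt/array && tar --acls --selinux --xattrs -xpf -)

The same three flags ride on both ends so ACLs, SELinux contexts, and extended attributes survive the trip, and -p on the extracting side keeps the plain old permission bits intact as well.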

In the past, I would typically have used LVM2 pools to manage this sort of operation. It’s overly complicated command line administrata, but hey, it works well and it has features I like, such as snapshots and storage pools. The advantage for me of mdadm is that it is very simple to manage thanks to fewer moving parts.

Having been “That guy” at some point in my career who ended up writing the management software my old job used for mdadm software RAID in their audio IRDs, later extended to custom hardware built on top of firmware RAID, I know how to use mdadm and, more importantly, how reliable it is, and how easy it is to recover a mirror without fucking up. Which, you know, is like the number one way your data goes bye-bye when recovering, right next to “oh shit, the drive died before it was synced.” As much as I appreciate LVM2, it’s got enough moving parts that I’m more leery about the failure scenarios. More importantly, I have more experience with mdadm failure and recovery than I do with LVM.

Of course, this does create a new problem and its own solution. Since my backup drive is now in hot sync with the master drive, it is no longer uber idle enough to be considered a ‘backup’. No, it’s redundancy to buy time to replace drives before the entire array goes to the scrap yard.

This doesn’t really change my original recovery scenario, which is “Go buy two drives if one fails”; it just means that there is a higher probability that both drives will actually fail closer together when that happens. What’s the solution to this? Why, my favorite rule of data storage: ALWAYS HAVE A BACKUP! Thus, a third drive will be entering the picture upon which to do periodic backups of the array, to be kept separate and offline when not being refreshed.
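
Purely as a sketch of what that periodic refresh could look like, with placeholder device and path names since the plan isn’t final:

mount /dev/sdd1 /mnt/offline
tar --acls --selinux --xattrs -cf /mnt/offline/shares-$(date +%F).tar -C /srv/shares .
umount /mnt/offline

Plug the drive in, dump a dated archive of the important bits, unmount it, and back on the shelf it goes.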

In practice though, this will be more like a fourth drive, in the sense of ‘smaller disk, most important data’ and ‘big ass disk, all the data’. My spare archive drives are large enough to easily do the former, and one can basically contain the entire ‘in use’ storage or close to it, but none of my spares for sporadic backups has the capacity to handle the entire array.

Networks and Pizza

Having finally merged some code that’s been stuck in my craw, I decided on a mini-celebration: pizza and eggplant parmigiana, although sadly I forgot about the beer in the fridge. Oh, well; it’ll be there to go with the leftovers 😋.

On the flip side, I think it’s almost time to declare Zeta an operational battle station.

The first problem was I/O performance. Her predecessor, Cream, had been pressed into sharing its Wi-Fi with Rimuru, leaving the SMB shares on Cream only accessible via wireless clients. Having fished out the aerials that came with Rimuru’s Motherboard 2.0, that solved that connectivity gotcha. But not the simple fact that the file server and the clients are within a meter or two of each other, and the access point is across the house! As much as I suspect a mesh system will be the upgrade path for my network, I’m not replacing that router until it dies or Wi-Fi 7 is ready to rock.

Thus, my shiny new file server was only achieving about 5 MB/s connectivity with my Mac and PC on the other side of the L-shaped monster. Now, I’ve never expected big things of Samba compared to NT’s SMB stack, but Samba’s got waaaay better performance than that, and so do Zeta’s hand-me-down platter drives. My solution to this problem? Gigabit!

At first, I attempted to solve this problem using the combination of libvirt and pfSense. But I didn’t have much luck getting the bridging to work in order to have a VM on the host act as the router while the host itself rode the physical network as just another client. In the end, I discarded this idea and configured Zeta to function as the router for my little local IPv6 network. Yeah, that’s right: I said IPv6, baby! Since this is a local network intended to join Zeta (server), Shion (Mac), and Rimuru (PC) and the occasional other machine, I opted to set this up as IPv6. There’s no real need for IPv4 in my desk’s wired LAN. Maybe I’ll enable IPv4 so I can jack the old PowerBook G3 into the switch, since Mac OS 9.x probably lacks IPv6 support the way Sonoma lacks AppleTalk support 🤣.

Configuring things was pretty easy. A little bit of radvd to handle Router Advertisements and Router Solicitations, and for good measure I set up DHCPv6 as an insurance policy, and configured the Ethernet port with the desired address and itself as the gateway. In the future, I may try setting up BIND so I can have DNS A records map to Zeta’s IPv4 address on the household Wi-Fi and AAAA records map to Zeta’s IPv6 on the desk’s Ethernet, or perhaps even separate domains. But I’m a little hesitant about taking out DNS whenever I reboot the server.
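
The radvd side amounts to a handful of lines in /etc/radvd.conf; the interface name and ULA prefix below are placeholders, not my actual ones:

interface enp3s0
{
    AdvSendAdvert on;
    prefix fd00:1234:5678::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};

With that running, anything plugged into the switch picks up an address in the advertised prefix on its own via SLAAC, and DHCPv6 sits alongside it as the belt-and-suspenders option.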

On the flip side, thanks to the lack of fuckwittery, Samba and the SMB stacks on Mac and NT just handle this case fine. Navigating to \\ZETA or smb://ZETA while jacked into the local Ethernet switch nets me about 80 to 115 MB/s, or roughly how fast you can spew data over a Gigabit link to SATA-powered things. Seems that the SMB stacks are smart enough to prefer the local Ethernet, but something more DNS-aware will be needed to fix cases like SSH.

The next phase has been setting up the virtual machine environment, which will probably replace the Parallels VMs I sometimes spin up on my Mac and the WSL2 environments on my PC. For this, it basically amounted to setting up a bridge interface with the same IP information and using Zeta’s Ethernet port as its bridge port, then setting the virtual machine’s second interface to bridge to the LAN so that it can be routable over the local switch.
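
In NetworkManager terms, the bridge is only a couple of commands; the connection names, interface, and address below are placeholders rather than my exact configuration:

nmcli con add type bridge ifname br0 con-name br0 ipv6.method manual ipv6.addresses fd00:1234:5678::1/64
nmcli con add type bridge-slave ifname enp3s0 master br0 con-name br0-port
nmcli con up br0

From there, pointing a libvirt guest’s second interface at br0 in bridge mode, instead of the default NAT network, is what lets the VM show up on the local switch as its own machine.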

Thus, Shion, Rimuru -> Zeta works. Shion, Rimuru, Zeta -> some VM on Zeta works. Muhuahuaha!

Return to Metal Gear Solid

One upside of it being the weekend and not spending all of it working on computer shit is that I finally get to dip my hands into the new Steam releases that just dropped early this week.

Metal Gear Solid is a game that I greatly enjoyed, but never really got all the way through since I had to borrow my brother’s copy. I haven’t played the original ‘Solid in about twenty years.

Breezing through the really short VR training preamble to see just how rusty I am was a great feeling. Nailed most of it on the first go; just had to remember the speed difference between crawl and run. Making it through the docks in the beginning in complete stealth was certainly better than I ever did as a kid when the game came out in 1999.

I really got into Metal Gear Solid after the dedicated VR Training missions disc was released. Out of 300 training missions, I think I had completed somewhere into the upper two hundreds, basically everything except for the more challenging time attacks. In particular, I was fond of the simulations where you’re given a handful of weapons and get very creative in eliminating enemies that far outnumbered the ammunition provided. Those were always the more fun “Who dares, wins!” simulations that left you breathing hard and finding unique ways to make the most of things. I guess it would prepare me for how many times I’ve been jokingly told I have a roll of duct tape and some aluminum foil, only to have to make a satellite dish in twenty minutes 😋.

Curious about how well my memory has held up after twenty years. Good enough to be wandering around B2 thinking, “Hey, aren’t there claymores or C4 to kill you if you’re careless here? Ahh, it was pit traps. C4 is for the walls.” Somewhere after the tank battle is where my recollection of the first game becomes more derived from reading the strategy guide twenty years ago than from how far I actually got.

Metal Gear Solid 2 was the first in the series that I completed and, the fun times mugging sentries for their dog tags aside, was enough of a trek that I don’t have as much interest in revisiting it as the first. Particularly due to some of the more annoying boss battles, like chasing a fat man on roller skates around as he plays mad bomber.

Metal Gear Solid 3 is the one that truly impacted me, and thus, I’m very much looking forward to the upcoming remake. If they basically made the same game but in the engine from MGS 5 with modern textures, I’d be happy.

In the meantime, I’m enjoying the trip back to 1999’s original entry in the Solid series of Metal Gear games. Being such a fan of Big Boss, it’s an especially nice contrast revisiting it with Solid Snake and Meryl at the focal point.

In MGS, Snake is already the legend who defeated Big Boss twice and lived. We all know his attitude, and that reality always kills your expectations if they’re not driven by results. Meryl makes quite the foil as the naive rookie yet to find her own path. It’s a dramatic contrast from Big Boss, whose naïveté paints the story of how his innocence is lost in MGS 3: Snake Eater, as he’s forced to define for himself what it means to be loyal to the end, becoming both the hero and the villain of future Metal Gear games.

Decommissioning Cream

As the process of migrating files from Cream to Zeta continues, and rather devolves into something more like 1983 than 2023, I am reminded of how much I despise using Windows machines in important roles on my network.

Yes, the whole experiment of using Windows 10 for my home file server worked out pretty well relative to what I expected. But also, yes: it has pissed me off a lot over the years.

More than a few times in the last 6 to 8 years, or however long it has been, I’ve thought to myself, “Gee, if I had just loaded Debian or FreeBSD a few months later like I had planned…”, it would have been cheaper in the long run. To be fair, there have also been times that I found it rather neat, but most of those involved things like ssh/scp becoming (mostly) first-class citizens in the land of NT.

I am sure that, whether or not Zeta proves to be closer to the “Ten year server” plan than Cream did, AlmaLinux will at least be less of a pain in my ass than NT was.

RAM versus I/O

And this my friends, is why I love having extra memory!

[terrypoulin@zeta ~]$ dd if=/dev/zero of=./dd.test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 0.38554 s, 5.4 GB/s
[terrypoulin@zeta ~]$ dd if=/dev/zero of=./dd.test2 bs=1M count=2000 oflag=direct
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 4.01215 s, 523 MB/s
[terrypoulin@zeta ~]

This machine has a cheapo 1 TB Inland Professional 2.5″ SATA SSD to serve as its system disk. But she’s got 64 freaking gigs of RAM. Yes, that’s right – sixty-four freaking gigs!

[terrypoulin@zeta ~]$ free -h
               total        used        free      shared  buff/cache   available
Mem:            61Gi       966Mi        57Gi       8.0Mi       4.1Gi        60Gi
Swap:           31Gi          0B        31Gi

The first dd command writes 2 GB of zeros to a file one MB at a time, as fast as the system can go. Thanks to the OS being able to say, “Hey, I’ve got memory to buffer that; carry on wayward son,” it completes Really Damn Fast. This buffering isn’t good for the case of a slow removable disk (or, IMHO, oh shit, batteries), but it is very effective when doing a lot of file I/O, such as compiling software or working on large projects with many files. By contrast, if the system had little available memory, it wouldn’t go nearly so fast.

The second command effectively says the same thing but uses Direct I/O to ensure the data is spewed to the disk quickly and immediately, meaning that we get the speed a decent SATA SSD can achieve when combined with its own little bit of internal buffering. But we don’t experience the crazy speeeeeeeed that is RAM.
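
There’s also a middle ground worth knowing about: dd’s conv=fdatasync keeps the page cache in play but refuses to report a time until the data has actually been flushed. This run is purely illustrative rather than output from Zeta:

dd if=/dev/zero of=./dd.test3 bs=1M count=2000 conv=fdatasync

It writes through the cache like the first command, but forces a flush before dd prints its stats, so the number lands closer to what the disk can sustain than to what RAM can absorb.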

Why is this important for Zeta? Well, Zeta is replacing Cream — which means she has an 8 TB storage array to take care of, spends most of her life dealing with networked file transfers, streaming media to client machines, and unlike Cream, will end up running several virtual machines thanks to having enough extra memory. Did I mention, fire sales on DDR4 have already begun to make way for DDR5? 😁

One eye, two eye, blue eye, red eye

For some reason it bugs me that I’ve found it hard to get up all week, and the day I can actually sleep in, I find myself waking up on time \o/.

About an hour later, coffee-fueled, fed, and quartered, and on to making the day’s agenda a more actualized plan than a loose concept. Also, I quickly determined mornings are more productive if you grab the coffee beans instead of the corn chips and wonder what the fuck you’re doing standing in front of the coffee grinder. Hahahahaha!

Passing thoughts

Tonight’s dinner turned out better than expected, despite using a rice bowl for portion control. I ended up with independent plating as a side effect of trying to avoid the “Bed of rice” and “Mixed in rice” approaches while aiming for a higher veggies-to-rice/lentil ratio.

I can’t help but think, it’s probably the first time I’ve actually used the bowl for rice. It had quickly become the measuring scoop for the dogs’ dry food, and I had happened to remember its intended purpose was rice, when I was cleaning stuff out. On the flip side, at least I’m smart enough to have fed it through the dishwasher first!

An average, decent kind of day

This morning was rather slow, given that my “Get up early, get stuff done” plan was waylaid by the strong feeling that even for all the tea in China, I wouldn’t want to get out of bed. Nice and cozy ftw! But of course, eventually this had to be substituted with starting the day.

If an hour or so late, my experiment with tamagoyaki went quite well. This is the first time I’ve had a bottle of mirin to incorporate the flavor, which makes for a notably sweeter result. Half the omelets went to breakfast, half were saved for part of Monday’s lunch. Then off to iron out the grocery shopping. By lunchtime everything was done, and I finally had my coffee. Which went well with some cornbread, which of course was not a low-carb lunch but solved hunger for the afternoon. The coffee that I ground last week has held up well enough that much enjoyment was had.

Finding myself in that fickle mood where I know that I can’t spend all my time working, and sometimes I’m not in the mood to do anything restful or useful, I’d say this afternoon was an exercise in borderline stir-craziness. But on the flip side, splitting off into playing a bit of BattleTech (2018) made for a pleasant way to unwind.

I also got to test out a “Splurge” as part of my dinner plans. Typically, I roast veggies by tossing them into a mixing bowl with some olive oil and seasoning, tossing that onto a silpat on a baking sheet, and in it goes. The downside, of course, is this then means I’m stuck wiping out a mixing bowl with a paper towel and then having a larger item for the dishwasher’s top rack, or a thing to soak/rinse after the food’s in the oven. With the whole blood pressure thing, I’m roasting veggies more often. Trying things the less-clean-up way of tossing veggies on the silpat, drizzling with olive oil, and sprinkling seasonings is a nice plan. But using the bottle of EVOO directly just results in more puddling than oil properly applied to vegetables. Which in turn means more paper-toweling down the silicone baking mat and sheet pan, and less even results.

In the past, I used to keep some of ma’s flask-like cruets handy, until I decided these were a bigger pain to clean than they made any real difference versus using the bottle directly. So, thinking on a solution, I decided to splurge on a nice leak-free cruet that’s perfect for drizzling the veggies and, more importantly, is easily cleaned. For tonight’s dinner plans, roast carrots were on the menu, which made a perfect opportunity to test this out.

I’m calling that cruet $16 very well spent, lol.

And the ‘SAS’ category is now converted to the ‘SAS’ tag. Any untitled posts that stood alone in the category have been assigned ‘Games’, as that’s usually the closest match; it has been a long time since those journal entries were made.

This was kind of fun, as it gave a stroll back down memory lane for things like the skins pack that a friend and I did, eons ago. Far nicer than stumbling on the computer posts that, judging from the system stats, I feared would never die, like my post on converting from one distro to another without reformatting.

Here’s one I’ll resurface here though: How he does it – Trees!

More than a decade later, I find my brain still largely functions this way. The key difference is that as my gaming habits and working environments have shifted over the years, I have less frequent need for ‘active’ navigation, leaving me with a more ‘passive’ form where my mind autonomously maintains a tree structure but doesn’t have the need to track and replay paths and key points of interest along a navigational cycle through a building. That is to say, it’s less things like remembering what corner of a hallway my element took fire at, and more things like remembering what room I left the tape measure in.

Plus there’s the upside, I now live in a place where you don’t need such a data structure just to drive around the darn roads without getting lost, lol.

Recategorization

I think that the categories-to-tags conversion that began earlier this year is now ‘only mostly done’. Except for the SAS category from my old gaming group, I think all the big moves are done. E.g., Android, Amazon, FreeBSD, Google, Linux, PC-BSD, etc. are now converted to tags and should be in appropriate categories (e.g., Computers or Programming).

For the handful that remain, these either have somewhat more vague distinctions yet to be decided (Lyrics vs Music) or more vague taxonomy (Anime vs TV Shows vs Movies; Blogger vs Live Journal, et al.) that I’ve yet to decide upon more concretely.

In any case, those pertaining to the topics I most frequently post about, beyond what I’m watching or listening to, are basically done.

Considering that the current implementation of my journal has a lot of years of content from many different sources, dating back to when I first started blogging as a means of maintaining my journal, it seems to have held up pretty well. Entries that were purely Diaspora or Google+ aren’t here, although I’m tempted to find or write a way of importing them. Entries that were purely file or paper based aren’t here. I’m pretty sure the exceptionally rare ‘Private’ entries from the early days aren’t here, or were simply declassified a decade ago. But for 17 years of blogging, I think my journal has held up decently well despite the many system changes and having begun with absolutely no idea how the categorization and retrieval of information would grow. Yeah, I’m fairly happy with this current setup. That said, I should probably journal less about computer stuff 🤣