A nice gander at the Apple Lisa

The video might be a tad boring by contemporary standards, unless, like me, you have an interest in such ancient technologies 😛. Even so, I think it makes a nice demonstration of the system.

Since the guy is using actual hardware, it is also slow as crap by modern standards. Let’s just say that the world has come a long way since a Moto 68k and a meg of RAM was plenty. But I think it was a fairly impressive and innovative system for its day.

I rather like the Electronic Desktop metaphor more than the conventional files-and-applications approach that the typical Windows 9x PC settled into some decades later. I love the document-centric rather than application-centric view as a concept. It seems like a good attempt at creating an environment for office workers who weren’t computer people. The ability to have multiple files with the same name is odd, and interesting, if likely impractical for software developers. The natural saving and manipulation of content is nice.

In addition to the UI design, its relationship to the early Mac seems fairly apparent. In particular, one of the odd things that I encountered digging into 1990s PowerBooks and System 7 is how the classic Mac OS treats placing files on the desktop (basically a flag saying it’s on the desktop) and how it handles floppy diskettes. Both are rather different from modern systems of any sort. It looks like a lot of the Lisa’s concepts made their way into the original Macintosh and later system versions.

It’s kind of a shame that the Lisa was insanely expensive, like $10,000 for a basic system, and (IMHO) rather slow. While I’m not convinced that the original Mac could have been a good idea without at least a second floppy, its base price of $2,500 was at least less comical than the Lisa’s. Or should we say: 512K and way more storage would probably have been worth every penny, and still way cheaper than the Lisa.

Random things

Powered on Stark to test a boot stick, and figured I’d let the system go update itself. Went downstairs to wash out my coffee cup, and coming back, the line of sight from down the hall to where I left it on my desk reminds me of one of the things I don’t miss about the old Latitude: the screen!

Stark was from a transient era: one in which more consumer-oriented laptops began to adopt Intel’s concept of an “Ultrabook,” while more business-oriented laptops refused to give up their ports until you pried ’em from their cold, dead motherboards. But almost universally, they all agreed on having a shitty screen compared to basically everything else in computing at the time.

As such, while the laptop served me very well, it wasn’t without compromises. The typical 1366×768 pixel screen was basically trash, but it did support external displays, and that’s how I tended to use Stark. Onboard were a VGA port (ha!) and a mini-HDMI port that nothing else really adopted, but as it got older, docking stations able to drive a pair of DisplayPort/HDMI outputs were cheap to have shipped off eBay, and the Intel chips back then maxed out at three display pipelines anyway. Ditto irksome things like having an eSATA port at the price of a SuperSpeed USB port, having to dedicate a USB port to a Bluetooth dongle, needing a fanny-pack type battery to get runtime that wasn’t a joke, and weighing almost a kilogram more than I wanted to lug around every day.

But the machine also had its upsides. Like a TPM for encryption, a modular slot that could be fitted with an OEM optical drive or a replacement fitting for a second 2.5″ SATA, and a Core i5 that actually served well up until the rise of Electron applications like Teams and Slack. It also helped that I had enough Latitude D/E series compatible chargers around to never worry, except when working away from an outlet.

All in all, Stark has the unique position of being a computer that managed to not piss me off more often than not. That’s not something many computers can say. So, I think Stark was a successful machine, even if it’s going to stay retired, lol.

Network evolutions

Thus far, this is looking to be the evilicious plan.

  1. The ns1 VM on zeta will be converted from a primary name server that is authoritative for home.arpa and forwards to external DNS, into one that is only authoritative for my LAN and provides nothing else.
  2. Cream will be unretired and converted into infrastructure.
    • DHCP services (v4/v6)
    • DNS services a la ns2: secondary for my LAN and forwarding to external DNS.
    • Add CNAME aliases for ns3 and dhcp because I’m silly that way.
  3. Reconfigure Eero to use my Raspberry Pi zero ns2 and Cream as its name servers.
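As a sketch of what step 1 could look like in BIND terms (the zone file path and the transfer/notify addresses below are made-up placeholders, not my actual config):

```
// ns1: authoritative for home.arpa only; no recursion, no forwarders.
options {
    recursion no;
    allow-transfer {; };   // hypothetical ns2/ns3 address
};

zone "home.arpa" {
    type primary;
    file "/var/lib/bind/db.home.arpa";     // illustrative path
    also-notify {; };
};
```

The Cream side would be roughly the mirror image: a `type secondary;` stanza for home.arpa plus a `forwarders { ... };` block in `options` for everything else.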

The reasoning for this: while Zeta could provide DHCP services easily, and the VM running name services has been effective, Zeta is not a machine that I want as a single point of failure; as a single point of truth, though, it’s convenient.

The notion here being: if Zeta sees downtime shorter than the time ns2/ns3 take to expire the local zone information, the impact across my household is just the inconvenience of fixing a computer. That is to say, the failure is either transient enough not to cripple things, or epic enough to convert ns2 into the authoritative server, using the ready-to-swap-into-master-mode setup that I already have in place. And simply put, Cream wouldn’t be intended to be fucked with or rebooted, and would in turn have a similar failover: moving back to Eero’s DHCPv4 and re-enabling forwarding on ns1.

Except for two pains in my ass:

  1. Cream’s CMOS battery has died in storage since its retirement.
  2. Cream is refusing to boot the install media.

Actually, I should probably just see if Magic (my old Raspberry Pi 2) is still lying around somewhere, or donate Victory (my Raspberry Pi 3/8G) to the mission. I had intended to load RHEL9, but Debian and I are still on friendly terms :).

Plus, Cream’s tenure as my previous file server included a history of the NUC being a pain in my ass!

The Insane Engineering of the Gameboy

A nice video giving an overview of the classic handheld’s architecture. The opening may be a little bit harsh IMHO, but it’s not unwarranted. The way I look at it, the hardware is close to what could have passed for a microcomputer just over a decade prior, and devices like the Apple II or TRS-80 were hardly portable, battery-friendly devices.

That’s a trend that I think largely continues with really portable devices. I remember looking at data about the first Raspberry Pi and deciding it was likely on par with a ten-year-old PC, except closer in size to a credit card than a microwave oven, and pretty darn cheap. Likewise, while I find the Steam Deck’s graphics very unimpressive, I find it amazing that someone crammed an Xbox One level of horsepower into such a portable package.

It’s pretty darn cool how that sort of evolution plays out, even if my wrist watch literally has an order of magnitude more computing power than my first Personal Computer….

A Rough Network Plan

Now that phase two of the great network migration is pretty stable, with the new Eero providing the backbone and a pair of name servers providing DNS with a hand-managed zone, it’s time to plan the next phase of the operation. That is to say, DHCP services.

For the most part, I stick to IPv6 addresses now, with an IPv6 unique-local prefix providing the internal definition of my home.arpa environment. Eero seems to let clients SLAAC away, but it mostly does DHCPv4; at some point, I need to snoop closer at its IPv6 support to see if it is doing any kind of goodness like RDNSS/DNSSL.

So, I’m seeing two points of interest here for my DNS arrangements.

Option A.) Ignore the IPv4 world entirely and set up DHCPv6 services for my dynamic update needs. After all, I mostly want AAAA records, not A records.

Option B.) Set Eero to the good ol’ “you’re on your own, pal” mode and set up both DHCPv4 and DHCPv6 services.

Perhaps I will start with option A, since it is closer to DWIW and should get me what I want: IPv6 with dynamic updates to my local domain. But here’s a little thought for option B, using Eero’s default /22 network as a point of reference for the addressing scheme.

  • … -> gateway.home.arpa
  • … -> broadcast
  • … -> range for routing services; e.g., eero/ap
  • … -> deprecated, reserved for fallback to eero
  • … -> range for network services; e.g., dns, dhcp
  • … -> DHCPv4 pool | ~512 addresses, half the space

This would effectively give a static space equal to the first /24’s worth of the network, intended to be hands-off for the oh-shit plan. While the DHCPv4 leasing is pretty aggressive on the Eero, I don’t think I’ve seen any devices allocated beyond that first /24’s worth. The idea being: nothing that isn’t a router, an access point, or something getting a lease from the switch from my server back to the Eero’s DHCP server should land in this portion of the network.

Following that would be a static space equal to the second /24’s worth of the network, intended to be allocated for services, either by static allocation or just a static reservation. Let’s just say: as much as I like technology, I’ve managed long enough with a lone /24 for my entire household that I’m sure I’m not going to pull 254 static IPv4 addresses out of my ass anytime soon.

Meanwhile, the portion roughly equal to the last /23 of the network, effectively the back half and the part least likely to be interfered with if the Eero ever happens to ‘forget’ it’s in static mode, would be client addresses. These are very much intended to be “don’t give a fuck about” addresses, in the sense that the only IPv4 addresses I actually use are for cases like pointing dig at one of my name servers without wanting to type its IPv6 address.

One of the food for thought items on my plate is whether or not I want to ‘unretire’ my old file server, Cream, and turn it into a replacement for ns2. Right now, ns2 is running off a Raspberry Pi Zero W that was originally intended to be a RaSCSI drive for my PowerBook.

Ahh, well, I’ve got other things to do for the moment.

Scaling Done Right

Before I undocked Shion, I had a desktop full of stuff, roughly in the form of three windows across the top of the display, two across the bottom, with a mini-player of music in the corner.

While I was undocked, I was mostly doing other things.

Coming upstairs and re-docking, I’m pleasantly surprised that everything is effectively where I left it before transitioning from a 32″/2160p screen to my laptop’s 13.6″/1664p screen.

If anything, this is one of the reasons I’ve come to prefer one big-ass monitor over two proper-sized monitors, and have come to appreciate that macOS’s scaling methods are sane. That is to say, it’s not like going from a dinner platter to a postage stamp, so much as from roomy to cozy and back again. A far cry from, for example, eons ago, when shifting a PC between 600p, 768p, and 1200p screens caused tons of ruckus and disorder.

Laptops > Desktops

Working on a screen full of files, listening to music, yada, yada, when I remember I should’ve started cooking half an hour earlier, I’m reminded of one of the reasons I prefer laptops for things that aren’t rack-mount friendly.

The big juicy monitor™ provides a nice 32″ workspace, and the Thunderbolt docking station nets me my keyboard/mouse/etc. plus a gigabit link to my file server within two hops. Heck, I even like the speakers 😀.

Yet just the same, it’s rather convenient to be able to undock, grab my laptop, and bring it downstairs. Since the speakers and battery life aren’t shit, it’s an easy matter to have my music continue in the background while I’m cooking, and then pick up where I left off for a bit while I’m waiting for the oven to finish its share of the cooking duty.

Increasingly, I’m inclined to believe that owning a desktop will fall by the wayside. The big honking GPU is the key reason that I still own one, since the need for expansion cards and reconfigurable internal drives has become less pressing as more compact form factors have become more capable and external connectivity has become faster. Plus, despite my early interest in diskless virtualized workstations and remote desktops, nothing really beats a good client machine for client-machine tasks, just like nothing really beats a terminal for terminal-oriented tasks.

Maybe I’m just getting old 😛

Apparently, some of the Steam Deck’s underlying technology owes its existence to NieR: Automata, if the interviewlets in “Proton and NieR: Automata – the unique story behind what makes Steam Deck tick” are to be believed. Which really doesn’t surprise me.

The Steam Deck’s graphics and battery life, in my opinion, aren’t as impressive as the feat of achieving them in such a small, portable package. You get roughly Xbox One grade graphics from roughly Xbox One grade hardware, and x86 will never offer great battery life under heavy load. But it’s got one thing I love most of all.

Video games work on it. A fair number of games on Steam actually have a native Linux version, and unlike the macOS support, it’s not quite a joke. But the vast majority of games are Direct3D-based games written for Windows. That’s how video games are written in this world.

Yet, the Steam Deck runs them as well as the hardware is capable of. In ways that I was never able to achieve back in the day, now more than a decade in the past, using purely Wine and derivative solutions. So I find myself very glad that folks made a video game with 2B and 9S 🙂

Actually, that reminds me: I’ve been debating picking up a copy of the game on Steam during one of these sales. I haven’t played it since I was active on console, and I haven’t even bothered to hook up Deathstar One since moving, thanks to getting Rimuru operational and the Steam Deck largely taking over for both the ol’ Steam Link and Deathstar One.

TLS all the things

Passing thought: if I’m willing to go through the bollocks of setting up a bunch of name servers and probably rolling a DHCP host or two, I should investigate how feasible it would be to run an ACME-based setup on a private network; a la auto-renewing your own self-signed certificates.
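If I go down that road, smallstep’s step-ca is probably the path of least resistance, since it speaks ACME out of the box. Something along these lines, going from memory rather than a tested config, with ca.home.arpa as a placeholder hostname:

```
# Bootstrap a private CA with an ACME provisioner (step-ca).
step ca init --name "Home CA" --dns ca.home.arpa --address :8443
step ca provisioner add acme --type ACME

# Then point any ACME client at the CA's directory URL, e.g. certbot:
certbot certonly --standalone -d ns1.home.arpa \
    --server https://ca.home.arpa:8443/acme/acme/directory
```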

Yes, yes, I know I’m a pain in the ass 😝