USB-C all the things

The way it used to be:

  1. Grab [Micro]SD card.
  2. Go get my card reader from my backpack.
  3. Unplug controller cable.
  4. Plug into front panel USB-A.
  5. Wish I had more USB 3.0 ports.

The way I like it:

  1. Grab [Micro]SD card.
  2. Grab spare USB-C hub from closet bin.
  3. Plug into front panel USB-C.

Rimuru has mainly USB 3.0 ports, and her first refit included a 10 Gbps USB-C expansion card to free up my front panel. So in the back, I have two cables run up to my desk to handle older devices:
  1. USB-A 2.0 extension cable suitable for controllers and flash drives, jacked into one of the A ports in back.
  2. USB-A 3.0 extension cable suitable for old hard drives and portable devices, jacked into one of the A ports in back.

All of which leaves my front panel free with its USB-C, USB-A, and audio ports. So most of the time I just end up plugging into the front panel C port. If I want something with SD, rather than fishing my adapter out of my backpack, I just use a USB-C hub. Another perk of sorts is having two of those: one for my backpack and one for home. They were originally intended for my Galaxy Tab S3 and iPad Pro, since my previous PC was built at the dawn of the USB-A 3.0 era. But I had planned ahead, on the assumption that someday my PCs would get with the 21st century, lol.

Can you tell that I don’t really miss USB-A very much? It’s mostly retained here for equipment that lasts nigh forever, like my webcam, or for flash drives that I usually use for booting older computers.

Actually, I don’t really buy USB-A flash drives anymore either. The newer ones I have all came from the local Microcenter mailing out coupons, or to paraphrase them kindly: “Please folks, we wanna get rid of these things. Take a coupon for a free one, and please give a few coupons to your friends!”

For performance, I’ll usually reach for my hard disk and solid state portable drives that have USB Micro-B 3.0 interfaces. Rather than using A to Micro-B cables, I’ve started to use a C to Micro-B cable for that ^_^.

Ever since getting the Raspberry Pi Pico, there have been two experimental projects in the back of my mind.

The first is of course: how to run DooM on the Pico. Based on what I’ve seen, I suspect the main point of suffering would be the limited RAM compared to an i486 machine. Most of the console ports back in the day managed to shoehorn things into fairly modest systems, and I bet the two cores would work great for doing video/controller input on one core while the actual game runs on the other. What I haven’t been able to decide on is what path to take to explore that project. In my mind, I kind of see it as more of a “Game Boy” like handheld with a screen and controls than anything else. I certainly don’t want to do ASCII DooM over a COM port :P. It would also be preferable to have separate storage that can handle the full size of WADs without having to cookie-cutter a level into available flash, making the handheld style even more appropriate.
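
Just to think it out loud: a rough sketch of that core split, using the Pico SDK’s multicore FIFO, might look something like this. The game_tick(), render(), and poll_buttons() names are purely hypothetical stand-ins for the actual DooM guts, not anything from a real port:

#include "pico/stdlib.h"
#include "pico/multicore.h"

// Core 1: drive the screen and sample the controls, fed by core 0's FIFO.
static void video_and_input_core()
{
    for (;;) {
        uint32_t tic = multicore_fifo_pop_blocking();  // wait for a finished tic
        (void)tic;
        // render(framebuffer); poll_buttons(); ... (hypothetical)
    }
}

int main()
{
    multicore_launch_core1(video_and_input_core);

    // Core 0: run the actual game simulation.
    for (uint32_t tic = 0;; ++tic) {
        // game_tick(tic); ... (hypothetical)
        multicore_fifo_push_blocking(tic);  // hand the finished tic off to core 1
        sleep_ms(1000 / 35);                // DooM's classic 35 Hz tic rate
    }
}

The FIFO push/pop is mostly there to pace the two cores; in a real port the framebuffer hand-off would need more thought than a single uint32_t.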

Second is building what would, in essence, be a personal computer: a lot like ’70s kit computers such as the Altair, but imagined through the eyes of a geek who grew up in front of an MS-DOS machine. It’s been stuck in my head for a while that the Pico is far more powerful than the early CP/M and DOS based systems, and that it isn’t that complicated to connect the Pico to external devices. From the perspective of fun, I think it would be neat to design a simple system around the Pico and build out something like a PC around it. On the downside, while creating a disk operating system in the vein of CP/M isn’t that big a stretch, I can’t really say that I fancy bootstrapping a toolchain to write programs for a custom operating system. But it’s an idea that keeps floating around whenever I look at how powerful the Pico is.

As a side note, I kind of wonder how hard it would be to replace the CRT in an old Macintosh SE style case with a similar sized LCD panel, while gutting the rest of the insides and just using the case as the mechanical environment to mount stuff. Really, I’m not sure if that’s brilliant, or sacrilege toward such historic machines. Although to be fair, people have done some strange things with the cases of old busted Macs over the years….hehe.

Now this is very interesting, both because SCSI2SD is a bit expensive, and because the newer V6 boards would need an adapter to hook up to an old Mac. On the flip side, while the current SCSI2SD seems pretty swell for connecting to other SCSI devices via adapters, a Raspberry Pi itself is a pretty general, reusable platform.

As far as I’ve been able to figure out, old Macs have ridiculously slow SCSI buses by the modern standards of any mass storage device, and I think they didn’t even support DMA until the late ’90s. But to be fair, hard drives were typically in the tens of megabytes in the late ’80s to early ’90s, and a few hundred megs at the most.

Signs of a simpleton having fun with a new microcontroller:

  1. Write a program that makes the LED blink like a mother fucker (see the sketch after this list).
  2. Write a program that spams a hello world to USB serial.
  3. Write a Read Eval Print Loop over USB serial.
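
For reference, item 1 really is about as small as it sounds; a minimal sketch against the Pico SDK might look like this (assuming a plain Pico, where PICO_DEFAULT_LED_PIN is GPIO 25):

#include "pico/stdlib.h"

int main()
{
    const uint LED = PICO_DEFAULT_LED_PIN;  // GPIO 25 on the plain Pico
    gpio_init(LED);
    gpio_set_dir(LED, GPIO_OUT);
    for (;;) {              // blink like the aforementioned
        gpio_put(LED, true);
        sleep_ms(50);
        gpio_put(LED, false);
        sleep_ms(50);
    }
}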

Compared to what I’ve done in C with simpler micros like the 8051 family, I’m finding the RP2040 really damned nice. Not only because of the Cortex-M0+’s horsepower, but because of the really nice library that comes with the Raspberry Pi Pico. For the hell of it, I decided to abuse it with some simple C++ for the REPL, just to see that C++ I/O and string handling do in fact work.
Of course, me being me, I ended up with a really simple set of commands:

#include <string>

using std::string;

// blink() is a little helper, defined elsewhere, for pulsing the on-board LED.
static string evalline(const string& line)
{
    if (line.empty())
        return "";
    if (line == "monkey")
        return "Willow?";
    if (line == "monster")
        return "Corky?";
    if (line == "sweet")
        return "Misty?";
    if (line == "help")
        return "Try nicknames with fur";
    // Anything else: flash the LED in protest and complain.
    blink(100);
    blink(100);
    blink(100);
    return string("Unknown command: ") + line;
}

Because why not? 😜
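
For the curious, a rough sketch of the loop wrapped around evalline() might look something like this. getchar_timeout_us() is a real Pico SDK call; the rest of the glue (the prompt, the echo, enabling USB stdio via pico_enable_stdio_usb in CMake) is just my assumption of how I’d wire it up:

#include <cstdio>
#include <string>

#include "pico/stdlib.h"

using std::string;

static string evalline(const string& line);  // the command table shown above

int main()
{
    stdio_init_all();                      // bring up the USB CDC serial console
    string line;
    printf("> ");
    for (;;) {
        int c = getchar_timeout_us(100 * 1000);
        if (c == PICO_ERROR_TIMEOUT)
            continue;                      // nothing typed yet, try again
        if (c == '\r' || c == '\n') {      // end of line: evaluate and respond
            printf("\n%s\n> ", evalline(line).c_str());
            line.clear();
        } else {
            putchar(c);                    // echo what was typed
            line += static_cast<char>(c);
        }
    }
}

No backspace handling, no line editing; for poking nicknames at a dev board over a serial terminal, that’s plenty.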

Pimoroni’s New Pico Display Takes It to the Max

“Damn it, people! Stop making me want a handheld Pico that can play DooM!” — Terry Poulin upon seeing how many buttons this display has.

A recurring thought of late has been just how much of DooM could fit within the Pico’s memory constraints, and a practical way to handle storing the WAD files externally.

WHERE ARE ALL THE CHEAP X86 SINGLE BOARD PCS?

Interesting picture it paints, but perhaps short-sighted.

Part of the rise of the PC, IMHO, owes to the level of binary compatibility Intel’s x86 processors maintained, and the relatively open hardware architecture around that processor. I don’t think I even met a 5 1/4” floppy our Tandy failed to run until the late ’90s, which surprises me even more today than it did then.

I rather like ARM’s approach to the whole IP core thing. ARM processors are largely ARM processors, the way Intel processors are largely Intel processors; but the relationship between the architecture and a product in your hands is quite different. Because of this, we have a very broad range of ARM based products and vendors out there, and while compiled code retains strong compatibility, the overall hardware varies significantly.

While ARM largely focused on doing its share well and let other companies do what they do well, Intel largely retained control over its niche, occasionally spreading onto other hardware fronts. In practice, Intel and AMD are the only big players in x86 today, and Intel has often helmed the development. You can get an ARM based processor from more vendors than you can shake a stick at, or, given sufficient cash and effort, start developing your own hardware around one. If you want an x86, then odds are you’re buying Intel or second sourcing from AMD.

While I think the compatibility made a big difference, I’m not so sure that we benefited from Intel’s monopoly over its ISA. When I think about why there are few cheap ass x86 SBCs, I usually think of this as “Because that’s not Intel’s market” — and Intel’s the real stick in that mud.

APPLE ISN’T JUST A WALLED GARDEN, IT’S A CARRIER – The return of the Angry God of ARPU.

This is one of the more interesting metaphors you could apply. “Walled garden” has been used so long for Apple’s modern ecosystem that it is the de facto, if not the de jure, definition of the App Store. Using the metaphor that Apple is a carrier seems highly appropriate, but sadly, I think it paints the case that Epic should win.

You see, companies are first of all in the business of staying in business. For some reason, the FUPM scene from Goodfellas is playing in the back of my mind. What is good for users, and customers, is not always what is perfect for businesses. I like to believe what is good for the customer should be good for the business, if you achieve a fair compromise rather than swing a big stick.

The real moral of this story is that large, one-sided monopolies are bad. Carriers like Verizon and AT&T got away with all but murder because of the extent of their power over their own networks. To be fair, when it’s “yours”, you should have some say in that. I believe that the whole digital signatures thing for installing apps on modern platforms is a great thing. The difference is kind of in the implementation: Apple runs the App Store, and they should have power over it, much as Google does over Google Play. But one of the twists is that on Android, the user is the last stop on the right to install software; on iPhone, Apple has total control. In my opinion, users should have more control over the software they can install; the answer isn’t more control over a provider’s store front.

Having stopped suing people at the drop of a hat, Apple has been doing a fair job of obeying Wheaton’s Law: don’t be a dick. That is key to prolonging your monopoly and circumventing confrontations like Epic vs Apple. Because if you’re more benevolent than malevolent, it is harder for your enemies to gather strong support and come for your bottom line.

The more you enrage customers, and the businesses in between, the more support potential adversaries can build. Carriers like AT&T and Verizon Wireless have done the big-stick thing of beating down companies and shoving it up users’ keisters so well that pretty much no one loves them for it. Eventually, if you’re a big enough dick, someone will punch you in the nuts.

Rimuru – Refit 1

For me, the distinction on Rimuru between 16 and 32 GB of RAM has more to do with my goal for the machine to last 10 years of service life. Centauri retired after 8, and I had designed it with 5 in mind.

So I decided to acquire two sticks of the same kind of memory and fill the other two slots while it’s still possible to get them. Actually, I think this is the first time one of my personal machines has had 4 largely identical sticks; the only difference is the color to help ID the slots.

On most of my Windows machines, it’s not uncommon for my “Idle” to hover somewhere between 3.5 ~ 6.5 GB of memory. Centauri had been designed as a 2×4 = 8 GB machine that grew to 12 GB when her older sister, Dahlia, was decommissioned.

Since my “Work” environment already stresses the hell out of my laptop’s 16 GB, I decided Rimuru’s decade outlook called for 32 GB.

Running a single go of PassMark’s Linux version in WSL2, I had Memory Mark scores of 3298 before and 3520 after. Which at least confirms to me that there’s no performance loss; none should occur, since there’s no reason for any, but I like verification. The difference between the scores is within the margin of error.

One of the upshots of my old ass Logitech 2.1 system going wonkers was replacing it with a set of Creative Pebble v3s. Since the speakers’ USB mode would only function on any of my machines via USB-C, that’s been consuming Rimuru’s lone front USB-C port.

Well, now I have a pair of 10 Gbit/s rated USB-C ports in the back. Problem solved.

If I were a genius, I would probably put a C to C or a C to Micro-B 3.0 cable in the other port and route it to my desk/monitor area, much as I have a USB-A 3.0 extension that makes it easier to hook up hard drives and Xbox controllers and such.

In my opinion, this video should be titled “On why user space Linux sucks”.

In terms of what most users think of as the desktop, this video has jack shit to do with you. Rather, the video mainly focuses on the concerns of packaging your binaries and expecting them to run on Joe Random Linux Distribution.

I kind of applaud Torvalds for his long-fought religious mantra of Don’t Break User Space. When you’re working with Linux itself, out-of-tree drivers breaking or needing pieces rewritten isn’t that unusual. Don’t maintain your driver, and you’re liable to go “oh snap, they replaced an entire subsystem” or find a deprecated API removed after a comical number of years. But compatibility between the Linux kernel and user space software is pretty superb.

One of the reasons why MS-DOS PCs took off, and CP/M before them, is the drive towards binary compatibility between customer machines. As much as Windows has often deserved its hate, backwards compatibility and stable ABIs (note, I said ABI, not API) have generally been pretty good.

Binary compatibility between Linux distributions has improved from the days where source-based systems were the best way to make shit work. But just the same, I did have to snicker at Torvalds’ comments about the GNU C library (glibc), which has often pissed me off over the years with its concept of compatibility for such a core piece of user space.

As someone quite fond of desktop Linux, I can’t say that binary compatibility of large applications between distributions is an especially fun thing. Not because it’s impossible, but because most of us involved just don’t care. I assume most, like me, learned Unix systems in an environment where API compatibility was the only path to victory, or they simply don’t care about the zillion other Linux distributions.