The great parting between VLC and my TV

A while back, I did a bunch of video tests when ripping one of my favorite movies didn’t yield the usual results. The result was actually so crappy that it led me to revert to x264/AVC encoding for later rips. Yes, it was that disappointing. But I think I’ve come to the conclusion that the problem isn’t with x265; it’s somewhere between VLC and its handling of iOS-based platforms.

One thing each of these tests shared was the reference view: my TV downstairs, streaming via VLC. And lo and behold, the problem would rear its ugly head again. Recently, two home improvement projects have come up.

First is looking for a VLC replacement on the iPad. The USB-related woes I posted about with iPadOS 26.2 boiled over, causing me to cease using VLC+USB on my iPad altogether. It’s just so fucking bad. I’m inclined to believe this is either Tahoe or its support for APFS externals; either way, it’s enough of a roadblock to drop VLC, something that’s been a staple since my Android -> iPad conversion quite a few years ago now. This led me to adopting Infuse Pro as a viable replacement candidate. It experiences the same USB problems, and testing points the finger at Apple’s biscuit-eating operating system in that regard.

However, that led towards project number 2: I recently finished watching Picard seasons 2 and 3, also one of the few times I’ve used an actual Blu-ray player. After enjoying that, I opted to splurge on Star Trek: The Next Generation while The Complete Series Blu-ray set was near its 90-day low price. It’s one of those really-wants but never-gets, because it’s expensive. Even on a great sale, we’re still talking like $100. I’ve only waited like a decade or so!

Well, watching the first disc or two on the Blu-ray player wasn’t so bad. But of course, me being me, the longer-term goal remains file server -> stream all the things. Honestly, the box set is a pain to jockey discs around. We’re talking about 6 BDs per season, packed like sardines, with two or more discs per spindle. Yeah, screw that. It’s also enough of a slog to rip, though, that I created a new HandBrake preset with a modified audio selection scheme to expedite the processing.

So, imagine my surprise when I started to notice artifacting issues using the same x264 reference. We’re talking wtf-is-this kind of artifacts. I nearly switched Hide and Q over to the disc by the time the Enterprise-D reaches Q’s barrier. That’s circa the first 5 minutes. I wasn’t happy.

This led to some further testing, comparing video playback on my laptop (perfectly fine) and streaming to the Android version of VLC on the younger TV upstairs (also perfectly fine). I’d consider chalking up the latter to how modern TVs post-process video, but the same can’t be said for my PC monitor, which like many PC monitors doesn’t have those goodies. That testing was also dominated by IINA, basically a Mac version of MPV that isn’t annoying to install. My PC-based laptop also had no issues. The only problem was the Apple TV, in VLC.

Deciding to try things a bit more scientifically, I made a reference conversion with x265 (HEVC) and a few encodings with Apple’s Video Toolbox in various H.265 and H.264 modes, to compare to the original x264 reference. I also uploaded the original MakeMKV rip, i.e., the full unadulterated Blu-ray video quality. It too sucked ass and artifacted when played in the tvOS version of VLC!

Now, that’s where both home improvement projects intersect. Trying Infuse on the Apple TV was always going to be an experiment, and the Plex-like TMDB integration made it worth installing for later testing. Faced with the issue with my ST: TNG rips, it was already there, so why not grab another data point? I really wanted to try another video player for comparison at this point.

This was followed by shouting and cursing, because it played fine. All fucking versions. As long as I didn’t use VLC to do it!

The outcome of this experiment has also led to an unexpected shift. Since eliminating VLC from the picture solved the artifacts, I took a closer look at the hardware-encoded files. The winner was made with one of HandBrake’s built-in presets on Mac, which configures a 10-bit H.265 encode at CQ60 in quality mode. Not as high a setting as the Video Toolbox tests I did with Pacific Rim in speed mode, but sufficient that ST: TNG looked good enough across data points to be worthy of adoption. So, I’ve integrated this into a variant of the same profile I was using, in place of x264. I was always a little miffed about the HEVC thing, but now I’m pretty sure it just amounts to never using VLC on anything iOS-derived. Sorry, good ol’ x265. But on the flip side, I’ve also changed gears.
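
For reference, the command-line equivalent of that preset works out to roughly the sketch below, wrapped in a little Python batch script. The vt_h265_10bit encoder name and the speed/quality encoder presets are my assumptions about a current macOS HandBrake build (check HandBrakeCLI --help on yours), and the paths and audio selection are placeholders rather than my actual preset.

```python
#!/usr/bin/env python3
"""Sketch: batch-encode MakeMKV rips with HandBrake's VideoToolbox encoder.

Assumes HandBrakeCLI is installed and that the macOS build exposes the
vt_h265_10bit encoder with 'speed'/'quality' encoder presets; verify with
`HandBrakeCLI --help`. Paths and audio selection are placeholders.
"""
import subprocess
from pathlib import Path

SRC_DIR = Path("~/Rips/TNG").expanduser()      # placeholder: MakeMKV output
OUT_DIR = Path("~/Encodes/TNG").expanduser()   # placeholder: where encodes land


def encode(src: Path, dst: Path) -> None:
    # 10-bit HEVC via Apple's hardware encoder, constant quality 60,
    # using the 'quality' encoder preset rather than 'speed'.
    subprocess.run([
        "HandBrakeCLI",
        "-i", str(src),
        "-o", str(dst),
        "--encoder", "vt_h265_10bit",
        "--encoder-preset", "quality",
        "--quality", "60",
        "--audio-lang-list", "eng",  # stand-in for my modified audio selection
    ], check=True)


if __name__ == "__main__":
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    for mkv in sorted(SRC_DIR.glob("*.mkv")):
        encode(mkv, OUT_DIR / mkv.name)
```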

Results? Encoding time dropped to an average of about 4¼ minutes from around 20+ minutes per episode, while presenting similar quality and file sizes, courtesy of the newer codec. This is a fairly drastic shift, delivering the joys of 200+ fps encode speeds without having to tank file sizes to maintain the quality. Based on the results of my ST: TNG tests, sans VLC, I’m considering adopting this as my new ‘standard’ for video encoding, instead of returning to my x265 reference point or sticking with my older x264 reference point.

Coming across “I transcribed hours of interviews offline using this open-source tool” in my news feeds, I can’t help but wish this approach to applied AI was more common in this era of ChatGPT.

There’s plenty of reason to run models in a cloud context, particularly if you want truly large or complex models. The more computationally intensive the task, the more a data center starts looking smart, ditto if handling many users. But that doesn’t mean it’s not possible to do useful things with LLMs on commodity hardware.

The catch, of course, tends to be the need for a powerful computer by modern standards. PrivateLLM’s quantized models, for example, range from ones that will probably fit on a several-year-old iPhone (15/14 series) to ones that want a pimped-out Mac Studio.

Considering that many Intel/AMD chipsets over the past decade max out in the 16-64 GB of RAM range, and that you basically need 16 GB in a modern laptop anyway, I think people underestimate the possibilities for squeezing smaller models onto PCs for specialized tasks, especially given modern computer hardware. I mostly feel that the drive towards NPUs is marketing snake oil, but to be fair, it’s pretty unlikely that we’re going to start seeing beefier GPUs in the typical computer. As impressive as modern integrated graphics have been compared to when I was young, common designs still fall far short of even dedicated laptop graphics, never mind six pounds of RTX!
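
For a rough sense of scale, the back-of-the-envelope math on weight sizes is simple enough to script. The numbers below are weights only, so KV cache and runtime overhead still come on top of them.

```python
# Back-of-the-envelope weight sizes for quantized models (weights only;
# KV cache, context, and runtime overhead come on top of this).
def weight_size_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9


for params in (3, 7, 13, 70):
    for bits in (4, 8):
        print(f"{params:>3}B @ {bits}-bit: ~{weight_size_gb(params, bits):.1f} GB")

# A 7B model at 4 bits is ~3.5 GB of weights, which is comfortable on a
# 16 GB laptop; 70B at 8 bits (~70 GB) is where the Mac Studio money goes.
```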

Here’s hoping, at least, that those fancy ASICs see some useful value rather than being today’s equivalent of the Transistor Wars. If nothing else, I suppose it helps push the base of installed RAM a little higher in between price hikes, and faster CPUs and SoCs down people’s throats.

The backup strategy

Since my file server adopted hardware RAID as part of its 2024 refit, and even with the mdadm array that preceded it in the original 2023 design, one of my concerns has been the need for manual backups. It’s at least a process that’s been tested under fire, during the Thinkpad-to-the-face incident. But I’ve never been a great fan of doing manually what should be automated.

The process has remained largely the same, aside from the server’s contents exceeding the capacity of one of my spare drives, leaving me with only one external drive big enough for a backup. How often I actually managed to keep both drives up to date aside, it’s generally been a bigger priority to take care of the things that back up to the file server on a nightly basis.

Well, one of the upsides of the transition from Rimuru to Ranga is that it’s effectively seen my Steam Deck decommissioned from /dev/tv to its storage case. As such, the external drive used for augmenting my Deck’s internal drive and microSD card became freshly available for repurposing. A drive that, quite conveniently, has the same storage capacity as my file server’s RAID array.

An upside of the Christmas break: I was able to find the time to set up the drive alongside the file server. It’s now a backup target, with the entire RAID array being rsync’d daily via cron. My largest external SSD (only half the array’s size) remains an additional backup, and my plug it in / run the backup script / unmount routine will still likely average a monthly or bimonthly ad-hoc affair.
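
The nightly job itself is nothing fancy; it amounts to something like the sketch below. Cron runs it, the mount points are placeholders, and it assumes the external drive is normally left mounted.

```python
#!/usr/bin/env python3
"""Sketch of the nightly backup job cron runs: mirror the RAID array
to the repurposed external drive with rsync. Paths are placeholders."""
import subprocess
import sys
from pathlib import Path

ARRAY = Path("/srv/array")    # placeholder: the RAID array's mount point
BACKUP = Path("/mnt/backup")  # placeholder: the external drive's mount point


def main() -> int:
    # Refuse to "back up" into an empty directory if the drive isn't mounted.
    if not BACKUP.is_mount():
        print(f"{BACKUP} is not mounted; skipping backup", file=sys.stderr)
        return 1
    # -a preserves metadata, --delete keeps the mirror exact, and the
    # trailing slash copies the array's contents rather than the directory itself.
    return subprocess.run(
        ["rsync", "-a", "--delete", f"{ARRAY}/", f"{BACKUP}/"]
    ).returncode


if __name__ == "__main__":
    raise SystemExit(main())
```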

The difference that makes me somewhat happier though? This solves one of the annoying problems: location. As an extra incentive, the external SSD has generally been kept near Zeta, so it’s only as safe as the server is. Since its smaller compadre graduated to being too small, no onsite backup has been stored in a separate location. Now that there’s a drive dedicated to daily backups of the array, my external gets promoted to ‘stored across the building’ status.

Because it’s always bugged me when the backups are right next to the machine being backed up. Like that never goes wrong? 😑. That’s exactly why the subset of the data deemed critical is required to go offsite. But it’s still nice to have the full backup in a physically separate location, because ya never know when that is going to come in handy in a pinch. One of these days, it’ll probably get upgraded to being the offsite backup.

Ahh, here’s hoping I don’t end up buying hard drives next year….

Reminders that Apple hates iPad users

So, for a while now I’ve been pretty pissed off with the iPadOS edition of Tahoe and how it handles files. At this point, I’m pretty sure that it’s just broken and I should hope for iPadOS 27.

The first indication of woe, the canary if you will, was VLC being a steaming pile of bantha poodoo. Now, admittedly, VLC on iPad is pretty crappy compared to how awesome it is on basically every desktop platform, and even a few TV-centric ones. But its problems are in terms of usability and features. Also, sometimes getting shafted by the platform.

For a good while, I’ve noticed that VLC would lose access to files on USB. Initially it would play content, but subsequently picked files would fail to play back when it tried to access them. At first, I actually considered that the drive could be going bad, but this was ruled out by using other devices.

The simple solution to that, of course, is one of my network’s core resources: a file server. Ya know, that thing that’s cut down on the amount of removable media I’ve needed over the past fifteen years. VLC seems to work fine with that.

Then enter the “Why the fuck can’t I actually edit a text file” problem.

Accessing files in the Files -> app direction works fine, but the pipeline for saving them back seems to be broken. At first, I didn’t spot it, since the editor I was using falls back to saving to its application folder rather than throwing an error. Yeah, that’s stupid. But it’s at least pretty obvious when you go open the file somewhere else (or even on the same iPad) and it’s missing your changes.

So, for the sake of sanity, of course I tried a different editor, and the result was effectively the same, except that one didn’t fall back to its application folder. At this point, I was pretty sure that it’s either the Files app or iPadOS’s APIs for brokering file access.

The part that removed all doubt about what I’ve been suspecting since the issues with VLC started: the same thing happens when using my USB drive :).

There’s also the stupidity where attempting to paste another file over to the file server results in Files throwing a permissions error. While connected to a share with the exact same credentials my other systems use to successfully, ya know, edit and create damn files. I consider that double confirmation.

Ahh, sometimes I wish iPadOS was worth a damn. The only thing truly unique versus my other tests is that it’s running iPadOS, where my other points of reference are running macOS, Linux, and NT, and they just work fine.

Afraid it’s permanent

When you end up dragged out of bed, half asleep, and you still have the wherewithal to school people on more efficient basic usage of vi, you know that vi is now embedded permanently and deeply in the very fibre of your being.

I had some suspicion that the muscle memory wasn’t the only thing etched into me, but any doubts that I had are now gone. vi is firmly past the “You can pry it from my cold dead hands” level of integration.

From Rimuru to Ranga

Increasingly, I’ve been turning my mind to what will come after Rimuru: a machine originally built in 2021 using the COVID-19 stimulus as its foundation and the same general design as its predecessor, Centauri. Since then, it has undergone 6 refits, between Rimuru experiencing a motherboard failure and the ordinary tech updates.

Simply put, the status quo for the last few years has been that only one RAM slot on the board is still functional, and the intention was that there would be no third motherboard if it failed. Combined with what is now a 5-year-old Core i7, that single slot of RAM has proven to be the key bottleneck. Ironically, getting Oblivion: Remastered to run was more an exercise in getting the GPU load to a point where the CPU isn’t pegging out.

It’s also been a downside that, between the old CPU being well loaded and the Big-Assed GPU cranked up, the machine practically turns into a space heater. I designed the machine to handle sustained load while keeping system thermals under control; the catch-22, of course, is that I can easily find myself sitting in a room that climbs towards +10 degrees after a long spell of gaming, like playing Silent Hill f over the weekend.

Following Maleficent, I considered swapping the GPU and NVMe drive over to Zeta, converting it from a file and virtual machine server into Rimuru’s successor. That was actually how Centauri had become my previous desktop. Of course, breaking down and cracking the case revealed roughly what I expected the problem with that plan to be: I could fit the PSU and the cooling system, or I could fit the GPU. Zeta’s PSU would ‘technically’ be able to handle fitting and powering Rimuru’s RTX 4070 Ti, but it would require removing the liquid cooling system to accommodate the PSU. So, that plan failed.

One of my long-term plans over the past lustrum or so has been that Rimuru would likely be my last conventional “Desktop PC.” I’ve never really been a believer in gaming laptops, but here we are.

The replacement is christened Ranga, since its job is to blow Rimuru away. Amusingly, using Oblivion: Remastered as a point of reference, it delivers similar performance but the opposite bottleneck. Rather than being CPU bound, Ranga is GPU bound, but it still firmly lands in the realm of pick-your-frame-rate: closer to 30 at Ultra/4K, closer to 60 at Medium/4K, and a pretty slick 40s-50s at High/4K.

A bit of rewiring all the things, and my dock is now situated underneath the monitor rather than within a passive Thunderbolt 3 cable’s length of the desktop. Somehow, the part that bothers me about this arrangement is that a 2-meter active Thunderbolt 5 cable cost about the same as my shorter TB3/TB4 cables did, while being rated for 80 Gbps/240W, far higher than my dock can handle. On the flip side, a small stand was necessary to ensure proper ventilation for cooling purposes.

In tests so far, I’m finding that the Zephyrus G14 is a sufficient match. Its RTX 5070 Ti mobile just can’t match the horsepower of the RTX 4070 Ti desktop, but it comes close enough that no longer being bottlenecked on the Core i7-10700K and a single slot of RAM resolves that pickle. Its Ryzen AI 9 HX 370 represents a major generational leap in performance, and while the amount of RAM remains comparable, it isn’t so limited: so yay for being back to dual-channel memory!

As an added benefit, when putting Shion in place to be my primary computer, I no longer have the problem of not being able to see where the fuck the port is, since it’s no longer facing the wall. I kind of liked having my laptop off to the side as before, but the occasions where I actually use my laptop as a notebook PC made reconnecting a bit of a grumble; more so than swapping between TB cables at the dock. Now? It’s simply a matter of swapping laptops in the stand, with a single cable running to the dock.

Another benefit is proving to be the heat. The Zephyrus G14 is very quick to crank its fans into high gear when gaming, to the point that one might want noise-canceling headphones rather than speakers for some content. But it doesn’t raise the room’s ambient temperature as drastically as my desktop did, and frankly, the late-generation MacBook Pro 16s had louder fans :-P.

One of those random backlog items to write my thoughts about

Bumping into “Apple found clever iPhone Air innovation for a thinner USB-C port” in my news feeds a few weeks ago made me do a bit of a double take.

It also made me try to imagine what the engineers who worked on the F-14 Tomcat must have suffered. Electron-beam welding a wing box from titanium, along with the more general “How the hell do we even build that” problem, were among the challenges back in the ’60s, a time frame when these solutions were more revolutionary than antiquated. We mostly remember those planes for the swept wings and cool movies, but I bet the engineers who worked on that wing box remembered it as the challenge of a lifetime 😅.

And then fast forward about sixty years, and we have people talking about 3D printing titanium.

When you name a server Maleficent

Recently, I’ve been grumbling more than usual about Zeta’s bridging of VMs into the local network segment getting borked by package updates; enough so that pulling the trigger on my migrate-to-AlmaLinux-10 plan was accelerated. Rather than waiting for ELevate to consider this upgrade ‘not beta’, I went with the reinstall process.

In debating whether I wanted to go ahead and set up the libvirt environment again and keep grumbling, or perhaps just go with my original plan of using Docker, I opted to take a different tack. The master name server being a VM was mostly because hosting virtual machines was added to the expectations list when Cream was replaced; some readers might recall that the ol’ NUC7 got unretired into becoming name server 3 as part of the Asus -> Eero transition.

So, I decided on Plan B: bare metal. A MINISFORUM UN100L and a drive to MicroCenter later, I had decided on two things. One, that $180 on sale would be damn well worth not having to screw with the virtual network bridge again; and two, that I would name it Maleficent, because I was pissed off at solving these problems.

The real question is stability. It’s been quite a while since I last edited the zone files (December), and there have been more than a few incidents of “Why the hell is ns1 not reachable again!” since Zeta’s inception. If Maleficent serves as the new name server 1 until Christmas without any fuckery, I will call that a solid win.

In unboxing the new hardware, I also considered a third alternative that may suit a longer-reaching plan. The lack of Thunderbolt aside, and whether or not Rimuru’s graphics card and the machine’s power supply can both fit in the case, Zeta’s hardware would actually be a great replacement for Rimuru. The issue of cramming an RTX 4070 Ti into a tiny-ass case notwithstanding.

With Cream and my spare Raspberry Pi Zero W functioning as name servers 3 and 2, it would actually be simple enough to convert Maleficent into the central server. The bind instance functioning as the master / name server 1 for my internal domain is locked down; other than zone transfers, all the traffic actually goes through Cream and the Pi Zero. Its existence as a separate entity is largely administrative, and in fact, the two name servers serving my home network are running a configuration designed so that either of them can be swapped over into becoming the SOA for the local domain. So, I wouldn’t feel too bad if bind and samba lived on the same machine. If anything, it would be quite effective, since Zeta’s storage array is connected over a 5 Gbps USB-A host port and Maleficent’s N100 is far faster than my old laptop’s aging Core i5.

That, however, is a tale for another time. For now, all hail maleficent.home.arpa!

ARM ftw

Away from its charger for 4 days of light to medium usage, Shion is only down to 45% charge. I think it’s fairly safe to say the M2 has good battery life.

Makes me recall my first laptop, whose Sempron would generally reach 2 hours and 30 to 50 minutes if one was lucky. At the time, that actually wasn’t bad for an x86 laptop, never mind the third cheapest at Best Buy. It was a machine best used with a charger except for short spurts of being on battery, regardless of system load.

For the most part, I pretty much forget that my MacBook Air even has a battery.

One of the side effects of the RAID-mode oops incident has been having to re-rip and encode my Blu-rays and DVDs. At this point, most of the anime collection is basically done, but movies are in the “as needed” category because of the time and effort involved.

Recently, I was in the mood both for watching Pacific Rim and for taking a look at one of my original reference videos from back when I set up my previous AVC/x264 presets in HandBrake, i.e., Prometheus. In the years since then, I shifted over to an HEVC/x265 preset and slowly started to adopt it. Most discs since then have been anime or few and far between, so not as large a sample set.

So, naturally, this was the preset I chose when ripping Pacific Rim. However, I found myself disappointed in the video quality. Fortunately, I still enjoyed it greatly, as it’s one of my favorite films and one that I haven’t seriously watched in a few years.

In particular, the opening sequence and numerous darker scenes exhibited artifacts. Now, my original AVC preset wasn’t perfect, but it wasn’t that bad either. Taking the first chapter, I decided to do a bunch of experiments focused on the parts most prone to artifacts: the logo’s background fire effect, the star field, and the breach, followed by the more general video quality of the next 5~6 minutes of the opening.

Encoder       | Quality | Size (MB) | Bitrate (Mbit/s) | Time (mm:ss) | Comments
--------------|---------|-----------|------------------|--------------|------------------------------------------------------
Blu-ray       | N/A     | 38,320    | 22.8             | N/A          | Reference Blu-ray ripped with MakeMKV.
x264          | RF 20   | 1,010     | 14.8             | 06:05        | Reference AVC. Limited artifacts.
x265          | RF 18   | 949.1     | 13.8             | 13:11        | Like reference AVC.
x265          | RF 20   | 794       | 11.5             | 11:39        | Close to AVC reference. Not as good as RF 18.
x265          | RF 22   | 688.1     | 9.8              | 06:56        | Reference HEVC. Too many artifacts.
Video Toolbox | CQ 80   | 1,780     | 25.7             | 01:08        | Close to AVC reference (not as good).
Video Toolbox | CQ 70   | 1,080     | 15.7             | 01:08        | Close to AVC reference (not as good).
Video Toolbox | CQ 22   | 448.9     | 6.5              | 01:07        | Like a mid 2000s video game (only better).
Video Toolbox | CQ 18   | 453.2     | 6.3              | 01:07        | Like a mid 2000s video game (really).
Video Toolbox | CQ 10   | 424       | 6.1              | 01:07        | Like a mid 2000s video game (too close for comfort).

The AVC and HEVC references referred to above are my presets. For x264, the high profile at level 4.1 was used with the “medium” preset. For x265, auto was used for both profile and level, with the “fast” preset. The only adjustment for the experiments was the Constant Quality (RF), which for those encoders is a logarithmic scale where higher numbers mean worse quality.

For Video Toolbox, I couldn’t find any documentation about the scale, but the tests obviously show that higher numbers mean higher quality. In each case, the “medium” preset was used.
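
For the curious, the test matrix boiled down to running chapter 1 through something like the following: a rough Python-around-HandBrakeCLI sketch, where the encoder and preset names reflect my understanding of current HandBrakeCLI builds (preset names for Video Toolbox may differ on yours) and the file names are placeholders.

```python
#!/usr/bin/env python3
"""Sketch of the chapter-1 test matrix: one source, several encoder/quality
combinations via HandBrakeCLI. Source/output names are placeholders."""
import subprocess

SOURCE = "pacific_rim.mkv"  # placeholder name for the MakeMKV rip

# (encoder, encoder preset, extra encoder args, quality values)
MATRIX = [
    ("x264", "medium", ["--encoder-profile", "high", "--encoder-level", "4.1"], [20]),
    ("x265", "fast",   ["--encoder-profile", "auto", "--encoder-level", "auto"], [18, 20, 22]),
    ("vt_h265", "medium", [], [80, 70, 22, 18, 10]),
]

for encoder, preset, extra, qualities in MATRIX:
    for q in qualities:
        out = f"chapter1_{encoder}_q{q}.mkv"
        subprocess.run([
            "HandBrakeCLI",
            "-i", SOURCE, "-o", out,
            "--chapters", "1",          # only encode the first chapter
            "--encoder", encoder,
            "--encoder-preset", preset,
            *extra,
            "--quality", str(q),
        ], check=True)
```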

Based on what I found, I’m kind of disappointed with the x265 cases. Perhaps it’s time to experiment with kicking it up to the medium preset or enabling a deblocking filter to compensate. For the most part, though, the quality is there if comparable bitrates are thrown at it. The downside, of course, is that it basically doubles the encoding time compared to x264.

The Video Toolbox case is more impressive, but also not so useful. I believe the M2’s encoder is a lot better than the ‘Bridge and early ‘Lake era Intel encoders, but in much the same way, they just don’t serve my purposes. To get my M2 to good enough quality for streaming, the file sizes balloon to near the original Blu-ray, so I may as well not bother transcoding in that case. But still, we’re talking about 190~200 fps encoding versus about 30-40 fps. I think it’s better suited for video editing than for streaming video from my server to my TV.
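
To put those throughput numbers in perspective, here’s the rough arithmetic for a two-hour film (the runtime and frame rate are assumptions for illustration, not measurements from the rip):

```python
# Rough encode-time comparison for a ~2 hour movie at 23.976 fps.
frames = 2 * 60 * 60 * 23.976  # ~172,600 frames

for label, fps in (("x264/x265, software", 35), ("Video Toolbox on the M2", 195)):
    minutes = frames / fps / 60
    print(f"{label}: ~{minutes:.0f} minutes")

# Software encoding lands in the 80+ minute range; the hardware
# encoder finishes the same movie in roughly 15 minutes.
```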

The quality difference, though, is considerable. At the uber quality levels it’s still subpar versus a Blu-ray reference; more Netflix/YouTube quality at this point.

Partly, though, I’m tempted to revert to using x264, and partly I’m tempted to just leave it at Blu-ray quality. I didn’t really change from AVC to HEVC to save on disk space so much as because the more modern codec was now widely available on most of my hardware. The kind of perspective where AVC is still fine, but I assume devices will hold onto HEVC support longer once AVC becomes the new MPEG-2 :D.

There’s also the option to just stick with MakeMKV’s output. My entire Blu-ray collection probably represents about 4 TB to 5 TB of data at this point, and ahem, it’s an 8 TB storage array with 6 TB free. My storage concerns were pretty much solved two sets of hard drives ago, back when my server’s storage was made up of 3 TB drives rather than 8s. As for the playback concerns, well, much like HEVC-capable devices becoming the norm, most of my devices have little trouble with Blu-ray quality bitrates at this point.