Over the years, I’ve screwed with a lot of build systems, both in my own projects and in other people’s, and over the past fourteen years I’ve come to a conclusion.

At best, you can reimplement make, poorly. At worst, you can reimplement half of autotools, poorly.

That’s pretty much what I’ve seen. Thus as time has gone on, I find it very hard to do better than good old Make. Especially when the GNU version has about five hundred pages worth of voodoo to appease even the worst masturbators, and the need for autotools is kind of waning IMHO.

Enter ninja.

What I’ve generally found with Ninja is that it’s very simple. Like C, the little bit of syntax you need to remember is small. Opening a build.ninja file is probably enough to grok what’s going on if you’ve ever used a build system that involves editing files.
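For a taste of just how little syntax there is, here’s roughly what a minimal build.ninja might look like. The file names, rule names, and flags are all made up for illustration:

```ninja
# A rule is a command template; a build line maps outputs to inputs.
cflags = -Wall -O2

rule cc
  command = gcc $cflags -c $in -o $out
  description = CC $out

rule link
  command = gcc $in -o $out
  description = LINK $out

build hello.o: cc hello.c
build hello: link hello.o
```

That’s most of the language right there: variables, rules, and build statements.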

Likewise, the answers to questions that tend to make it easier to build a wonky, hellish, broken build monstrosity tend to be “No, you can’t, because that would make this slow”. And let’s face it: if you want much more than a relatively simple Makefile, you’re probably building a case for pain.

Based on the past year, I think ninja will be sitting next to vim and dump in my toolkit of loved and trusty computing companions.

Working Copy makes my heart throb

Working Copy is one freaking impressive feat of work.

One of my early bits of research into apps for solving less popular problems was searching the App Store for a Git client, because I’m really more of a git and vim kind of guy than a cloud thing and browser-based word processor kind of guy. On my old Tab S3 and on my Chromebook, it was easy enough to combine a git client and an editor to manage some repos, and even keep a backup of some software projects for reference. Priorities being what they are, I started with iVim, because muscle memory, and because it was most likely to freak out the fruity operating system. Combined with Pretext, it gives me an editor I’m very familiar with, plus a simple editor that matches what I’d want out of something neither vi nor emacs like.
After reading around Mac Stories, I decided to finally give Working Copy a whirl. I’m impressed, and I’m happy. Hell, judging by its user guide I could probably manage a nice local edit + git → remote build life cycle if I really wanted to.
For the most part, the software I use tends to be cross platform. E.g. developed on Linux, also available on Windows, etc.; Android and iOS. And the apps I use that are on both are mostly the same on both. Except for the habit of iOS apps to use a scrunched landscape layout in portrait rather than going to a full screen view. Which is fine by me ‘cuz I’m a lazy git and have more than a few platforms to deal with.
Working Copy manages to be pretty native and runs with it all the way. You wanna know what my definition of professional grade, well made software for doing real shit would look like on an iPad? Well pal, Working Copy is now that definition, and what a damned stunning example it is!!!
Even more so, it appears to be feature complete enough that I don’t have to worry much. You see, I’m weird. I tend to like doing my work from the git command line client, and if I’m going to suffer a GUI then it’ll probably be git gui + gitk. And if Working Copy can’t do what I need to do, odds are I’d be running command line git regardless of the operating system, and probably quite out of my routine.
Something that makes me kind of happy about how native it is, is the file sync.
The way {App}/Documents is exported into the On My iPad provider as {App} is pretty nice. But it doesn’t seem like the iOS Files stuff really has a concept of Unix-style hidden files, so getting to .vim is a bit of a pickle.
Thus, I had Working Copy’s sync feature use On My iPad/iVim/vimfiles. Which for iVim, maps to ~/vimfiles. A quick :e ~/.vimrc and it only took a moment to get my stuff in order.
" For iVim on iOS.
" Working copy can sync my terryp/vim to ~/ or a subdir but not ~/.vim because iOS file goodies don't like dot files
" So let's use terryp/vim -> ~/vimfiles ala wintel.

set runtimepath+=~/vimfiles/
set runtimepath+=~/vimfiles/after
source ~/vimfiles/vimrc

Since Working Copy is trivially able to handle the submodules in my repo, which anger some GUI clients I’ve tried on PC and Android, all my stuff pretty much just works. Because my .vim/bundle gets synced to vimfiles/bundle like the rest of my stuff.

When someone makes an application as good as Working Copy, we should all applaud. I know that I’m sure freaking happy! It takes a lot of work to make an application that great, and all too often, when you find an application to scratch such a less popular itch, it can be hard to find a really great solution. Working Copy is one of those rare, great solutions.

Over the years I have uttered many words at the software I deal with, mostly profanity.

I’m pretty sure the loving to hateful words ratio between me and ALSA is about 0 : 1,000,000. Or in short if I ever say “I effing love ALSA”, it’s a pretty safe bet that I’ve been replaced by a bodysnatcher or something.

Generally I have used ALSA directly as much as possible over the years, because at the end of the road on Linux systems you will always, sadly, end up with it. But I also find that configuring and living with it tends to be a bitch on wheels of fire the more complicated people make things. Let’s say that ALSA is something I suffer, not something I love.

Well, recently I’ve had a bit of a pain in my arse dealing with ALSA, GStreamer, and trying to do audio passthrough. And I’ve learned that I really do like PulseAudio.

mpv is able to do passthrough, but that doesn’t suit my purposes; let’s just say scripting mpv ain’t my real objective.

$ mpv --aid={track #} --audio-device=alsa/{device} --audio-channels=5.1 {my file with fancy audios}

GStreamer is smart enough to passthrough audio if you send the bits to the sink. Most elements that manipulate audio expect audio/x-raw data like you would get out of your audio decoder. But the sinks can also take other formats–much like my surround system knows how to decode pretty much anything.

What I ran into was alsasink never reporting any of the compressed formats my graphics card supports, after GStreamer tries to decipher what the device is capable of.

Enter PulseAudio!

$ gst-launch-1.0 filesrc location="{my file}" ! queue ! {demuxer} ! audio/x-ac3 ! queue ! parsebin ! pulsesink

Where I had no luck getting this to work with alsasink, it was easy as pie with pulsesink.

Deciphering the documentation to configure the default profile for my card via pactl and add the formats I want to passthrough to my surround sound system was a snap that only took 15~20 minutes. Figuring out the device names used for pulsesink based on pactl list was a bit trickier. I spent 2~3 days screwing with ALSA before that.
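A rough sketch of that configuration, with the card name, sink index, and profile name all hypothetical placeholders (yours will come out of the list commands):

```shell
# List what PulseAudio sees; the names below are illustrative only.
pactl list cards short
pactl list sinks short

# Point the card at its HDMI surround profile (profile names vary per card):
pactl set-card-profile alsa_card.pci-0000_00_03.0 output:hdmi-surround

# Advertise compressed formats on the sink so downstream apps can
# negotiate passthrough (pacmd syntax; sink index from the list above):
pacmd set-sink-formats 1 'pcm; ac3-iec61937; dts-iec61937'
```

This is a sketch from the PulseAudio passthrough docs as I understand them, not a recipe; pavucontrol can do the same format toggling interactively.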

For bonus points: I could test ahead of time using my laptop’s HDMI port and pavucontrol to configure the outputs, letting me know if this would be possible at all before I started learning how to do it with pactl.

I can’t say that I’m a big fan of the guy who wrote PA, or that I truly gave a flying hoot when the Linux desktop world went to PA and we all threw out things like aRts and ESD. My only horse in that race was I wanted audio to work in applications like mplayer and firefox without having to screw around.

In retrospect: I should have just learned how to use PulseAudio a long fucking time ago instead of dicking with /etc/asound.conf and amixer and all that BS. Because those aspects of PA really do suck less in my honest experience.

And then I find myself remembering FreeBSD and its OSS, in which the only issue I ever really had with audio was whether or not there was a suitable driver for my card, lol.

Draft FAQ: Why does the C++ standard ship every three years?

While Herb Sutter’s answers might be a tad strict I have to admit that I am pleased with the results they’ve been shipping.

C++14 makes a pretty damned good working language for the environment that I work in. If twiddling things forward to C++17 weren’t a bunch of toolchain wrangling done more for my own sake than driven by customers, I’d be using it. This leads to reading the reference and thinking “Yay, someday!” as new features trickle out over the three year release cycle.

Over the years between C++98/03 and C++11 new toolchain releases usually revolved around their features. Like people agreeing on template vulgarities or improvements to code generation. Today I tend to be more interested in where the standard is headed, and what runtime library and compiler versions actually implement a given version of C++.

By contrast a very long time ago: it was just a plus if the compiler supported C89 and most of C++….lol

Solving the wrong problem?

Programming language Python’s ‘existential threat’ is app distribution: Is this the answer?

I kind of can’t help but wonder if this is really about solving the wrong problem.

In dealing with the developer side of things: pip and venv really aren’t that bad compared to some of the squirrelly means of distributing software the world has known. But much beyond ‘type pip install xxx and cross your fingers’, I wouldn’t really call it a user-oriented system. It works well enough for Python developers but isn’t catered to Joe Average User or twelve year olds who just want to blow stuff up.
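For the unfamiliar, the developer-side flow looks something like this. The environment name is just illustrative, and the install line is commented out since the package name would be whatever you actually need:

```shell
# Create an isolated environment and use its python/pip instead of
# the system's. Fine for developers; not exactly Joe Average User stuff.
python3 -m venv demo-env
. demo-env/bin/activate
python -c 'import sys; print(sys.prefix)'   # now a path inside demo-env
# pip install some-package                  # would install into demo-env only
deactivate
```

Perfectly serviceable at a terminal; just not the kind of thing you hand to someone who has never heard of a shell.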

To make things easy on end users, of course, you have to solve the actual problem. Linux has a good rule about not breaking userspace, but userspace doesn’t care about you! Personally I think that is the real pickle.

Over in NT land it’s pretty simple. You build some shit and the system ABI is probably the same across a bunch of Windows versions if you’re not too crazy, and most of the baggage is the problem of your install creation process. Whether that’s some fancy tool or some rules in your Makefile. It’s impressive when you load up a video game that’s old as hell and it just works, despite being almost old enough to buy a beer. It wasn’t made to be efficient: it evolved to become stable. It grew up in a world where people shipped binaries and pushing changes to users was costly.

Now by the time you have an actual Linux desktop distribution, all bets are pretty much off. A decent one will usually maintain a viable ABI for a major release number, but that doesn’t mean everything will be compatible forever, nor does it mean the binary dependencies you require will remain in xyz package repo for the next fifty years. Some of this lands on distributions and how they deal with package management to squeeze binaries into nixie norms of hierarchy. Some of this also lands on developers, who may not know an ABI from a hole in the ground, because they’re used to recompiling against APIs and configuring ten thousand build-time knobs, and don’t care that changing something breaks binary compatibility between their library and the works of others.

There are reasons why things like AppImage and Flatpak exist. Many of these I think owe to the source centric nature of unix systems. Different communities have different norms of sharing and reuse.

When I began learning unix systems, I chose a source centric flavour that would let me learn how things worked under the hood. The kind where you waited three and a half days because a new version of KDE or GNOME landed and many a dependency in the food chain needed to be rebuilt. The kind where you learned to diagnose linker problems and grumble, knowing that changes to library X meant recompiling half your environment if you wanted to be sure your applications didn’t combust quietly in a corner, just waiting for the day you actually needed to launch them again; or curse at some major framework linking to some piddly library that triggered the same.

In the end my times with that system were dominated by two things: stability and compile times. But I didn’t choose that in order to have an easy one-click-and-done system. I had chosen it because I wanted to learn how computers worked and develop the means of figuring out why the fuck programs broke. Today if you use that flavour of unix, you can pretty much live in a purely binary world that wasn’t so easy when I was a padawan.

By contrast an acquaintance of mine back then, ironically a Python programmer, had chosen a more widely known distribution that focused on having the latest binaries available without having to compile all the things. One that’s still quite popular some 15 years later. Let’s just say the usability of binary-focused distributions has improved with time, despite the warts of binary distribution in *nix land. Or to summarize it thusly:

When it came time for a major version upgrade: I spent a few days compiling before getting back to work. He spent a few days cursing and then reformatted, lol.

Thoughts on Oracle v Google stuff

Or more specifically after parsing this, thanks Noles ;).

Personally, I think under that context, Oracle will likely win. I do not believe that a language /should/ be copyrightable, but technically one can be; think about how the types involved might mix for various copyrighted works.

I’ll be the first to admit that our system for copyright, patents, intellectual property, trademarks, and the like is a maze with more than a few turns just full of bullshit. But let’s think a moment: what is it really about? Money. It’s not about fostering innovation (patents) or controlling your property (oi). It’s about money. That’s it, simple.

Java is a product and a creative work, sufficient to be copyrighted. So is the GNU Compiler Collection and that last book you read.

What is the gist of copyright? Wikipedia, as of this writing, defines it as a subclass of intellectual property that is generally “the right to copy, but also gives the copyright holder the right to be credited for the work, to determine who may adapt the work to other forms, who may perform the work, who may financially benefit from it, and other related rights”

Java, as it applies to Android, is not very different from any other language applied to other systems. The devil is in the details, as they say. An Android application is a collection of Dalvik bytecode, native code, and resources running under the Dalvik virtual machine, and Android provides a runtime.

The implementation is not “Java” as Oracle ships it. In fact, as Microsoft’s various efforts to make a .NET dialect of C++ and projects like JRuby confirm: you can have a fair bit of abstraction between *concept* and implementation. Android developers typically write code in Java to an interface documented in Java. They could just as easily write in any language you can eventually wrangle into Dalvik bytecode! Android applications can and have been written in other JVM languages, and non JVM languages. The interface, well hell, many things in the .NET world are done in C# but you could just as easily use Visual Basic or F#. Really helps to be able to read C# though. Just like how on many systems, it helps to be able to read C and/or C++.

That runtime part that Android applications depend on is quite “Java like”. Many intrinsic components are provided; C programmers should think of the stdio library. Because that is the sort of thing that has been “copied” from “Java”: essential programming interfaces, not implementations but interfaces (as far as Oracle holds rights to). GNU implements C’s I/O library in their runtime. So does Microsoft in their own. They didn’t have to supply crap like printf() and puts(); they could’ve supplied their own pf() and IoConsolePutLStr() functions! Neither group owes the other jack shit over that. But hey, printf() and puts() are what are widely used: even in other languages!!!!

A lot of things in Android’s runtime are also unique: for example, the parts that make it Android-specific rather than able to compile under Oracle’s development kits for PC. The implementation is not copied, but the conceptual interface, yes.

So that’s a problemo: how far does that level of control and ownership apply to derivatives? And what actually constitutes a derivative work?

Is copying the script of a movie scene for scene, line for line, and reshooting it for your own release and profit an issue? Yeah, obviously. Is doing a movie about a train going to let whoever owns the copyright on some other movie with a train sue your ass for it? It shouldn’t, unless it’s close enough to the former, or has a similar legal problem of some other sort.

It’s more of a question like: should Matz and Oracle be able to sue the developers of JRuby for copyright infringement, because it provides an even stronger resemblance to both Ruby’s and Java’s programming interfaces than Android’s runtime does to Java’s? Things like C, C++, C#, Common Lisp, Scheme, and EcmaScript are formally standardized to some extent. Things like Java, Python, Perl, Ruby, and Lua are not. Could Niklaus Wirth (or Apple) have sued Borland over Delphi?

I do not feel that it is responsible to exercise such strong-arm aggression against users. It’s bad for Java, it’s bad for business, it’s bad for the continuing evolution of the industry, and it’s bad for those who have already invested.

And as far as I am concerned, enough programming languages “borrow” stuff that applications of copyright the way Oracle must be seeking are not feasible, and may very well fuck up language development for the next decade. Now we have to worry about what the fuck we name our classes? What next, companies are going to start exerting control over _our_ works made with their tools?

Thanks Oracle, hope your stock plummets and your employees find good jobs with businesses that offer better job security.

Before testing fork(bombs), break glass

Like everyone who spits out a fork bomb during a test, I realized I’d forgotten the most important thing when testing a multi-process program for correctness and typos.



$ ulimit -u

15878

$ ulimit -u 20

$ ulimit -u

20

Lower the max user processes first!
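One way to make that habit cheap, sketched here with a hypothetical program name, is to scope the lowered limit to a subshell so the cap dies with the test instead of crippling the rest of your login session:

```shell
# Parentheses spawn a subshell; ulimit changes inside it do not
# leak back out to the calling shell.
(
  ulimit -u 20    # cap this subshell (and its children) at 20 processes
  ulimit -u       # prints 20, confirming the cap took
  # ./forky       # hypothetical program under test; fork() starts
                  # failing once the cap is hit, instead of eating the box
)
ulimit -u         # back outside, the original limit is untouched
```

The lowered limit is one-way within the subshell: you can’t raise it back without privileges, which is exactly what you want around a suspect fork().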

Thoughts on Android Game architecture revisited

A while back, I blogged some thoughts on game architecture for Android. In the time since, I’ve looked a bit at the Lightweight Java Game Library (LWJGL) and Maven, and am developing a bit of an interest in just how far one could push shader programs to maintain performance.

I’d rather like a Maven setup, and something that will support both PC systems and Android, and it seems that Maven even has an Android plugin :-). That got me to thinking about source tree architecture and the notion of sharing a library between a PC / Android game setup, and how that might fit into a source tree. Then it hit me! Break it up into separate submodules, and create a master project for each. Nice, easy, and simple. It also has the perk that it should work with any decent version control system, not just Git.

Hehehe.

Thoughts on Android Game architecture

Games for Android are a little different from writing for PC or console; there really seems to be less of the dull grind as well. But at that price, for more advanced efforts there are fewer resources sitting right there on Freshmeat or SourceForge for the hunting. Me being me, of course, I always have an interest in cross-platform portability (as well as a general disdain for Oracle Java).

One thought that occurred to me is: why not do it the same as I would on a PC?

What is the activity to a game? What the user sees. All it really needs to handle is processing input, rendering output, and talking to the big cheese. That’s it. Hell, before we talk communication methods, step back and think: an Android activity and a Windows executable could both function as clients, talk to the same server, and have a cross-platform multiplayer game. Or even a screen-like detach/reattach feature where you could begin playing on your mobile phone, then switch to a PC with way better graphics. Treating the user side of the app as a “client”, it would be possible to have a low-end client for basic phones and a high-definition client for sexy tablets.

To make it work, we need a service program to communicate with. On a PC, I would probably use shared memory for offline play and sockets for network play. In favour of shared memory would be the ease with which C++ code could likely be tweaked to use a shared memory allocator to store command objects, rather than having to do as much extra leg work to serialize the information across a process boundary. Android land makes most issues a moot point. Network wise, I’d probably just use JSON.

Something that interests me is how much of the core game design could be implemented in such a way that it could be used off Android by reusing the same library. C code, whatever C++ code the NDK can compile, or Java code should all work, as long as one watches what non-portable bits are stuffed in there.

Putting this much thought into it, can probably be blamed on poking around the FreeBSD and Q3A source trees over the years, and finding the possibilities fascinating lol.