Google recently blogged a video showing off new features in gdocs, and it looks like much of what has been missing is finally coming to Google Docs :-D. The features don’t seem to have rolled out yet, at least not at my level of access, but it’s looking good on the tiny screen.



I really have no love remaining for local office suites: they tend to be big, slow, expensive, or time consuming to compile. Web applications can be made to work just as well, and with considerably less groaning involved. So, it is fair to say that I’ve really come to like web based solutions like gdocs, even if I’m not a big fan of all the hype in recent years about migrating to the ‘cloud’. Why should I put up with the bother of Microsoft, Gnome, Open, or K office, when most of the crap I care about can be done on Google with almost zero maintenance?


When I have a document to get sorted out, I have the habit of selecting whatever method works best for the task at hand. Most often that is something I can hack at in vim, and then generate a suitable output for sharing. I don’t send Word files; I normally send PDF files and sources. That is a much better way of doing it when you want someone to view the file, not edit it and send it back. When I expect someone else to be editing the document, I tend to employ gdocs over an office suite, because of Google’s sharing and collaborative features. Playing pass-the-pumpkin document is a moron’s errand compared to gdocs, and I reckon for some folks the publishing parts are handy. My interest is more in the collab’ features, because those were the big incentive that brought me to gdocs in the first place. Now that those features are growing again, you can bet I’ll be putting them to use.


For me, Google Documents is just a means to an end: get the document done with minimal fuss. When I’m stuck dealing with people who wouldn’t know a DocBook from a troff, let alone what the heck version control means… it makes life a lot easier without complicating MINE! The level of control over HTML/CSS offered with the word processor even makes it easier for me to integrate gdocs into my workflow when more power is required; I’ve yammered about that before. If anyone has ever had to feed Word files through pre or post processing phases, uh, you will enjoy living with Google’s method lol. Since I rarely need to do rocket science with spreadsheets, I’ve never had much to complain about with their spreadsheet app. Recently I’ve used the gdocs word processor and spreadsheet on numerous projects, including dependency tracking for our EPI Core Services package, and it works darn well for what we need to do.


My only big gripe over the years has been the lack of Google Talk integration with gdocs, compared to GMail. In our spare time efforts with EPI, GTalk/XMPP actually became our norm for development meetings, after efforts to deal with AOL’s and Microsoft’s solutions only added extra interoperability problems. At least I can say ‘gdocs’ and people will usually know what I mean, if they know about Google Documents in the first place lol.


The video Google posted demonstrates a much better way of dealing with the multiple-editors problem than what’s been classic with gdocs. I can still remember a time when it was virtually impossible for two people to edit the same file simultaneously, haha. I am very intently interested in seeing these changes rolled out, and definitely have to give the drawing tool a go. Normally I use Dia for any diagrammatic needs, and the GIMP for heavy lifting; if Google’s drawing app can get the job done, it really would save me the effort; we’ll have to play and see, hehehe.


Now that gdocs can handle documents, presentations, spreadsheets, forms, and drawings, I reckon, fwiw, it is almost a fully functional office suite. I can’t say that I’ve used the presentations app, since I’m naturally against death by PowerPoint, but it would be my first stop if I needed to put something like that together.



Spending huge amounts of time draped over MS Word 2k2 taught me the value of using decent tools; whereas learning how to use better tools is what taught me the value of leveraging software in general ^_^. Most of the time, I employ LaTeX or DocBook for large projects (the kind you don’t want to see the inkjet taxes on), but I will occasionally use gdocs for simpler documents of my own. When it comes to word processors, Google Docs is no worse than the rest, and in my experience has improved more over the past few years than Microsoft’s and Sun’s/Oracle’s solutions. The ease of sharing and editing the doc with others has made it one of the few officewares that I actually enjoy, except for the lack of vi and emacs keystrokes of course o/.


I’ve no real brand loyalty to Google, even though their software makes up a large part of my routine. Sometimes it’s simply the best glove available :-/. For as often as this software has helped me out, I’m happy to use it when it fits.

Dealing with side-seat drivers

My way is to actively test and annoy, until she learns the meaning of the words “Shut your gob” ^_^.

I’ve been treated like an animal often enough, that I’m not interested in being treated like a machine.

Today was a refreshing change: I spent most of it glued to my desktop, three command prompts open, with pidgin tabs, chrome tabs, and mplayer occasionally filling in the rest of the space :-D.

I basically rewrote the recipe parser for tmk, bringing it up to the specification. Generally I’ll avoid such legwork, except when it’s a simple grammar or no suitable tools are available; and tmk is very simple. Command directives and much of the expansion system were also implemented today, making tmk almost complete enough to compile a project. What needs doing is proper macro expansion (tmk vars don’t work yet, but the rest of the expression syntax does), so that `magic` variables akin to make’s automatics can be created. Once that is done, creating the desired directives is a fairly trivial process; hooking them up to language/tool independent and developer-serviceable backends being a piece of cake.

While testing the directives code, I needed to come up with a temporary error message, one that had to be a bit absurd: it was intended for testing the domain in which the error applies, rather than there being any ironclad reason to use a given message. To that end, I came up with one that would really stick out among the other diagnostics: “{file name}:{line number}: the bad year blimp has landed”. This, as some might instantly see (;), is a slight homage to Mel Brooks’ Spaceballs, more specifically to the scene, “Uh oh, here comes the bad year blimp!”, in which Lone Starr calls for the switch to “Secret hyper jets”. And of course, I had to load that film into my DVD drive and MPlayer while I wrote code xD.

When developing something, I typically do a large amount of in-flight testing to verify the code’s behaviour. My efforts today have been no different, with testing perhaps making up more than 60% of the time I spent working on tmk. It’s not unheard of to find XXX-marked comments denoting details about some edge case that hasn’t been covered, and a note to kick me in the head if that theoretical hotspot ever occurs; such things usually happen when I’m extremely pressed for think-time, due to (ofc) being interrupted umpteen times. At which point, priority is usually on implementation speed over perfection. Today, I was lucky enough to be able to work largely uninterrupted: a rare pleasure.

For me, programming is a very relaxing effort. It’s one where I can absorb myself in the art and craft, designing programs bit by bit, constantly improving them with each iteration, until my aims have been achieved, or I’ve passed out. Despite being a very exhausting task to keep working at, I rarely find it to be an intellectually arduous one, so much as a test of endurance: staying in the zone for long stretches and dealing with the effort required for the, eh, shall we say more paltry and menial aspects of coding a decent program. There are some parts of programming that really do tax the brain, the fun stuff to work on solving ;). On the other side of the coin, of course, a lot of things are more straightforward to sort out. Many aspects of programming overlap both the engineering and the everyday groan of getting it done. Both are needed in any non-trivial application that you’ll be seeing more than a couple hours of use out of.

That being said, being able to look at things from several vastly different directions does help a lot. Because of the amount of work needed, it seriously helps if you can stay in the zone for a good 6-8 hours or more per sitting; but becoming tunnel-visioned on the creative and problem solving parts is generally a bad thing. One must slip gently through the mind of no mind, and know how to leave your box behind. There can be an insane amount of stuff to deal with in some programs, and I find that the level of such increases both with the complexity of the problem and the pitfalls of the language utilised. For example, C, lisp, perl, and python can each express many problems quite well; yet each excels at expressing certain ideas more naturally, or with less effort, than the others. I actually was missing lisp for a good hour today lol.

Days where I can just sit and focus on getting stuff done (in peace) are terribly rare here. When I get into work mode, just get out of my way, or I’ll be quickly annoyed. I don’t like it when people waste my stack space.

In one of the rare moments that I actually stop to read the techy side of my RSS feeds, I noticed that WebKit2 has been announced. The only thing I can’t help but wonder is: what the flub does a layout engine have to do with processes? Not a damn thing! Personally, I would appreciate a separate API/library for that separation of interests: in particular, one not tied directly to WebKit lol. Ok, so maybe I’m crazy.

Whether you are a user or a developer o/, neither XEmbed nor the various (oft’ fugly) incarnations of Microsoft COM really make anyone’s life easier. Under X based systems however, it is possible to mate separate processes running WebKit into a central controller without too much heartache; there are already some bare bones browsers out there worth looking into, and patching when they don’t measure up. None however have become commonplace, and even among PCs, there are really only a handful of common web browsers out of the dozens of products to choose from.

Some things are always going to be closely bound; first takes on an idea, even more so an idea geared toward getting the current project done, tend to be that way. Personally, I will just be happy when there are fewer web browsers out there that suck… and keep on sucking.

Like clockwork it seems, I’m awake, and it was just a few minutes before 0700 local time. I had tried, and failed, to make a habit of waking up early last year; but recent stuff has cemented it into my internal ordering. It’s like I just keep waking up until my eyes flutter open, wide awake.

I was dreaming that I was at church, helping the pastor look around, until we finally found some important clicker that had gotten lost lol.

RvS -= 1; SWAT4 += 1

I spent part of the day playing around with Raven Shield and SWAT 4: TSS. Although, to the best of my knowledge, Unreal Engine 2 did have support for joysticks, both these games shipped with that support half-assedly disabled 8=). In short, the games basically ignore all joystick input.

Never being one easily daunted, three obvious solutions occurred to me: A/ configure the games for keyboard only operation and the joystick to emulate keyboard input; B/ use AHK; and C/ write a small toy to emulate a mouse by way of joystick input. I have already done A, plan to test out B tomorrow (eh, today) if need be, and perhaps will play with C at some later date just for fun.

Under the Unreal Engine, or at least UE2, movement is a fairly simple thing. Basically you apply a positive or negative “Speed” factor to a given axis, resulting in some kind of movement: such as translating the player’s pawn(?) or moving the cursor. It’s kind of simple: +/- base X and Y axes are more or less your walk, whereas the aBaseX and aBaseY axes correspond to the mouse. For SWAT, the task is basically as simple as binding a group of keys to apply +/- Speed to those axes. The bigger the speed, the more reaction you get per key press. In Raven Shield however, despite several methods tried, only positive and negative X (left/right) movement was fully working. Regardless of what I changed, only upward Y movement was possible in RvS o/. After 6 years of it, I am often the first to call Raven Shield a pile of crap. Tuning my retired joystick to trigger those keys is fairly simple, although the profiler sadly lacks any way to map the JS to mouse axes.

While it is possible to configure SWAT 4 for keyboard only operation, and thus JS based aiming, it creates somewhat of a problem: it’s virtually impossible both to turn/maneuver around obstacles and to aim and fire at targets. The reason for this is somewhat Unreal’s fault, that and the fact that “keyboard acceleration” is not quite, eh, the same as mouse acceleration. In testing with my stick, I found values of +/- 3.75 to 4 tended to work well for aiming, whereas +/- 5 to 8 work better for turning. Since a joystick should garner a form of movement more akin to mouse acceleration, rather than a keyboard’s uniformity, it causes a conflict of interest. Mouse acceleration works on the idea of increasing the speed of mouse movement in proportion to the distance you move it, i.e. it gets faster the further you move it; whereas a key press always moves things at a steady rate. Perhaps a good if incomplete explanation, for anyone who’s played a PlayStation with an analog controller: mouse = stick, keyboard = d-pad; thus mapping the JS to the keyboard = d-pad != analog stick. Obviously, to play an FPS with a joystick, you don’t want it to behave like a sluggish ‘d-pad’. One way to solve this would be to dynamically modify the Speed= value used by the key, incrementing/decrementing it by some stepping per use; which, while not as elegant as it might sound to some, is also impossible. UE2’s console and command system could only handle the ++ and -- operations by writing out the increment steppings using the pipe (|) operator, and following it up with an OnRelease to reset it back.

A better solution, obviously, is just playing a ficken game with joystick support ^/.

As for the aforementioned method B, wouldn’t you know it, the capability is already there. It would be the best solution, and AutoHotkey is a fine bit of software; one I’ve always wanted to find a good use for in games. Depending on how well it can be made to work at converting a JS into a mouse compatible HID, in particular with games, I might actually give up using the mouse for regular desktop usage. Thanks to my laptop and having encountered a fair bit of hardware in life’s travels, I have no special attachment to PC mice: only hatred for ones without tails. Then again, I don’t like wireless hardware for much, period.

The third method (C), well, is one that I would only consider worth the effort because of what I’d learn about Windows specifics along the way. I wouldn’t be surprised if Microsoft had it as a sample app somewhere, either. The libraries I rely on for input backends (e.g. in Stargella) have their own portable handling of joysticks as is, so I’ve no real reason to care lol.
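For the curious, method C would probably boil down to something like the rough sketch below: poll the stick through winmm and fake relative mouse motion with SendInput(). The dead zone and scaling constants here are placeholder guesses, not anything I’ve measured, and a real version would want a proper acceleration curve and button handling.

    /* jsmouse.c -- rough sketch of method C: poll the joystick via winmm and
     * turn its deflection into relative mouse motion with SendInput().
     * The dead zone and scale constants are placeholder guesses, not tuned.
     * Build (MinGW): gcc jsmouse.c -o jsmouse.exe -lwinmm
     */
    #include <windows.h>
    #include <mmsystem.h>
    #include <stdlib.h>

    int main(void)
    {
        const LONG centre = 32767;  /* winmm axes normally run 0..65535 */
        const LONG dead   = 4000;   /* ignore small wobble around centre */
        const LONG scale  = 3000;   /* deflection units per pixel of motion */

        for (;;) {
            JOYINFOEX ji;
            ji.dwSize  = sizeof ji;
            ji.dwFlags = JOY_RETURNX | JOY_RETURNY;

            if (joyGetPosEx(JOYSTICKID1, &ji) == JOYERR_NOERROR) {
                LONG dx = (LONG)ji.dwXpos - centre;
                LONG dy = (LONG)ji.dwYpos - centre;

                if (labs(dx) < dead) dx = 0;
                if (labs(dy) < dead) dy = 0;

                if (dx || dy) {
                    INPUT in;
                    ZeroMemory(&in, sizeof in);
                    in.type       = INPUT_MOUSE;
                    in.mi.dx      = dx / scale;  /* crude linear response */
                    in.mi.dy      = dy / scale;
                    in.mi.dwFlags = MOUSEEVENTF_MOVE;
                    SendInput(1, &in, sizeof(INPUT));
                }
            }
            Sleep(10);  /* roughly 100 polls per second */
        }
        return 0;
    }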

Tweaking my nose at the old API

In fooling around with the Windows API, I’ve just had an enjoyable moment of guffawing. As a quick test of the JS stuff in winmm, I hooked up MM_JOY1MOVE to MessageBox() and ran the program under the debugger. It resulted in an endless stream of MessageBox() calls, with the Windows task bar hanging and taking at least 25-30 seconds to recover, after the program had finally overflowed its stack, been examined, and been terminated manually.
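If you want to recreate the comedy, something roughly like the sketch below will do it; the window class name, caption, and the 50 ms capture period are arbitrary choices, not anything special. Since MessageBox() runs its own message loop for the thread, each freshly posted MM_JOY1MOVE presumably gets dispatched back into the window procedure before the previous box is dismissed, hence the eventual stack overflow.

    /* joyflood.c -- capture joystick 1 with joySetCapture() and pop a
     * MessageBox() for every MM_JOY1MOVE that arrives. Class name, caption,
     * and the 50 ms period are arbitrary for the sketch.
     * Build (MinGW): gcc joyflood.c -o joyflood.exe -lwinmm
     */
    #include <windows.h>
    #include <mmsystem.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case MM_JOY1MOVE:
            /* LOWORD(lp)/HIWORD(lp) hold the new X/Y positions; we ignore
             * them and just pop a box, which is where the fun begins. */
            MessageBox(hwnd, TEXT("the bad year blimp has landed"),
                       TEXT("MM_JOY1MOVE"), MB_OK);
            return 0;
        case WM_DESTROY:
            joyReleaseCapture(JOYSTICKID1);
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
    {
        WNDCLASS wc = {0};
        MSG msg;
        HWND hwnd;

        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = TEXT("JoyFlood");
        RegisterClass(&wc);

        hwnd = CreateWindow(TEXT("JoyFlood"), TEXT("joystick test"),
                            WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                            240, 120, NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, show);

        /* have winmm post MM_JOY1MOVE and friends to our window every 50 ms */
        joySetCapture(hwnd, JOYSTICKID1, 50, FALSE);

        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return (int)msg.wParam;
    }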

I almost died laughing lol.

A little fun with RSS

In a bit of experimentation, I’ve been thinking about ways to improve the way a certain popular web platform plays with the services I utilise. So, today I began playing with two new toys: FeedBurner and Yahoo! Pipes.

FeedBurner offers a bit better control over one’s RSS feeds than most web services I’ve encountered; in particular, much better than both Live Journal and Blogger. For what it’s worth, I’ve converted my Blogger feed over to the burner, allowing me to trivially add a few things to the feeds without disturbing any existing subscribers. The main difference is that now I can tweak things for the stuff I feed my journal’s RSS into, hehehe.

One downside of FeedBurner is that its ability to merge feeds with the “Link Splicer” feature is quite limited: in particular, it’s of little value beyond a limited set of common services. Enter Yahoo! Pipes: using it, I was able to (trivially) munge together several of my service feeds into a singular one, e.g. combining several photo album streams into one pipe. I’ve created several feeds that I doubt anyone will be interested in, but they allow me to route selected information sources into RSS-aware entities.

Although Really Simple Syndication has been around for more or less a solid decade, few people understand its true value. Properly managed, web feeds, whether built on RSS or not, can achieve part of that interoperability certain keyword jugglers puddle about with XML, and it’s been here for years. If you want to cram streams of data somewhere, odds are you should be looking to see if some type of web feed will fit the bill, rather than throwing together yet another obscure XML format to juggle. Bonus points include that decent libraries are already available, which can save some time and make for easier-to-read web app code later ^_^.

Recharging time

As today marks the first of six days off work, my plan is to spend it on rest and relaxation, assuming no one has any more nukes to juggle 8=). If anything explodes, people can push a fix-it task out to my RTM, but I’m taking it easy for a while lol.

The most stressful thing I’m doing this week is moving over more old entries from Live Journal!