Google recently blogged a video showing off new features in gdocs, and it looks like much of what has been missing is finally coming to Google Docs :-D. The features don't seem to have rolled out yet, at least not at my level of access, but it's looking good on the tiny screen.



I really have no love remaining for local office suites: they tend to be big, slow, expensive, or time consuming to compile. Web applications can be made to work just as well, and with considerably less groaning involved. So it's fair to say that I've really come to like web-based solutions like gdocs, even if I'm not a big fan of all the hype in recent years about migrating to the 'cloud'. Why should I put up with the bother of Microsoft, Gnome, Open, or K office, when most of the crap I care about can be done on Google for almost zero maintenance?


When I have a document to get sorted out, I have the habit of selecting whatever method works best for the task at hand. Most often that is something I can hack at in vim, and then generate a suitable output for sharing. I don't send Word files; I normally send PDF files and sources. That is a much better way of doing it when you want someone to view the file, not edit it and send it back. When I expect someone else to be editing the document, I tend to employ gdocs over an office suite, because of Google's sharing and collaborative features. Playing pass-the-pumpkin with a document is a moron's errand compared to gdocs, and I reckon for some folks the publishing parts are handy. My interest is more in the collab' features, because those were the big incentive that brought me to gdocs in the first place. Now that those features are growing again, you can bet I'll be putting them to use.


For me, Google Documents is just a means to an end: get the document done with minimal fuss. When I'm stuck dealing with people who wouldn't know DocBook from troff, let alone what the heck version control means… it makes life a lot easier without complicating MINE! The level of control over HTML/CSS offered with the word processor even makes it easier for me to integrate gdocs into my workflow when more power is required; I've yammered about that before. If anyone has ever had to feed Word files through pre- or post-processing phases, uh, you will enjoy living with Google's method lol. Since I rarely need to do rocket science with spreadsheets, I've never had much to complain about with their spreadsheet app. Recently I've used the gdocs word processor and spreadsheet on numerous projects, including dependency tracking for our EPI Core Services package, and they work darn well for what we need to do.


My only big gripe over the years has been the lack of Google Talk integration with gdocs, compared to GMail. In our spare-time efforts with EPI, GTalk/XMPP actually became our norm for development meetings, after efforts to deal with AOL's and Microsoft's solutions only added extra interoperability problems. At least I can say 'gdocs' and people will usually know what I mean, if they know about Google Documents in the first place lol.


The video Google posted demonstrates a much better way of dealing with the multiple-editors problem than what has been classic with gdocs. I can still remember a time when it was virtually impossible for two people to edit the same file simultaneously, haha. I am intently interested in seeing these changes rolled out, and definitely have to give the drawing tool a go. Normally I use Dia for any diagramming needs, and the GIMP for heavy lifting; if Google's drawing app can get the job done, it really would save me the effort; we'll have to play and see, hehehe.


Now that gdocs can handle documents, presentations, spreadsheets, forms, and drawings, I reckon, fwiw, it is almost a fully functional office suite. I can't say that I've used the presentations app, since I'm naturally against death by PowerPoint, but it would be my first stop if I needed to put something like that together.



Spending huge amounts of time draped over MS Word 2k2 taught me the value of using decent tools; whereas learning how to use better tools taught me the value of leveraging software in general ^_^. Most of the time, I employ LaTeX or DocBook for large projects (the kind you don't want to see the inkjet taxes on), but I will occasionally use gdocs for simpler documents of my own. When it comes to word processors, Google Docs is no worse than the rest, and in my experience has improved more over the past few years than Microsoft's and Sun's/Oracle's solutions. The ease of sharing and editing the doc with others has made it one of the few officewares that I actually enjoy, except for the lack of vi and emacs keystrokes of course o/.


I've no real brand loyalty to Google, even though their software makes up a large part of my routine. Sometimes it's simply the best glove available :-/. For as often as this software has helped me out, I'm happy to use it when it fits.

In one of the rare moments that I actually stop to read the techy side of my RSS feeds, I noticed that WebKit2 has been announced. The only thing I can't help but wonder is: what the flub does a layout engine have to do with processes? Not a damn thing! Personally, I would appreciate a separate API/library for such a separation of interests: in particular, one not tied directly to WebKit lol. Ok, so maybe I'm crazy.

Whether you are a user or a developer o/, neither XEmbed nor the various (oft' fugly) incarnations of Microsoft COM really make anyone's life easier. Under X-based systems, however, it is possible to mate separate processes running WebKit into a central controller without too much heartache; there are already some bare-bones browsers out there worth looking into, and patching when they don't measure up. None, however, have become commonplace, and even among PCs, there are really only a handful of common web browsers out of dozens of products to choose from.
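Just to show the shape of that X approach: below is a minimal, untested sketch of plugging a WebKit view from one process into a controller in another via XEmbed. It assumes 2010-era PyGTK and pywebkitgtk, and the file names are made up for illustration; treat it as a doodle, not a browser.

```python
# controller.py - hosts an XEmbed socket and spawns a separate renderer process
import subprocess
import gtk

win = gtk.Window()
win.set_default_size(800, 600)
win.connect("destroy", gtk.main_quit)
sock = gtk.Socket()  # the XEmbed "outlet" the child process will plug into
win.add(sock)
win.show_all()
subprocess.Popen(["python", "renderer.py", str(sock.get_id())])
gtk.main()
```

```python
# renderer.py - runs WebKit in its own process, plugged into the controller
import sys
import gtk
import webkit

plug = gtk.Plug(long(sys.argv[1]))  # attach to the controller's socket ID
view = webkit.WebView()
plug.add(view)
view.open("http://example.org/")
plug.show_all()
gtk.main()
```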

Some things are always going to be closely bound; first takes on an idea tend that way, and an idea geared toward getting the current project done, even more so. Personally, I will just be happy when there are fewer web browsers out there that suck… and keep on sucking.

RvS -= 1; SWAT4 += 1

I spent part of the day playing around with Raven Shield and SWAT 4: TSS. Although, to the best of my knowledge, Unreal Engine 2 did have support for joysticks, both of these games shipped with that support half-assedly disabled 8=). In short, the games basically ignore all joystick input.

Never being one easily daunted, three obvious solutions occurred to me: A/ configure the games for keyboard-only operation and the joystick to emulate keyboard input; B/ use AHK; and C/ write a small toy to emulate a mouse by way of joystick input. I have already done A, and plan to test out B tomorrow (eh, today); if need be, perhaps I'll play with C at some later date, just for fun.

Under the Unreal Engine, or at least UE2, movement is a fairly simple thing. Basically, you apply a positive or negative "Speed" factor to a given axis, resulting in some kind of movement, such as translating the player's pawn(?) or moving the cursor. It's kind of simple: +/- on the base X and Y axes are more or less your walk, whereas the aBaseX and aBaseY axes correspond to the mouse. For SWAT, the task is basically as simple as binding a group of keys to apply +/- Speed to those axes. The bigger the Speed, the more reaction you get per key press. In Raven Shield, however, despite several methods tried, only positive and negative X (left/right) movement was fully working. Regardless of the changes, only upward Y movement was possible in RvS o/. After 6 years of it, I am often the first to call Raven Shield a pile of crap. Tuning my retired joystick to trigger those keys is fairly simple, although the profiler sadly lacks mapping the joystick to mouse axes.
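For illustration, the binds look something like this in a UE2-style User.ini; the key names and Speed values here are just a hypothetical starting point, using the axis names described above:

```ini
[Engine.Input]
; hypothetical aiming binds: apply +/- Speed to the mouse axes while held
NumPad8=Axis aBaseY Speed=+4.0
NumPad2=Axis aBaseY Speed=-4.0
NumPad4=Axis aBaseX Speed=-4.0
NumPad6=Axis aBaseX Speed=+4.0
```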

While it is possible to configure SWAT 4 for keyboard-only operation, and thus JS-based aiming, it creates somewhat of a problem: it's virtually impossible to be able both to turn/maneuver around obstacles and to aim and fire at targets. The reason for this is somewhat Unreal's fault, that and the fact that "keyboard acceleration" is not quite, eh, the same as mouse acceleration. In testing with my stick, I found values of +/- 3.75 to 4 tended to work well for aiming, whereas +/- 5 to 8 worked better for turning. Since a joystick should garner a form of movement more akin to mouse acceleration than to a keyboard's uniformity, it causes a conflict of interest. Mouse acceleration works on the idea of increasing the speed of mouse movement in proportion to the distance you move it, i.e. it gets faster the further you move it; whereas a bound key always moves at a steady rate. Perhaps a good if incomplete explanation for anyone who has played a PlayStation with an analog controller: mouse = stick, keyboard = d-pad; thus mapping the JS to the keyboard = d-pad != analog stick. Obviously, to play an FPS with a joystick, you don't want it to behave like a sluggish 'd-pad'. One way to solve this would be to dynamically modify the Speed= value used by the key, incrementing/decrementing it by some stepping per use; while not as elegant as it might sound to some, that is also impossible: UE2's console and command system could only handle the ++ and -- operations by writing out the increment steppings using the pipe (|) operator, and following it up with an OnRelease operator to reset it back.
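To make that contrast concrete, here is a tiny illustrative sketch (my own toy math, not Unreal's) of the difference between a bound key's fixed Speed and mouse-style acceleration that scales with stick deflection:

```python
# Illustrative only: fixed keyboard Speed vs. deflection-scaled acceleration,
# for a stick deflection x in the range -1.0 .. 1.0.
def keyboard_step(x, speed=4.0):
    # a bound key applies its constant Speed while held, regardless of deflection
    if x == 0:
        return 0.0
    return speed if x > 0 else -speed

def mouse_like(x, base=1.0, gain=6.0):
    # velocity grows with deflection: small nudges for aiming, big sweeps for turning
    return x * (base + gain * abs(x))

for x in (0.1, 0.5, 1.0):
    print("deflection %.1f: key %.1f, accel %.2f" % (x, keyboard_step(x), mouse_like(x)))
```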

A better solution, obviously, is just playing a ficken game with joystick support ^/.

As for the aforementioned method B, wouldn't you know it, the capability is already there. It would be the best solution, and AutoHotkey is a fine bit of software; one I've always wanted to find a good use for in games. Depending on how well it can be made to work at converting a JS into a mouse-compatible HID, in particular with games in general, I might actually give up using the mouse for regular desktop usage. Thanks to my laptop, and having encountered a fair bit of hardware in life's travels, I have no special attachment to PC mice: only hatred for ones without tails. Then again, I don't like wireless hardware for much, period.

The third method (C), well, is one that I would only consider worth the effort because of the learning about Windows specifics it would involve. I wouldn't be surprised if Microsoft had it as a sample app somewhere, either. The libraries I rely on for input backends (e.g. in Stargella) have their own portable handling of joysticks as is, so I've no real reason to care lol.
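If I ever do get bored enough, the toy would probably look something like this minimal, untested sketch; it assumes pygame for the joystick polling and the Win32 mouse_event call for the Windows-specific part, with the gain and deadzone values pulled out of thin air:

```python
# Toy sketch: turn joystick deflection into relative mouse movement on Windows.
import ctypes
import pygame

MOUSEEVENTF_MOVE = 0x0001  # relative-move flag for the Win32 mouse_event call
GAIN = 15                  # arbitrary sensitivity; tune to taste
DEADZONE = 0.15            # ignore tiny deflections around center

def main():
    pygame.init()
    pygame.joystick.init()
    stick = pygame.joystick.Joystick(0)  # first attached joystick
    stick.init()
    clock = pygame.time.Clock()
    while True:
        pygame.event.pump()  # let pygame refresh joystick state
        x, y = stick.get_axis(0), stick.get_axis(1)
        dx = int(x * GAIN) if abs(x) > DEADZONE else 0
        dy = int(y * GAIN) if abs(y) > DEADZONE else 0
        if dx or dy:
            ctypes.windll.user32.mouse_event(MOUSEEVENTF_MOVE, dx, dy, 0, 0)
        clock.tick(60)  # poll roughly 60 times a second

if __name__ == "__main__":
    main()
```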

Interesting…

I've noticed that if I place my screen window into copy mode while the compiler runs, Firefox doesn't become so massively unresponsive: slower, but nowhere near as slow. This makes me wonder a bit.

Either way, I can run less bloated web browsers on this meager laptop without any hiccups from a running compiler 8=).

Oh Brother, Where Art Thou Noggin?

With the questions my brother has been asking me about his computer, and his "plans" for rebuilding her into a higher-end machine, I'm starting to wonder how he even got one built in the first place… what was it, in one ear and out the other? lol.

And I am the moron who was barred from building his own computer!?

As is my custom when encountering a blog, I'll usually check the current entries and grep for anything of interest after perusing what I came for; in this case, it was a blog post that floated into mention on #vim during a short discussion of git and hg. When I hit 'home', I noticed a pair of entries (1, 2) on the iPad.

It reminds me of why I stuck my nose up at the iPad about 5 seconds into the news report: because if it's not *at least* as easy to screw with as OS X, it's just one more overpriced pile of garbage as far as I'm concerned… Then again, I'm kind of an oddball, in that I expect mobiles and tablets to be at least as good fun as early microcomputers were 15 years ago ^_^. I wonder how many decades I'll have to wait for that in the American marketplace :-/. It's actually possible to get outdated PDAs that are more fun, but they unfortunately require somewhat of an import cost, and learning some Japanese lol.

Updating Qt, hehe.

Tonight I updated SAS's TeamSpeak 3 server, and discovered that my TS3 client was too darn out of date to work with it, haha. After updating things, I also noticed in the nifty about dialog they shipped that the version of Qt used denoted the GNU LGPL v2.1.

It has been a good while since I updated Qt on my Windows system; the last time was about one year ago. So I dropped by Qt's website to download an updated SDK, and also found that they had MinGW and Visual C++ 2008 library packages available. The last time I really focused on Qt/C++ development, Microsoft Visual C++ was just becoming supported by the Open Source Edition (OSE), having long been supported by the commercial editions of Qt.

In perusing the website, I noticed that GPLv3 is now also a supported license for Qt. They really have gone through a few licenses over the years; I still remember when the OSE was a choice between GPLv2 and their own Q Public License agreement.

While I really hate doing cross-platform development in C++, Qt is both the least painful widget toolkit I've ever seen, and it really makes the process *a lot* less painful. Well, as painless as dealing with template implementations between the GNU and MS C++ compilers can be, anyway.

It is noteworthy that the SDK only includes the library files necessary to link using MinGW, the port of the GNU compiler to Windows. So if you plan on using Microsoft's compiler, you will want the vs2008 package, or the source code if you need to shoehorn it into an older version.

One thing I like about all the *decent* operating systems shipping a system compiler on their install disk: it usually means pre-compiled packages will be in sync with your compiler. Microsoft Visual C++ is not quite so lucky; since it is a separate product, most people shipping binary packages of libs/headers usually support 7.1 or 8.0 instead of 9.0. Oh well, maybe when VC10 is released :-/.

Yippee-Ki-Yay: Terminus font now avail. on Windows!

As some know, after spending a night of debugging only to learn that I had typed structobj,member instead of structobj.member after a 6-8 hour coding run, I went in search of a new font. The font I found was Terminus, and ever since, I have _absolutely fucking loved it_; in fact, I can't even look at my terminal in another font without missing it.

When filing a bug report with the libmng folks, I left a comment in the bug entry about using a font where O != 0; then I went in search of my dear Terminus, found this, and just had to install them :-D.

Terminus is my favourite font, but my only complaint has been needing X to actually enjoy it… now that's solved!

Thoughts drift to backup technology

Something that has been on my todo list for a while is setting up cron jobs to run dump on my file server, as an "extra" safeguard on top of the fact that my data is already mirrored across three different computers, hehe. (I also do periodic backups to a separate cold storage partition, and priority files to CD-R every year or two.)

My main concern there, of course, is how to do it without compromising too much disk space; after all, we're talking about a lot of crap lol. In writing my test script, I've also experimented with piping dump into lzma for compression, but at least with Vectra's scarce resources, that is a bit toooo much for the box to handle with the data sets involved. Then I started to think: gee, wouldn't it be cool to just keep a SQLite database that stores logs of changes (cron'd from a script), and then periodically run ZIP on the target, excluding unchanged files since the last backup? Effectively creating a smart form of dump that functions at a different file system layer (e.g. like tar or cpio).
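Sketched very roughly and entirely untested (every path below is made up for illustration), the idea might look like: a SQLite table of (path, mtime) from the last run, and a ZIP archive holding only the files that changed since then.

```python
# Rough sketch of a "smart dump": log file mtimes in SQLite, ZIP only changes.
import os
import sqlite3
import time
import zipfile

TARGET = "/srv/share"                        # hypothetical directory to back up
DB = "/var/backups/filelog.db"               # hypothetical change-log database
ARCHIVE = "/var/backups/inc-%d.zip" % int(time.time())

def backup():
    db = sqlite3.connect(DB)
    db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, mtime REAL)")
    with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, names in os.walk(TARGET):
            for name in names:
                path = os.path.join(root, name)
                mtime = os.path.getmtime(path)
                row = db.execute("SELECT mtime FROM files WHERE path = ?",
                                 (path,)).fetchone()
                if row is None or row[0] < mtime:  # new or modified since last run
                    zf.write(path)
                    db.execute("INSERT OR REPLACE INTO files VALUES (?, ?)",
                               (path, mtime))
    db.commit()
    db.close()

if __name__ == "__main__":
    backup()
```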

Then I started to think: well, the best existing solution that I've ever bumped into in my travels is a program called Bacula, but I've never had cause to explore it. With a little poking around, it seems that Bacula is very much the kind of system that I would like to have.

Which poses three questions:

  1. How well does it work with OpenBSD?
  2. How well does it handle disk space/compression tasks?
  3. When will I have time to read all the excellent documentation?

So, sadly, it will probably be some time after the new year has come and gone before I'll have time to return to this loop; my RTM has been updated accordingly. On the upside, if three hard disks in separate locations of the building, with very controlled data replication patterns, somehow fail before then… the entire building will likely have collapsed, so it would be the least of my worries lol.

A little $HOME, NPM, and EPI fun

Been cleaning up my home directory a bit, particularly ~/Projects and the dumping ground in ~/2besorted lol. I've managed to find a lot of old notes, and have even imported a couple of saved items into del.icio.us. While I've mostly dropped all my bookmarks, there are still a few files with old bookmarks in them. I merged them using the old `cat files | sort | uniq > newfile` method ;). Sometime I've gotta sift through them and see what's worth keeping. Since ma.gnolia's great lossage, I've been maintaining the habit of using Google and my brain's limited RAM as a substitute for bookmarking >_>.

I've also taken some time to sort out the changeover from Subversion to Git on NPM's SourceForge page. Last night, I started work on converting my updater.sh to Python… figured I'd dub it 'neo' and incorporate it as part of NPM. Hey, I've always wanted to remove portupgrade from the picture… lol. The neo script is basically meant to be comparable to portmaster or portupgrade, and whatever century time brings me back to Neo Ports Manager, it will likely serve as the backend. However, I must admit that, unlike NPM, this part is somewhat of a selfish agenda at heart: it will be able to do more than my updater.sh can without help.

Finally found the time to add a few pages to the wiki being set up for the Encapsulated Package Installer project. The stuff there is still mostly bare, because I've been focused elsewhere the past couple of days. Mostly the content added was centred around development tasks; I'm going to have to settle down and plan out what changes need to be made. All in all, things have been moving forward. When it comes to my personal projects, EPI has the lead seat, and the rest are enqueued, in terms of programming.

That being said, of course I have my spells of working on other things when shorter periods of time are free: the odds and ends, hehe.