XSLT, where have you been all my blankin’ life!

I spent a couple of hours playing around with a few style sheets, after inhaling all the XSL/XSLT and XPath related data I could get my mitts on.

I wrote one for converting DocBook XML into a portable subset of Bulletin Board Code, and one for HTML-aware blogs. The textproc/docbook-xsl port offers html/xhtml outputs, but it is better suited to generating stand-alone web pages; mine is geared toward posting to my Live Journal ;-).

For quite a while, I’ve generally skipped dealing with eXtensible Stylesheet Language (XSL) and her friends, but now I’m happy as a clam! XPath expressions provide a relatively simple way of addressing XML elements and attributes, kind of like basic regular expressions and CSS selectors rolled into a hierarchical package. XSLT processing has its ups and downs, but generating output for creatures like web browsers, where formatting is separate from content, is quite trivial.
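The whole workflow fits in one command, too. A minimal sketch, assuming xsltproc from textproc/libxslt; the stylesheet and file names are just examples:

[code]
# Convert a DocBook article into BBCode for pasting into a journal entry.
# docbook2bbcode.xsl stands in for my stylesheet; file names are made up.
xsltproc -o howto.bbcode docbook2bbcode.xsl howto.xml
[/code]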

Most non-trivial HOWTOs, guides, and reviews that I post are actually taken from files kept in my ~/Documents/ folder. I’ll likely be converting them to DocBook and integrating them into their own private git repositories, mauhauha!

Codes, Designs, and EPI

Today has been a fairly productive day, despite a heck of a lot of interruptions; to the point, in fact, that at least two hours of work time were lost to them… Thanks, ma!

Most of my mental energy was devoted to refining the interface between epi-add and $EPI_WIZARD, and figuring out how best to document it. My original vision involved bidirectional communication between finite state machines running in separate processes (the installer and the wizard). Amid the day’s 21,000 interruptions, I’ve managed to work the problem over and come up with a more interesting solution: one which vastly simplifies the wizard programming interface and grants more freedom to anyone who wants to write an $EPI_WIZARD, which should be easy as pie.

By contrast, most of my code time was spent working on epi-unpack and prototyping ideas for the previous problem. Other than a few changes that I would like to make, epi-unpack is basically done; I’ll probably work on epi-verify next, while the others are reviewing the code for epi-unpack. One thing that distinguishes our Encapsulated Package Installer (EPI) system from PC-BSD’s PushButton Installers (PBI; formerly PcBsdInstallers) is that PBI is a static monolith from the undocumented garage; ours is knit atop a framework of UNIX programs, with standards and manuals to be shipped along with them ;).

I cannot lie, UNIX has affected my philosophies on software design—for the better.

Generally, I don’t discuss business or classified projects on my Live Journal as a matter of ethics, but since EPI is now public knowledge, I’m free to blog about its development. The same can’t be said of all things SAS or work related lol. Most likely more things will filter through about EPI, so I’ve created a `tag` for it. After over 3 years and 1,500+ entries, I have never really gotten into Live Journal’s tagging feature, but I have been contemplating it for the last few weeks.

The only way I can ever find my old entries is through Google or sequential search, neither of which is reliable; so utilizing memories and tags would be a darn good idea by now. Categorizing my thoughts, as always, remains a problem :=(+).

EPI, the facts.

Since it has been brought up recently, I’ve decided to air out the facts about the “Top secret community project” [sic] known as EPI. It is also my request — and since I am an Admin here, one that I will enforce — that any comments about this go into a separate thread. The first person to start one has my blessing to collect comments.

[color=blue]This post will be locked along with the thread; everyone shall respect this. A copy will also be retained elsewhere.[/color]

[i][u][b]
Project Status?
[/b][/u][/i]

Stalled (i.e. postponed until further notice), because “real life” has taken priority for the developer.

Four people were involved in the project at its height, and provisions were made to approach two or three others at a later date. In answer to some people’s questions: yes, Graedus and I were involved; the “one guy” tasked with all the coding was none other than myself. I enjoyed it greatly while off-work hours permitted.

[i][u][b]
EPI, what the heck is that?
[/b][/u][/i]

EPI is short for “Encapsulated Package Installer”. It is a method for integrating existing FreeBSD software management systems with an easier-to-use means of distributing third-party packages (like PBI), and the ability to integrate with any desktop environment or user interface system.

[i][u][b]
EPI Project Goals?
[/b][/u][/i]

[list=1]
[*]Remove the need for the user to deal with the “Package, Port, or PBI” question.
[*]Make creating “EPI” files a snap, with a minimum of fuss for the maintainer, and make the build trivial to automate (but not like PBI’s build system)
[*]Support console and desktop-environment-agnostic installation, without inconveniencing maintainers.
[*]Create something that could be owned, managed, and operated by the community as a whole; not the PC-BSD developers, who proved incompetent and incapable with their mismanagement of PBI, at least in our (four sets of) eyes.[/list]

It was intended that once the system was developed, it would be further refined by the community and someday replace the PBI system, becoming the [i]de facto[/i] standard way of managing software on PC-BSD. Likewise, it was also intended that once the EPI system matured, it would become the means by which PC-BSD itself would manage system software – instead of mucking with people’s ports.

While EPI is not my concept of what kind of package management system PC-BSD needs, it is our concept of what PBI should have been in the first place, and what [i]PBIs[/i] could have become if properly managed by PC-BSD’s developers.

[i][u][b]
How does it (EPI) work?
[/b][/u][/i]

First, a FreeBSD port is created for the given program; this has already been done for most software worth running on FreeBSD, and should be done for anything else. There are approaching 21,000 programs in the ports tree, so much of that work is already done for us all.

Second, a maintainer writes out a description of the particulars. This basically amounts to stating:

[list]
[*]What port(s) does this EPI provide?
[*]What EPIs does this EPI depend on?
[*]Who are you?
[*]Special Needs Hook[/list]

The maintainer states what ports are to be provided, for example “www/firefox35”; the build system then automates the process from there without maintainer intervention. Firefox would be fetched, built, and stripped down to the minimal required dependencies. This would all be done inside a private jail on a build server, where there is nothing else to interfere with the process.
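To make that concrete, here is a purely hypothetical sketch of such a description, covering the four items above (the real metadata format was never published, so every name here is invented):

[code]
# Hypothetical EPI description; format invented for illustration only.
PROVIDES="www/firefox35"                        # what port(s) this EPI provides
DEPENDS="epi-core x-window-system gtk-runtime"  # what EPIs this EPI depends on
MAINTAINER="Joe Example <joe@example.org>"      # who you are
HOOK=""                                         # special needs hook (rarely used)
[/code]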

The “Firefox EPI” would depend on several other EPIs, in this case the following: EPI Core Services (the EPI system tools), X Windows System, and GTK+ Runtime. Because of this, issues relating to X and GTK dependencies are removed from the Firefox EPI, creating a *much* smaller download and a more manageable interface for people who just want to install Firefox and have it just freaking work without trouble! Because of this design decision, unlike with PBI, dependencies are automated and can be checked; PBI does not support that. EPI’s way of doing it results in easier maintenance, more effective use of disk space, and better integration with FreeBSD. Another great perk is that it makes writing things like Flash or MPlayer plugin EPIs much less painful than with PBIs.

For security reasons, the EPI file would be digitally signed during the creation process. Every maintainer has their own “Key” that is used for signing the EPIs they create. This allows a package to be traced back to its creator, who must manage their reputation within a “Web of Trust” distribution model.
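EPI’s exact signing mechanism was never pinned down, but the general idea can be sketched with OpenSSL (key and file names invented):

[code]
# The build server signs the EPI with the maintainer's private key...
openssl dgst -sha256 -sign maintainer.key -out firefox.epi.sig firefox.epi
# ...and the installer later verifies it against the trusted public key.
openssl dgst -sha256 -verify maintainer.pub -signature firefox.epi.sig firefox.epi
[/code]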

In case of “Special Needs”, there is a special Encapsulated Package Installation Language, or “EPIL” for short. EPIL is a simple declarative way of scripting the installation process, analogous to the various ‘PBI.*.sh’ Bourne Shell scripts used in PBI. [b]Unlike PBI, the EPIL system is designed for use by non-programmers and is rarely required[/b]. Everything that can be done for you will be done for you, in order to create the best possible encapsulation of work, minimize your hardship, and make life easier on both users and maintainers. By contrast, creating a PBI requires an understanding of UNIX shell scripting and programming, and employs a flawed Application Programming Interface, which usually results in poorly created PBIs and fewer maintainers. EPI solves this problem by applying sound software engineering practices, and makes creating an EPI a snap. [b]Under normal conditions the maintainer never has to write an EPIL script, and even when one is needed, it is less trouble than writing a forum post[/b]. The maintainer has no need to worry whether the installation is text, graphical, attended, or unattended; all standard needs are handled by magic; that is the massive opposite of traditional PBIs.

After creation, the EPI file makes its way to a sacred repository for evaluation; a way of downloading it is provided to the author. Trained people inspect the maintainer-serviceable parts, i.e. checking that there is no hidden “delete all files on your system” kind of bug. Both those trained folk and regular but trusted people then test the individual EPI to make sure it works as advertised. A simple checklist is used to record the results; reports shall be publicly posted and the maintainer notified.

If no show stoppers were found and the maintainer is in good standing with the community authority, their package is then hosted for public download in accordance with whatever community policy is deemed appropriate. The community website (think of the PBI Directory) would then host a download link and all necessary data, so that end users may download the created EPI.

If enough end users complain or have problems with EPIs created by a specific maintainer, that maintainer’s rights to use the community systems will be temporarily revoked (permanently if need be), and the maintainer’s “Key” will become untrusted by the EPI Community Authority – thus invalidating our trust in the safety of that maintainer’s EPIs for general consumption; individual end users retain the ability to ignore the community’s decision and install those EPIs anyway.

An end user then downloads the EPI file; it could be from the Community Authority’s website, from a friend, or even directly from a third party’s website (e.g. Adobe, KDE, Gnome, etc.)!

The end user chooses the installation method: graphical or textual.

To install via graphical mode, simply double click the .epi file on your desktop and it will begin a graphical installation wizard. The wizard run is user-serviceable, with a default based on your environment; i.e. Gnome & Xfce users get a GTK+ based wizard, KDE users get a Qt based wizard. Users and developers could create their own wizards.
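How the default might be picked is sketched below; everything except $EPI_WIZARD itself is invented for illustration:

[code]
#!/bin/sh
# Pick a default wizard when the user hasn't set $EPI_WIZARD.
# KDE sets KDE_FULL_SESSION=true; the wizard names are made up.
if [ -z "$EPI_WIZARD" ]; then
    if [ "$KDE_FULL_SESSION" = "true" ]; then
        EPI_WIZARD="epi-wizard-qt"
    else
        EPI_WIZARD="epi-wizard-gtk"
    fi
fi
exec "$EPI_WIZARD" "$@"
[/code]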

To install via textual mode, simply run the installation program in the shell:

[code]
sample# epi-install ./Mozilla_Firefox-3.5.3-i386.epi
[/code]

Both methods invoke the same program, but with different arguments.

The epi-install program updates its understanding of “Trusted Keys” published by the Community Authority, or any other source the user chooses to trust; the user can even skip this step.

Assuming all has gone well, epi-install then unpacks Firefox accordingly, verifying the maintainer’s signature and the package’s integrity. If found, the compiled EPIL script is run – the user can choose not to run the script. Normally this is a moot point, because there shouldn’t be any script needed. Of course, the EPI’s installation is recorded in a database.

What the user sees depends on how it was run. In text mode they get a console-friendly way of doing the installation. In graphical mode, they get a GUI install wizard like PBI. Environment variables and command line switches are provided to override behaviour – for example, choosing to run the Qt wizard under Gnome. All this is so easy because the EPI maintainer was never arsed with dealing with it; it was done automatically for them.
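For example (the switch name here is hypothetical, since none of this was ever finalized):

[code]
# Force the console installer (hypothetical switch):
epi-install -text ./Mozilla_Firefox-3.5.3-i386.epi

# Run the Qt wizard under Gnome by overriding the environment:
EPI_WIZARD=epi-wizard-qt epi-install ./Mozilla_Firefox-3.5.3-i386.epi
[/code]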

Firefox is now installed, and the end user can run it from its location in $EPI_ROOT. The default $EPI_ROOT would likely be /usr/epi if adopted by PC-BSD. When “installed as a third party product” on FreeBSD or PC-BSD, the default $EPI_ROOT would likely be /usr/local/epi.

Our way of doing things would give both shell users and desktop users a fairly painless way of accessing Firefox, without favoritism toward KDE or Gnome.

[i][u][b]
Ok, so how does this relate to PBI?
[/b][/u][/i]

PBIs are managed by the PC-BSD developers, and the people trusted with watching over the safety of end users are either corrupt, derelict, or incompetent. [i]EPI[/i] would instead be placed into community hands, so that no one person or entity has total control.

As formats, the two are very different. An EPI is a compressed archive containing program files and metadata; an external program located on the user’s machine handles the installation procedure. This is how many package management systems designed for UNIX work; Microsoft’s own Windows Installer is not too far off either. APT, DPKG, RPM, and FreeBSD Packages work this way as well. The PBI format, on the other hand, is a self-extracting executable with an embedded archive containing additional metadata, program files, and an embedded installation wizard. The PBI system is dependent upon the FreeBSD version, KDE version, and the presence of system programs — PBI is written in C++ but done like a shell script. Internally, PBI is both ugly and non-UNIX like. [i]EPI[/i] instead provides a more platform and version independent way of doing things.
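Since an EPI is just a compressed archive, peeking inside would be as mundane as the following; the exact container was never pinned down, so both the gzipped-tar assumption and the internal layout here are guesses:

[code]
# List the contents of an EPI -- container and layout are guesses:
tar -tzf Mozilla_Firefox-3.5.3-i386.epi
#   +MANIFEST      (metadata)
#   +SIGNATURE     (maintainer's signature)
#   files/...      (the program files)
[/code]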

The format of PBI files, how they work, what they do, and how they are managed by the system is generally undocumented. [i]EPI[/i] would provide a totally documented system, making it easy for developers, system administrators, end users, and businesses. Heck, you could even create your own EPI system that is totally compatible – you can’t do that with PBI unless you read a lot of bad code and cuddle up to a nasty API.

In order to make a PBI, you need to know quite a bit about shell scripting and do a lot of leg work that should be done for you automatically; the end result is that most PBI install scripts are bad, and even Kris Moore’s are shoddy. [url=http://sas-spidey01.livejournal.com/389068.html]I found an old PBI script that I wrote a while back[/url], that is done ‘properly’.

Because [i]EPI[/i] files are thoroughly checked at install time, the system tries to ensure that what you download is exactly what the maintainer created. By contrast, the PBI file you download is not guaranteed to be what the maintainer created; there is no safeguard against tampering with the files on the mirror – the user is on their own, without so much as a checksum of the actual .pbi file that was sent to the mirror!

[i]EPI[/i] has a simple and documented dependency model. If you don’t have the GTK+ Runtime EPI installed, the Firefox EPI should warn you. The EPI Core Services provides, as a contract, a set of ultra-common dependencies used by many programs. This reduces wasted disk space and allows EPI to work across differing versions more easily. Our decisions would have created a more robust system than PBI, while keeping dependency hell to the same minimal level. The way PBI does things creates more work for maintainers and causes more interoperability/integration problems that the PC-BSD developers simply don’t care to address. PBIs with hidden or undocumented dependencies are also not uncommon, because the only ‘standard’ for what a PBI can depend on is the release notes for PC-BSD X.Y.Z and the PBI “Guidelines” as they have become, which used to be rules but were often just thrown in the trash can.
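In that model, a dependency check could be as simple as this sketch; epi-list and the package name are hypothetical:

[code]
# Warn if the GTK+ Runtime EPI is missing before installing Firefox:
if ! epi-list | grep -q '^gtk-runtime'; then
    echo "Firefox EPI requires the GTK+ Runtime EPI; install it first." >&2
    exit 1
fi
[/code]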

[i][u][b]
OK, OK, enough already, but who the heck are you Terry?
[/b][/u][/i]

I am a person who has used computers almost since he learned how to walk. Someone who loves software engineering, and shares his grandfather’s work ethic: if it has your name on it, then it has to be GOOD.

Several years ago, I encountered UNIX and embraced it with an open heart.

My first date with PC-BSD was 1.0RC1, a release candidate shipped in 2005. I have since run various versions of PC-BSD on home servers, desktops, and laptops. During the 7.0Alpha testing cycle, before KDE4 had even entered the picture, I made the decision to transition my personal workstation to FreeBSD, and never looked back.

I joined this forum to see if I could help people and learn a thing or two from my superiors. Even after they left, I remained, eventually becoming a Moderator and later a Forum Administrator at the request of Kris Moore – likely because I shouted loudest of all about the spam problems. My activity on the forums over the past ~two years has rubber-banded with my work schedule and the rest of living.

For a time, I created PBIs of programs that interested me, such as Blackbox, which was the first alternative to KDE to be had in PBI form. After a while, flaws in the PBI design and the developers’ disregard for their own rules caused me to “give up” on creating and maintaining PBIs. I have seen everything there is to see with PBI, down even to the point of Charles breaking the PBI rules, PBI developers publishing their own PBIs without testing, Kris Moore changing the rules after breaking them for Win4BSD, and Kris changing them back over community outcry (and increasingly lousy PBIs lol). Throughout it all, the process of getting PBIs published has made me sicker than watching a corrupt government at work.

I’ve generally kept a safe distance from PC-BSD development; this is why I never became involved with the development team. Their actions over the years also do not give me any desire to “volunteer” my services.

When the idea that PBIs could be created automatically was brought up many moons ago, it was shot down, most strongly by none other than the (then) host of the PBI Directory website (Charles). I supported the idea, and it was generally held in contempt as something “impossible”, only later to become a part of PC-BSD. Exploring it ahead of everyone else was actually how I learned much about shell scripting.

My skill set includes C, C++, Java, Perl, PHP, and Python, all the way to parts of x86 assembly, Scheme, and Common Lisp. More task-specific tools such as SQL, HTML, LaTeX, sh, bash, batch/cmd, AWK, SED, and indeed, even [i]ed scripting[/i], are a part of my abilities.

I think no one will debate that I know a thing or two about that which I speak.

Bugs can be fun, as long as I didn’t write them

In order to make optimal use of tonight, I did a portsnap and fed a list of ports to be updated into my updater.sh script, then went to work playing with pthreads. A short while later, when things got to ImageMagick, I got the shock of my week—pkg_delete crashed during a make deinstall!

In looking through the code, I’ve found the reason why: there’s a package name passing through the code as a null by the time it finishes passing through pkg_do in src/usr.sbin/pkg_install/delete/perform.c. From the looks of things, it goes bonkers once Plist is set up via read_plist(). Hmm, well well, the Package (_pack) structure being passed to it has rather interesting contents at the time.

I just don’t have much more time to fiddle with this damn thing! I’ve got to be up for another groaning day of work tomorrow.

OK, found it; there’s some funkiness here. When it hands off to add_plist (basically every damn thing in the bloody +CONTENTS), it has NULL’d the doohickey that gets copied in later. read_plist() sucks in a file line by line, and it looks like if the trailing character is a space, read_plist() sets it to the null character (‘\0’).

That creates a bit of a problem, because the +CONTENTS file for ImageMagick has a line ‘@pkgdep ‘ (note the trailing space), which results in pissing off the whole damn thing… lol.
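If you want to check whether any of your own installed packages carry the same landmine, this will find @pkgdep directives with nothing after them (/var/db/pkg being the stock package database location):

[code]
# Find '@pkgdep' lines with no package name following them:
grep -n '^@pkgdep[[:space:]]*$' /var/db/pkg/*/+CONTENTS
[/code]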

So… how to handle this problemo? I see two things: 0.) pkg_delete should NEVER FUCKING CRASH!!!! No matter what is in a +CONTENTS file; at least, that is my opinion!!! And 1.) if ‘@pkgdep ‘ is not valid in a +CONTENTS file, whatever causes ImageMagick/ports to shove it there needs to be found and fixed. Digging into +CONTENTS file creation is a beast for another hour. Why the pkg_delete program chooses to pass a NULL through I have no bloody idea; maybe sifting through CVS logs might hold the answer to that mystery. The pkg_install suite has some rather ugly and quickly hacked together parts, which really makes me wish they had used shell or (like OpenBSD) imported Perl into the base for the job, rather than doing it in C. Don’t get me wrong, I like C, but please don’t write functions with over 1,000 Lines Of Code ;). Either way, when it comes to fixing the pkg_install issues… that’s something I’m not going to touch unless a developer suggests what they would like to see in a patch, because whomever is maintaining it should have a better overview of things than I do at the moment; I’m in no shape to do any more thinking tonight. Perhaps I’ll just file a bug report on it and see what comes of it.

Right now I just need to get some freaking sleep before work. Ugh, stairs here we come…….

Notes over lunch

Work yesterday was fairly uneventful, so it left me time to concentrate on programming; I doubt Friday will be so lucky… lol.

I worked out the basic architecture for pipin’s daemon, and have had the details on my mind for most of the time since ~1400Q yesterday. Currently on the hit list is a pair of prototypes: one built around libpurple, to deal with the bare-bones get-connected task; the other an experiment in moving the “Gatekeeper” component into its own thread. Once both prototypes are done, I’ll look at merging them and re-evaluate how they play together versus mucking around with GLib’s main loop. Conceptually, pipin-imd consists of three units: purple, dispatcher, and gatekeeper. The purple unit deals with interfacing libpurple into our own kit; the dispatcher with mapping between purple/pipin-im events and notifying all registered clients; and the gatekeeper with managing incoming data from pipin-im clients. I also have an idea of how the communications protocol between daemon and client might work, not to mention the fact that I want a simple net command shell that would allow communicating with the daemon via a shell or batch script lol.

The plan is for the daemon to be written in C and licensed under the GPL; since purple forces the use of GLib, there is no reason to use C++ for the sake of the Standard Template Library. Whatever the legality of using Python ctypes-based or SWIG-generated code to interface with libpurple is, I doubt it would be in the spirit of the damn blasted GPL, even if the license were less restrictive >_>. The client unit I plan to write in Python, using whichever widget toolkit proves most appropriate (Qt, GTK, Wx). The daemon is a pretty simple program; the client side gets all the fun, and a license more in line with my ideals of freedom than the uglicious GPL.

Originally I had planned to work on the threaded gatekeeper prototype in the time before dinner and afterwards, but never got around to it. It wasn’t a good night’s sleep, but at least I went to bed early for a change…. lol

The glory of Raven Shield / Unreal Engine 2….

OS: Windows XP 5.1 (Build: 2600)
CPU: GenuineIntel Unknown processor @ 3003 MHz with 2045MB RAM
Video: NVIDIA GeForce 8400 GS (8250)

Assertion failed: Actor->ColLocation == Actor->ColLocation [File:.UnOctree.cpp] [Line: 1703]

History: FCollisionOctree::RemoveActor <- ULevel::MoveActor <- NormalSubUzi37 <- UObject::ProcessEvent <- (R6TMilitant04 Alpines.R6TMilitant31, Function R6Engine.R6Pawn.SpawnRagDoll) <- AR6Pawn::UpdateMovementAnimation <- AActor::Tick <- TickAllActors <- ULevel::Tick <- (NetMode=3) <- TickLevel <- UGameEngine::Tick <- UpdateWorld <- MainLoop

Both Raven Shield and SWAT 4 display crash messages like these, so perhaps it is an Unreal Engine 2 thing rather than something specific to RvS/S4; but if it is, I would assume there is a way to turn it off. My feelings: this is good stuff to see if you are one of the game’s developers or testers—but it should _never_ be seen by retail customers! Not only is it Martian to regular people; since we can’t go edit and recompile the code ourselves, all it does is display information that we didn’t need to know. If I was going to do something like that for crash handling in a *release* product, I would probably make it say “Programmer fuck up, please sue the company for idiocy” 🙂

This reminds me of one time I was on the website of a large North American company, when for doing nothing at all but routine, their website gave me the most interesting error messages…. telling me enough to find out several server-side paths, their otherwise hidden implementation language, and enough data to clue in on what “stuffs” were being used to make the whole show go. I nearly died laughing lol. Maybe I’m a freak, but I don’t think users should be allowed to see developer information in a closed product like that.

post script: this was my 1500th journal entry

Despite a more miserable than not day, I’ve actually made some progress with my game project. Fetched Ogre 3D’s trunk via Subversion, built it with CMake/MSVC, set up a suitable SDK spot, then got my project building against it with CMake and executing.

The main adjustments needed atm are building up the configuration file handling and implementing the principal movement commands that remain (e.g. creep, walk, run, sprint). There really is no game engine in the traditional sense, because I don’t want any of the ones I’ve looked at! My intention is to refactor the prototype into a suitable baseline for use with other games I would like to build in my free time; I hate to repeat myself :-P.

It’s still very early for me, but I think I’ll turn in for the night, after I have the servers prepped for tomorrow’s Live Op. I’m interested to see how it will go, and very much wondering who will end up as the Element Leader, hehehe.

Oooh, I’ve just had a little brain fart of interest.

For lack of anything more intellectually interesting to do with my sleep-deprived mind, I’ve been thinking a bit about some coding I could do on tpsh; then this hit me. The shell expands ‘, “, and ` quotes using a simple table that maps the symbols to appropriate transformations; why not use the same code for () and {} grouping?

Internally, the lookup table for quote expansions is hard coded into the principal tokenization subroutine, because that is the code’s only designated purpose in the program; it didn’t need to be any more general, beyond being easily convertible to something more generic. After getting a working implementation that I could drool over, I thought about posting a modified version on a forum, as a demonstration of how to do quote handling in config files and such.

Now I’m thinking of more places I can use that little blighter with a few minor changes lol.

Thoughts on how IM software can be implemented

Pidgin is implemented in the fairly typical way: libpurple provides the support for the network protocols and a ‘chunk’ of plugin crap for everybody, while Pidgin and Finch provide graphical and textual user interfaces around it. I don’t think I really want to know more about the relationship plugins have with the purple library and their respective UIs than I already do. Although I’ve never used it, I reckon that InstantBird is designed like Pidgin, but with a Mozilla Firefoxian XULRunner UI layer in place of the GTK/GNT UIs of Pidgin/Finch.

That kind of implementation for a program like Pidgin is basically Computer Science 101; using XULRunner may be considered CS 102. Pidgin’s approach to solving the problem is about as standard as putting butter on your toast in the morning. Earlier today, I was thinking about how I would implement an Instant Messenger program; let’s just say I’ve used many and hate them all :-P.

The first thing that came to mind is splitting the program in half: a multiplexing daemon and a chat client.

The daemon would take care of the per-protocol issues, likely using libpurple (as it’s so damn popular 8=) or a plugin architecture built around per-protocol libraries (libmsn, libgfire, etc.). The client would provide the part users actually see, and communicate with the daemon over network sockets to perform its task. You would launch the daemon and it would log you into the various networks; then you would launch the chat client and connect to the daemon. About 10 minutes later, I figured it would likely be worth implementing the daemon-to-client communication by making the daemon expose its interface to the client as an XMPP server, instead of rolling my own protocol to do the same type of job. I am personally not partial to XMPP’s inner workings, but find it exceptionally useful in practice.

What side effects would my decision have?

A.) The daemon (your accounts, logged in) could run on a different machine than your chat client (chat windows). This would make it trivial, for example, to run the daemon on my file server and connect to it from my other computers – without logging out of AIM, MSN, blah blah. It would also be trivial to tunnel the two programs’ communication through an arbitrary program, like SSH, which would provide encryption of traffic between client and daemon (sketched below) 😉

B.) Because of the client/server design, it would be possible to access those IM networks from multiple locations at once, even if the protocol doesn’t support this. That is because the daemon is logged in, not your chat client, hehe. It means the daemon would have to implement the concept of multiple connected clients, which would be fairly easy (thinking of how this might be done if the daemon-client communication was XMPP really made me smile, when I realized that XMPP supports something close enough to this).

C.) Because the client has no concept of a network protocol beyond what it needs to talk with the daemon (which does the real network legwork for the various protocols), it has no physical connection with the implementation of the code that does.

A and B would create an Instant Messenger program like Pidgin that is to IM what tmux and GNU Screen are to terminals! Point C would just be awesome in my opinion. It would evade creating something like the UI-related fields of PurplePluginInfo, and make installation and maintenance more independent than in any other IM program I’ve seen.
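The tunneling in point A needs nothing special; assuming the daemon really did speak XMPP on its standard port (5222), it would just be:

[code]
# Forward the daemon's XMPP port from the file server to this box:
ssh -L 5222:localhost:5222 user@fileserver
# ...then point the chat client at localhost:5222.
[/code]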

To me, the daemon+client thing is just the most obvious way of doing the job…. lol. Ahh, I wish I had the time to write it!

What a day at crockery design, inc.

Nothing like a late night to remind you that you’re alone; nothing like waking up in the morning for work to remind you that you’re more of an asset than a person on your family’s bottom line.

I managed to survive work without too much damage, and even got to be out in the rain for a bit after getting home (which I relish). Most of the afternoon was spent… well, honestly, I don’t remember much of it.

I did however come up with a possible solution to a small “problem” I have with getting cwRsync to obey. Basically, I can’t make the S.O.B. contact the server (vectra) from sal1600. The idea: instead of running rsync on sal1600 (client) over SSH to vectra (server), do the usual one-shot-daemon-over-encryption trick. Make a script that causes sal1600 to launch a one-shot rsync daemon, then “phone home” to the server and trigger the rsync from server to client. The logic of fixing things under Windows: the client doesn’t work, so make the client a server that tells the server to use a client like it was a server… ok, that’s just fucked up. I think, however, with a little trickery to make it work that way via SSH, it will probably solve the problem, with much less pain than doing some kind of diff-and-patch craziness in place of rsync.
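A rough sketch of the trick, written as sh for clarity (on sal1600 it would really be a batch file); the host names are from above, while the module name, paths, and port are invented:

[code]
#!/bin/sh
# On sal1600: start a one-shot rsync daemon in the foreground,
# then phone home over SSH so vectra pushes to it
# (i.e. "the rsync from server to client").
rsync --daemon --no-detach --config=rsyncd.conf --port=8730 &
ssh vectra rsync -a /export/share/ rsync://sal1600:8730/incoming/
kill $!
[/code]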

GCHQ’s also decided to re-elevate the priority on my fixing the mighty page. The basic problem: I’m a WO1 assigned to the “Special Operations Wing”, which belongs between the Commissioned Officers and the Senior Non-Commissioned Officers groups. Instead, I (and now Timbo, since he became a fellow WO1) was displayed at the end of the page, under the Vets list. It’s been that way ever since I ‘repaired’ a majorly band-aided and brain-damaged bit of code that is integral to things working smoothly. Since I consider this a matter of ‘personal vanity‘, I’ve never had a problem with it being broken. But I reckon those on high are right; it doesn’t look good if a high rank is listed below the rankless, lol.

In studying what makes the whole thing tick, I’m not sure whether it is just a huge crock of shit or a clever attempt at producing faster code. Either way, if I ever meet the original author, I’ll punch the fucker’s lights out. Then I’ll give each of my fellow admins a chance ^_^. While the initial problem was painfully obvious, finding ‘why’ it happens was not quite so simple. When I found the next clue, I got curious and began to concentrate further, until the light bulb turned on.

When the light bulb turned on, I decided either the original programmer must have tried to tune every ounce of speed out of the algorithm—working under the assumption that every instruction costs the same, so that speed is just the sum of their quantity rather than their cost times their quantity… or it was put together by some asshat who expected it to result in a “write once, run always, read never” body of code, with no concept of software engineering.

Either way, if I ever meet the person, they will lose teeth if we discuss software-stuff.

On the upside, out of all of today’s stuff, I did finally get some SWAT action in… with all the work I’ve been doing lately, it had probably been 3 days since I got a decent game. Or should we say: around 15GB of encrypted network traffic to shuffle to and fro ain’t fast and don’t mix well with games.

Right now though, most everything is done; whew.