Writer’s Block: Good Days and Bad Days

What is your least favorite day of the week? And your favorite?

LiveJournal’s Writer’s Block

least favorite: Thursday
Of all the days of the week, Thursdays have consistently been some of the worst days of my life. If I added up all the times at work that my brain has drawn a mental line in the proverbial sand (“if I’ve got to take any more crap, it’s time to walk home”), there would be many Thursdays in the equation.
favorite: Mondays
I often have seasons where I’m off work on a Monday, either every other week or every week ;). As such, I like them: I can take it easy on Saturday or Sunday, then finish things up on Monday before I’ve got to go back to work lol.

Thoughts on how IM software can be implemented

Pidgin is implemented in the fairly typical way: libpurple provides the support for the network protocols and a ‘chunk’ of plugin crap for everybody, while Pidgin and Finch provide graphical and textual user interfaces around it. I don’t think I really want to know more about the relationship plugins have with the purple library and their respective UIs than I already do. Although I’ve never used it, I reckon that InstantBird is designed like Pidgin, but with a Mozilla Firefoxian XULRunner UI layer in place of the GTK/GNT UIs of Pidgin/Finch.

That kind of implementation for a program like Pidgin is basically Computer Science 101; using XULRunner may be considered CS 102. Pidgin’s approach to solving the problem is about as standard as putting butter on your toast in the morning. Earlier today, I was thinking about how I would implement an Instant Messenger program; let’s just say I’ve used many and hate them all :-P.

The first thing that came to mind was splitting the program in half: a multiplexing daemon and a chat client.

The daemon would take care of the per-protocol issues, likely using libpurple (as it’s so damn popular 8=) or a plugin architecture built around per-protocol libraries (libmsn, libgfire, etc). The client would provide the part users actually see, and communicate with the daemon over network sockets to perform its task. You would launch the daemon and it would log you into the various networks; then you would launch the chat client and connect to the daemon. About 10 minutes later I figured it would likely be worth implementing the daemon-to-client communication by making the daemon expose its interface to the client as an XMPP server, instead of rolling my own protocol to do the same type of job. I am personally not partial to XMPP’s inner workings, but find it exceptionally useful in practice.
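To make the shape concrete, here’s a minimal sketch of the daemon half in Perl; this is purely my assumption of the design, with the port number and the naive broadcast behavior invented for illustration, and the actual protocol backends stubbed out entirely:

#!/usr/bin/perl
# A minimal sketch of the multiplexing daemon: it would stay logged in
# to the IM networks and accept any number of local chat clients on a
# socket. Port 5298 and the broadcast logic are illustrative only.
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

my $listen = IO::Socket::INET->new(
    LocalPort => 5298,
    Listen    => 5,
    ReuseAddr => 1,
) or die "listen: $!";

my $sel = IO::Select->new($listen);

while (my @ready = $sel->can_read) {
    for my $fh (@ready) {
        if ($fh == $listen) {    # a new chat client attached
            $sel->add($listen->accept);
            next;
        }
        my $line = <$fh>;
        if (!defined $line) {    # a client went away
            $sel->remove($fh);
            close $fh;
            next;
        }
        # The real daemon would hand $line to the right protocol
        # backend (libpurple, libmsn, ...) and fan replies back out
        # to every connected client; here we just broadcast it.
        for my $client ($sel->handles) {
            print {$client} $line unless $client == $listen;
        }
    }
}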

What side effects would my decision have?

A.) The daemon (your accounts, logged in) could run on a different machine than your chat client (chat windows). This would make it trivial, for example, to run the daemon on my file server and connect to it from my other computers, without logging out of AIM, MSN, blah blah. It would also be trivial to tunnel the two programs’ communication over an arbitrary channel, like SSH, which would provide encryption of the traffic between client and daemon 😉

B.) Because of the client/server design, it would be possible to access those IM networks from multiple locations at once, even if the underlying protocol doesn’t support this. This is because the server daemon is logged in, not your chat client, hehe. It means the daemon would have to implement the concept of multiple connected clients, which would be fairly easy (thinking of how this might be done if the daemon-client communication was XMPP really made me smile, when I realized that XMPP’s multiple resources per account are something close enough to this).

C.) Because the client has no concept of a network protocol beyond what it needs to talk with the daemon (which does the real network legwork for the various protocols), it has no physical connection with the code that implements them.

A and B would create an Instant Messenger program, like Pidgin, that is to IM what tmux and GNU Screen are to terminals! Point C would just be awesome in my opinion: it would evade creating something like the UI-related fields of PurplePluginInfo, and make installation and maintenance more independent than in any other IM program I’ve seen.

To me, the daemon+client thing is just the most obvious way of doing the job... lol. Ahh, I wish I had the time to write it!

It’s getting to be that time, that time frame where I know I need to be getting to sleep... yet I don’t feel like sleeping. I’m also too tired to do much of anything else :-/. I don’t have to be up early for work tomorrow, unlike today’ish; but I instead have to work most of the day!

What I really need is a vacation. Time away from it all, with not a care in the world…. but then again, I would probably still be miserable lol.

I’m so sick of watching time pass by.

What a day at crockery design, inc.

Nothing like a late night to remind you that you’re alone; nothing like waking up in the morning for work to remind you that you’re more of an asset than a person on your family’s bottom line.

I managed to survive work without too much damage, and even got to be out in the rain for a bit after getting home (which I relish). Most of the afternoon was spent... well, honestly, I don’t remember much of it.

I did however come up with a possible solution to a small “problem” I have with getting cwRsync to obey. Basically, I can’t make the S.O.B. contact the server (vectra) from sal1600. The idea: instead of running rsync on sal1600 (client) over SSH to vectra (server), do the usual one-shot-daemon-over-encryption trick in reverse. Make a script that causes sal1600 to launch a one-shot rsync daemon, and then “phone home” to the server and cause it to trigger the rsync from server to client. That’s the logic of fixing things under Windows: the client doesn’t work, so make the client a server that tells the server to use a client like it was a server... ok, that’s just fucked up. I think, however, with a little trickery to make it work that way via SSH, it will probably solve the problem, with much less pain than doing some kind of diff-and-patch craziness in place of rsync.
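Roughly what I have in mind, sketched in Perl; the port, the “backup” module, and the paths are all made up for illustration, and tearing the daemon back down afterwards is left out:

#!/usr/bin/perl
# Hypothetical sketch of the reversed-rsync trick: sal1600 starts a
# one-shot rsync daemon, then asks vectra (over ssh) to pull from it.
# The port, the "backup" module, and all paths are stand-ins.
use strict;
use warnings;

my $port = 8730;
my $conf = ($ENV{TEMP} || '/tmp') . '/rsyncd-oneshot.conf';

# Write a throwaway daemon config exporting the directory to sync.
open my $fh, '>', $conf or die "can't write $conf: $!";
print $fh <<"EOF";
port = $port
[backup]
    path = /cygdrive/c/backup
    read only = yes
EOF
close $fh;

# Launch the daemon on the client (sal1600).
system('rsync', '--daemon', "--config=$conf") == 0
    or die "could not start the rsync daemon\n";

# "Phone home": ssh -R tunnels our daemon port back to the server,
# so vectra can pull from sal1600 and the traffic rides the ssh link.
system('ssh', '-R', "$port:localhost:$port", 'vectra',
       "rsync -a rsync://localhost:$port/backup/ /srv/backups/sal1600/");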

GCHQ’s also decided to re-elevate the priority on my fixing the mighty page. The basic problem: I’m a WO1 assigned to the “Special Operations Wing”, which belongs between the Commissioned Officers and the Senior Non-Commissioned Officers groups. Instead, I (and now Timbo, since he became a fellow WO1) was displaying at the end of the page, under the Vets list. It’s been that way ever since I ‘repaired’ a majorly band-aided and brain-damaged bit of code that is integral to things working smoothly. Since I consider it a matter of ‘personal vanity’, I’ve never had a problem with it being broken. But I reckon those on high are right: it doesn’t look good if a high rank is listed below the rankless, lol.

In studying what makes the whole thing tick, I’m not sure whether it is just a huge crock of shit or a clever attempt at producing faster code. Either way, if I ever meet the original author, I’ll punch the fucker’s lights out. Then I’ll give each of my fellow admins a chance ^_^. While the initial problem was painfully obvious, finding ‘why’ it happens was not quite so simple. When I found the next clue, I got curious and began to concentrate further, until the light bulb turned on.

When the light bulb turned on, I decided either the original programmer must’ve tried to tune every ounce of speed out of the algorithm, working under the assumption that a program’s cost is simply the count of its instructions, rather than each instruction’s cost times its quantity... or it was put together by some asshat who expected it to result in a “write once, run always, read never” body of code, with no concept of software engineering.

Either way, if I ever meet the person, they will lose teeth if we discuss software-stuff.

On the upside, out of all of today’s stuff, I did finally get some SWAT action in... with all the work I’ve been doing lately, it had probably been 3 days since I got a decent game. Or should we say: around 15GB of encrypted network traffic to shuffle to and fro ain’t fast, and don’t mix well with games.

Right now though, most everything is done; whew.

After more than 8 billion bytes of compressed data has been transferred, I now have *all* of my personal files on one hard disk; and am hoping it doesn’t HCF before I’m done with this lol.

Depressingly, this and a few things in my room really are the sum of my existence here, aren’t they? *sigh* Maybe this wasn’t such a good idea after all.

Everything is archived in the cold storage partition; once it is copied over to an NTFS partition (as FAT32 plays havoc with unix file permissions), I’ll begin sorting, collating, and cleansing data. Very annoyingly, hulu.com is not functioning correctly at the moment, so I’ve a bit of a battle for something to watch as well!

Once I’ve got all the data set up, it’ll be time to set up information distribution between each box... that’s gonna be tomorrow I am sure, considering the time of night.

Thinking and rambling out loud

Because of the games box, any solution needs to be efficient on the network, and only maintain a connection when there’s something of interest, i.e. on demand. While the laptop could (and has) put up with NFS/CIFS-type solutions, I don’t want it clogging up bandwidth; the desktop is a bad enough piece of shit as it is >_>.

The concept I was thinking of last year was using a mixture of CVS and rsync: branch and version important files on a host-specific basis. Although important stuff like my $ENV and vimrc files is made highly portable, not everything else is; hence the value of a VCS/SCM. Meanwhile, all the mumbo jumbo about the algorithm rsync uses would be put to work mirroring file sets between boxen at login/logout. A rough sketch of the idea follows.
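Something like this, perhaps; a back-of-the-envelope sketch only, where the vectra:/srv/shared location, the checkout layout, and the commit message are stand-ins I’ve invented:

#!/usr/bin/perl
# Sketch of the login/logout hooks: CVS versions the host-specific
# files, rsync mirrors the bulk file sets. Run from inside the
# dotfile checkout; all paths here are illustrative.
use strict;
use warnings;

my $action = shift @ARGV || 'login';
my $shared = "$ENV{HOME}/shared/";

if ($action eq 'login') {
    # Pull: update the dotfile checkout, then mirror shared data down.
    system(qw(cvs -q update -d)) == 0 or warn "cvs update failed\n";
    system('rsync', '-az', 'vectra:/srv/shared/', $shared) == 0
        or warn "rsync pull failed\n";
} else {
    # Push: commit local changes, then mirror shared data back up.
    system(qw(cvs -q commit -m), 'logout sync') == 0
        or warn "cvs commit failed\n";
    system('rsync', '-az', $shared, 'vectra:/srv/shared/') == 0
        or warn "rsync push failed\n";
}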

I reckon now is a great time to abuse it into working.

Since I began initial testing, I have looong since moved to using Git for all my active projects instead of CVS and SVN. However, the master source of everything is still Dixie, that is, my laptop! While things have gotten fairly cramped on the desktop, mostly due to development files being shoe-horned into Windows XP, my laptop has plenty of free space, despite having A LOT of software and a plethora of development files.

My file server is still holding up quite nicely:

Today’s date is: Sun Aug 16 21:34:08 UTC 2009
Terry@vectra$ df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/wd0a 147M 41.8M 98.1M 30% /
/dev/wd0h 393M 46.0M 327M 12% /home
/dev/wd0d 98.3M 670K 92.7M 1% /tmp
/dev/wd0g 6.7G 995M 5.4G 15% /usr
/dev/wd0e 148M 40.6M 99.7M 29% /var
/dev/wd1a 11.8G 275M 10.9G 2% /usr/local
/dev/wd1d 44.3G 14.8G 27.3G 35% /srv

wd0 is an old Maxtor 8GB EIDE drive; wd1 is an 80GB EIDE drive. I also have a 40GB disk somewhere with the system’s original Windows XP Home install.

wd1a was used to offload installed software to a second drive, basically a way of moving /usr/local off the original /usr file system. wd1d was created primarily as a storage depot for use with NFS/Samba.

Lacking a suitable place to dump crap, and not having a notion to put it under /var, I created a /srv directory to hold data for services. As such, it basically holds my code repositories, backups, and webserver files. It would be easy to rebuild the partitioning on wd1, or just create a new bsd partition:

# disklabel -p g wd1
# Inside MBR partition 3: type A6 start 63 size 156234897
# /dev/rwd1c:
type: ESDI
disk: ESDI/IDE disk
label: IC35L080AVVA07-0
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 16383
total bytes: 74.5G
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # microseconds
track-to-track seek: 0 # microseconds
drivedata: 0

16 partitions:
# size offset fstype [fsize bsize cpg]
a: 12.0G 0.0G 4.2BSD 2048 16384 328
c: 74.5G 0.0G unused
d: 45.0G 12.0G 4.2BSD 2048 16384 328
#

‘c’ means the entire disk, meaning there is roughly 17GB held in reserve.

I think I will begin cleaning up stale files in /srv, and moving unimportants to the desktop, which has around 40-50GB of free space for cold-storage.

tbc

I’m feeling one of those moods best marked as, “some people train cockroaches, I write things” in nature.

I think if I don’t find something to do right now, pretty soon I’m going to go stir-crazy1. I’m really not in the mood for games, I know it too well... I reckon the best thing atm is to continue kicking my operating environment into a higher order of work flow.

Lately a lot of things have been passing through my mind, until the marbles resemble scrambled eggs more than brain cells; and as much as I enjoy thinking, sometimes one can overthink. I’m all thunk out at this point; I can’t take any more. As such, I really need something to focus on right now... which is problematic, with being driven crackers in this place at every twist and turn :-/.

I’ve been spending a lot more time on my desktop lately than on my laptop (I miss the late nights with my darling Dixie :'(, but SAL1600 compiles faster). This again puts me in the old boat of sharing data between systems, as well as having to deal with shuffling between tools. After 3 weeks of using Windows XP for development tasks, I’ve learned a few new curse words, and how to use the childish cmd.exe for automation needs.

Basically the problem at hand is thus:

  1. operate on the ‘same’ fundamental data set across all ‘working’ environments
  2. be capable of going mobile (laptop), and continue to work even without access to ANY network
  3. make the usage of backup and version control packages more uniform
  4. further refine the Standard Operating Environment (SOE) to my ever evolving needs

The biggest problem of them all is that Windows XP’s user interface really kicks a huge freaking battleship-sized hole in my ideal work flow... lol. Alas, at least I can always build up tools as I go. GOD bless Perl, Python, and the rest ;). In the end, I hope to be using rxvt/rxvt+tpsh or console2+tpsh for my main user environment, rather than console2+cmd.exe and rxvt+screen+zsh.

1 In reading further on this issue, I can’t help but wonder if the way life has been has influenced why I so often take pleasure in being out in rainy weather lol.

Several hours of testing has shown where the problem lies in regards to the kinks in tpsh’s basic control flow... a strict fucking implementation lol.

Without a few extra ‘;’ between language elements, the parser doesn’t see the keywords as keywords; instead they get fed into the code generator as arguments to the preceding keyword. Testing the same test code against the version of bash that ships with MSYS 1.0, their shell doesn’t give a hoot, nor does any other Bourne-style shell I have access to. So in a way, you could say my code took a stricter interpretation of the sh script syntax than what is required (and desired).

Somehow this actually makes me feel better, lol.

Something else that needs working on is dealing with the crappy old-style paths that CP/M, DOS, and Windows NT use.

$ P:/Editors/Vim/vim-personal/gvim.exe
tpsh: command not found: P:/Editors/Vim/vim-personal/gvim.exe at S:\Visual Studio 2008\Projects\tpsh-dir\tpsh line 883.


$ cd P:

$ P:/Editors/Vim/vim-personal/gvim.exe
tpsh: command not found: P:/Editors/Vim/vim-personal/gvim.exe at S:\Visual Studio 2008\Projects\tpsh-dir\tpsh line 883.


$ /Editors/Vim/vim-personal/gvim.exe

$

It seems to have a bit of an issue about drive letters.

In more detail: the search_path() function, which converts an external command name into the direct path to it (done in order to avoid a dependency on the system’s shell, e.g. /bin/sh or %COMSPEC%), fails to find a valid file when the drive-letter: notation is used. Since the function returns undef to indicate ‘program not found’, simply put, search_path() needs fixing. The change should be trivial though.
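Something along these lines ought to do it; a from-memory sketch of the general shape (not tpsh’s actual routine), with the Windows extension list being just the obvious set:

# Sketch of a drive-letter-aware search_path(); not tpsh's real code.
use strict;
use warnings;

sub search_path {
    my $cmd = shift;

    # Already an absolute path? Unix-style /foo, or DOS-style X:/foo
    # and X:\foo should all count.
    if ($cmd =~ m{^(?:/|[A-Za-z]:[/\\])}) {
        return -x $cmd ? $cmd : undef;
    }

    # Otherwise walk the PATH as usual.
    my $sep = $^O eq 'MSWin32' ? ';' : ':';
    for my $dir (split /\Q$sep\E/, $ENV{PATH} || '') {
        my $try = "$dir/$cmd";
        return $try if -x $try;
        if ($^O eq 'MSWin32') {    # try the usual executable suffixes
            for my $ext (qw(.exe .com .bat .cmd)) {
                return "$try$ext" if -e "$try$ext";
            }
        }
    }
    return undef;    # program not found
}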

I managed to get some coding in today; maybe that’s why I’m feeling a bit better. On the down side though, my cable connection seems to be running half as fast as normal lately; games were nearly unplayable for much of this afternoon, for some reason.

I think my shell’s codegen branch has lived out the virtual extent of its usefulness. Today I sorted a few minor things and enabled nested control flow for statements using the ‘then’ and ‘do’ keywords; e.g. control flow statements can contain control flow statements, as well as commands, within their block. There are still a few kinks to iron out but the most important are done; I’m currently uncertain whether the remaining problem lies in the code generation phase or somewhere further up the processing chain, but I expect it lies in generating the Perl code.

Basic conditional flow and looping are ready, which was the primary purpose of this branch of development: to explore implementing the shell’s scripting language by dynamically generating Perl code for it, as well as replacing the static command-sequence executor by generating it too.
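For a feel of the idea, here’s my own mock-up (not tpsh’s actual output) of what such a generated chunk might look like for a nested sh conditional; run_command() is a made-up stand-in for the shell’s executor:

# sh input:  if true; then if true; then echo nested; fi; fi
# one conceivable Perl emission (illustrative only):
sub run_command { return system(@_) >> 8 }    # stand-in executor

if (run_command('true') == 0) {               # sh: if true; then
    if (run_command('true') == 0) {           # sh:   if true; then
        run_command('echo', 'nested');        # sh:     echo nested
    }                                         # sh:   fi
}                                             # sh: fi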

Oddly, the main things that need doing in the short term are actually script related, rather than interactive. But then again, tpsh has always been meant for interactive usage first, with support for batch jobs as a secondary concern. The main point of lacking at the moment is a concept of “exit status”; in Bourne shell parlance, the $? variable isn’t used for a hill of beans yet. Likewise, all $variable expansion is done on the fly during tokenization, so given:

for X in 1 2 3; do echo $X; done

The value of $X in the loop body would be expanded before the code is evaluated, resulting in the following:

for X in 1 2 3; do echo; done

Because X is not defined at the time the statements are being parsed, the expansion is “”.
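In other words, the generated code ends up amounting to roughly this (my illustration, not tpsh’s literal output; do_echo() is a made-up stand-in):

sub do_echo { print "@_\n" }    # stand-in for the echo builtin

# $X was already expanded to "" during tokenization, so the loop body
# carries no reference to the loop variable at all:
for my $X (qw(1 2 3)) {
    do_echo('');    # should have been do_echo($X)
}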

I haven’t decided how this will be solved yet. Solving it, however, is well outside the scope of the codegen branch. I would say the only thing left to do in the branch is implement break num and continue num, to enable a way out of while/until loops. The former will take a bit more testing, but the latter is not likely to be too hard: Perl supports a robust way of breaking out of loops that can easily be used, as sketched below. Once that stuff is done, I’ll likely merge the branch into master and move on to other tasks.
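By a “robust way”, I mean Perl’s labeled loop control: if the code generator labels each loop as it emits it, sh’s break N and continue N map onto last and next almost for free. A quick illustration:

# "last LABEL" bails out of the labeled loop; "next LABEL" starts its
# next iteration. Labeling each emitted loop gives the numeric forms
# of break/continue a direct target.
OUTER: for my $i (1 .. 3) {
    INNER: for my $j (1 .. 3) {
        next INNER if $j == 2;    # sh: continue
        last OUTER if $i == 2;    # sh: break 2
        print "$i $j\n";
    }
}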

Since I’ve been stuck working more often on the NT machine, development of tpsh gets a slightly higher priority lol.