A C programmer can write C in any language, until he gets lazy and remembers Perl's regular expressions still work at four thirty in the morning.
Programming
Useful way to pass the time
Got bored; I’ve never found something like xkill where I can just say, “xkill somewindowname”. Found xwininfo last week, thought about this hehe:
#!/bin/sh
#
# kill X client by window name
#
if [ -n "$1" ]; then
    xwininfo -name "$1" | grep 'Window id:' | awk '{ print $4 }' |
        head -n 1 | xargs xkill -id
else
    echo "usage: `basename $0` windowname"
fi
xkillname xconsole and poof – the xconsole window is killed, hehe.
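For anyone curious how the pipeline in that script pulls the id out, here's a dry run on a sample line (the window id and name here are made up, but the line format matches what `xwininfo -name` prints on a typical X11 setup):

```shell
# feed a canned xwininfo line through the same grep | awk | head chain
sample='xwininfo: Window id: 0x1600011 "xconsole"'
id=$(printf '%s\n' "$sample" | grep 'Window id:' | awk '{ print $4 }' | head -n 1)
echo "$id"    # 0x1600011 -- this is what gets handed to xkill -id
```

The `head -n 1` is just insurance in case more than one window matches the name.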
Common Lisp ?
Hmm, Steel Bank Common Lisp (SBCL – compiled), CLISP (bytecode), or Armed Bear Common Lisp (Java bytecode)
Decisions, decisions ^_^
My laugh of the day….
Early Unix hackers struggled with this in many ways. In the languages of 1970 function calls were expensive, either because call semantics were complicated (PL/1, Algol) or because the compiler was optimizing for other things like fast inner loops at the expense of call time. Thus, code tended to be written in big lumps. Ken and several of the other early Unix developers knew modularity was a good idea, but they remembered PL/1 and were reluctant to write small functions lest performance go to hell.
Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked…
— Steve Johnson
Hmm, I’ve always wondered why some really old programs written in C look so odd, as if the person had never heard of a function call (or macro) before. I’ve never been able to figure out if it was because function calls were more expensive on the hardware back then, because the programmer was used to assembly, or out of loyalty to some “style of the day”.
I guess that clears that up a bit more; if so, thank GOD he lied!
cvs / git cooperation
http://issaris.blogspot.com/2005/11/cvs-to-git-and-back.html
hmm, might be useful.
Some things I’d like to work on involve using git, cvs, or svn. My own system here runs CVS, since that’s what OpenBSD comes with, but I’m more used to working with Subversion. Git and Bazaar-ng are two programs I really should evaluate, since git is likely a program I would love using, and bzr is one I may find more useful in the wider world.
It also pays to know svn/cvs, hehe.
GET SOME SLEEP SPIDEY01 !!!!!!
I’ve been wondering for an hour why the frig I’m getting mismatched checksums…. then it hit me, and I feel like punching myself in the head lol.
I tried this:
sub checksum {
    my $file = shift || return;
    my $md5 = Digest->new("MD5");
    my $sha = Digest->new("SHA-256");
    open my $fh, $file or die "Can't checksum file: $!";
    binmode($fh);
    return ( MD5    => $md5->addfile($fh)->hexdigest(),
             SHA256 => $sha->addfile($fh)->hexdigest(),
           );
}
and then compared the output against checksums generated by FreeBSD's md5/sha256 utilities (they are actually the same program). The MD5 matched perfectly, but the SHA256 was different.
After much thinking, I figured it out….
sub checksum {
    my $file = shift || return;
    my $md5 = Digest->new("MD5");
    my $sha = Digest->new("SHA-256");
    open my $fh, $file or die "Can't checksum file: $!";
    binmode($fh);
    my $mh = $md5->addfile($fh)->hexdigest();
    seek($fh, 0, 0);    # <----- you've got to rewind to the start of the file before the next read
    my $sh = $sha->addfile($fh)->hexdigest();
    return ( MD5    => $mh,
             SHA256 => $sh,
           );
}
The thing that really irks me: I was wondering if the Digest:: classes would be so nice as to rewind the file handle when they were done, but I think that idea occurred to me about 4 hours ago, lool. Actually, if I’ve got to rewind the fscking handle between objects, I may as well just walk the file and append the data to each as we go… then get the digests.
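The failure mode is easy to show outside Perl, too. A quick sketch in plain sh: once one reader drains a file descriptor, a second read from the same descriptor just sees EOF, which is exactly why the second Digest object ended up hashing an empty stream:

```shell
# demonstrate that a second read from the same (un-rewound) fd gets nothing
tmp=$(mktemp)
printf 'some data\n' > "$tmp"
exec 3< "$tmp"
first=$(cat <&3)     # reads the whole file, leaving fd 3 at EOF
second=$(cat <&3)    # nothing left -- like SHA-256 digesting an empty stream
echo "first=[$first] second=[$second]"
exec 3<&-
rm -f "$tmp"
```

So the mismatched SHA256 was just the digest of zero bytes.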
I really need more sleep….
EDIT: Take III
=pod internal

checksum FILENAME

Open FILENAME, and return a hash containing ALGORITHM => checksum
key/value pairs, one for each algorithm supported. Both MD5 and SHA256
will always be supported.

=cut
sub checksum($) {
    my $file = shift || return;
    my $md5 = Digest->new("MD5");
    my $sha = Digest->new("SHA-256");
    open my $fh, $file or die "Can't checksum file: $!";
    binmode($fh);
    # read the file once, feeding each chunk to both Digest objects.
    # Otherwise we would have to read the entire file per object,
    # seeking back to 0,0 between calls.
    while (my $ln = <$fh>) {
        $md5->add($ln);
        $sha->add($ln);
    }
    close $fh;
    return ( MD5    => $md5->hexdigest(),
             SHA256 => $sha->hexdigest(),
           );
}
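Take III leans on the fact that these hashes are stream-based: feeding the bytes in pieces gives the same digest as hashing the whole file at once. A quick sanity check of that property in shell (assuming GNU coreutils' sha256sum; the FreeBSD equivalent is the sha256 utility):

```shell
# hashing a file in chunks vs. all at once should give identical digests
tmp=$(mktemp)
printf 'line one\nline two\n' > "$tmp"
whole=$(sha256sum "$tmp" | awk '{print $1}')
# same bytes, delivered as two separate chunks through a pipe
chunked=$( (head -n 1 "$tmp"; tail -n 1 "$tmp") | sha256sum | awk '{print $1}')
echo "$whole"
echo "$chunked"
rm -f "$tmp"
```

Which is why calling add() per line inside the loop is safe: the digest only depends on the byte stream, not on how it's sliced up.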
Hmm you know you’re crazy when
instead of writing the file path and using tab completion to view a file listed in a program's output, you pipe the program's output into a command to get the desired line, then pipe that into AWK, so you can pipe that into xargs ….
or maybe it’s just being lazy….
Terry@dixie$ perl script-name project --debug | head -n 1 | awk '{ print $2 }' | xargs cat
Quote of the Day
This is a consequence rather than a goal. I abhor a system designed for the “user”, if that word is a coded pejorative meaning “stupid and unsophisticated”.
— Ken Thompson
Hmm, somehow this makes me laugh when I think of ed and notepad (ed is about the most basic editor I’ve ever met, but it’s still 1000 times better than notepad)
Ok… I need to sit down
because I’m sorry to say, I’ve actually seen horse shit like this: http://thedailywtf.com/Articles/Mentors,_the_Freshmaker.aspx
lol. I still remember one day finding some even worse shit… Found some code (looked like a lazy patch someone did to tweak it) that did nothing but a series of database lookups, set iterations, and I/O calls (without error checking) just to figure out how to print a non-breaking space haha. That’s when I threw up my hands and said, FUCK IT, I’m done reading this thing looooooooool.
Maybe I really should get a life again >_<