PostScript (.ps) versus Portable Document Format (.pdf) – from a user's perspective

Generally, PS is pretty much a dead format today. The only time a user is likely to see significant amounts of it, in my experience, is when dealing with quality printing. The reality is that almost everything comes in PDF (or Word/Excel, if the sender is careless), and most people assume that Adobe Acrobat is the only reader for it. Most people don't know that PS exists, and too many are too lost to even figure out what a PDF is. The simple facts:

  1. PostScript is text based
  2. PDF is a binary format and offers some programmatic options where supported

Here is what makes that interesting! I have numerous files in PDF and PS formats. Most are under 5M and are PDF. Now there is a monster called The_Complete_FreeBSD.ps that weighs in at a whopping 60M! My largest PDF files were around 12M to 16M. I'm compressing stuff I don't use frequently but don't want to erase, for space reclamation. So I compressed everything over 2M with XZ to see which files would compress enough to be worth having to uncompress them next century. Most PDF files contained a few embedded images, so savings were on average about 1M, e.g. 15M becomes 14M. Useless. A few files that were heavier on text shrunk from between 3.5M and 4M to between 1.5M and 2M. That is nice, but still too small to care about given the original file size. The 60M PostScript file compressed down to 1.6M! That is a ratio of better than 35:1, against barely 1.1:1 for a typical PDF. Size checks were done using du -h and compression with xz -e. Being more textual and less binary, I infer that we can get better compression ratios out of PostScript than PDF, although it is possible to make a PS file that is pretty much binary for all practical intents and purposes. But that was surprising... lol.

Post Script: in point of fact, I think the only organization I have ever seen acknowledge that other PDF readers exist has been the Canadian government, which even went so far as to note XPDF, an old choice among Linux/Unix users.

+1 for the Blogger app for Android remembering what I started writing ages ago without me even having to click a draft entry, hehehe.

Hmm, finally a chance to kick back and update my journal before dinner ^_^. Something that's been on my mind is task management and record keeping; or more broadly, the issue of "Productivity".

When it comes to general coding or computing tasks, I stopped worrying years and years ago. I'm noted as a VIM user who loves its editing efficiency, and I would take something like vi or emacs any day of the week over a glorified Notepad, a class of code editor that excels only at inserting text. Likewise, I spend much of my time in a command line environment, which is far more productive for me than what most 'normal' people use on their computers. Being at home in a terminal is also kind of a necessity where I work, hehehe.

So what does concern me is, at its essence, Getting Shit Done and Keeping Track of Shit Done. Over the past couple of months, I've extended my natural workflow into something more effective for a working environment. In the end, it's still the same objective though: GSD and KToSD.

Traditionally, per project I would usually keep a set of notes in the project root, either in a file called TODO or buf; the latter is a convenient name when you're editing source code and want to write out a scratch buffer. Technical and personal notes would generally fall somewhere under ~/Documents, with temporary notes and snippets in ~/tmp or ~/misc, depending on how long I need to cache them. Obviously, most of the interesting things would get converted to HTML and blogged here! My vim setup also defines an "Outline" mode that I utilize for brainstorming and various other things that belong in a .outline file.

At present, I'm utilizing a note taking program called TomBoy, which has become quite the useful notebooking tool! While it has a few things I find irksome or overly minimalist, it also has a few that I quite like, and it fits the #0 rule: it just works. TomBoy is written in C# and works great on every platform I've tried it on, including Windows. Its main lack is having to fumble about to find which window on screen belongs to my current note. I would like to find the time someday to write plugins for interfacing it with other software in my workflow, but that's more for use outside of work. My systems all run the TomBoy client and utilize one of the many backends for synchronization, so I always have an up to date set, which is good because (*groan*) sometimes I may have to work from home.

What I do with TomBoy is maintain a set of notes: a Current Action List, plus one note per project. Whatever is most relevant to the here and now gets scribbled into the Current Action List. So for example, if I'm implementing some change to a program, I will have outlined the phases needed to do it as a set of bulleted lists in my Current Action List note, which is saved in a notebook named after the company I work for. During a 'change of gears', or as certain information becomes less associated with the Here And Now of what I am working on, it gets cut and pasted into a note named after the project in the same notebook, so I know exactly where to find things. If a note becomes overly concentrated, becoming more like a multi-section collection, it gets split into notes prefixed by the project name. I also have a "Programming" notebook for things of a more general programming nature, like comments about APIs or how to use certain tools. Today I added a "Personal" notebook, which will probably gain a Current Action List note of its own someday.

An example of this workflow in practice: earlier in the month I was focused on writing code to model a problem, and code to test that model for correctness. Kind of like an individual take on Test Driven Development. I didn't have time to review things very well, yet I had gotten into this sweet workflow between coding, noting, and testing. It got done, and so did the code that needed to use it.

I outline what I need to do in TomBoy, using as little or as much detail as necessary, then I do it and adjust the note as needed. It's insurance: in case doing X takes N*4 times as long, or I suddenly have to take a shit in the middle of something, I won't forget to do the Xth thing. By convention, I prefix completed "Tasklets" with a '-' and highlight problem areas or WIPs with a '+' prefix, e.g.

  • - change A
  • - change B
  • + change C
  • change D

Where D is still to be done, C is giving me more overhead, and A and B have been hashed out.

This is helpful because, for example, you might think of something Important(tm) that you don't need to focus on *now* but will need to _soon_. Or at least, it is helpful to me. I've been doing this for a while now and it's really meshing well. When a subject gets hammered out, like A, B, C, and D all completed, tested, and committed, then I either delete the note text or transition it out of Current Action List and into Project Name. It just depends on whether or not the info needs to be retained.

Something that proved the utility of all this to me was when I decided to take a step back and focus on examining the work I had done. I took a couple of hours to go over the program's code base, subject by subject, making corrections as I saw fit. To do that, I created two indented sections in my Current Action List: one for notes on application code and one for enumerating what the test cases actually cover. For each significant element that had changed during that cycle of development (probably ~50 commits), I did two things: review the test cases from the outside in, writing down what they actually tested (the assumptions), plus an occasional (highlighted) comment about what needed to be tested or changed; then open the associated source code and go through it making any practical comments, in a form of super-short review. This resulted in a Current Action List that looked like this:

Subject A tc ->
  * tests quux domain
    + includes ...
    + excludes ...
    + how it deals with X possibility
  * and so on as appropriate, covering:
    + domains and ranges
    + state assumptions
    + errors
    + and dealing with 'wtf was that?'
Subject B tc ->
  * ditto

Subject A ->
  Short, concise reviews
  * including comments pointing out issues
  * and areas for change

Subject B ->
  ditto.

Looking at the assumptions the test cases make, rather than the code they test, showed me areas where the tests needed to expand, which in turn showed me places that needed to be changed. To keep myself fresh, I paused after each subject/module to play "fix it" with the flaws I had found in examining the tests and code, then updated the note contents accordingly. Then I moved that chunk out of Current Action List and into Project Name. So, for example, before moving on to compiling the notes for B, I would make the corrections to A that compiling the notes had surfaced, then cut and paste it into the note for the project. I think this is a good thing, kind of like the old UNIX guys. I think it was DMR who once commented on writing the BUGS section of a man page, only to go fix the bug rather than release it.

Doing that check-the-tests, check-the-code, fix-the-code cycle allowed me to cover a lot of ground fast, without having to worry about keeping track of minute details. I also didn't space out from reading too much code and writing too little, and I still have it all saved in case I need to reference it later. After everything was done and I was ready to start the next phase of what I was working on, I transitioned all that stuff out of my Project Name note and into a note named "Project Name Test and Code Review", and left a line in "Project Name" pointing to it. Clear, clean, and simple as eating.

Two reasons I love TomBoy. First, it has *excellent* support for lists and indentation. It totally and completely blows away software like gmail and Word in that regard, and provides the ease of use that my outline mode in VIM was created to fill. It is incredibly easy to manage and manipulate info in a TomBoy note, and to use list or indentation structure to organize a note in such a way that you can parse it at a glance. I generally use indents for sectioning and lists for enumerations. The second thing I love is that whenever you write the name of a note or a file path, it becomes a clickable hyperlink, very much like a wiki. But unlike the majority of wikis, it is tied to the name of an actual note (entry) rather than a SillyNamingConvention. There's also great support for renaming notes.

The thing that TomBoy is really Not So Good At is managing TASKS instead of NOTES. It just wasn't designed to do more serious task stuff than a sheet of paper. Yet it is pretty damn good at the whole notebooking thing.

There is a program called Tasque, but for my needs it is much too simplistic to be useful. In fact, Google Tasks in GMail is probably as good (and sucks). I gave Tasque a whirl because it had both a TomBoy plugin and an RTM backend. In testing that out, I found the TomBoy side of Tasque really ain't that spectacular either. In point of fact, just integrating RTM back into my workflow has proven useful. My usage of Remember The Milk has been totally overhauled, and it would be worthwhile to someday write a TomBoy plugin to help associate their notes/tasks.

I've basically replaced the fixed (standard issue) lists I had with purely "Smart" lists, letting tagging and inheritance do the organizational legwork for me. This suits me, because I can set up a task with tags that make it show up in every list I want it to; e.g. a task can be in both my list of work tasks and my list of people to contact, be readily searchable, and have a way of associating notes, due dates, and all sorts of handy stuff with it. Basically all the good stuff for tasks that TomBoy lacks.

That’s the thing of it though,

  • Just Work without Kicking.
  • Let the software do the house keeping.
  • Allow me to focus on working rather than structuring.
And I’m quite happy, lol.

The other day I was thinking about a young semi-student programmer that I know, and thought about presenting him with a small set of teeth-cutting exercises. Small tasks that would serve a double purpose: help me evaluate his present aptitude for development tasks, and prepare him a wee bit for what his future education is likely to throw at him. Unlike what seems to be the norm in college environments, I can also gently push in more, ahem, practical directions than what most students I've met have learned. I still have yet to find out whether the number of stupid programmers on earth is due to the schooling or the students. Alas, that's drifting off topic.

When I had stopped thinking about the whole teeth-cutting thing, it was because no ideas for a starting exercise had come to mind. Today while chatting, one did: a bare bones version of the UNIX tail program.

(06:30:27 PM) Spidey01: A first exercise:
 language: your choice
 description: implement a program called ‘tail’ that displays the last N lines of a file, where N is supplied by the user. It need not be a GUI, but can be if you wish.
 goals:
  A/ Minimise the scope your variables are accessible from.
  B/ Describe the procedure (algorithm) you came up with for finding the last N lines in the file.
  C/ Think and discuss, is there a way to improve on your algorithm?

Tail is complex enough that some C implementations are horrendously overcomplicated, yet simple enough that it is easily completed without a gruelling mental challenge, especially if the -n option is the only one you care about. Goal A was chosen because it's a very common foul-up among programmers, young and old alike.

I wrote a more complex program than that in C years ago as a learning exercise; it was more or less a fusion of the unix cat, head, and tail programs. Since the student in question was using Visual Basic .NET (oi), I opted to use C# so as to keep things at least in the same runtime family. Here is a listing of the example code I wrote. The display here was done by feeding it into gvim and using :TOhtml to get syntax highlighted HTML to post here, then clipping a few things, hehe. The gvim theme is github.


/**
 * comments having // style, are notes to young readers.
 *
 * CAVEATS:
 *  line numbers are represented by int, and thus have a size limit imposed by
 *  the 32-bit integer representation of the CLR.  Whether the user's computer
 *  will run out of memory before that is irrelevant.
 *
 *  If there are fewer lines read than requested by the user, all lines are
 *  displayed without an error message. I chose this because the error message
 *  would be more annoying than useful.
 */

using System;
using System.IO;
using System.Collections.Generic;

class Tail {
    enum ExitCode { // overkill
        Success=0,
        Failure=1,
        NotFound=127,
    }

    static void Main(string[] args) {
        if (args.Length != 2) {
            usage();
        }

        using (var s = new StreamReader(args[1])) {
            try {
                var n = Convert.ToInt32(args[0]);
                foreach (var line in tail(n, s)) {
                    Console.WriteLine(line);
                }
            } catch (FormatException) {
                die(ExitCode.Failure, args[0] + " is not a usable line number");
            } catch (OverflowException) {
                die(ExitCode.Failure, args[0] + " too big a number!");
            }
        }
    }

    static void usage() {
            Console.WriteLine("usage: tail.exe number file");
            Console.WriteLine("number = number of lines to display from "
                              + "end of file");
            Console.WriteLine("file = file to read from tail");
            Environment.Exit((int)ExitCode.Success);
    }

    // Instead of doing the display work itself, returns a sequence of lines
    // to be displayed. This means this function could be easily used to fill
    // in a textbox in a GUI.
    //
    // It could also take a delegate object to do the display work, thus
    // improving runtime performance, but that would be less flexible.  In this
    // particular program's case, just doing Console.WriteLine() itself would
    // be OK. See the foreach loop over tail() up in Main() for reference.
    //
    // This method also sucks up memory like a filthy whore, because it stores
    // the whole file in memory as an IList<T>.  That's fine for a quick and
    // dirty prototype. In real life, this should use a string[] array of
    // length 'n' and only store that many lines. That way it could handle
    // files 5 billion lines long just as efficiently as files 5 lines long.
    //
    // I chose not to make that change in this example, in order to make the
    // code as simple to read as possible.
    //
    // Incremental development + code review = good idea.
    //
    static IEnumerable<string> tail(int n, TextReader s) {
        string line;
        var list = new List<string>();

        try {
            while ((line = s.ReadLine()) != null) {
                list.Add(line);
            }
        } catch (OutOfMemoryException) {
            die(ExitCode.Failure, "out of memory");
        } catch (IOException) {
            die(ExitCode.Failure, "error reading from file");
        }

        if (n > list.Count) {  // a smart bounds check!
            n = list.Count;
        }

        // implications of GetRange() using a shallow copy rather than a
        // deep copy, are left as an exercise to the reader.
        return list.GetRange(list.Count - n, n);
    }

    static void die(ExitCode e, string message) {
        Console.Error.WriteLine("tail.exe: " + message);
        Environment.Exit((int)e);
    }
}

This is a backtrace of the development process involved.
The program started simple: with the Main() method in Tail. The first thing I did was a simple check to see if args.Length == 0, exiting with a usage message. Then I remembered, while writing the Console.WriteLine() calls for the usage message, that I really wanted (exactly) two arguments. That's how the test became what's written above in the code listing. A couple of minutes later I moved the usage message code from inside that if statement into a usage() method. I did that to keep the Main() method more concise: unless you have reason to groan over function call overhead, doing that is often a good idea. (Up to a point, that is.)
From the get go, I knew I wanted the meat and potatoes to be in a method named tail() instead of Main(), for the same reasons that I created usage(). So Main() became a short using statement over a new StreamReader object.
First up was converting args[0] from a string representation of a number to an integral representation. At first I used uint (unsigned integer, 32-bit) but later decided to make it plain int (signed integer, 32-bit), because that's what subscripts into a collection are defined in terms of. I don't care if the user wants to display more than ~2,147,483,647 lines; it's only an example program, damn it! Because tail() shouldn't give a fuck about converting the program's argument vector to a type it can use (which obviously needs to be numeric), the caller (Main()) does the conversion.

I first tried args[0].ToUInt32(), and when that failed to compile, I hit Google. That gave me the System.Convert documentation on MSDN, from which it was easy to find the proper method. Because MSDN lists which exceptions System.Convert.ToInt32 can throw, and I know from experience that testing for such things is necessary ^_^, I quickly stubbed out catch clauses for FormatException and OverflowException. I wrote a simple set of messages to the standard output stream and an exit for each one. Then I converted them to use the standard error stream, and wrote an enum called ErrorCodes, complete with casts to int when needed.

It was about this time that I decided implementing a simple method like Perl's die() or BSD's err() would be convenient. Thus I implemented die() and replaced the repetitive error code. Functions are almost like a reusable template in that way. Then I decided that ExitCode was a better name for the enumeration than ErrorCodes, since it was being used more generally as an exit status (code) than as an error report; unlike Microsoft, I do not consider Success to be an error code ;). That was a simple global search and replace, or :%s/ErrorCodes/ExitCode/g in vim, followed by a quick write (save) and recompile to test. Job done.

While I was at it, I also had an intentional bug encoded into the exception handlers for Convert: originally the n variable was in a higher scope than the Convert (the using block instead of the try block). The error message for handling FormatException used n.ToString(), and the one for OverflowException used args[0]. The bug here was subtle food for thought: one displays the result of the conversion, which might not match what the user supplied, thus confusing the user; the other displays what the user entered, which might not be what the program had tried to use. That also pushes an interesting thought onto your stack: since the same data is used by both die() calls, why do we have to write and maintain it twice? Alas, I realised the n variable was in too wide a scope, and thus made that mind-play a moot point (by removing n from the scope of the catch statements). If you recall, using minimal scope for variables was actually the intent of the exercise, not error handling and code reuse.

Next I focused on implementing tail(). At first it was simple: just take a number and a StreamReader, and do a little loop over reading lines, for a quick test. When I checked the documentation on MSDN, I noticed that StreamReader was an implementation of TextReader rather than a base class. I always find that weird, but that's outside the scope of this journal entry. Thus I made the using statement in Main() create a StreamReader and pass it to tail(), now taking a TextReader. Originally it also had a void return type and simply printed out its data; I did that to make testing easier. The comments above make a sufficient explanation of why IEnumerable is used, and what I've already written about StreamReader/TextReader may suggest why it doesn't return a straight string[] (i.e. an array of strings).

The heart of it, of course, is just feeding lines from a file into a generic List of strings. Since the exceptional possibilities are more straightforward, I wrote the catch blocks first. After that it is merely a question of extracting the correct lines from the tail end of the list. That's a simple one to one (1:1) abstraction of how you might do it manually. I believe simple is the best way to make a prototype. Since the student in question was joking about how his implementation would likely crash if the line numbers were out of whack with what's really in the file, I was sure to include a simple check: if the number of lines requested is greater than what is really there, just scale down. Voila. The comments at the top of the listing above show why there is no error message displayed.

Extracting the items was a bit more of a question. My first implementation was a simple C-style for loop over the list using Console.WriteLine(). Then came the conversion to returning the data to be displayed, in which the tail() call in Main() became the above foreach loop. I added the comment about GetRange() more as food for thought (from a code reuse and optimization perspective). The math needed to extract the correct range of lines is trivial.

I then took a few moments to look things over, doing a sort of code review. A few things were rearranged for clarity. I also introduced a bug, breaking the specification goals: if you look closely enough at tail(), you will see that the variable line is only used inside the try block, yet it is declared at method scope. The #1 goal of the exercise was to avoid such things, hehe. I also thought about adjusting things to use an n-sized cache of lines rather than slurping the entire file into memory, but decided against it. To keep the code easier to read, since the target reader knows neither C# nor a lot of programming stuff, I just left comments noting the pros and cons of the matter.
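For the curious, here is a minimal sketch of what that n-sized cache could look like. This is my editorial illustration of the idea the listing's comments describe, not code I sent the student; tailBounded is an invented name, and it uses a Queue<string> so memory stays bounded by n instead of by file length:

    // Sketch: bounded-memory tail() using a queue as a sliding window.
    static IEnumerable<string> tailBounded(int n, TextReader s) {
        var window = new Queue<string>(n);
        if (n <= 0) {
            return window;          // nothing requested, nothing stored
        }
        string line;
        while ((line = s.ReadLine()) != null) {
            if (window.Count == n) {
                window.Dequeue();   // drop the oldest line once we have n
            }
            window.Enqueue(line);
        }
        return window;              // at most the last n lines, in order
    }

A 5-billion-line file would then cost only n lines of memory, which is exactly the trade-off those comments describe.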

Some people might find the method naming style odd for C#, but it’s one that I’ve come to like, thanks to Go and C#. I.e. publicly exposed functions get NamesLikeThis and those that ain’t, get namesLikeThis. Although personally I prefer C style names_like_this, aesthetically speaking.

The test file I used during the run was this:

line one
line two
line three
line four
line five

and most tests were done using various adjustments on:

terry@dixie$ gmcs tail.cs && mono tail.exe 2 test.txt

After sending the files over, I also whipped up a Visual Studio solution in MonoDevelop, and then realised that I had left a rather unprofessional bug: if the file named in args[1] didn't exist, the program would crash. That was easily fixed on the fly.

Overall the program took about an hour and a half to write. For such a simple program, that's actually kind of a scar on my pride, lol. But hey, I've barely written any code this month, I had to look up most of the system library calls in MSDN as I went along, and I tried to make it more polished than your typical example code. Which usually smells.

I can also think of a few ways to incrementally adapt that first exercise into several other exercises. That might be useful.

Reflections on C#

Lately, I've been trying to use C#. No sense in learning a language and never using it, ever, lol. Over the years, I have generally skipped getting into C#: too much like Java for my tastes. Some months ago I picked up the lang' as just a way of passing time. I found it interesting to note that C# is also about 3-4 times more complex than Java, syntactically. By contrast, most of the complexity in Java comes from APIs or the hoops you have to jump through to do xyz.

In putting my C# knowledge into practice, I've found that most of my linguistic gripes against it have been solved in .NET 3.0 / 3.5, and making portable code that works under Windows and Unix is just as easy as expected: in fact I test everything against the compilers from Microsoft and Mono. I've not had any troubles, and I am using something like last year's Mono version. Although, I must admit that I think of Mono's setup as the "Novell implementation" and .NET as Microsoft's >_>. The portability of C# is every bit as good as Java and the dynamic languages. In fact, if it wasn't for the mobile version (Java ME), I would say C# is more portable than Java these days.

C# already has features that are expected in Java 7 and C++0x, which everyone will be damned if they get to use any time soon. To top it off, given the blasted prevalence of Windows machines, just about everyone will have a liveable version of the .NET runtime that you can program to in a pinch; between ordinary use of the computer and newer Windows versions, just about all of them will have a modern version. Plus several popular unix applications (and parts of the Gnome software stack) are written in C#, so the same goes for many Linux distributions. Alas, the same can't be said of getting various C/C++ libraries compiled...

Compared to Java, C# is a mixture of what Java should have evolved into as a business language, plus a bit of C++ style. C# also goes to lengths to make some things more explicit, in a way that can only be viewed as Java or COBOL inspired. I'll try not to think about which. I think of professional Java and C# programming as our generation's version of the Common Business Oriented Language, without the associated stigma.

The concept of "C++ style" in C#, however, is something of a moot point when we talk about Java too. Here's a short comparison to explain:

// C++ class
class Name : public OtherClass, public SomeInterface, public OtherInterface { /* contents */ };


// Java class
public class Name extends OtherClass implements SomeInterface, OtherInterface { /* contents */ }

// C# class
public class Name : OtherClass, SomeInterface, OtherInterface { /* contents */ }
It should be noted in the above example that C++ trades ease of control over class visibility for fine grained control over inheritance. AFAIK, Java and C# have no concept of private / protected / virtual inheritance. Likewise, C++ supports multiple inheritance, while Java and C# are single inheritance. This all leads to a more verbose class syntax in C++.
Now this one is where you get to know how Java is run 😉
// C++ foreach, STL based
std::for_each(seq.begin(), seq.end(), func);

// C++ foreach, common technique
for (ClassName::iterator it = seq.begin(); it != seq.end(); ++it) /* contents */

// C++ foreach, to be added in the *future* standard (see below for disclaimer)
for (auto elem : seq) /* contents */

// Java foreach, <= 5.0
for (Iterator it = seq.iterator(); it.hasNext();) /* contents */

// Java foreach, >= 5.0
for (ElementType elem : seq) /* contents */

// C# foreach
foreach (var elem in seq) /* contents */
As you noticed, there are three different examples for C++. The first uses the for_each algorithm and leads to rather simple code; the second is the most common way; the third is being added in C++0x, and I haven't bothered to read the details of it since the version of GCC here doesn't support it.
C++ again gives very fine grained control here; the for_each algorithm and iterator methods are extremely useful once you learn how C++ really works. If you don't, then please don't program seriously in C++! The C++0x syntax is more or less adding a foreach keyword: exactly what you would expect a foreach statement to look like, if C++ had one. Some things like Boost / Qt add a foreach macro that is mostly the same, but with a comma.
Java enhanced the for statement back in 2004, when Java 5 added a foreach-like construct; Java hasn't changed much since then. When you compare the keyword-happy syntax of Java to the punctuation-happy syntax of C++, it becomes clear that Java's developers decided doing it C++ style was worth more than adding any new keywords, like foreach and in. Guess they didn't think to steal Perl's foreach statement for ideas on how to naturally sidestep it.
C# on the other hand uses the kind of foreach statement that a Java programmer would have 'expected': one that actually blends in with the language rather than sticking out like a haemorrhoid. I might take a moment to note that javac can be so damn slow compared to C++/C# compilers, that the lack of type inference in Java is probably a good thing!
In terms of syntax, Java is like C among its OO peers: about as small and minimalist a syntax as you can get without being useless. I wish I could say the same about Java in general. Some interesting parts of C# include properties and the out and ref keywords.
Here’s a comparison of properties in Java and C#:
class Java {

    private PropType prop;

    public PropType getProp() {
        return this.prop;
    }

    public void setProp(PropType prop) {
        this.prop = prop;
    }

    public void sample() {
        PropType old = getProp();
        setProp(new PropType());
    }
}

class CSharp {

    public PropType prop { get; set; }

    public void sample() {
        PropType old = prop;
        prop = new PropType();
    }
}
C# has a very sexy way of doing getter/setter-like methods for properties. Personally I prefer the more C++-like style of just having a public field, unless you need to hook it (with a getter) or use a private setter. I like how C# can make it look like a field when it's actually a getter/setter-like construct; that means you don't have to remember which fields are accessed directly and which need member function calls when writing code. Java convention is getter/setter bloat; C# convention is to use properties for anything non-private. I hope C# 5.0 or 6.0 will replace { get; set; } by just adding a property keyword alongside the access modifier.
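To illustrate the two cases I mean, here is a hedged sketch (the class, names, and PropType are invented for the sake of example):

    class CSharpStyles {
        private PropType hooked;                     // backing field for a hooked getter

        public PropType Hooked {
            get { return hooked ?? new PropType(); } // getter hook: lazily supply a default
            set { hooked = value; }
        }

        // settable only from inside the class, readable anywhere
        public PropType ReadMostly { get; private set; }
    }

The first property hooks the getter; the second keeps the setter private while still looking like a plain field to callers.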

C++ is just as lame as Java in doing getter/setter methods, except you can (ab)use the preprocessor for creating basic accessors like the above, as well as any similar methods you need but don't want to copy/paste and edit around. Java and C# always make you write your own, unless they are the basic kind. Tricks involving Java annotations and subclassing can kiss my hairy ass. It's also worth noting that some Java projects use an insane amount of getter/setter code. Come on guys, using an external tool is not the right solution.

When we compare the ages of these languages (C++ => 27 years old; Java => 15 years old; C# => 9 years old), it becomes obvious that C# is the only one that doesn't suck at the concept of "Properties", and getters/setters in general. Perl made me love constructs that respect the programmer's time more than the compiler writer's: you should too.

To anyone who dares note that Java IDEs can often auto-generate getters/setters for you, and dares call that better than language-level support, I can only say this: you're a fucking moron. Go look up an Abraham Lincoln quote about silence. Now, if someone wants to be constructive and create another Java example equal to the C# example in the above listing, I'll be happy to add it. The rules: it must be shorter than the existing Java example, and use no subclassing, no beans, and no external programs or libraries. Be sure to note what Java version it requires. Cheers.

The ref and out keywords in C# are actually kind of oddities if you come from another mainstream language. In C it is not uncommon to pass a variable (perhaps more correctly, a block of memory, if you think about it) to a function, and have the function modify the variable's value instead of returning it.

    /* Common if not enjoyable idiom in C */
    if (!do_something("some", "params", pData)) {
        /* handle failure */
    }
    /* use pData */

In this case, pData is a pointer to some data type, likely a structure, to be filled out by the do_something function. The point is, it's intended as a mutable parameter. In C/C++ it's trivial to do this for any data type, because of how pointers work. Java passes by value just like C and C++ do: you can modify non-primitive types because a reference is used, not the 'actual' value, making it more like a reference than a value type, in CS speak. C# does the same thing.

    // Java pass by value
    public void test() {
        int x=10;
        Java r = new Java();

        r.setProp(PropType.OldValue);
        mutate(x, r);
        // x = 10; r.prop = PropType.NewValue
    }

    public void mutate(int x, Java r) {
        x = 20;
        r.setProp(PropType.NewValue);
    }

Now a little fun with the self documenting ref keyword in C#:

    public void test() {
        int x = 10;
        var r = new CSharp();

        r.prop = PropType.OldValue;
        mutate(ref x, r);
        // x = 20; r.prop = PropType.NewValue
    }

    public void mutate(ref int x, CSharp r) {
        x = 20;
        r.prop = PropType.NewValue;
    }

The out/ref keywords are similar; the difference has to do with definite assignment; RTFM. The important thing is that it is a compiler error to pass the data without the ref/out keyword at the call site. I'm both enough of a Python and a C++ programmer to enjoy that. This explicitness helps catch a few typos, and helps document that the argument is meant to be passed by reference, not value. That's good, because a lot of programmers suck at documentation and some also blow at naming parameters. I think the contractual post/pre conditions in Spec# are a good thing: they remove writing the handlers from the programmer, and save the programmer from rewriting the flibbin' things in prose somewhere in the documentation. Not to mention the classic "oops" just waiting to happen in less DbC-oriented syntaxes. Hate to say it, but the ref/out keywords' presence in vanilla C# is likely due to Win32 API documentation conventions o/.
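As a quick hedged sketch of that difference (the method and names are invented for illustration): with out, the callee must assign the parameter on every path before returning, whereas ref requires the caller to have assigned it first; both keywords must be repeated at the call site.

    // out: callee must assign 'half' on every path before returning.
    static bool tryHalve(int x, out int half) {
        half = x / 2;
        return x % 2 == 0;
    }

    static void demo() {
        int h;                          // no initialization needed for out
        if (tryHalve(10, out h)) {
            Console.WriteLine(h);       // prints 5
        }
    }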

Where C# really rocks is in the CLI. Java has something good going for it: over the past 15 years the Java Virtual Machine (JVM) has been heavily tuned for performance, and Mono and HotSpot also present quite an interesting set of options (that .NET lacks, afaik). I assume that Microsoft's implementation has been battle tested for performance as well.

The thing of that is, the JVM was originally designed to run JAVA; first and foremost, at the end of the day, that is what it had to do. The Common Language stuff, on the other hand, was intended to run several different languages. Admittedly, languages native to the CLI tend to be similar, but so are most languages in general. The interoperability between CLI languages is wonderful, and at least in native .NET languages it tends to be 'natural' enough. By contrast, things crammed into JVM bytecode tend to become rather ugly IMHO when it comes to interfacing with Java. I'm not sure if that's due to the JVM or to the language implementations I've seen; the changes coming in Java 7 make me guess it's the former. The CLI is likely the next best thing to making a group of languages compile down to native code (for performance) and share some form of common ABI. Fat chance that will ever happen again. I'm not sure I want to ponder VMS here, but the whole CLI thing tends to work quite nicely in practice. The performance cost is worth it for the reduction in headaches.

I'm sure that in terms of performance Java mops the floor with Mono in some areas, because of how much hacking has gone into making it a cash cow. That the C# compilers seem to run rings around the de facto standard Java compiler is what really catches my interest, performance wise. Using the Mono 2.4.4 and Java 1.6.0_18 compilers on my very modest system, mcs processes a minimal program about 30% faster than javac, and in real operation it tends to kick ass. When you consider that each compiler is also implemented in its target language, Java really gets blown away. OK, maybe I care more about compile times than many people; it's the virtue of using an older machine :-P. Combine that with how many slow, buggy monstrosities have been written in Java, and I'll salute C# first. Another plus is less "our tools demand you do it THIS WAY" than what Sun threw at everyone. Piss on javac and company.

What has hurt C#, in my opinion, is the Microsoft connection. The thing with Novell doesn't help either. That C#, like Java, is not exactly an insanely popular language among hackers, so much as enterprises, is another. The things that have hurt Java: being so closed, and being academia's choice for stupefying students.

What's the upside to using Java over C#? Easier access to Java libraries, (J2ME) mobile phones, and more finger exercise from all that needless typing! Beyond that it's essentially a win-win in favour of C#.

Stupid, just stupid MS!

Another from the same Microsoftie publication:

Consider using events to allow users to customize the behavior of a framework without the need for the users to understand object orientation.

Now I ask you: we are using an object oriented language to build an obviously object oriented assembly (the guide is on class libraries), in an environment almost universally understood by its users to be object oriented, and on top of that, in a language that uses object oriented programming techniques to implement event handling. So what part of that does not require understanding object orientation?

What next, telling the programmer to only use stack allocated data because relying on a garbage collector is too hard a concept to comprehend? Seriously, who the **** writes **** like this.

LEARN HOW TO ****ING PROGRAM **** IT!!!
Post Script: in the above quote from MSDN, I added the bold on 'understand' for emphasis, whereas the MSDN library displays the entire message in bold.

A Stupid Warning Sign for Programmers

Important! The term “protected” does not imply any security checking or caller validation. Protected members can be accessed simply by defining a derived class of the declaring type.

I do think that is the Object Oriented Programming version of a stupid warning sign. The quote above about protected members comes from Microsoft's design guidelines for developing .NET class libraries, which obviously means common object oriented principles apply... you would hope.

That has also got to be the stupidest cautionary message that I have ever seen in a document intended for programmers. What next, writing “May irritate eyes” on a can of pepper spray!?

The Ultimate Zombie Fighting Kit

This is, in my humble opinion, the ultimate kit for fighting zombies, in the event that you should ever find yourself stuck in such a horror flick:

2 x Swords in scabbards across the back. A pair of wakizashi, or a wakizashi and katana pairing, would work perfectly, or something else along those lines. While caring for a katana in some form of post-apocalyptic zombie invasion would technically be a major drag, such a weapon's cutting ability would be seriously useful. Either way, a single sword is more than likely going to be overwhelmed, and shorter blades are less likely to get in the way indoors. Using a chainsaw is more likely to get you or a friendly killed than the zombies.

4 to 6 x Knives. 1 or 2 alongside each boot for backup, plus another pair along the waist/legs for use as needed. Personally I think a kukri or two would be handy; save the rest for emergencies or quick throws. You can never have enough knives in a zombie fight. Include some shuriken if possible (the needle type are probably more relevant).

2 to 3 x Semi-automatic pistols. Capacity is more important than stopping power, as long as it handles head hunting sufficiently. Although going akimbo (bring a 4th pistol) might be useful against a zombie swarm, using your strong hand to fire until dry, then swapping the pistol to the weak hand as you draw the next with your strong hand, is likely better. Few people can reload like Lara Croft, and head shots tend to be more reliable in such dire straits, unless a minigun would be the only thing capable of saving your bacon ;). It is probably best if they all can share magazines as well as cartridges, for obvious reasons: reloading on the run, and through a rare lull in targets, is easier that way. Arguably the main limitation is weight. If dual wielding, make sure to use tracers so you leave a round chambered by reload time. The reason? Because if you've got to slap in two mags and rack two slides, you're gonna be zombie chow.

1 x Primary weapon. An assault carbine or SMG is probably the best balance of accuracy and ammo. Shotguns are useful against swarms but limited in ammunition capacity, and the 'dream' weapon for zombie fighting doesn't exist: namely a double barrelled, semi-automatic shotgun fed from a bag of shells. Save the scatterguns for when there's a well rounded group of people to fend off zombies with. Personally I would fancy an H&K MP5 or an old M2 carbine, both being much lighter all around than the CAR15/HK416 family and more than powerful enough, unless again it's time to reach for a minigun. If not using swords, slinging an Uzi or a sawed-off shotgun across the back for backup is a good idea.

+ Explosives and Water. Always useful in a pinch, and sometimes an incendiary would be useful. Exposed hand grenades would be unwise, as would anything else that could be 'armed' by a zombie yanking, pulling, or crushing. This makes something like dynamite more valuable, with the downside of course being that you might be dead before it gets lit and blown. At least one canteen is useful, in case you are separated from other survivors or temporarily stranded.

+ Steel tipped boots. 'nough said.

You are now at least 15-20 lbs heavier, without counting the ammunition, explosives, or water. The ability to take on a good 80+ zombies without stopping, on the other hand, is worth it. Just be sure to get in shape before doomsday.

When you consider how unlikely it is that you'll be able to find a hill top surrounded by motion-tracking miniguns and a mega load of ammo/power, let alone reach it alive for one reason or another, it is a very good thing that such an 'arsenal' is only needed in Hollywood or video games! Besides, if such a thing could ever happen in the first place, you would probably be infected by a virus turning you into a zombie before you would have time to worry anyway 😛

A technical and pseudo psychological peek into Raven Shield's AI


In putting the last touches on Private Airport (kai), I've been spending some more time studying how the game's Artificial Intelligence works; special thanks to [SAS]_Maj_WIZ for pointing out a more thorough list of developer diagnostics ;). Since this is the closest look I've taken at Rainbow's AI in about 5 years, and an even closer study of terrorist and hostage behaviour, I think it's only fair to make a journal entry about it.

Here is a summary of my findings, with annotations about theories I've maintained for years:

The laser eyebrow problem

From the standing position, the source of each pawn's aim is directly behind the skull, roughly where the head would be if the pawn stood erect (like a tango). The point of aim passes directly through eye level. When crouched, the point is roughly behind the collar bone, and passes through the lower nasal cavity. In prone, it is much the same.

In real life (and any decent shooting game), that aim point should be chained to the weapon in the player's hands.

How Tangos Fire Out Their Ass

Experiments with ShowFOV and GunDirection demonstrate that there is no real connection between pawn animations, where the AI is looking, and where the AI is actually aiming; or as I have been saying for years, What You See Is Not Always What You Get. The most humorous moment in my testing occurred with the rear guard 'facing' the rear but covering the element's front. My research shows that even if Rainbow is aiming at the target, they fail to engage terrorists outside their point of view. I.e. in the case of that rear guard, if a tango had walked up behind the element, it would have taken a moment for the guard's aim point to realign with his field of view, resulting in the death of two thirds of the Rainbow AI element!

In short, this means the AI walks around much like a tank with an independent turret, only the artificial intelligence is riding in the driver's seat, not the gunner's. Coincidentally, this is why the game has no real concept of muzzle clearance (as I have also been saying for years).

This may explain some of the more rolling-on-the-floor-laughing moments that occur when a 'how close can you go' opportunity crops up in game. To prove a point, I trapped a tango in a corner and had him empty his magazine into me. Side stepping away and deactivating GOD mode, he was able to fire several rounds point blank into my pawn before his aim point rotated to my new position; I died once the aim point got to me, not when the animations showed him shooting me.

Also, hostages always seem to aim directly at Rainbow, but luckily the terrorist AI doesn't notice that (or they would always see us coming).

So far, these tests seem to satisfactorily prove my roughly six-year-old hypothesis that the 'tango firing out of his arse' problem is indeed codified into Raven Shield by design. Network latency, the (usual) use of unreliable UDP communication for multiplayer game play, and the system's divergent means of rendering and applying these actions (seeing, aiming, firing, hitting) suggest that there is no way to solve the aforementioned (annoying) problem without fundamental changes to the way Raven Shield works. Since that is not viable, one can only look at working around the problem; even with more processing power than a Cray XT5 supercomputer, you would also need a network link between clients and server with a throughput likely unobtainable over the modern internet.

In layman's terms, this means that no matter how good your computer hardware or internet connection is, the computer will always be able to cheat you. Should that change, it will most likely be so far in the future that we will be dead by then.

On the upside, I do believe the game was probably made like this in order to give the player more 'time' to shoot first (yes, some tangos have very slow reaction times: this appears to be why). It also appears to explain many of the discrepancies between common online play, single player, and LAN parties. However, it is also worth noting that this may instead be due to limitations of the Unreal Engine (2) or of Raven Shield's own design and implementation.

Interesting note: now that an illegal RvS 1.60 SDK is available on the internet, it may be possible for cheaters to develop a method to take advantage of these problems. Imagine walking up on someone in Adversarial because you think they are looking away, then they shoot you out their arse ;). Luckily the engine has some respectable countermeasures against such things becoming (more) common.

Looking at the AI's skills

One of the very few things that I have ever been able to praise Rainbow Six 3: Raven Shield for, is that the Rainbow AI often 'appears' to be covering their sectors, even if they totally suck at room clearing. In fact, their room clearing behaviour suggests that either the AI engine is extremely limited, or the game was made by people who know as much about CQB as the average RvS player, that is: absolutely nothing.

Since there is no connection (see above) between where the character on screen is looking (seeing) and where they are aiming (pointing), What You See Is Really Not What You Get. Have you ever seen, in single player, Rainbow looking straight at a tango and getting owned without firing a shot? This is why. It is also why we can do the same to tangos online.

Terrorist movement is closer to What You See Is What You Get than Rainbow's; IMHO the movement for Rainbow was done to 'look' more realistic than it actually is (or the AI programmer sucked even worse than everyone thinks). Terrorists also seem to have a tendency to remain fixed on hostages with their aim point, even while walking around a fair distance away. Ever got first sight on a tango, shot him thrice, only to curse at him 'magically' shooting the hostage with barely a twitch? Yeah baby... he had that gun pointed at the hostage all the while he was looking into your eyes.

Every type of AI in the game demonstrates very poor skills at getting around the maps. I remember when I first started learning about pathing in Unreal Engines, I couldn't help but think, "We're still living in a dark age"... and that's all I will say on that, lol.

Hostage behaviour, well, what can I say... what you see is exactly what you get: a stupid slug. On the upside, the terrorists do not show any signs of being aware of hostages; this is why, for example, if a Rainbow goes down while escorting and the hostage becomes a prisoner again, the terrorist may continue walking past. All the fancy stuff about tangos shooting the hostage is a mixture of the game's rudimentary AI and things the map designer has scripted.

Since I don't believe in taking advantage of the map design, or exploiting things I shouldn't know in real life about the missions, I will not make a closer study of that kind of stuff, nor will I tell others much about it. If you want to figure out how to take even that (ugly) edge over the game, you can go learn how RvS works for yourself. Beat it, punk.

I just study the mechanics and psychology of the game's AI, you know, the whole know-your-enemy thing. In actual ops with SAS, I tend to employ more knowledge of human psychological behaviour than of how the game was designed. That is both by intention

A Random Bullet Test

A tango surrendered and I fired a shot into his head; the round should have impacted his hand (placed on his head). There was a blood puff and a bullet hole in the wall directly behind his head, yet the tango survived. The weapon used was an M16A2 at approximately 350-450 unreal units.

This suggests that terrorists have no brains, since the angle means the bullet must have penetrated his skull, with his hand (I pray) being all that slowed the high velocity FMJ round down enough to prevent a kill. It suggests that any hit box modifiers applied were for a 'hand' (arm) shot rather than a head shot.

N.B. other tests I have done over the years suggest that this kind of problem, and the ballistics model used, is why sniper rifles may incur a two-shot requirement on tango kills, and why JHP/FMJ selection has such exaggerated effects in SWAT 4.

Anyone still awake and scratching their heads?



All this is based on roughly 6+ years of playing the game, much more than trivial knowledge of such matters, and being a very highly observant individual.


Two schools of thought

random thinkings from a spider.

There is certainly more than one way to solve most problems; it's just a matter of their merits. This paper serves to compare and contrast two common methods of solving a simple problem.

In our example, let us say we have a large and complex e-mail message. We began editing it on one computer, then copied the draft to a USB stick and took it home. We then began making some further adjustments. Later, we returned to the first machine to finish the draft, but realized we didn't copy the file back to the USB stick! We continued to edit the draft, and took it home again. We now have two different but related forms of our message. The message is quite big, and we don't want to rewrite any of the good parts, so what do we do? We have to compare and merge the two different messages into a single final draft.

How might we solve this problem?

Most people I know would open each version in a mail client (e.g. Outlook, Thunderbird) or an editor (e.g. notepad.exe, gedit), place the windows side by side, and then go about visually comparing the files, either merging one into the other or using a third window. This is slow, error prone, and rather clumsy.

Since I'm not willing to take that much time, I would apply software to fix this problem. This means we need software to compare, merge, and edit the files. How might these programs work?

Common software that comes to mind: diff, patch, diff3, kdiff3, kompare, meld, windiff, winmerge, and most decent (programmers') editors support the task as well (vim, emacs, jedit, etc.), along with decent Version Control Systems (VCSs) and Integrated Development Environments (IDEs). Some programs are textual, some are graphical, some are an amalgamation of parts, while others are heavy lifters in their own right.

We’ll take a look at how two different styles might result in software to complete these simple tasks:

  1. Compare two or more files.
  2. Merge two files into a third.
  3. Allow the editing of changes to be done.

I’ll call them styles A and B.

Let's start off easy: how can we compare the files? It's rather easy for a program to do a byte by byte, or character by character, comparison of a file's contents, but it would be nice to be able to see what actually changed, in some format we can understand.

In Style A, we write a simple program that can pretty print the differences to a text file (or better yet, an output stream). If we want a more interactive interface, we can view the output file in an editor or pager, or write such a program of our own: one that understands our pretty printed format and can display it to the user. So let's say we now have a simple 'compare' program that outputs text, and a simple 'viewer' program that accepts the result of the compare and allows the user to browse it on their display.
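To make Style A concrete, here is a minimal sketch of such a 'compare' in C#. It does a naive, position-based, line by line comparison (a real diff would compute a longest common subsequence), and the output format is invented purely for illustration:

    // compare.cs: pretty print the differences between two files, line by line.
    using System;
    using System.IO;

    class Compare {
        static void Main(string[] args) {
            string[] a = File.ReadAllLines(args[0]);
            string[] b = File.ReadAllLines(args[1]);
            int len = Math.Max(a.Length, b.Length);

            for (int i = 0; i < len; i++) {
                string left = i < a.Length ? a[i] : null;
                string right = i < b.Length ? b[i] : null;
                if (left == right) {
                    continue;                   // identical, nothing to report
                }
                // plain text on stdout: trivial for a 'viewer' (or an HTML
                // filter, or any other tool) to consume downstream
                Console.WriteLine("@ line {0}", i + 1);
                if (left != null)  Console.WriteLine("< {0}", left);
                if (right != null) Console.WriteLine("> {0}", right);
            }
        }
    }

The point isn't the diff algorithm; it's that the output is a dumb text stream any other tool can consume.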

In Style B, we write a program 'compview' that generates a format suitable for its own internal use, perhaps a list of change-nodes with data on where and how to display each change. Then we set out to write a viewer to present this on the user's display, in essence creating a pager-like program, whether textual or graphical in nature. For ease of viewing the differences over time, someday we may add an "Export to file" feature that dumps the data to a suitable format.
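A hypothetical sketch of what that internal format might look like (every name here is invented for illustration):

    // compview's in-memory model: what changed, where, and how to display it.
    enum ChangeKind { Added, Removed, Modified }

    class ChangeNode {
        public ChangeKind Kind;    // what happened
        public int Line;           // where to display it
        public string OldText;     // left-hand content, if any
        public string NewText;     // right-hand content, if any
        public string Style;       // how to display it, e.g. a colour hint
    }

Note that "Export to file" now means serializing this structure, which quietly ties the on-disk format to whatever fields this version of compview happens to have.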

Now let’s take a step back and look at what we can do with these kinds of solutions. Since style A developed ‘compare’ to output a very simple textual stream, we can view the result in any program that we like, without having to use the supplied ‘view’ program. We might even develop a program (or change compare) to [re]generate the output in HTML, so that we can view the comparison in a web browser instead.
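That might look something like this, where ‘compare’ and ‘compare2html’ are of course hypothetical programs (the latter borrowed from footnote 9), and the browser is whatever you fancy:

$ compare draft-work.txt draft-home.txt | compare2html > changes.html
$ firefox changes.html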

In short, the design choice makes the view program almost superficial, not to mention that it can be kept quite simple; it exists for people without better tools7, rather than the whole kit and caboodle. The ‘compview’ program built in style B will likely have a close relationship between the file comparison and viewing operations; perhaps to the point of excluding the ability to do the viewing in an external program without exporting to a suitable format8, which may or may not be easy to use with other tools. In style B, even if the internal format was XML or HTML, compview would still likely contain half a web browser9.

Now that our software can compare two files in a way we like, let’s move on to the process of merging the two files.

Style A might create a program, ‘merge’, that understands the output of compare, or a filter that can convert the output into instructions for another program (edit) to complete the changes itself. Some operating systems (e.g. UNIX and DOS) provide suitable editors for this task10; some people might be inclined to implement their own batch or stream editor (I would suggest installing ‘ex’ and ‘sed’, or making it a *really* expressive line editor). The upside of the latter approach is that the compare format and the file merging can be more readily separated; the user could even find other ‘edit’ programs, or intersperse the chain of commands with other compatible tools.
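This is exactly the chain footnote 10 alludes to; with stock UNIX tools it might run like so, using diff’s ed-script output format and appending a write command for ed:

$ diff -e old-draft new-draft > changes.ed
$ (cat changes.ed; echo w) | ed -s old-draft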

Style B would likely take it upon itself to conduct the merge operation directly. One downside of this would be the means by which we save or ‘Export’ the comparisons from compview. If a format based on compview’s internal data structures was used, different versions of compview might not even be able to understand it; oh joy, now we have to remember which versions of our compare & merge app knew what! But all in all, the program will probably develop some good interactive comparison and merging features, if the programmer doesn’t go postal first.

What about editing the file, or adjusting the file comparisons to create a more complex merge? If style A provided an ‘edit’ back end, we don’t even need another text editor to work with compare’s output: but we could use our own11 and feed the result back into the tool chain. Style B might provide support for an external editor, or build one into compview’s interface. Since an external editor would mean compview would have to translate its data to text for the editor, then back into the internal data, it would be a major pain. Building in an editor might be a fairly simple task or a real problem; most GUI toolkits provide an editable text area, but some TUI toolkits may not. Depending on the libraries being used, the programmer might have to hash out an HTML WYSIWYW12 text editor component built into compview13; assuming compview isn’t half a web browser in its own right yet.
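Feeding our own editor’s work back into the chain is as simple as it sounds; again, ‘compare’ and ‘merge’ are this essay’s hypothetical tools, and the file names are invented:

$ compare old-draft new-draft > changes.cmp
$ vim changes.cmp
$ merge old-draft < changes.cmp > final-draft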

Bonus: change of interface

Let’s say we want to convert from a graphical to a textual or from a textual to a graphical interface.

The compare and related utilities developed in style A could have been done largely in a display-agnostic way; what does a file comparator need to know about user interfaces? In this case, the ‘view’ program would only need replacing with a tool that supports the other interface style. Another benefit: because of the separation between tools, even the changed interface may be changed yet again in some strange new way21. The possibilities are virtually endless, and shared libraries may be utilized to ease related tasks.

Compview on the other hand, developed in style B, is likely chained to its interface. If the code monkey was smart, as much of the code base as possible would be abstracted20 away from the interface and kept simple enough to be used as a library. If the library didn’t exist previously, or was not user-interface agnostic, it will be much more labour intensive to make it so (or create it) after the fact than to have done it in the first place22. The ever increasing bloat and complexity of compview may be its untimely downfall; it must either adapt to the changing world, or be replaced. If it cannot do so easily, it will either fall by the wayside or restrict its users through its own (lame) limitations.

Discourse on the Results

Either of these programs, the compare, view, merge, and edit suite or the compview jack of all trades, would be suitable for completing the task at hand; but what comes of these two courses or styles of solving it?

Style A may have a tendency to fragment things or, depending on the programmer’s mind, fall into more of an ‘odds and ends’ grouping than a useful tool set. A great advantage: because each element is a separate program, they have very minimal knowledge of one another’s internal workings. Because all communication is accomplished through simple streams of input and output, you can even replace parts of the suite with other tools14. Small or big changes could be done with a change of program, and have very little impact on how the operation is performed by the tool set overall; the only major issues being the comparison format and editor instructions15. Properly documenting the protocols and relationships between the programs would make things all the easier, both for maintainers and other monkeys who may need to replace or tweak parts of it. Adding features can be done fairly easily, but making big changes may break compatibility with the last decade’s version.

Style B takes a swiss army knife point of view: whatever needs to be accomplished should be done by ‘compview’ whenever possible. Depending on the competency of the programmer16, compview can fall into many subtle traps17. In all probability, compview will either be troublesome to inter-operate with other such tools, or require irksome filters, or worse: building the filters into compview in a way that may or may not be user serviceable. How closely entwined the various elements are would likely depend on the attentiveness and skill of the programmer; some people create maintainable tools, others create balls of horse dung that they may end up hating years later. Adding new features may also require massive restructuring of the program, depending on how it was implemented.

In my opinion, compview would likely be capable of becoming more efficient at what it does than the compare/view/merge/edit suite, but it is also more likely to become inefficient and more difficult to maintain in the long run, because it is more difficult to engineer such a complex program correctly. In a way, you could say it just has too many moving parts…. Why cram the engine, transmission, and power steering into one huge moving part, when you can have three smaller moving parts?18 One interesting side effect: the software created by style A may be fairly easy to script, but the program from style B would have to embed its own scripting language23.
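Easy to script, as in gluing the suite into a bigger job takes one line of shell rather than an embedded interpreter; a wee sketch with the hypothetical ‘compare’ and some invented directory names:

$ for f in drafts/*; do compare "$f" "merged/${f##*/}"; done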

I generally opt for pieces that work together over a simple protocol, because it helps keep me from shooting off my own foot later19.

Footnotes

0. Rather than using a USB stick, I would actually use webmail or some other network solution – and avoid this kind of problem altogether.

1. Most people that I know would first have to figure out how to get the e-mail message shuffled between computers, let alone view it side by side.

2. I would probably use Vi IMproved’s ‘diff’ mode to interactively compare, edit, and merge the files.

3. I believe the modern GNU diffutils and friends have grown horns of their own.

4. We might want to compare our final draft against the two old drafts!

5. These may or may not be the same thing, depending on who, what, where, and when.

6. I.e. allow feeding the program’s output into another program, without the need for shared memory or (insecure) temporary files.

7. I like less, but using vi as a readonly ‘view’ is sometimes fun.

8. XML, CSV, Binary dump of internal data; etc.

9. And just like ‘compare’ would without a ‘compare2html’ filter, bloat out from having to escape various character data into the format, or risk breaking interoperability with other programs (e.g. Internet Explorer, if compatibility with it is a possibility in the first place).

10. In point of fact, because of the flexibility that pipes and redirection offered the UNIX system, it was possible to use the early ‘diff’ program and ‘ed’ editor to carry out this kind of solution. As more useful ‘diff’ output formats became the norm, Larry Wall’s ‘patch’ program was created to heuristically apply the changes to the file set more effectively than was previously possible, replacing the ed program and simple ‘ed diffs’ once and for all (actually, patch could feed ed diffs into ed if that format was used). Since the takeover by non-scriptable screen editors was well under way by then, I can’t help but wonder what shape Larry Wall’s patch might have taken if a more expressive program than ‘ed’ had been available.

11. I use Vi IMproved (vim); Emacs, KATE, jEdit, and TextMate are also good choices. I have little love for tools like Notepad, Edit, or feature-packed clones.

12. I say What You See Is What You Want because What You See Is Not Always What You Get.

13. A particularly poor programmer or engineer might make this editor component very tightly integrated with compview, rather than something that could be reused on other projects and plugged into our current one. Given the nature of compview, I think the former is a more likely psychological trap than the latter.

14. Much like patch has superseded ed for batch processing of diffs.

15. A smarter programmer will make it easy to retool ‘merge’ to feed instructions into a new editor; a brilliant programmer might anticipate the need ahead of time, and choose to supply an instruction set telling merge how to generate said instructions for the associated ‘edit’ tool, rather than designing it for any specific editor component.

16. And the stress to get it done ‘on time’.

17. Subtle traps to most, but obvious to me. I’ve had to deal with too much software that just ‘sucks’ over the years not to notice ;-}.

18. I’m scared to think about the auto-industry.

19. That, and I’ve found many more powerful and flexible tools that can be used that way than I have ever found swiss army knives that can match such flexibility.

20. Note that I do not mean a group of abstract base classes.

21. Such as from Tcl/Tk to C/Gtk+ or Java/Swing.

22. This is one of the traps new or casual programmers seem to fall into.

23. I would suggest a language like JavaScript or Python, if possible, for such a task. Unless it resembles a common language or is suitably domain specific, I dislike programs that create their own scripting or extension languages just for one specific application’s plugins/automation. The last thing a user needs is to learn YOUR app’s language, one so specific to your program that it is useless everywhere else. A customized dialect of LISP, or a class library mated with a known language, is much better.

Installing SWAT 4 TSS on GNU/Linux

Disclaimer

The game is not supported by CodeWeavers, and I don’t support users of this posting either. However, comments and corrections are always welcome – spam will be killed. Also note well: I prefer the command line interface, so I do not use file managers such as Explorer, Nautilus, or Filer often.

Prerequisites:

  • Basic understanding of GNU/Linux user accounts and permissions
  • Having a working GNU/Linux distribution installed (Ubuntu or Debian recommended)
  • The ability to understand file paths and simple file operations (e.g. copy) without a photograph or explanation of how to copy/move/delete files.
  • Legal copies of both SWAT 4 and SWAT 4: TSS, a total of three CD-ROMs

Step one: Purchase a copy of “CrossOver Games for Linux” from CodeWeavers, or download the free 7 day trial.

Step two: install CrossOver Games for Linux, henceforth called simply cxgames — for brevity’s sake!

The installation process is simple; the method I suggest is to install it as root, for all users. This basically amounts to executing the install shell script as root, either with su or sudo:

$ su - root
# ./install-crossover-games-demo-7.1.0.sh

$ sudo -H ./install-crossover-games-demo-7.1.0.sh

Either way, root will need access to your display. With the su method, on a secure system you will have to grant root this access. The prescribed method is to run

xhost +local:root

and then set your $DISPLAY value accordingly once you’ve su’d to root. N.B. a lot of documents say

xhost +localhost

which will grant everyone on your machine access to the display, instead of just adding root.
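Putting the su route together, it looks roughly like this; I’m assuming your X display is :0, adjust to taste:

$ xhost +local:root
$ su - root
# DISPLAY=:0; export DISPLAY
# ./install-crossover-games-demo-7.1.0.sh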

Once you’ve launched the install script, it will walk you through the rest – mostly via a graphical installation wizard.

Step Three: Install SWAT 4

Insert your SWAT 4 disk #1 into the drive, and make sure it is mounted. Some Linux distributions are set to automount it like Windows does (Ubuntu); others require you to mount it manually. N.B. secure systems will only allow root to mount file systems by default.
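If you do have to mount it by hand, it is usually something along these lines; the device node and mount point vary by distribution:

# mount /dev/cdrom /media/cdrom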

Launch the cxgames windows program installer; this may be in your applications menu (gnome/kde users) or have to be run manually from a prompt. If you have to run it manually, the program you want to execute is /where/you/installed/cxgames/bin/cxinstall, for example: /opt/cxgames/bin/cxinstall.

Since SWAT 4 is not one of the supported games, you’ll want to go with the “install other games” option, and pass by the disclaimer. You’ll then need to tell it where to find the disk; usually this is just clicking a checkbox, but it is quite flexible. You should install SWAT 4 into its own “bottle”; because Linux doesn’t care much for spaces (and command lines are fun), a good bottle name is “SWAT4”.

A bottle is just a simulated Windows installation, so if desired, every game can be installed into its own separate place. This really makes backups a snap, limits one program screwing with another, and improves the situation if you need to make ‘tweaks’ to the bottle. The installer will then create the bottle in your personal space, e.g. /home/terry/.cxgames/SWAT4. N.B. that .files-and-dirs in Linux are considered ‘hidden’ files; in a GUI file manager you’ll need to turn on the display of hidden files to deal with that.
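A snap, as in one tar command per bottle; the archive name here is just made up:

$ tar -czf swat4-bottle-backup.tar.gz -C ~/.cxgames SWAT4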

When the installer starts, run it; there is no need to change the default install paths, unless you are installing into a very customized bottle. Since SWAT 4 uses two CD-ROMs, you will have to umount your first disk and mount your second before pressing “retry” on the change-disks dialog. The same thing happens again at the end of the install, to put the play disk back into circulation. When you’re done, you can close the install program and tell the cxgames installer that it has completed; it will warn you for paranoia’s sake — close it.

Step Four: Launch SWAT 4

SWAT 4 uses a SecuROM-based copy protection system; when I set this up, it thought cxgames’ version of Wine was a CD-ROM emulator! And refused to run. A quick fix is to google for a ‘SWAT 4 no CD’ fix. You will need to back up the old Swat4.exe, and then replace it with the no-CD fix in order to launch the game.

Since I prefer the command line, I use cp and mv to copy and move files around, rather than copy/pasting icons in a file manager.

$ cd ~/.cxgames/SWAT4/drive_c/"Program Files/Sierra/SWAT 4/Content/System"
$ mv Swat4.exe Swat4.exe.orig-1.0
$ cp /path/to/fixed/1.0/Swat4.exe ./Swat4.exe.nocd-1.0
$ cp Swat4.exe.nocd-1.0 Swat4.exe

Now launch the game, either via your applications menu (gnome/kde users), or manually with the wine subsystem of cxgames, e.g.

$ /opt/cxgames/bin/wine --bottle ~/.cxgames/SWAT4 ~/.cxgames/SWAT4/drive_c/"Program Files/Sierra/SWAT 4/Content/System/Swat4.exe"

Then go ahead and test the training level or something like that; SWAT 4 must take care of first-run chores before you install the patch. When you’re done, just exit the game like normal. In the event of catastrophic disaster, you should be able to Ctrl+Alt+Function-key to a vtty, log in to a command prompt, and use the ps and kill commands to terminate the process; cxgames also has its own way of nuking bad windows processes hehe.
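For example, something like the following; the grep pattern is just a guess at the process name, and the PID comes from the ps output (add -9 if it refuses to die):

$ ps ax | grep -i swat4
$ kill 12345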

Step Five: Patch SWAT 4

Because the patch checks for a legit copy of SWAT 4, we have to replace Swat4.exe with Swat4.exe.orig-1.0; otherwise the patch will fail and scramble a few game files.

$ rm Swat4.exe
$ cp Swat4.exe.orig-1.0 Swat4.exe

Now insert your SWAT 4: TSS disk and run the installer, just like when you installed SWAT 4, except this time use the same bottle, selecting it from the list of existing bottles when prompted. Once the TSS installer pops up, click the button to install the SWAT 4 1.1 patch.

Step Six: Test run SWAT 4 1.1

Now you need to use a no-CD fix for SWAT 4 1.1 in order to continue; download it and proceed as in Step Four.

$ mv Swat4.exe Swat4.exe.orig-1.1
$ cp /path/to/fixed/1.1/Swat4.exe ./Swat4.exe.nocd-1.1
$ cp Swat4.exe.nocd-1.1 Swat4.exe

and launch the game as in Step Four.

Step Seven: Installing TSS

If you left the TSS installer open from Step Six, fine; if not, open it again — the same exact way.

You will have to replace the no-CD-fixed Swat4.exe with the original one from the patch, else the TSS installer will bomb out on you.

$ rm Swat4.exe
$ cp Swat4.exe.orig-1.1 Swat4.exe

Now get the TSS installer up and click the “install SWAT 4 TSS expansion pack” button. Then proceed through the installation process; you shouldn’t have any major problems, as long as you remembered to switch executables first.

Step Eight: Run SWAT 4: TSS

Now you should be able to launch SWAT 4: TSS. When I did it… there was no problem with the Swat4X.exe file in the expansion pack, but the game soon refused to run, having verification problems, despite being a legally paid-for copy, with the legally paid-for disk in the drive. So one will need to employ yet another no-CD fix to be able to use this legally paid-for copy of the game.

$ cd ~/.cxgames/SWAT4/drive_c/"Program Files/Sierra/SWAT 4/ContentExpansion/System"
$ mv Swat4X.exe Swat4X.exe.orig
$ cp /path/to/fixed/Swat4X.exe ./Swat4X.exe.nocd
$ cp Swat4X.exe.nocd Swat4X.exe

You can launch it the same way as SWAT 4, but in the case of the command line, the path changes – just like on Windows: from Content\System\Swat4.exe to ContentExpansion\System\Swat4X.exe.
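For example, following the same pattern as the launch command from Step Four:

$ /opt/cxgames/bin/wine --bottle ~/.cxgames/SWAT4 ~/.cxgames/SWAT4/drive_c/"Program Files/Sierra/SWAT 4/ContentExpansion/System/Swat4X.exe"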

Known Issues

If the game and your display are not running at the same resolution, moving your mouse pointer to the ‘edge’ will make it scroll outside the window, effectively creating a pan and scan across your display. I have a 1600x1200 primary screen and a 1280x1024 secondary screen; having SWAT in 1024x768 mode was horrible… Edit: this happens even when the game and X screen are at the same resolution.

I have yet to determine under what conditions dual heads will work, but luckily most X window managers handle dual heads better than Microsoft Windows (in my indignant opinion); at least that has been my experience under FreeBSD 7 and Ubuntu GNU/Linux 8.04 with X.Org 7.2, and Windows XP Media Center Edition. So the main problem is a question of mouse handling.

Because of the way things work, people who use mods (e.g. SSF) may have problems launching them without a corresponding no-CD exe. Otherwise the game seems to be usable; I’ve only to ascertain whether or not graphical performance can be maintained.



My word, wouldn’t it be heaven to only need Windows to play Raven Shield once in a blue moon?