Ugh, it’s been a long and unpleasant day! Nevertheless, I’ve almost got the MSVC builds sorted to where I want them. The reason the unix builds are shared libraries while the windows builds are static libraries basically comes down to the respective linkers.

At least on FreeBSD i386, the (GNU) linker doesn’t complain about the common and sys modules referencing one another; you could say it takes the view that the shared lib is a chunk of code, and all is fine as long as everything resolves in the end. I generally prefer dynamic linking over static, although I have nothing against static libraries internal to a project; when it comes to Windows however, I’m particularly fond of Microsoft’s SxS technology.

While the GNU stuff on my laptop is happy enough to obey, the link tool provided by MSVC won’t cooperate with that model of behaviour for shared libs (DLLs), only static libraries. So, aside from increasingly becoming stuff that belongs together anyway, the common and sys modules were merged into a single ‘core’ module, and tonight, prepped to better handle compiler specifics as well. Secondary is that, simply put, link makes shared libraries a bit more typing than need be. Every other sane OS/compiler pair I’ve encountered has the lovely habit of assuming that if you wrote a function in a library, you might want to share it with other programs. Visual C++ on the other hand presents several ways of doing it, which all basically amount to telling the linker which things an application may slurp up from the library: either writing a “.def” file, or wrapping up function definitions with __declspec(dllexport) attributes, and the correct __declspec(dllexport) or __declspec(dllimport) attributes at their declarations.
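
The standard trick to cut down on that typing is a macro that expands to dllexport while the DLL itself is being compiled, and to dllimport for everything that links against it. A minimal sketch of the idea; the CORE_BUILD and CORE_API names are placeholders of my own, not anything actually in my tree:

#if defined(_MSC_VER)
# if defined(CORE_BUILD)  /* defined only while compiling the DLL itself */
#  define CORE_API __declspec(dllexport)
# else                    /* consumers of the DLL import instead */
#  define CORE_API __declspec(dllimport)
# endif
#else
# define CORE_API         /* GNU linkers export everything by default */
#endif

/* a hypothetical exported function */
CORE_API int Core_Init(void);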

Microsoft’s way of doing things is more flexible, but one might fairly argue that the inverse behaviour (i.e. export everything not specially marked) would have been better.

Generally I like MSVC; I think it’s better than GCC, if you are willing to put up with the major lack of C99 compliance and the lack of stdint.h (I use one written by Paul Hsieh). The main downside is the tools tend to be a bit, eh, stupider than the GNU brew, and the best parts of the system are likewise fairly specific to both MSVC and Windows NT. Personally I would enjoy a professional edition of MS’s offerings, because it would net access to their 64-bit C/C++ compiler and much stronger profiling tools, which are simply missing from the express editions.

The sad part is that Visual Studio is the only software package I have seen Microsoft release in my entire life that’s worth buying…. lol. Not even their operating systems can say that much, from where I sit.

My thoughts on “Debugger Tips: 8 ways breakpoints can save your next software project”

Debugger Tips: 8 ways breakpoints can save your next software project: “Here are eight fairly simple techniques for using breakpoints and other features of your C/C++ debugger that can give you enormous power and visibility into your program.”

An interesting article that’s worth reading, for anyone who is ever going to get stuck running a debugger. Personally, I prefer log files and analysing the code in my brain, but when it’s a task you can’t cram up there in grey matter, or you need to cuddle up to the run time—a good debugger is your best friend.

Overloaded and still bit shifting

Ugh, I’m freaking tired. Started the day off computing what changes would be needed for setting up nmake based builds of Stargella, and everything has been snowballing since then.

I spent a considerable amount of time cursing at the MSDN Library over some very shoddy docs, and realising that despite the overall quality of MSVC, the actual build tools behind it have to amount to the stupidest ones I’ve ever seen. Although to be fair, the very first C compiler probably was worse, but this isn’t the ’70s :-P.

The deprecation of Code::Blocks for building things, and switching to appropriate make systems, should mate more smoothly with my work flow. It also pisses me off that after all these years, the best tools for the job haven’t improved much. Unlike the typical morons^H^H^H^H^H^Hprogrammers I’ve had to suffer, I also know how to cook up a build set that shouldn’t be an almighty pain in the neck, just to use on another computer than my own workstation. Applying the basics of computer science to software construction: many hours. Having to use tools that quadruple your workload: priceless!

As soon as I battle test one last makefile for MS’s nmake, all should be ready for committing to the repo. Then I can worry about the next goal: a proper merge of the common and system modules into a central core, shuffling the Windows builds to using DLLs (to match the unix builds), and integrating PCC into the unix build stack. (For ease of compiling dependencies, only MSVC is supported on Windows: MinGW users are on their own.)

During the course of the day, I’ve also done plenty of the server admin loop, and have serviced more interrupts today than my processor sees in a week of abuse.

and all while carrying on several conversations, hahaa!

Marc Espie on portability

Marc Espie on portability: “
A short while ago, Marc Espie (espie@) wrote to the ports mailinglist with a short rant about autoconf. His mail gives good insights into the problems porters face when dealing with GNU software, especially those using autoconf.”

This actually describes one of the many reasons I despise working with a lot of the other ‘programmers’ in this world, many of whom exhibit even greater levels of brain injury than that post takes as examples.
In my not so humble opinion: GNU autotools is either a good idea gone horribly wrong (in practice), or a royal brain fuck that just got out of hand. Users who can use such tools properly seem to be falling by attrition to younger developers, many of whom (in my experiences anyway) wouldn’t know portability if it stung them on the keister like sitting on a scorpion. The GNU build system can be an extremely good toolchain to work with, or it can be your worst freaking nightmare; a lot of people just don’t seem to care any more. My policy has always been to support what I use (BSD/Win), and to try to minimise the heartache of getting code built on another platform.
Although the GNU project is perhaps the largest distributor of infectious disease on the Internet, I blame the developer idiocy I see around me on the youth and not on the software. Most “old wizards” seem to actually know their tools…

Mario: Treat your tools like a friend. Keep ’em by you. Never let ’em down, and they’re always at your side.

Luigi: Hey, Mario, how is it that for every situation that could possibly come up, you always got a saying about tools? 

Mario: I got ’em from Papa. 

Both: He got ’em from Grandpapa!

Building better memory management for high performance wired/wireless networks: Part 1

Building better memory management for high performance wired/wireless networks: Part 1: “The authors describe a variable pool memory management scheme that has been implemented for LTE and WiMAX protocol stacks and has exhibited excellent performance, especially when compared to traditional fixed-pool implementations.”

Maybe I’m a freako, but I find this article set to be intensely interesting.
In my travels, I’ve read plenty of miles of code, including more than a few programs that go as far as memory pools, and I’ve even written what amounts to a real memory allocator for all practical intents and purposes. This ranges from programs simpler than most (non UNIX) users would think trivial, all the way up to more “complex” systems. In fact, I’ve even spent time spelunking kernel virtual memory and file system code, which can be a truly interesting set of experiences in its own right.
In working on Stargella, I’ve wondered whether or not using such techniques would be a viable method of improving the game’s performance, but for this point in time, everything relies on the C library to work wonders where dynamic memory is needed. While I can see potential savings from adopting a more elaborate memory management scheme, it’s rare that I’m kicking something around that really warrants the extra time for creating and debugging code for it, over just rolling with the local libc brew. Of course, being the code monkey I am, I always keep an open mind for what the future may bring in the way of change.
Although I will use malloc() quite freely in coding, I also look at it like a reload in a CQB situation: if you’re reloading, you’re not able to engage more threats, and that (at least for me in SAS) is often the slowest element of an aggressive and dynamic advance. On the other hand, I generally expect the operating system to provide a decent memory allocator for most tasks, rather than a brain damaged one o/. Ways to minimise the cost of allocating memory, however, are something I always consider a plus.
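
For the curious, the fixed-pool scheme that articles like this one compare against is simple enough to sketch in C: grab one slab up front, carve it into equal blocks, and chain the free ones through their own first bytes, so grabbing a block is a pointer swap rather than a malloc(). All of the names below are my own toy placeholders, nothing from the article or from Stargella:

#include <stdlib.h>

struct pool {
    void *free_list; /* head of the free-block chain */
    char *slab;      /* the one big allocation backing the pool */
};

static int pool_init(struct pool *p, size_t block, size_t count)
{
    size_t i;

    if (block < sizeof(void *) || count == 0)
        return 0;
    if ((p->slab = malloc(block * count)) == NULL)
        return 0;
    /* link block i -> block i+1; the last block ends the chain */
    for (i = 0; i + 1 < count; ++i)
        *(void **)(p->slab + i * block) = p->slab + (i + 1) * block;
    *(void **)(p->slab + (count - 1) * block) = NULL;
    p->free_list = p->slab;
    return 1;
}

static void *pool_get(struct pool *p)
{
    void *b = p->free_list;
    if (b != NULL)
        p->free_list = *(void **)b; /* pop the head: no malloc() in sight */
    return b;
}

static void pool_put(struct pool *p, void *b)
{
    *(void **)b = p->free_list; /* push back onto the chain */
    p->free_list = b;
}
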
Hmmm 🙂

Bug smashing on the stack, hehe

Ok, I’ve spent somewhat longer than expected working on my game’s net code, mostly because I both needed one night’s sleep this week, and wanted proper IPv4/IPv6 support going on. Sunday I basically finished principal work on the net module, and completed a pair of test cases: a really simple client and server program. After that went smoothly, I had planned to complete the finishing touches.

The problem: the server example segfaulted. Now, those who know me know I consider it a blight upon my honour as a programmer for any code that I’ve written to crash without due cause (i.e. not my fault). So I spent work on Monday taking a break: refining code quality and then porting it from unix to windows. During the night’s final testing runs after that, however, I had not solved the mysterious crash yet, and got the same results. I switched over to my laptop and recompiled with debugging symbols, only to find that my program worked as normal, only dying with a segmentation violation once main() had completed, and the program’s shutdown now “beyond” my control.

My first thought of course, was “Fuck, I’ve probably screwed my stack”[1], and a quick Google suggested I was right. I also noted that turning on GCC’s stack protection option (-fstack-protector) prevented the crash, as did manually adding a pointer to about 5 bytes of extra character data to the stack. Everything to me looked like the return address from main was being clobbered; imagine invoking a buggy function that tries to return to somewhere other than where you called it from. Before going to bed, I narrowed it down to the interface for accept(). Further testing showed that omitting the request for data and just claiming the new socket descriptor ended the crash, but still left some funky problems with an invalid socket. Inspection of the operation also showed that the changes were well within the buffer’s boundary, yet it was still causing the crash. So I finished the remaining stuff (i.e. free()ing memory) and went to bed.

Having failed to figure it out this afternoon, and starting to get quite drowsy, I played a trump card and installed Valgrind. It’s one of those uber sexy tools you dream of like driving a Ferrari, but don’t always find a way to get a hold of lol. For developers in general, I would say that Valgrind is the closest thing to a killer app for Linux developers that you are ever going to get. In my problem case however, Valgrind wasn’t able to reveal the source of the problem, only that the problem was indeed writing to memory I shouldn’t be screwing with o/.

So I put down Valgrind and GDB, and turned to my favourite debugging tool: the human mind. It was like someone threw on the lights once I saw the problem. Man, it’s wonderful what a good night’s sleep can do!

Many data structures in Stargella are intentionally designed so that they can be allocated on the stack or heap as needed, in order to conserve on unnecessary dynamic memory allocation overhead, in so far as is possible. So the example server/client code, of course, allocated its per socket data structures right inside main(). Because there is no real promise of source level compatibility between systems, the networking module is implemented as a header file, having function prototypes and a data structure representing a socket, which contains an opaque pointer to implementation specific details, itself defined in unix.c and windows.c, along with the actual implementations of the network functions. Because of that, the behaviour of accept() can’t be emulated. Net_Accept() takes two pointers as parameters: first to a socket that has been through Net_Listen(), and secondly to another socket that will be initialised with the new connection; Net_Accept() then returns an appropriate boolean value.
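
To make the next couple of paragraphs easier to follow, this is roughly the shape of the header as I’ve described it; the typedef and field names are stand-ins of my own, and the exact prototypes are guesses, not the tree’s actual declarations:

#include <stdbool.h>
#include <stddef.h>

struct NetSocketImpl;           /* defined privately in unix.c / windows.c */

typedef struct NetSocket {
    struct NetSocketImpl *sock; /* opaque, platform-specific details */
} NetSocket;

bool Net_Listen(NetSocket *s);
bool Net_Accept(NetSocket *listener, NetSocket *conn); /* initialises conn */
bool Net_Send(NetSocket *s, const void *buf, size_t len);
bool Net_Recv(NetSocket *s, void *buf, size_t len);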

All the stuff interesting to the operating system’s sockets API is accessed through that aforementioned pointer, e.g. s->sock->whatever. What was the big all flugging problem? The mock-up of Net_Accept() was originally written to just return the file descriptor from accept(), allowing me to make sure that Net_Listen() actually worked correctly. Then I adjusted it to handle setting up the data of the new socket, in order to test the IPv4/IPv6 indifference and rewrite the client/server examples using Net_Send() and Net_Recv(), and that’s when the crashes started.

I forgot to allocate memory for the sub structure before writing to the fields in it, resulting in some nasty results. When I say that I don’t mind manual memory management, I forget to mention that programming while deprived of sleep is never a good idea, with or without garbage collection ^_^.
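
In terms of the stand-in names from the sketch earlier (and assuming the private impl struct holds a plain file descriptor, which is purely my guess), the fix was essentially one malloc() at the top of the unix.c side:

#include <stdlib.h>
#include <sys/socket.h>

/* unix.c's private definition of the opaque struct (assumed shape) */
struct NetSocketImpl { int fd; };

bool Net_Accept(NetSocket *listener, NetSocket *conn)
{
    /* the forgotten line: without it, the writes below scribble through
     * whatever stack garbage conn->sock happened to contain */
    conn->sock = malloc(sizeof *conn->sock);
    if (conn->sock == NULL)
        return false;

    conn->sock->fd = accept(listener->sock->fd, NULL, NULL);
    if (conn->sock->fd == -1) {
        free(conn->sock);
        return false;
    }
    return true;
}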

Now that the net code is virtually complete, I can hook it into my Raven Shield server admin tool, which will make sure to iron out any remaining kinks before it gets committed to my game. Hehehe.

My game’s net module is almost complete under unix, and in theory should be able to handle both IPv4 and IPv6 communication fine; not that I have much to test the latter with. Windows support will need a bit more tweaking, and then it’ll be possible to plug it into my Raven Shield admin quite easily.

Pardoning interruptions, I’ve spent about 6 hours of my day working in straight C, followed by about 15-20 minutes for a little rest. For some sickening reason, my weekends almost always fall into the category of working all day, eating dinner, then working until dawn lol.

Doing things in C, I find more time consuming than more dynamic languages, chiefly because of how much testing I (try to) do, coupled with how much lower-level stuff one has to keep in mind. Having to deal with memory management issues is not a problem for me, although I do admit that garbage collected languages can be very enjoyable. To be honest, I find the portability problems of doing anything interesting to be a greater drawback than managing memory; e.g. by design Python is not very portable in comparison to C, but it is more than portable enough for anything you’re likely to bump into on a PC, and can do ‘more’ with less bother, for an acceptable level of portability. They are very different languages at heart, and their designs reflect it strongly. A lot of people (including myself) call C’s portability a myth, and it is one, in the sense that what most people want (especially me) I doubt is possible without a more modern rendition of the language (NOT Java or C++). Where C truly excels at portability, well, I reckon you’ll just have to study more assembly language to understand the value of it.

Now if only I had something better to do than spend all my time behind a computer screen, monkeying around with GCC on one side, and MSVC on the other 8=).

In being dragged across the grocery store yet again, I spent some time contemplating what I was thinking about last night, as I was finishing up part of my game’s net code. Wouldn’t it be practical to just implement a simple Virtual File System? It would make adapting the code base to different uses easier, since paths and I/O could then be defined through a set of VFS_* routines, but on the downside, making it pluggable would push greater overhead onto all those I/O routines at every use.

The zpkg, system input/output, and network modules present very similar interfaces. The main differences being that zpkg doesn’t have write support (an unneeded non-trivial feature), and seeking between positions in a socket just doesn’t make the same sense as with local files. If a virtual file system layer was built on top of it, it would be rather easy to define a “Plugin” data structure providing the necessary function pointers as needed, and simply use a hash table to handle the mappings between paths. Of course, that leads to the bugger of a look up operation o/.
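
Something like this is the shape of “Plugin” I have in mind; every name here is hypothetical, since none of it exists yet. Each mounted path prefix would map, via the hash table, to one of these, and modules that can’t support an operation would simply leave the pointer NULL:

#include <stddef.h>

typedef struct VFS_Plugin {
    void  *(*open)(const char *path, const char *mode);
    size_t (*read)(void *handle, void *buf, size_t len);
    size_t (*write)(void *handle, const void *buf, size_t len); /* NULL for zpkg */
    int    (*seek)(void *handle, long offset, int whence);      /* NULL for sockets */
    void   (*close)(void *handle);
} VFS_Plugin;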

Really, most of the places where it matters wouldn’t impact game play, since most I/O can be divided between the startup phase, loading stuff, client/server communication, and the shutdown phase; the chatter between client and server obviously being implemented directly on top of the networking module, and therefore a moot point. It would make it possible for the resource loading code to become more flexible at run time; i.e. being able to load game assets both out of zpkg files and local system files without a recompile or a restrictive version of the VFS.

I think it would be worthwhile. As an added plus, it would even allow splitting the path argument to Zpkg_Open, and pulling out the interesting bits into the VFS adapter function, which would be replacing that feature of the zpkg module.

For today however, my primary goal is to port the networking code (almost) completed last night from the BSD Sockets API over to the Windows Sockets API. That way I can replace the less appropriate network code in my RvS admin program with it, and save having to complicate its design to make the most of Qt’s networking API. All while improving my game’s code base ^_^.

Although WinSock was based on the old Berkeley interface, WinSock has arguably grown more over the last decade than the Unix interface has over the last 20 years. Not that there was much need beyond adding IPv6 capability, which the common Unix interface already grew ages ago. I personally dislike both the Berkeley and Windows interfaces immensely. Why? Because in my humble opinion, the proper way would have been something like:

int s;

if ((s = open("/net/tcp/host:port", O_CREAT | O_RDWR | O_EXLOCK)) == -1) {
    perror("Unable to open a connection to host:port");
}

/* do usual stuff here, like read() or write() */



where /net would be an arbitrary mount point for a special file system, in which file system operations reflect their corresponding network operations. Flags for system calls like open() and fcntl() could have been suitably extended to cope, and others like accept() implemented as needed. In light of things like FUSE, it would be even more interesting to do it that way today than it would have been in the 1980s.

Instead of that simple form, whenever we want to create a socket we have to set up suitable socket-specific data structures, which and how many depending on the operations to be conducted; common practice is to zero over the most important (sockaddr_in / sockaddr_in6) structures before using them, and leave it to the compiler’s optimiser whether that actually happens at run time; look up in the system manuals what preprocessor definitions correspond to the type of network connection we want (let’s say TCP over IP) for the socket() call; initialise the structure’s fields, using the same preprocessor flags, and even convert details like the port number into suitable formats, all by ourselves. After which we might finally get around to doing the socket I/O ourselves, pardoning any intervening system calls needed for your particular task.

/*
 * Assumes the open() flags in the previous theoretical example corresponded
 * to a connect() operation, rather than a bind() and listen() operation.
 * Likewise, for the sake of terseness, I'm "skipping" support for
 * real.host.names rather than IPs.
 */

int s;
struct sockaddr_in addr;

memset(&addr, 0, sizeof(struct sockaddr_in));
addr.sin_family = AF_INET;
addr.sin_port = htons(port);
if (inet_pton(AF_INET, "xxx.xxx.xxx.xxx", &addr.sin_addr) != 1) {
    /*
     * Handle AF_INET, address parsing, or unknown errors here;
     * which error is indicated by the return value.
     */
    return;
}

if ((s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) == -1) {
    perror("Unable to create socket");
    return; /* nothing to clean up yet: no descriptor was created */
}
/* use system calls and data structures as needed to set things up as desired */

if (connect(s, (const struct sockaddr *)&addr,
            sizeof(struct sockaddr_in)) == -1)
{
    perror("Unable to connect socket");
    goto CLEANUP;
}

/* do I/O here: send()/recv(), or read()/write() */

CLEANUP:
shutdown(s, SHUT_RDWR);
close(s);

I reckon the Berkeley interface was the right choice for portability between systems (including non-unix ones, making it easy to write stuff like WinSock), and probably easier to graft onto the 4.3BSD kernel, but it’s damn inconvenient for a UNIX programmer. That ain’t changed after about 25-30 years.

Oh wells, at least it works pretty darn near every where, give or take a few kinks.

Feeling inspired

As always I’ve got plenty of loops open, always have, probably always will… I hate sitting idle. While I like time for R&R, I prefer to stay fairly busy. Right now I’m focusing on my game projects.

I feel inspired, in a way, to thrash along with work on my game projects; it’s been a bit since I’ve had time to work on it, but the SYSIO sub system is almost complete. Once that’s done, I’ll try to unify the ZPKG and SYSIO interfaces and work on using DevIL for the texture loading code. When I pause for a moment and think about the sources before me, I can see what it could become, and all I need is the time and strength to do it.

Today I also thunk up the most perfect unit test for epi-sum, and one monster data set to test an internal library against. Overall, our EPI implementation isn’t designed to compete with C/C++ runtime speed; in fact, the language was chosen with that as an afterthought. The thing is though, while it still can keep pace with stuff like apt-get or PBIs, I want it to be faster than any valid competition :-D. It’s also good geek fun to see where algorithms can be adjusted for savings. As an extra bonus, since the ECA code is under a BSD style license, I can also ‘borrow’ the best parts for other projects, hehe.

When it comes to optimization, I generally “skip it” wherever possible, and I rarely use the corresponding compiler flags either. Where I focus my attention is on doing things in a good way: data structures and algorithms that fit well, solve the problem, and scale with it. You could say my focus is on finding the best solutions that don’t shoot you in the foot, nor complicate trying to understand wtf the code actually does. If a real bottleneck enters the picture, then I dig into the code monkey’s bag and start fine tuning things.

Updating Qt, hehe.

Tonight I updated SAS’s TeamSpeak 3 server, and discovered that my TS3 client was too darn out of date to work with it, haha. After updating things, I also noticed in the nifty about dialog they shipped that the version of Qt used was under the GNU LGPL v2.1.

It has been a good while since I updated Qt on my windows system; last time was about a year ago. So I dropped by Qt’s website to download an updated SDK, and also found that they had MinGW and Visual C++ 2008 library packages available. Last time I really focused on Qt/C++ development, Microsoft Visual C++ was just becoming supported by the Open Source Edition (OSE), having long been supported by the commercial editions of Qt.

In perusing the website, I noticed that GPLv3 is now also a supported license for Qt. They really have gone through a few licenses over the years; I still remember when the OSE was a choice between GPLv2 and their own Q Public License agreement.

While I really hate doing cross platform development in C++, Qt is both the least painful widget toolkit I’ve ever seen, and really makes the process *a lot* less painful. Well, as painless as dealing with template implementations between the GNU and MS C++ compilers can be, anyway.

It is noteworthy that the SDK only includes the necessary library files to link using MinGW, the port of the GNU compiler to Windows. So if you plan on using Microsoft’s compiler, you will want the vs2008 package, or the source code if you need to shoehorn it into an older version.

One thing I like about all the *decent* operating systems shipping a system compiler on their install disk: it usually means pre-compiled packages will be in sync with your compiler. Microsoft Visual C++ is not quite so lucky; being a separate product, most people shipping binary packages of libs/headers usually support 7.1 or 8.0 instead of 9.0. Oh well, maybe when VC10 is released :-/.