In looking closer at things, I somehow think that by circa GCC 5.0, the GNU compiler will either have imploded under its own weight :-o, or have become an impressively powerful compiler, in place of an impressively portable one.

The growing feature set may even give old MSVC's optimization setup a good run for its money someday; only the best tools with Visual C++ cost a few thousand dollars, and GNU's are given away for free lol.

Me, I would just settle for a generally portable compiler that generates decent code, and complies with the bloody standards… So far I personally like pcc.

Jokes sometimes place the yoke on you

In writing a small module, part of which looks like this:

    switch (param) {   // param: function parameter of some enum type
      case SomeEnumValue:
        // handle it
        break;
      // ...one case per enumerated value...
      default:
        // crash program? *evil grin*
        2/0;
    }

    // use the function parameter

This was written as a joke, to let me test the function by forcing the compiler to pass an invalid integral value, which would trip the default label. Obviously the final code needs to do something besides flirt with undefined behaviour, but one has to have a little fun in idiot-proofing one's code ;).

The funny thing was that instead of crashing, it continued on and triggered the (testing) assert() checking the function parameter, which then caused the program to terminate. Even more enjoyable: changing it to `int x = 2/0;` causes the program to halt due to a floating point exception. Someday I need to thumb through the C++ standard and take a look.

Oh well, I had planned to throw something from stdexcept anyway, or carry on with reduced functionality; so it’s no real loss lol.
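
Since I mention stdexcept: here's a minimal sketch of what that less-jokey default case might look like, assuming a made-up enum and function of my own (none of this is from the actual module):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical enum with a fixed underlying type, so casting arbitrary
// ints to it for testing is well-defined.
enum Colour : int { Red, Green, Blue };

// Illustrative handler: fail loudly but recoverably instead of 2/0.
const char *colour_name(Colour c)
{
    switch (c) {
    case Red:   return "red";
    case Green: return "green";
    case Blue:  return "blue";
    default:
        throw std::invalid_argument("colour_name: bad enum value "
                                    + std::to_string(static_cast<int>(c)));
    }
}
```

A caller can then catch std::invalid_argument and carry on with reduced functionality, rather than hoping a division by zero traps.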

Why C++ is a failure

In reading through Scott Meyers' book, Effective C++, his suggestion that it should actually be viewed as a federation of languages is a great way to look at it. His description in terms of the C, Object-Oriented, Template, and STL sublanguages (I would've skipped the STL) is fairly accurate.

The true gem, however, I think is Item 19: Treat class design as type design. This, IMHO, is more true in languages like C++, Java, and C#, than what some folks are accustomed to. You're playing with the type system, so act like it.

He points out 12 issues involved in developing a new type or 'class'; I'll summarize them briefly:

  1. How should objects of your new type be created & destroyed?
  2. How should initialization differ from assignment?
  3. How should passing-by-value work with your type?
  4. What restrictions are there on legal values for your new type?
  5. How does inheritance affect your new type?
  6. What kind of type conversions should be allowed?
  7. What operators and functions make sense on it?
  8. What standard functions should be disallowed?
  9. Who should have access to its members?
  10. What kind of guarantees does it make?
  11. How general is it?
  12. Is a new type really what you need?

If anything, I would say rip out those pages of the book and make them a required checklist of ground new programmers must cover before they are allowed to add a new type to the code base. The book gives an excellent explanation of each point, so I won't offer much deep exposition here on them: I've just tried to condense them. (Buy or borrow the book lol.)
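
To make that concrete, here's a toy C++ class entirely of my own invention (not from the book), annotated with which checklist items each line answers:

```cpp
#include <stdexcept>

// Toy bounded counter. It touches points 1 (creation), 2 (init vs
// assignment), 4 (legal values), 6 (conversions), 7 (operators),
// and 9 (access control) from the checklist above.
class Counter {
public:
    explicit Counter(int start = 0) : value_(check(start)) {} // 1 & 6: explicit, no silent int conversion
    Counter(const Counter &) = default;                       // 3: cheap, safe to pass by value
    Counter &operator=(const Counter &) = default;            // 2: assignment mirrors copy-initialisation
    Counter &operator++() { value_ = check(value_ + 1); return *this; } // 7: only operators that make sense
    int value() const { return value_; }                      // 9: minimal access to state
private:
    static int check(int v) {                                 // 4: restrict legal values
        if (v < 0) throw std::out_of_range("Counter: negative value");
        return v;
    }
    int value_;
};
```

Even a type this trivial forces answers to half the list, which is rather the point.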

If you're going to be creating a new type, which you are effectively doing when creating a new class in C++, then these issues all apply to you. I would argue that most of this applies to any language with user-defined types and operator overloading; which is also most mainstream languages supporting OOP.

Points 2, 3, 4, 6, 7, and to an extent 8, all make sense in the domain of creating a new 'type'. Personally, I tend to skip much of it unless, for example, overloading certain operators offers serious savings on expressiveness, or the default copy constructor / assignment operators are insufficient. These points that the book outlines really are the source of most complexity in developing C++ classes, perhaps because, like C-strings and good ol' malloc(), they expose a lower-level picture of things to the programmer. Everyone can program in C, but not everyone should.

Points 1, 5, and 9 are more unique to designing a class than the others are, at first glance. Simply put, you can't create a class without taking 1 and 5 into consideration; it's just required. Although admittedly you can skimp a little on 5 in some cases, you probably shouldn't. If you know jack about OOP, let alone designing software, you know the answer to 9 is the minimal level of access required. I actually enjoy that Scott Meyers' work demonstrates that OOP goes a lot further than the class keyword! Abstraction, encapsulation, and modularity are an integral part of doing quality work, and object oriented programming is but a useful paradigm for modeling that, especially when polymorphic inheritance is applicable. Point 5 becomes a bit easier to live with as time goes on, although I still suggest favouring object composition over inheritance when a class hierarchy isn't the most appropriate thing for solving the problem at hand (and minimizing the ruddy singletons).

Points 10, 11, and 12 are true of most elements of composition within a program, even for functions. Those kinds of issues get easier with experience, assuming you learned a slug's worth about software design in the first place. Some people never learn it, period.

Why, based on those oh so true points, would I conclude that C++ is a failure? Because the "Average" schlep is just not competent enough to deal with most of it as often as necessary. Heck, I still encounter code bases where the programmer can't tell the fucking difference between signed and unsigned integers when designing their interface. There are some truly brilliant programmers out there, and many of them do use C++ quite heavily, but in the wider majority, as with most majorities: you'll find more Homer J. Simpsons than Sir Isaac Newtons in the crowd. This goes for just about everything on earth :'(. We could use some more clones of Newton and a few less Apple and Fig Newtons walking around. So long as the average is sufficiently uneducated, and still allowed to (ab)use the language, I think it's a failure; but hey, whoever said C++ was designed with incompetent people in mind ;).

It is simply too much for the average Tom, Dick, and Harry to be able to consider. Not that those proverbial average three bozos should be screwing with a complex system… half as often as they are found to be. Maybe I'm more inclined to make such statements, as I still know average adults who can't understand a + b = b + a yet, and I have met a whole lotta stupid people: without even counting programmers.

disclaimer: I drew this conclusion, that "C++ is a failure", ages ago after reading EC++'s Item 19, and was in a more stable state of mind at the time than the one used to finally type this out.

Lately in my spare time, as one might guess: I've been picking up C#. That, and reading about electrical wiring and stuff, but I always knew I'd light myself up one day xD.

Before bed, I was experimenting with building and structuring assemblies. Being my typical self, this of course means playing with the command line csc (MS) and mcs/gmcs (Mono) compilers, as well as their associated tools. IDE wise, I experimented with MonoDevelop under FreeBSD and the express edition of Visual C# 2010 under XP. I must admit that as far as IDEs go, MonoDevelop is a pretty good one: the only negative things I can say about it being that the vi mode is very minimalist (G doesn't even take a count), and it's not the most responsive program when the computer's under heavy load: but it still knocks Mozilla's socks off by 9 warp factors :-P. Visual C# on the other hand, I can't say how the 2010 version differs from the 2008 one: only that it's not nice. To be honest, my first encounters with the express editions of Visual Studio 2010 show me that Microshaft seems to have a policy of (yet again) hiding much of the tooling from the user. Just starting Visual C# makes me remember how long Windows has hidden file permissions from the user by default. Perhaps most Windows users are too damn stupid to understand the concept of "Privacy", but any jackanape permitted to touch source code should at least be made to understand the concept of debug and release builds (a heck of a lot better than VS's new defaults).

In my experiments using MonoDevelop and Visual C#, it only took a few seconds before I became glad that Vi IMproved doesn’t emulate Intellisense; but it is fair to say that I’m a freak: my customised vim setup even disables syntax highlighting lololol.

And considering how much this build of Firefox has Flash burning through CPU cycles, I think my laptop is going to overheat if I don't call this a morning :-S.

An idiosyncrasy no one else gets

Whenever someone asks me how I am, I often phrase 'and how are you?' as '&you ?', which is something usually lost on everyone. In the C programming language, the ampersand is the address-of operator, used to create a reference of sorts, and is integral to utilising pointers. So literally 'address-of you ?' makes a very explicit reference, while remaining a syntactically correct substitution of '&' for 'and', in English anyway.
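
For anyone who doesn't speak C, a tiny sketch of what & actually does (the names are, of course, made up):

```cpp
// '&you ?' taken literally: the address of you.
bool ampersand_demo()
{
    int you = 7;
    int *p = &you;   // & is the address-of operator: p now holds you's address
    *p = 8;          // writing through the pointer changes 'you' itself
    return you == 8; // true: p and 'you' refer to the same object
}
```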

If anyone finds that odd, just try not to think about how Lisp and Perl have impacted my brain over the years lol.

Tweaking my nose at the old API

In fooling around with the Windows API, I've just had an enjoyable moment of guffawing. As a quick test of the JS (joystick) stuff in winmm, I hooked up MM_JOY1MOVE to MessageBox() and ran the program under the debugger. The result was an endless stream of MessageBox() calls, hanging the Windows task bar and taking at least 25-30 seconds to recover, after the program had finally overflowed the stack, been examined, and finally been terminated manually.

I almost died laughing lol.

Ugh, it's been a long and unpleasant day! Nevertheless, I've almost got the MSVC builds sorted to where I want them. Basically, why unix builds are shared libraries and windows builds are static libraries has to do with the respective linkers.

At least on FreeBSD i386, the (GNU) linker doesn't complain about the common and sys modules referencing one another; you could say it has more of a view that the shared lib is a chunk of code, and all is fine as long as it all resolves enough in the end. I generally prefer dynamic linking over static, although I have nothing against static libraries internal to a project; when it comes to Windows however, I'm particularly fond of Microsoft's SxS technology.

While the GNU stuff on my laptop is happy enough to obey, the link tool provided by MSVC won't cooperate with that model of behaviour for shared libs (DLLs), only static libraries. Aside from increasingly being stuff that belongs together anyway, that's why the common and sys modules were merged into a single 'core' module, and tonight, prepped to better handle compiler specifics as well. Secondary is that, simply put, link makes shared libraries a bit more typing than need be. Every other sane OS/Compiler pair I've encountered has the lovely habit of assuming that if you wrote a function in a library, you might want to share it with other programs. Visual C++ on the other hand presents several ways of doing it, which all basically amount to telling the linker which things an application may slurp up from it: basically resorting to writing a ".def" file, or wrapping up function definitions with __declspec(dllexport) attributes, and the correct __declspec(dllexport) or __declspec(dllimport) attributes at their declarations.

Microsoft's way of doing things is more flexible, but one might fairly argue that the inverse behaviour (e.g. export anything not specially marked) would have been better.
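
The usual way to live with it, sketched here with macro and function names that are purely my own invention, is to bury the __declspec dance in a macro that expands to nothing everywhere else:

```cpp
// Hypothetical header: MYLIB_API and mylib_add are illustrative names only.
#if defined(_WIN32)
#  if defined(MYLIB_BUILDING)          // defined only while building the DLL itself
#    define MYLIB_API __declspec(dllexport)
#  else
#    define MYLIB_API __declspec(dllimport)
#  endif
#else
#  define MYLIB_API                    // GNU-style linkers export everything by default
#endif

MYLIB_API int mylib_add(int a, int b); // declaration, as library users see it

// In the library's own source file (built with MYLIB_BUILDING on Windows):
int mylib_add(int a, int b) { return a + b; }
```

That way the unix builds stay untouched, and only the Windows build has to care which side of the DLL boundary it's on.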

Generally I like MSVC; I think it's better than GCC, if you are willing to put up with the major lack of C99 compliance and lack of stdint.h (I use one written by Paul Hsieh). The main downside is that the tools tend to be a bit, eh, stupider than the GNU brew, and the best parts of the system are likewise fairly specific to both MSVC and Windows NT. Personally I would enjoy a professional edition of MS's offerings, because it would net access to their 64-bit C/C++ compiler and much stronger profiling tools, which are simply missing from the express editions.

The sad part is that Visual Studio is the only software package I have seen Microsoft release in my entire life that's worth buying… lol. Not even their operating systems can say that much, from where I sit.

Building better memory management for high performance wired/wireless networks: Part 1

Building better memory management for high performance wired/wireless networks: Part 1: "The authors describe a variable pool memory management scheme that has been implemented for LTE and WiMAX protocol stacks and has exhibited excellent performance, especially when compared to traditional fixed-pool implementations."

Maybe I'm a freako, but I find this article set to be intensely interesting.

In my travels, I've read plenty of miles of code, including more than a few programs that go as far as memory pools, and I've even written a real memory allocator for all practical intents and purposes. This ranges from programs simpler than most (non UNIX) users would think trivial, all the way to more "Complex" systems. In fact, I've even spent time spelunking kernel virtual memory and file system code, which can be a truly interesting set of experiences in their own right.

In working on Stargella, I've wondered whether or not using such techniques would be a viable method of improving the game's performance, but at this point in time, everything relies on the C library to work wonders where dynamic memory is needed. While I can see potential savings from adopting a more elaborate memory management scheme, it's rare that I'm kicking something around that really warrants the extra time for creating and debugging code for it, over just rolling with the local libc brew. Of course, being the code monkey I am, I always keep an open mind for what the future may bring in the way of change.

Although I will use malloc() quite freely in coding, I also look at it like a reload in a CQB situation: if you're reloading, you're not able to engage more threats, and that (at least for me in SAS) is often the slowest element of an aggressive and dynamic advance. On the other hand, I generally expect the operating system to provide a decent memory allocator for most tasks, rather than a brain-damaged one o/. Ways to minimize the cost of allocating memory, however, are something I always consider a plus.
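
For flavour, here's a minimal sketch of the traditional fixed-pool idea that the article compares against; this toy free-list version is entirely my own, not the authors' variable-pool scheme:

```cpp
#include <cstddef>

// Toy fixed-size pool: N blocks of at least Size bytes each, threaded
// onto a free list so alloc/release are O(1) pointer pops and pushes.
template <std::size_t Size, std::size_t N>
class FixedPool {
public:
    FixedPool() : free_(nullptr) {
        for (std::size_t i = 0; i < N; ++i) {   // push every block onto the free list
            Node *n = reinterpret_cast<Node *>(storage_ + i * sizeof(Node));
            n->next = free_;
            free_ = n;
        }
    }
    void *alloc() {                             // pop the free list
        if (!free_) return nullptr;             // pool exhausted
        Node *n = free_;
        free_ = n->next;
        return n;
    }
    void release(void *p) {                     // push the block back
        Node *n = static_cast<Node *>(p);
        n->next = free_;
        free_ = n;
    }
private:
    union Node { Node *next; unsigned char pad[Size]; };
    alignas(Node) unsigned char storage_[sizeof(Node) * N];
    Node *free_;
};
```

The appeal is obvious: no syscalls, no heap metadata, and allocation cost that never varies; the downside, as the article notes, is wasting memory whenever the fixed sizes don't match the traffic.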
Hmmm 🙂

Bug smashing on the stack, hehe

Ok, I've spent some time longer than expected working on my game's net code, mostly because I both needed one night's sleep this week, and wanted proper IPv4/IPv6 support going on. Sunday I basically finished principal work on the net module, and completed a pair of test cases: a really simple client and server program. After that went smoothly, I had planned to complete the finishing touches.

The problem: the server example segfaulted. Now, those who know me know I consider it a blight upon my honour as a programmer for any code that I've written to crash without due cause (i.e. not my fault). So I spent work on Monday taking a break: refining code quality and then porting it from unix to windows. During the night's final testing runs after that, however, I had not solved the mysterious crash yet, and got the same results. I switched over to my laptop and recompiled with debugging symbols, only to find that my program worked as normal, only dying with a segmentation violation once main() had completed, and the program's shutdown was now "Beyond" my control.

My first thought of course, was "Fuck, I've probably screwed my stack"[1]; a quick Google suggested I was right. I also noted that turning on GCC's stack protection option prevented the crash, as did manually adding a pointer to about 5 bytes of extra character data to the stack. Everything, to me, looked like the return address from main was being clobbered; imagine invoking a buggy function that tries to return to somewhere other than where you called it from. Before going to bed, I narrowed it down to the interface for accept(). Further testing showed that omitting the request for data and just claiming the new socket descriptor stopped the crash, but still left some funky problems with an invalid socket. Inspection of the operation also showed that the changes were well within the buffer's boundary, yet it was still causing the crash. So I finished the remaining stuff (i.e. free()ing memory) and went to bed.

Having failed to figure it out this afternoon, and starting to get quite drowsy, I played a trump card and installed Valgrind. It's one of those uber sexy tools you dream of like driving a Ferrari, but don't always find a way to get a hold of lol. For developers in general, I would say that Valgrind is the closest thing to a killer app for Linux developers as you are ever going to get. In my problem case however, Valgrind wasn't able to reveal the source of the problem, only that the problem was indeed writing to memory I shouldn't be screwing with o/.

So I put down Valgrind and GDB, and turned to my favourite debugging tool: the human mind. It was like someone threw on the lights once I saw the problem. Man, it's wonderful what a good night's sleep can do!

Many data structures in Stargella are intentionally designed so that they can be allocated on the stack or heap as needed, in order to conserve on unnecessary dynamic memory allocation overhead, in so far as is possible. So the example server/client code, of course, allocated its per socket data structures right inside main(). Because there is no real promise of source level compatibility between systems, the networking module is implemented as a header file, having function prototypes and a data structure representing a socket; the latter contains an opaque pointer to implementation specific details, themselves defined in unix.c and windows.c, along with the actual implementations of the network functions. Because of that, the behaviour of accept() can't be emulated. Net_Accept() takes two pointers as parameters: first a socket that has been through Net_Listen(), and second another socket that will be initialised with the new connection; Net_Accept() then returns an appropriate boolean value.

All the stuff interesting to the operating system's sockets API is accessed through that aforementioned pointer, e.g. s->sock->whatever. What was the big all flugging problem? The mock up of Net_Accept() was originally written to just return the file descriptor from accept(), allowing me to make sure that Net_Listen() actually worked correctly. Then I adjusted it to handle setting up the data of the new socket, in order to test the IPv4/IPv6 indifference and rewrite the client/server examples using Net_Send() and Net_Recv(), and that's when the crashes started.

I forgot to allocate memory for the sub structure before writing to the fields in it, with some nasty results. When I say that I don't mind manual memory management, I forget to mention that programming while deprived of sleep is never a good idea, with or without garbage collection ^_^.
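
A minimal reconstruction of the mistake; the struct shapes and names here are guesses at my own code's layout, not the real thing:

```cpp
#include <cstdlib>

struct SysSocket { int fd; };          // OS-specific details behind the opaque pointer
struct NetSocket { SysSocket *sock; }; // roughly what the header exposes

// The buggy mock-up wrote through s->sock without allocating it first,
// scribbling over wherever the uninitialised pointer happened to aim
// (in my case, the caller's stack frame, hence the smashed return address):
//
//   s->sock->fd = accept(...);   /* boom */

// The fix: allocate the implementation struct before touching its fields.
bool Net_Accept_fixed(NetSocket * /*listener*/, NetSocket *s, int new_fd)
{
    s->sock = static_cast<SysSocket *>(std::malloc(sizeof *s->sock));
    if (!s->sock)
        return false;
    s->sock->fd = new_fd;   // stand-in for the real accept() call
    return true;
}
```

Exactly the kind of bug Valgrind normally nails, except the stray pointer here landed inside a valid stack frame, so there was no invalid access to report.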

Now that the net code is virtually complete, I can hook it into my Raven Shield server admin tool, which will make sure to iron out any missing kinks, before it gets committed to my game. Hehehe.

My game's net module is almost complete under unix, and in theory should be able to handle both IPv4 and IPv6 communication fine; not that I have much to test the latter with. Windows support will need a bit more tweaking, and then it'll be possible to plug it into my Raven Shield admin tool quite easily.

Pardoning interruptions, I’ve spent about 6 hours of my day working in straight C, followed by about 15-20 minutes for a little rest. For some sickening reason, my weekends almost always fall into the category of working all day, eating dinner, then working until dawn lol.

Doing things in C, I find more time consuming than more dynamic languages, chiefly because of how much testing I (try to) do, coupled with how much lower-level stuff one has to keep in mind. Having to deal with memory management issues is not a problem for me, although I do admit that garbage collected languages can be very enjoyable. To be honest, I find the portability problems of doing anything interesting to be a greater drawback than managing memory; e.g. by design Python is not very portable in comparison to C, but it's more than portable enough for anything you're likely to bump into on a PC, and can do 'more' with less bother, for an acceptable level of portability. They are very different languages at heart, and their designs reflect it strongly. A lot of people (including myself) call C's portability a myth, and it is in the sense of what most people want (especially me); I doubt that is possible without a more modern rendition of the language (NOT Java or C++). Where C truly excels at portability, well, I reckon you'll just have to study more assembly language to understand the value of it.

Now if only I had something better to do than spend all my time behind a computer screen, monkeying around with GCC on one side, and MSVC on the other 8=).