A little size of fun.

Generally, I’m rather finicky about making assumptions about the sizes of types, and even about conversions between signed and unsigned types. Although I occasionally skirt dangerous ground, such as feeding a function pointer into an object pointer and expecting to be able to cast the void* back to the function pointer (basically implementation-defined by C, but required by POSIX), I also tend to make notes when I (fully aware) do things that are non-portable but not necessarily obvious. At least in the example I just mentioned, I didn’t know it was dangerous ground until I reviewed code under -pedantic, and scratched my head at the resulting warning message.

Normally I take things in stride, and just cringe when I see “portable” software doing stupid things like using unsigned int where they mean uint32_t, or making the (flawed) assumption that a pointer to xyz will be as large as an object of xyz. So I thought I’d take a look-see here, and wrote a program to display the sizes in bits rather than bytes, since most of the folks I know will get the picture better that way :-o.

Also being a practical man, I kind of like to know what is off the edge of the map, just in case I some day have to jump off o/.

Here is a simple program to satisfy my curiosity:

#include <stdio.h>
#include <limits.h>
#include <stddef.h>
#include <stdint.h>

int
main(void) {

    printf("sizeof(char)\t= %zu-bits\n", sizeof(char)*CHAR_BIT);
    printf("sizeof(char*)\t= %zu-bits\n", sizeof(char*)*CHAR_BIT);
    printf("sizeof(wchar_t)\t= %zu-bits\n", sizeof(wchar_t)*CHAR_BIT);
    printf("sizeof(wchar_t*)\t= %zu-bits\n", sizeof(wchar_t*)*CHAR_BIT);
    printf("sizeof(short int)\t= %zu-bits\n", sizeof(short int)*CHAR_BIT);
    printf("sizeof(short int*)\t= %zu-bits\n", sizeof(short int*)*CHAR_BIT);
    printf("sizeof(int)\t= %zu-bits\n", sizeof(int)*CHAR_BIT);
    printf("sizeof(int*)\t= %zu-bits\n", sizeof(int*)*CHAR_BIT);
    printf("sizeof(long)\t= %zu-bits\n", sizeof(long)*CHAR_BIT);
    printf("sizeof(long*)\t= %zu-bits\n", sizeof(long*)*CHAR_BIT);
    printf("sizeof(long long)\t= %zu-bits\n", sizeof(long long)*CHAR_BIT);
    printf("sizeof(long long*)\t= %zu-bits\n", sizeof(long long*)*CHAR_BIT);
    printf("sizeof(size_t)\t= %zu-bits\n", sizeof(size_t)*CHAR_BIT);
    printf("sizeof(size_t*)\t= %zu-bits\n", sizeof(size_t*)*CHAR_BIT);
    printf("sizeof(float)\t= %zu-bits\n", sizeof(float)*CHAR_BIT);
    printf("sizeof(float*)\t= %zu-bits\n", sizeof(float*)*CHAR_BIT);
    printf("sizeof(double)\t= %zu-bits\n", sizeof(double)*CHAR_BIT);
    printf("sizeof(double*)\t= %zu-bits\n", sizeof(double*)*CHAR_BIT);
    printf("sizeof(long double)\t= %zu-bits\n", sizeof(long double)*CHAR_BIT);
    printf("sizeof(long double*)\t= %zu-bits\n", sizeof(long double*)*CHAR_BIT);
    printf("sizeof(ptrdiff_t)\t= %zu-bits\n", sizeof(ptrdiff_t)*CHAR_BIT);
    printf("sizeof(ptrdiff_t*)\t= %zu-bits\n", sizeof(ptrdiff_t*)*CHAR_BIT);
    printf("sizeof(intptr_t)\t= %zu-bits\n", sizeof(intptr_t)*CHAR_BIT);
    printf("sizeof(intptr_t*)\t= %zu-bits\n", sizeof(intptr_t*)*CHAR_BIT);

    return 0;
}


The C standard defines CHAR_BIT, in limits.h, as the number of bits in the smallest object that is not a bit-field, roughly meaning that CHAR_BIT = number of bits in a byte, for all practical intents and purposes. Likewise, the sizeof operator is defined as returning the size of its operand in bytes, as an implementation-defined unsigned integer value having the type size_t, from stddef.h. For the fuckos out there, the standard also says that a char object is large enough to store any character of the basic execution character set (A-Z, a-z, 0-9, space, plus the required punctuation and control characters—roughly a character set of 99 symbols that fit within a single byte), and that those characters will have a non-negative value while doing it. It also declares that sizeof(char) == 1. From this we can infer that sizeof(x) * CHAR_BIT should be the size of x in bits, and that ‘x’ is basically as good as off the edge of the map for any x that you can’t make on my grandmother’s typewriter.

Having the size of each type displayed, followed by the size of a pointer to it, is mostly done to emphasize that the size of the pointee means dick all about the size of the pointer. You’ll notice an interesting connection between pointer size and your hardware, however. Gee, that just doesn’t sound right, LOL.

Some examples:

Run on FreeBSD 8.0-STABLE i386:

sizeof(char)    = 8-bits
sizeof(char*)   = 32-bits
sizeof(wchar_t) = 32-bits
sizeof(wchar_t*)        = 32-bits
sizeof(short int)       = 16-bits
sizeof(short int*)      = 32-bits
sizeof(int)     = 32-bits
sizeof(int*)    = 32-bits
sizeof(long)    = 32-bits
sizeof(long*)   = 32-bits
sizeof(long long)       = 64-bits
sizeof(long long*)      = 32-bits
sizeof(size_t)  = 32-bits
sizeof(size_t*) = 32-bits
sizeof(float)   = 32-bits
sizeof(float*)  = 32-bits
sizeof(double)  = 64-bits
sizeof(double*) = 32-bits
sizeof(long double)     = 96-bits
sizeof(long double*)    = 32-bits
sizeof(ptrdiff_t)       = 32-bits
sizeof(ptrdiff_t*)      = 32-bits
sizeof(intptr_t)        = 32-bits
sizeof(intptr_t*)       = 32-bits

and FreeBSD 8.0-RELEASE amd64:

sizeof(char)    = 8-bits
sizeof(char*)   = 64-bits
sizeof(wchar_t) = 32-bits
sizeof(wchar_t*)        = 64-bits
sizeof(short int)       = 16-bits
sizeof(short int*)      = 64-bits
sizeof(int)     = 32-bits
sizeof(int*)    = 64-bits
sizeof(long)    = 64-bits
sizeof(long*)   = 64-bits
sizeof(long long)       = 64-bits
sizeof(long long*)      = 64-bits
sizeof(size_t)  = 64-bits
sizeof(size_t*) = 64-bits
sizeof(float)   = 32-bits
sizeof(float*)  = 64-bits
sizeof(double)  = 64-bits
sizeof(double*) = 64-bits
sizeof(long double)     = 128-bits
sizeof(long double*)    = 64-bits
sizeof(ptrdiff_t)       = 64-bits
sizeof(ptrdiff_t*)      = 64-bits
sizeof(intptr_t)        = 64-bits
sizeof(intptr_t*)       = 64-bits

I also have access to 32-bit versions of Windows NT and OpenBSD running on Pentium 4-grade hardware, but don’t feel like booting the wintel tonight, I’m too comfortable with Dixie hehe. Perhaps I will run the program on other systems and implementations, for the sake of testing, and add the results to this entry as a comment.

Here are my notes from installing JPEG-7 on Windows

Take note: I install libraries into C:\DevFiles\Libraries\What; with compiler-specific files dumped under sub-folders, e.g. C:\DevFiles\Libraries\zlib\msvc\zdll.lib and C:\DevFiles\Libraries\zlib\mingw\libzdll.a. Likewise, I leave a README.TXT file in the root, noting anything I will need to remember when it comes to using the library.

# build for Visual C++ 2008 / 9.0
> unzip "path\to\jpegsr7.zip"
# I want it in jpeg\src for safe keeping
> mkdir jpeg
> move jpeg-7 jpeg\src
# use the corresponding .vcX files for version
> copy makeasln.vc9 apps.sln
> copy makejsln.vc9 jpeg.sln
> copy makewvcp.vc9 wrjpgcom.vcproj
> copy maketvcp.vc9 jpegtran.vcproj
> copy makervcp.vc9 rdjpgcom.vcproj
> copy makedvcp.vc9 djpeg.vcproj
> copy makecvcp.vc9 cjpeg.vcproj
> copy makejvcp.vc9 jpeg.vcproj
> copy jconfig.vc jconfig.h
# I'm using vcbuild, since I read .vcproj files in vim; you may want the IDE
> vcbuild /nologo jpeg.sln "Release|Win32"
...
> vcbuild /nologo apps.sln "Release|Win32"
# I put compiler specific files in a suitable folder
> mkdir ..\msvc
> copy Release\jpeg.lib ..\msvc
# jconfig.h is compiler specific, so we dump it in our compiler directory
> copy jconfig.h ..\msvc
> del jconfig.h
# build for MinGW/MSys
$ pushd /C/DevFiles/Libraries/jpeg/
$ pushd src
# works for most packages
$ ./configure --prefix=/C/DevFiles/Libraries/jpeg/ --exec-prefix=/C/DevFiles/Libraries/jpeg/mingw/
$ make
$ make install
# move jconfig out of independent include and into compiler specific dir
$ mv ../include/jconfig.h ../mingw/

Now copy jerror.h, jinclude.h, jmorecfg.h, jpegint.h, and jpeglib.h into ../include/. Those are the portable headers that won’t vary based on compiler (MinGW/MSVC).

and here’s my notes file:

MinGW -> static link as normal
MinGW -> Use import library libjpeg.dll.a for dynamic linking with libjpeg-7.dll
MinGW -> Last build JPEG-7 on 2009-12-24

MSVC -> Can only static link against jpeg.lib; must use /MD
MSVC -> also add msvc/ to include path, for jconfig.h
MSVC -> Last build JPEG-7 on 2009-12-23

In all cases, add include + compiler folders to your include path.

Oh what fun it would be: a compiler with useful error messages

the code:

#include "common.hpp"


template<typename ACHAR>
class GameException
    : public std::exception
{
public:
    GameException() throw();
    GameException(const ACHAR*) throw();
    GameException(const basic_string<ACHAR>&) throw();
    virtual ~GameException() throw();
    virtual const ACHAR* what() const throw();
protected:
    const ACHAR *why;
};

the error:

s:\visual studio 2008\projects\tacfps\gameprojectname\source\include\gameexceptions.hpp(17) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

the solution:

fully qualify basic_string<> as std::basic_string<ACHAR>, or add ‘using std::basic_string’ to common.hpp alongside std::string and std::wstring, like I thought I did last week !!!

simple fact: compiler errors usually suck, and C++ templates don’t help any.

Maybe I’m just tired, or now I’m ready for a nap

compiler is pissed off:

1>path\source\gameconsole.cpp(53) : error C2039: 'setPostiion' : is not a member of 'Ogre::OverlayElement'
1>        path\source\ogre\ogremain\include\ogreoverlayelement.h(104) : see declaration of 'Ogre::OverlayElement'

ogreoverlayelement.h

class _OgreExport OverlayElement : public StringInterface, public Renderable, public OverlayAlloc
{
// ...
public:
/** Sets the position of the top-left corner of the element, relative to the screen size (1.0 = screen width / height) */
void setPosition(Real left, Real top);


// ...
};

The header file and the API documentation both agree: the Ogre::OverlayElement class has a public member named setPosition. The compiler, however, seems to assert that the header is wrong?

No, the compiler, the header, and the API docs are all right: I’ve just been sitting here for about 5 1/2 hours without a break… and can no longer tell the difference between ‘void setPosition(Real, Real);’ and ‘void setPostiion(Real, Real);’

!!! TIME TO TAKE A WALK !!!

Hahaha, here I am wondering ok, what the hell is wrong with

if (statement that will return false)
    DEBUG(...);
do action;

and then I noticed: uncommenting the DEBUG() macro a few minutes ago changed which statement the if actually governs.

This is exactly why every ‘style’ file I’ve written for my projects includes (and in fact, I usually follow) a note that it should *always* be if () { } and never a brace-less if (), just for this reason!!!

Ok, I’m so freaking stupid… I should just get some sleep and code when I can pay attention (and remember my old tool Perl).

Compiler errors: the art of reading Geek and laughing at your typos

The code, “typo”

RenderManager RManager;

The resulting compiler error

1>main.cpp
1>.\Source\main.cpp(9) : error C2146: syntax error : missing ';' before identifier 'RManager'
1>.\Source\main.cpp(9) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>.\Source\main.cpp(9) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

The correction

#include "RenderManager.hpp"

RenderManager RManager;

What makes me laugh: syntax error : missing ‘;’ before identifier.

The good thing: “missing type specifier – int assumed” actually hints that RenderManager is not yet a known type, as in I forgot to include the appropriate header… hehe.

Is it just me or…

/* from Tutorial 2, Direct3D 9; DirectX SDK */
if( FAILED( g_pd3dDevice->CreateVertexBuffer( 3*sizeof(CUSTOMVERTEX),
0 /*Usage*/, D3DFVF_CUSTOMVERTEX, D3DPOOL_DEFAULT, &g_pVB, NULL ) ) )
return E_FAIL;

The tutorial describes the arguments, stating that ‘The final parameter is the address of the vertex buffer to create.’ Ok, I think it is obvious that it means the address of the vertex buffer pointer being passed; furthermore, the API documentation says the final argument is “Reserved. Set this parameter to NULL. This parameter can be used in Direct3D 9 for Windows Vista to share resources”.

I can’t help but chuckle a little bit at the tutorial; maybe it is just me and my brain’s crazy English parser lol.

GCC spitting error: stray ‘\1’ in program (etc)

Ok, so I am wondering why the bloody heck I’m getting messages like main.o:x:y: error: stray ‘\1’ in program and related errors * near infinity whilst knitting object files into an executable.

Note to self: never allow your Makefile to say g++ -x c++ foo.o … -o program !!!!

I don’t know what is worse….

(EDIT: actually, this reminds me, a friend said it sounded like an encoding error; and in retrospect it looks like GCC was interpreting main.o as a C++ source file because of the -x flag lol)

The Simple DirectMedia Layer (SDL) has proven more impressive than originally anticipated. I’ve downloaded the MinGW (GCC) and MSVC (V8.0/2005) development libraries along with the source code: much to my surprise, the Borland and Watcom compilers are also supported. While I’m using GCC for the unix side of things, I fully intend to make use of Microsoft’s compiler for the windows builds. My desktop system also has the OpenWatcom compilers installed on the Windows partition; I’ve never used them, but they are available (I installed them ages ago, mainly out of respect for the old Watcom C compiler). Since I need the DirectX SDK to compile SDL from source on Win32, and it is like a 512 MB download, it’ll have to wait a while lol. The binaries available are from MSVC8, so I really would prefer compiling SDL from source: not to mention I’d feel more comfortable using the combination for projects, knowing it built well… hehe.

I’ve been taking the effort to study the Visual Studio-style build system in preparation; it will get the job done. My desktop has the Express Editions of Visual C++ (V9.0/2008), C#, and Basic installed, along with MinGW and OpenWatcom, but I avoid C/C++ development under Windows as much as possible — just not a comfortable environment. If I ever opened a shop, I would probably nab a few copies of Visual Studio proper, and just use it for building stuff ^_^.

I’m accustomed to having an entire operating system as my integrated development environment, so I do not care much for traditional IDEs; they are just not my bag. Visual Studio (particularly the more professionally oriented versions), however, is one of the best as far as such things go, and perhaps the only Microsoft product I have ever met and did not *hate* eventually. The various Visual {lang} Express Editions are also sufficient for many things; I have them set up because it was the quick route to getting something that might come in handy later, and I have no need to buy VS Standard or Professional.

GCC for Unix and Visual C++ Express for Windows will do fine for SDL, but I have yet to decide on an XML parser… The only XML parsing I’ve ever done in C++ has been through the Qt toolkit. Normally, I would expect to use libxml++ for this, but using libxml under MSVC might be more annoying than I am willing to tolerate at compile/link time. Another option, I reckon, would be to try out Xerces-C++.

All development is basically going to be done on a FreeBSD machine, as that *is* my concept of an IDE lol. The only interest I have in Visual C++, is to get the most ‘bang’ out of the Win32 builds. So with luck, I will never have to bugger with the S.O.B. beyond getting my project built, hehe.