What exactly ‘is’ my development environment?

I reckon this is a rather confusing term these days; among younger folks it will most likely mean an Integrated Development Environment. To me, it just means the environment in which one develops stuff ;).

Being a lower-level polyglot in terms of languages and tools, I generally keep a ‘palette’ associated with each of my main languages, keeping things quite simple to work with:

  • Build Tools:
    • Some viable form of Make is required; generally I’ll use a local brew if an extension is needed. I prefer GNU Make over BSD PMake, as I find it more reliably cross-platform.
    • CMake: while often little more than a poorly strung together bother, many projects now use CMake-based build systems. It is actually a good tool, but I don’t favour it for use outside of a single OS family.
    • SCons: powerful and effective, but often irksome to get a portable build. It’s usually worth having available.
    • Ant: you never know when you’re gonna need it.
    • Local brews of IDEs and their background stuff, for example Visual Studio for the vcbuild/msbuild modules and/or Code::Blocks. If I had a Mac, I’d likely have Xcode handy.
  • Documentation Tools
    • Unix: troff/nroff and the usual macro packages. I actually like it.
    • DocBook and XML/XSLT processing utilities. LibXSLT comes in handy.
    • reStructuredText and company
    • Any local language related tools (e.g. for Java, Perl, Python, and C#)
    • Doxygen: a multi-lingual documentation generator.
    • Exuberant Ctags: an improved and vastly more multi-lingual upgrade over ctags.
    • TeX and LaTeX setups. I like TeXLive.
  • Source Code Management / Version Control Systems
    • Git — must have!
    • Anything I need to be handy with:
      • CVS
      • Subversion/SVN
      • Bazaar/BZR
      • Mercurial/HG
  • C/C++
    • I generally set up and maintain several compilers, multiple versions being welcome. Generally I try to hang onto a member each of the GCC 3 and 4 branches, and a fairly recent version of Microsoft Visual C++. I also tend to carry about a copy of PCC and Watcom, under unix-like and Windows systems respectively.
  • Java
    • A suitable JDK, or a complete software development kit where appropriate.
    • The GNU Compiler for Java can be useful.
  • C#
    • Mono and preferably the full stack of technology.
    • Under Windows: several versions of the .NET framework and at least a workable version of Microsoft Visual C#.
  • Python
    • A copy of CPython, preferably both modern versions of 2.x and 3.x releases.
    • The usual parts of CPython that some distros strip out, like the SQLite3 or Tk bindings.
    • Another implementation for testing (e.g. IronPython) is appreciated.
  • Perl
    • A standard perl distribution, preferably the current major version or the one before it.
    • Common perl modules one is actually likely to use someday.
  • Lisp
    • CLISP for general Common Lisp use.
    • Armed Bear Common Lisp (ABCL) in case it eases deployment issues
    • GNU Guile: my normal way to use scheme.
    • Bigloo: a scheme compiler that’s worth poking around
    • Some other readily available Scheme implementation, preferably one that is at least moderately R5RS compliant
  • PHP
    • Fairly recent version of PHP setup with
      • Command line interp.
      • Suitable Apache modules
      • The CGI/FastCGI friendly thingy
  • Ruby
    • The current mainline version.
    • Rake build tool.
    • A collection of handy modules
  • UNIX shell scripting
    • Something fairly portable, ash/dash based is nice.
    • GNU BASH.
    • Real and public domain versions of the Korn Shell.
    • ZSH, my favourite.
  • Go
    • Standard distribution compiled from source.
GUI and console versions of Vi IMproved are a very obvious requirement ;). I also tend to keep versions of Emacs, some flavour of MicroEMACS, and SciTE available in a pinch. I like having ed available.
Generally some form of webserver, be it a quick tester (à la Python) or dedicated (I like nginx and Apache), is usually required; plus a decent web browser with JavaScript support.
Profiling, code generation, analysis, and debugging tools are almost universally welcome. I in particular like to keep Valgrind and GDB handy for a rainy day.
Likewise, I prefer having certain libraries fully integrated into that stack, i.e. where appropriate having interfaces to the common GNU/GNOME libraries (GTK+ and company), the Qt3 and Qt4 libraries, bindings for SQLite3 and a major player (MySQL, MSSQL, etc.), OpenGL, and so on and so forth. I tend to leverage both languages and tools whenever possible.
Someday I’ll likely incorporate Lua, and dialects of Forth and ML, into the mixture. Likewise, I prefer a reasonably NAWK-friendly version of AWK to be available. I also have interests in picking up Prolog, Haskell, Erlang, Ada, and a few lesser-known languages, but just don’t have the time to screw with such things a lot these days :'(.
Simply put, where I go, a whole freaking lot of development tools go with me!

My thoughts on “Debugger Tips: 8 ways breakpoints can save your next software project”

Debugger Tips: 8 ways breakpoints can save your next software project: “Here are eight fairly simple techniques for using breakpoints and other features of your C/C++ debugger that can give you enormous power and visibility into your program.”


An interesting article that’s worth reading, for anyone who is ever going to get stuck running a debugger. Personally, I prefer log files and analyzing the code in my brain, but when it’s a task you can’t cram up there in grey matter, or you need to cuddle up to the run time, a good debugger is your best friend.

Whew, it’s been a jumpin’, hoppin’ day!

I was up all night fiddling with Code::Blocks and Stargella, plus work this morning, and ideas for an interesting project. However, it’s a project that would call for C++, and I hate C++, lol. I like C but hate C++… funny. On the upside, I’ve finally gotten Stargella builds sorted out, and I’m somewhat tempted to hunt down and eliminate ‘itches’ here and there, but I’m not sure whether or not patches would be welcomed for some things that seem, to me, to be good ideas. They’ve accrued a fair number of open patches and bug reports, so I don’t think it’s a highly active project. The code base is only about sixty-some thousand lines, not too big for all it does.

Sigh, nothing but work on the horizon o/. At least there’s something on TV tonight that I haven’t seen before: the original Night of the Living Dead. Geeze, what is it with all the zombie flicks lately? Compared to what I’ve seen of the remake (the beginning and last half), it certainly has a more spirited beginning; bah, I need some popcorn lol. I’ve no idea ottomh what they filmed it in, but it does give an interesting feel to the movie, different from most contemporary films of the era.

The price of pissing me off.

A word of warning: this post contains quite a bit of profanity after the jump break.

After being up until well past 0400 last night setting up a decent SCons build on FreeBSD (to test its usefulness for this project), I started setting up the required config tweaks today on the Windows machine.

Then, suddenly, I found an odd difference between how the same SCons version behaves on FreeBSD/unix and on Windows; in fact, this kind of thing is why I’ve given up on the more popular CMake.

lib = os.path.join('#', outdir)
#
# For some really mother fucking stupid reason, Glob('*.$OBJSUFFIX') works
# on FreeBSD scons 1.2.0_20091224 but not on the Win32 scons 1.2.0_d20091224
# from the installer. So we have to use this fucking method, which will require
# us to wrap it in exceptions in case OBJSUFFIX is None or missing totally on
# platform X.Y.Z. from fucking hell -- I fucking hate software build tools.
#
import sys

try:
    suffix = env['OBJSUFFIX']
    if not suffix:
        raise KeyError('OBJSUFFIX is unset')
except KeyError:
    print("Fatal error: I can't make libs from object files without a file "
          "extension. TERMINATING THIS SCONSCRIPT")
    sys.exit(127)

src = os.path.join('#', outdir, '*' + str(suffix))

env.StaticLibrary(lib, Glob(src))



# edit: an explanation of why subst() isn’t used in place of the [] look up: once you violate the consistency part, I’m not going to trust that there will be an $OBJSUFFIX to subst() on.

I was going to make most of this code a simple library function for the sake of code reuse, instead of having a virtual copy/paste of the same few SConscript files between all my games’ source modules. After this, I’ll stick to having one of those comment blocks in each source module’s SConscript file as appropriate.

Generally I like SCons; it’s rather similar to what I’ve envisioned as the “perfect build tool”, even though SCons still requires about three times the amount of work for Stargella’s build system than tmk should. However, SCons is here today; tmk won’t be for a long time. What annoys me is that SCons is still just as bad as all the other build tools out there.

For crying out loud people, it is the twenty-first fscking century… and asking for some consistency from our modern build tools still demands limiting ourselves to the 1970s-era make tool from PWB/UNIX.


Oh how I love git, let me count the ways!

Initially I kept separate repositories for each portion, most notably the EPI core and build system. Since Trac was selected as part of our web stack for getting stuff done, and it can be a tad pissy about multiple repositories, I’ve opted to create a “merged” repository from the others. Essentially, Trac will require either multiple Trac environments or a singular repository; while we take the time to decide which to use, I just whip up a solution out of my hat like a good little geek &(^_^)&.

The trees were located in dixie:~/Projects/EPI/repo/ and still are. After backing up the repositories, I created a new ‘work’ tree to become the home of this merger, threw them together, and did some clean-up. First I tried using git filter-branch and git format-patch together with a few other mungies to get the history retained how I wanted it; then I decided to screw it and just make branches reflect the history, even better than what I wanted.

I then used git format-patch to create patch sets, placing them in temporary directories. Rather than change the patch sets to reflect the merge (a good task for Perl scripting), I decided to rely on git mv for something more foolproof than hacking patch files with hashed-out automata.

Creating a new ‘work’ repository, I made an initial commit with a stub file, then created suitable branches for each of my original repos (a task easily automated in sh or Perl, for people with lots of branches). A little bit of git checkout and git am then slurped up the patch sets, bringing each repository (and its associated branches) under one roof.

Creating the new merged ‘master’ was a simple octopus merge.

$ git checkout master
$ git merge repo1 repo2 ...
$ git mv ...
$ git commit -a

Job done, good night!

Note also, I wanted the trees merged, so conflicts were not even present, hehe.
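For the curious, the whole dance can be sketched end-to-end on a pair of throwaway repositories. Everything here (repo names, paths, commit messages) is an illustrative stand-in, not the original EPI trees:

```shell
#!/bin/sh
# Sketch: replay one repo's history, via format-patch and am, onto a branch
# of a merged 'work' repo, then merge it in. All names are made up.
set -e
tmp=$(mktemp -d)

# A stand-in for one of the original repositories.
git init -q "$tmp/core" && cd "$tmp/core"
git config user.email epi@example.com && git config user.name EPI
echo core > core.txt
git add core.txt && git commit -qm 'core: initial import'

# Export its history as a patch set.
git format-patch --root -o "$tmp/patches" HEAD >/dev/null

# The merged 'work' repository, seeded with a stub commit.
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email epi@example.com && git config user.name EPI
echo stub > README
git add README && git commit -qm 'initial stub commit'

# Replay the exported history onto its own branch with git am.
git checkout -q -b core-import
git am -q "$tmp/patches"/*.patch
git checkout -q -

# Merge it back in; with several such branches this becomes the octopus merge.
git merge -q -m 'merge core history' core-import
```

With only one imported branch this degenerates to a fast-forward, but the shape of the workflow is the same.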

A little fun with git: publicly exporting your local repositories

One of the great things about git is its distributed nature; in my humble opinion, being able to tell your partners to pull your latest code is a useful stopgap for code review (without better tools… for now, lol), rather than having to e-mail the flubber as a tarball.

In my case, I maintain my working tree on Dixie, usually stored under ~/Projects/ somewhere. To prevent freak data loss, I also push things out to bare repositories stored on Vectra, under /srv/git. Those repos on Vectra are my “centrals”, which will usually get pushed out somewhere else (e.g. SourceForge) if the project’s public. The fact that my home directory on Dixie is also backed up is a bonus hehe.

In order to set up a suitable means for people to clone, fetch, and pull from my git repositories, I edited my router’s configuration and set up a NAT (Network Address Translation) rule to forward a suitable port to Vectra. In Vectra’s pf rulesets, I unblocked said port.

For write access, I use SSH and public key authentication to manage the repositories: and no one is permitted SSH access to any of my machines, unless they manage to break into my home wireless (or penetrate and suitably spoof my workstation), discover my username and hostname mappings, and brute-force their way through the key pair before the internal firewalls tell them to F-off for good ;). In which case, good job monsieur or mademoiselle psychic!

Public read-only access may be setup with the humble git-daemon. Read-only access with controls, well is a task for something else ^_^.

The git daemon can be a fairly strict prickly pear about what it does export, so I feel reasonably comfortable with it. I created a simple whitelist file, called /srv/git/exports, that describes which repositories may be exported: the format is one line per path of a repository to export publicly, with blank lines and lines starting with a # being ignored.
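For illustration, such a whitelist might look like the following (the repository names here are invented):

```
# /srv/git/exports -- repositories the git daemon may serve, one per line.
# Blank lines and lines starting with '#' are skipped.
/srv/git/stargella.git
/srv/git/epi.git

# not public yet:
#/srv/git/tmk.git
```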

I wrote a simple /etc/rc.git-daemon script that I can call from /etc/rc.local when OpenBSD starts, like so:

Terry@vectra$ cat /etc/rc.git-daemon                                            
#!/bin/sh

if [ "$1" = stop ]; then
    logger -t GIT stopping git daemon
    kill -9 $(cat /srv/git/git-daemon.pid) && rm /srv/git/git-daemon.pid \
        && logger -t GIT git daemon stopped
else
    logger -t GIT starting git daemon
    grep -E -v '^$|^#' /srv/git/exports \
        | xargs git daemon --user=nobody --group=git \
            --pid-file=/srv/git/git-daemon.pid --verbose --detach \
            --export-all --syslog --base-path=/srv/git \
        && logger -t GIT git daemon started

    echo -n ' git-daemon'
fi

After this is executed, it’s possible to:

$ git clone git://my.ip.addr.here/exported/repo/relative/to/base-path

As an extra bonus, since /srv/git uses my ‘git’ group for permissions but my umask by default tells everyone to screw off, I have to manually set permissions on repositories I wish to export before anyone can access them through the git-daemon.
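Loosening those permissions is basically a one-liner; here’s a sketch using a hypothetical repository path (the real ones live under /srv/git, and the real group is ‘git’):

```shell
# Give the group read (and, on directories, traverse) access to a repository
# so git-daemon can serve it. Path and group are made-up stand-ins.
set -e
repo=$(mktemp -d)/stargella.git      # stand-in for /srv/git/stargella.git
mkdir -p "$repo"
echo 'ref: refs/heads/master' > "$repo/HEAD"
chgrp -R "$(id -gn)" "$repo"         # stand-in for the real 'git' group
chmod -R g+rX "$repo"
```

The capital X only adds execute where it makes sense (directories, or files already executable), which is exactly what you want on a repo tree.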

Ok, so I’m nuts.

Combining vcbuild and vcexpress running at the same time can lead to such interesting linkage errors, if you try running vcbuild before saving the changed solution file back out ^_^.

Ok, no wonder I so much prefer just opening project files in vim; it saves me from having to launch the bloody IDE 8=).

Mating Vi IMproved with Visual C++, part I

Maybe it’s because it is an Integrated Development Environment, but Visual C++ seems to be a little lacking in its handling of external editors (at least in the Express edition I have available). It seems the best way to get MSVC to work with Vi IMproved for editing files is to right-click on a file in the solution explorer docklet and click “Open with”. From there one can specify a program to open the file with and force it as the default editor; the downside is the bloody thing seems to reject the concept of command line arguments.

As such, I created a new Win32 application in the IDE, and stripped the fundamental code down to the following:

#include <windows.h>
#include <tchar.h>
#include <process.h>    /* for _texecl() */

#define GVIM_EXE  _T("P:/Editors/Vim/vim-personal/gvim.exe")
#define GVIM_ARGS _T("--servername"), _T("MSVC"), _T("--remote-tab-silent")

int APIENTRY
_tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
          LPTSTR lpCmdLine, int nCmdShow)
{
    UNREFERENCED_PARAMETER(hPrevInstance);
    UNREFERENCED_PARAMETER(hInstance);
    UNREFERENCED_PARAMETER(nCmdShow);

    /* replace this process with gvim, handing it whatever file name(s)
       Visual C++ passed on our command line */
    _texecl(GVIM_EXE, _T("gvim"), GVIM_ARGS, lpCmdLine, NULL);

    return 0;
}

Which means I get one instance of Vim running, and double-clicking files in the solution explorer will open a new tab in the GVim window; gotta love an editor with a client-server feature hehe.

I have Michael Graz’s visual_studio.vim installed, along with the required Python for Windows extensions. The plugin loads, and appears to be exactly the *first* vim plugin that I can actually find a purpose for using! Except for one small problem…. the plugin can’t seem to chatter with the running instance of Visual C++ 2008 Express Edition!

Of course, I could likely jerry-rig Vim’s :make command to invoke vcbuild for me without much trouble.
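That rigging would amount to a couple of vimrc lines; a sketch, with the project file and configuration names as made-up examples (the errorformat is a commonly-circulated one for cl’s output, not something I’ve battle-tested against vcbuild):

```vim
" Drive vcbuild from :make; project file and configuration are hypothetical.
set makeprg=vcbuild\ /nologo\ MyProject.vcproj\ \"Debug\|Win32\"
" Parse cl's 'file(line) : error C2065: ...' messages into the quickfix list.
set errorformat=%f(%l)\ :\ %t%*[^0-9]%n:\ %m
```

After that, :make builds the project and :cn walks the errors, all without leaving Vim.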

Heh, and just for the heck of it, I wonder if a similar plugin could be written for other IDEs, like Code::Blocks, XCode, and KDevelop?

I’m kind of happy with myself at the moment. I’ve found that Visual Studio project files are about the only reliable way I can get things to compile with Visual C++’s compiler on my lone Windows machine. Since I greatly prefer using a command prompt for development work, having to alt-tab between programs like VCExpress/devenv and Explorer windows is not something I’ll put up with if I don’t have to lol.

Since I do require a real man’s editor, I set the IDE to load gvim as the default editor for source files; soon I’ll rig it to use Vim’s client-server feature (:he client-server). Likewise, that made Visual C++ little more than a very big project management and build system. Some weeks ago, whilst looking for cl/link (compiler/linker) switches and a reference to nmake (Microsoft’s make utility), I found the vcbuild utility, which is the “Visual C++ Project Builder – Command Line Version”. So far it seems very suitable, which would mean the IDE is now only needed for managing the project and solution files, right? Well, not really!

When I started using Microsoft’s development environment (this box only has the Express editions; I usually do development on my FreeBSD powered laptop!) the first thing I did was look at the .sln and .vcproj files it creates. The solution file (.sln) basically describes the bigger picture; for my present projects it does little more than reference the project files (.vcproj) that make up the solution. The project files, on the other hand, are XML. A project file defines the various details: essentially what you get in the IDE’s configuration manager for a specific project, and the files in the solution explorer. I was very happy to see Visual C++ using XML files, because it means I can *read* the things before opening them with an IDE.

Since the files are XML and the format is pretty obvious, one can modify a project file quite easily; adding or subtracting things like source files, include directories, and libraries is a trivial task. If it wasn’t for the use of ProjectGUIDs, I might never need to run the IDE again :-/. Who knows, maybe I won’t even need that in a while lol.
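From memory, a stripped-down VC9 project file looks roughly like this (project name, GUID, and file names all invented for illustration), which is why hand-editing the file list is so painless:

```xml
<?xml version="1.0" encoding="Windows-1252"?>
<VisualStudioProject
	ProjectType="Visual C++"
	Version="9.00"
	Name="MyGame"
	ProjectGUID="{00000000-0000-0000-0000-000000000000}">
	<Configurations>
		<!-- One Configuration element per build flavour (Debug|Win32,
		     Release|Win32, ...), holding the compiler/linker settings. -->
		<Configuration Name="Debug|Win32" />
	</Configurations>
	<Files>
		<Filter Name="Source Files">
			<File RelativePath=".\main.cpp" />
			<File RelativePath=".\game.cpp" />
		</Filter>
	</Files>
</VisualStudioProject>
```

Adding a source file really is just another File element under the right Filter.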

Delightfully enjoyed this article

http://freshmeat.net/articles/stop-the-autoconf-insanity-why-we-need-a-new-build-system

I can also sympathize with the fictitious Joe and Jane in the examples; I have no love for the GNU build system / autotools. I’ve also had to wade through ugly auto* files and outdated documentation on occasion ;). The only part of that toolchain I do like is GNU Make, because it is the most portable make implementation available, short of limiting things to a subset of the standard syntax.

I don’t quite understand the author’s comments about m4, because it is a pretty simple tool. Heh, I still remember watching The One on TV one night, interspersed with learning the m4 macro processor. IMHO m4 is an incredibly useful tool: being a fairly generic (yet expressive) macro processor, it lends itself to virtually any task that can benefit from pre- or post-processing. Although to be fair, for what most (smart) people would use m4 for, I typically (ab)use the C preprocessor (cpp) into doing the job for me (^_^)/. The main reason I avoid m4 is that I can never seem to count on a *consistent* set of behaviours wherever m4 can be found or ported. The last time I required m4 for a project, I ended up in a “F it, I’ll make a C/C++ compiler a dependency” situation, because the platform’s m4 would not behave in accordance with the norm. It is a shame really; GNU M4 adds some nifty features (and even more shamefully, it was a ported/tweaked version of GNU M4 that was the problem child in the aforementioned situation lol).

When it comes to building things of my own, I usually create a Makefile, the exception being Qt-based stuff, in which case I generate makefiles with qmake ;). I’ve also considered implementing a Perl script that automagically does the right thing (or should I say, infers the right thing) from a quick bit of build rules written in XML; but why do that, when there is a tool like Ant? I personally like makefiles and GNU Make; then again, I’ll put up with virtually any make with $() and documented inference rules… hehe

SCons has been increasingly interesting to me, but unfortunately time constraints mean writing custom makefiles is a more economical use of time than learning a new tool like SCons :-/. Likewise, the main reason I have never adopted the Boost libraries is that I have no time to fiddle with their build tool, which also interests me.