On my way to the head, I was thinking about ways to improve the robustness of a program that I’ve been tinkering with on the side. Simply put, it defines an ordered set of roughly hierarchical data that is integral to later processing based on certain groupings of it. The set is the kind of collection that is best recalled later through an associative container, wherein the keys may be any unique attribute of the data being processed, rather than having to be any one given accessor.
The obvious solution to bundling the data is to create an abstract record representing the keys that need querying, i.e. each object’s attributes are expected to correspond to a unique instance formed by the data set. In thinking about how such a thing might be implemented without losing the speed of a hash table lookup, the first thought to come to mind was, of course, the most simple and straightforward one. If the implementation language were C, it would be trivial to throw together a set of structures representing items in the data set, and wrap them in an opaque record that binds together a group of hash tables or self-balancing BSTs, one per key we want, each of which would look up a pointer to the individual record through a structure tuned for minimal memory usage. Second to come to mind was a rather interesting tree structure for minimising the cost of retrieving any given node. At which point, of course, I remembered that this particular implementation case dealt with a certain language that traded such memory consciousness for a garbage collector.
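To make that first idea concrete, here’s a minimal sketch of how the C version might look. Everything in it is made up for illustration: the record’s name and id fields stand in for whatever the real key attributes are, and the fixed-size chained hash tables stand in for any decent table or balanced tree. The point is simply that every index stores a pointer, so each record lives in memory exactly once no matter how many keys can reach it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

/* One record in the set; "name" and "id" are hypothetical key
 * attributes standing in for whatever the real data provides. */
struct record {
    const char *name;
    unsigned    id;
    /* ... payload fields ... */
};

/* Chaining node: the indexes store pointers, never copies. */
struct node {
    struct record *rec;
    struct node   *next;
};

/* Opaque handle binding together one table per queryable key. */
struct record_set {
    struct node *by_name[NBUCKETS];
    struct node *by_id[NBUCKETS];
};

static unsigned hash_str(const char *s)
{
    unsigned h = 5381;                      /* djb2 */
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

static void push(struct node **bucket, struct record *rec)
{
    struct node *n = malloc(sizeof *n);
    n->rec  = rec;
    n->next = *bucket;
    *bucket = n;
}

/* Insert once; the record becomes reachable through every key. */
static void set_insert(struct record_set *set, struct record *rec)
{
    push(&set->by_name[hash_str(rec->name)], rec);
    push(&set->by_id[rec->id % NBUCKETS], rec);
}

static struct record *find_by_name(struct record_set *set, const char *name)
{
    for (struct node *n = set->by_name[hash_str(name)]; n; n = n->next)
        if (strcmp(n->rec->name, name) == 0)
            return n->rec;
    return NULL;
}

static struct record *find_by_id(struct record_set *set, unsigned id)
{
    for (struct node *n = set->by_id[id % NBUCKETS]; n; n = n->next)
        if (n->rec->id == id)
            return n->rec;
    return NULL;
}

int main(void)
{
    static struct record_set set;   /* zero-initialised: all buckets empty */
    struct record r = { "alpha", 42 };

    set_insert(&set, &r);
    printf("%u\n", find_by_name(&set, "alpha")->id);  /* 42 */
    printf("%s\n", find_by_id(&set, 42)->name);       /* alpha */
    return 0;
}

Deletion, resizing, and error handling are all omitted; a production version would want them, but they don’t change the shape of the thing.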
On my way back to my workstation, I thunk to myself, “Well hell, I just defined a relational data structure!”, and with the idea of running an SQLite database in process memory now floating through my mind, sure enough the API I had envisioned was little more than a relational algebra tuned to the problem domain being worked in.
The implementation language might lack an equivalent to C’s manual memory management, and gives no guarantee that the amount of copying and GC work involved wouldn’t balloon with the size of the data set… but it does have a binding to SQLite, which is fairly handy, hehehe. So the obvious question is which way handles things more efficiently: relying on the language implementation to avoid unnecessary copying of memory, or eating the overhead of a lightweight SQL database.
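For a sense of what the SQLite route looks like, here’s a rough sketch against the C API (a language binding would wrap the same calls); the record table and its columns are invented for the example. Opening “:memory:” keeps the whole database in process memory, and an indexed column per queryable attribute plays the role of each hand-rolled hash table.

#include <stdio.h>
#include <sqlite3.h>

/* Print each row handed back by sqlite3_exec(). */
static int print_row(void *unused, int ncols, char **vals, char **names)
{
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s ", names[i], vals[i] ? vals[i] : "NULL");
    putchar('\n');
    return 0;
}

int main(void)
{
    sqlite3 *db;

    /* ":memory:" means the database never touches disk. */
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;

    /* Hypothetical schema: one column per attribute worth querying.
     * PRIMARY KEY and UNIQUE each give their key an index. */
    sqlite3_exec(db,
        "CREATE TABLE record (id INTEGER PRIMARY KEY, name TEXT UNIQUE);"
        "INSERT INTO record VALUES (42, 'alpha');",
        NULL, NULL, NULL);

    /* Query by whichever attribute is handy; the relational algebra
     * replaces the hand-maintained per-key tables. */
    sqlite3_exec(db, "SELECT id, name FROM record WHERE name = 'alpha';",
                 print_row, NULL, NULL);

    sqlite3_close(db);
    return 0;
}

Whether the round trip through the SQL layer beats the language’s own copying behaviour is exactly the open question; the pleasant part is that the schema is the abstract record, verbatim.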
Sometimes I love how these kinds of things are so simple to work out in the course of a short trip down the hall, lol.