Feeling inspired

As always I’ve got plenty of loops open, always have, probably always will… I hate sitting idle. While I like time for R&R, I prefer to stay fairly busy. Right now I’m focusing on…

I feel inspired, in a way, to push along with work on my game projects. It’s been a bit since I’ve had time to work on it, but the SYSIO subsystem is almost complete; once that’s done, I’ll try to unify the ZPKG and SYSIO interfaces and work on using DevIL for the texture loading code. When I pause for a moment and think about the sources before me, I can see what it could become, and all I need is the time and strength to do it.

Today I also thunk up the most perfect unit test for epi-sum, and one monster data set to test an internal library against. Overall, our EPI implementation isn’t designed to compete with C/C++ runtime speed; in fact, the language was chosen with speed as an afterthought. The thing is though, as long as it can keep pace with stuff like apt-get or PBIs, I want it to be faster than any valid competition :-D. It’s also good geek fun to see where algorithms can be adjusted for savings. As an extra bonus, since the ECA code is under a BSD-style license, I can also ‘borrow’ the best parts for other projects, hehe.

When it comes to optimization, I generally “skip it” wherever possible, and I rarely use the corresponding compiler flags either. Where I focus my attention is on doing things in a good way: data structures and algorithms that fit well, solve the problem, and scale with it. You could say my focus is on finding the best solutions that don’t shoot you in the foot, nor complicate trying to understand wtf the code actually does. If a real bottleneck enters the picture, then I dig into the code monkey’s bag and start fine tuning things.

EPI rolls on

It’s been a busy, hectic, if slowly progressing day :-S. Most of my time has been a juggle between being driven crackers and trying to actually get stuff done. I’ve made about 9 commits to EPI and created more wiki content, so at least it’s been a viable use of time.

Since I’m more familiar with git than any of my colleagues, I’ve been focusing on the git side of wiki content for now. Source code management / version control technology is pretty simple to use, and although I can’t speak for the early days, modern git (1.5+) is downright easy lol. Well, it sure makes me happier than Subversion ever did >_>. Most of my experience with git has been as a sole developer, and shall we say, without having to worry about patches, so I’m more intent on having this kind of stuff documented in our wiki than I normally would be about such a matter.

Also did experiments in writing the program documentation in DocBook XML. So far so good: the manual page output from docbook-xsl is pretty damn good, and the HTML will do perfectly enough, I think, when mated to a little CSS, hehe. I’m not sure what we will be using for writing our documentation beyond the Wikiverse, but the best options are DocBook, Plain Old Documentation, and GNU Texinfo. Personally I prefer POD, but DocBook and Texinfo are more appropriate. I shared the DocBook manual I did this afternoon, along with links to the documentation for the GNU Compiler Collection, to serve as examples of what the two systems look like. If we need any custom XSL files, I can probably whip them up; I already have rudimentary stylesheets to convert a book to forum and blog posts ;).
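For the curious, generating those outputs is basically just a matter of pointing xsltproc at the docbook-xsl stylesheets; the first command below produces the manual page and the second the single-page HTML (the stylesheet paths are where the FreeBSD port drops them, so adjust to taste):

$ xsltproc /usr/local/share/xsl/docbook/manpages/docbook.xsl manual.xml
$ xsltproc -o manual.html --stringparam html.stylesheet style.css /usr/local/share/xsl/docbook/xhtml/docbook.xsl manual.xml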

For the most part, I’ve been in a rather melancholy mood of late, so being able to focus on EPI has helped me a lot today. I’m usually happier when I’m able to ‘geek out’ than when I’m assaulted and bored…

A little $HOME, NPM, and EPI fun

Been cleaning up my home directory a bit, particularly ~/Projects and the dumping ground in ~/2besorted lol. I’ve managed to find a lot of old notes, and have even imported a couple of saved items into del.icio.us. While I’ve mostly dropped all my bookmarks, there are still a few files with old bookmarks in them. I merged them, using the old `cat files | sort | uniq > newfile` method ;). Sometime I’ve gotta sift through them and see what’s worth keeping. Since ma.gnolia’s great lossage, I’ve still been maintaining the habit of using Google and my brain’s limited RAM as a substitute for bookmarking >_>.

I’ve also taken some time to sort out the changeover from Subversion to Git on NPM’s SourceForge page. Last night, I started work on converting my updater.sh to Python… figured to dub it ‘neo’ and incorporate it as part of NPM. Hey, I’ve always wanted to remove portupgrade from the picture… lol. The neo script is basically meant to be comparable to portmaster or portupgrade, and whatever century time finally brings me back to Neo Ports Manager, it will likely serve as the backend. However, I must admit, unlike NPM, this part is somewhat of a selfish agenda at heart: it will be able to do more than my updater.sh can without help.
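To give a flavour of what I mean, here’s a very rough sketch of the sort of thing neo is aiming at; none of this is actual NPM code, just an illustration of wrapping the usual FreeBSD ports tools (portsnap and pkg_version) from Python:

#!/usr/bin/env python
# sketch only: refresh the ports tree, then list ports that are out of date
import subprocess

def update_ports_tree():
    # portsnap keeps the ports tree current
    subprocess.check_call(['portsnap', 'fetch', 'update'])

def outdated_ports():
    # pkg_version -vIL= lists installed ports whose version differs from the INDEX
    out = subprocess.Popen(['pkg_version', '-vIL='], stdout=subprocess.PIPE,
                           universal_newlines=True).communicate()[0]
    return [line.split()[0] for line in out.splitlines() if line.strip()]

if __name__ == '__main__':
    update_ports_tree()
    for port in outdated_ports():
        print(port)  # a real neo would go on to rebuild or upgrade these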

Finally found the time to add a few pages to the wiki being set up with the Encapsulated Package Installer project. The stuff there is still mostly bare, because I’ve been focused elsewhere the past couple of days. Most of the content added was centred around development tasks; I’m going to have to settle down and plan out what changes need to be made. All in all, things have been moving forward. When it comes to my personal projects, EPI has the lead seat and the rest are enqueued, in terms of programming.

That being said, of course I have my strokes of working on other things when shorter periods of time are free: the odds and ends, hehe.

Oh how I love git, let me count the ways!

Initially I kept separate repositories for each portion, most notably the EPI core and build system. Since Trac was selected as part of our web stack for getting stuff done, and it can be a tad pissy about multiple repositories, I’ve opted to create a “merged” repository from the others. Essentially, Trac will require either multiple Trac environments or a singular repository; while we take the time to decide which to use, I just whipped up a solution out of my hat like a good little geek &(^_^)&.

The trees were located in dixie:~/Projects/EPI/repo/ and still are. After backing up the repositories, I created a new ‘work’ tree to become the home of this merger, threw them together, and did some cleanup. First I tried using git filter-branch and git format-patch together with a few other mungies to get the history retained how I wanted it, then I decided to screw it and just make branches reflect the history, which turned out even better than what I wanted.

I then used git format-patch to create patch sets, placing them in temporary directories. Rather than change the patch sets to reflect the merge (a good task for Perl scripting), I decided to rely on git mv as something more foolproof than hacking patch files with hashed-out automata.

Creating a new ‘work’ repository, I made an initial commit with a stub file, then created suitable branches for each of my original repos, a task easily automated in sh or Perl for people with lots of branches. A little bit of git checkout and git am then slurped up the patch sets, bringing each repository (and its associated branches) under one roof.
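Roughly, the per-repository dance went something like this (the directory and branch names below are just placeholders for illustration):

$ cd ~/Projects/EPI/repo/core
$ git format-patch --root -o /tmp/core-patches master
$ cd ~/Projects/EPI/repo/work
$ git checkout -b core
$ git am /tmp/core-patches/*.patch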

Creating the new merged ‘master’ was a simple octopus merge.

$ git checkout master
$ git merge repo1 repo2 ...
$ git mv ...
$ git commit -a

Job done, good night!

Note also, I wanted the trees merged, so conflicts were not even present, hehe.

Full throttle

So far so good; managed to burst through my weekend todos, hehe. Completed my work for [SAS], got time to play a couple of games, and did the setup for ticket and documentation management needs on a personal project. Plus I’ve been aggressively moving ahead with the [SAS] Dept. of Agriculture map for SWAT 4.

All this while upgrading my laptop >_>.

What remains to do with the DoA is setting up the access from the outside spawn on level 1 to the window breaching point up on level 3. I’ve completed the relighting and half the general cleanup of the building, so the only big task left is redoing the AI. I plan on setting up an interesting little “scenario”, hehe. If I could, pushable barricades would be a nice touch, but I’ve yet to figure that out in SwatEd.

Most of my time has been spent on the Encapsulated Package Installer (EPI) project. We now have a forum, issue tracker, and wiki system going. Things are not moving as quickly as I would like with EPI, but the past two weeks have advanced rapidly on infrastructural issues. When our new systems are more established, I’ll likely make references to them here.

Overall, the highlight of my weekend has been (at long last) getting my invitation request to Google Wave accepted. So far it seems to be coming along well, but I have almost no one to wave with at the moment lol. The ability to invite 8 others came along with it, most of which I’ve mentally reserved for a few close friends and a couple for good uses. The main downside of Wave is simply that e-mail and traditional IM systems have been around a really fraking long time, so it’s an uphill battle; one that I expect the world is not quite ready for yet, although it is very much a technology in the direction of our Internet future.

I’ve even found the time to migrate most of my gaming related pictures on to the WWW! The old home directory is getting thinner ^_^. Another thing I’ve gotten done is transferring Live Journal entries from November 2008 to this blog. I really need to pick up the pace on that. As much bother as it is, one thing I really do like about this migration procedure is that it gives me the ability to organise my old entries using labels; it walks me down memory lane and helps me find ‘useful’ posts that were forgotten.

Life is busy; the serious issue is advancing forward quickly enough for my tastes… rather than being bogged down and omni-tasked to death. My family is good at doing that.

Codes, Designs, and EPI

Today has been a fairly productive day, despite a heck of a lot of interruptions; to the point, in fact, that at least 2 hours of work time were lost to it… Thanks ma!

Most of my mental energy was devoted to refining the interface between epi-add and $EPI_WIZARD, and figuring out how best to document it. My original vision for it was using bidirectional communication between finite state machines running in separate processes (the installer and the wizard). Amid the 21,000 interrupts of the day, I’ve managed to balance out the problem, and have come up with a more interesting solution; one which vastly simplifies the wizard programming interface and grants more freedom to anyone who wants to write an $EPI_WIZARD, which should be easy as pie.

By contrast, most of my code time was spent working on epi-unpack and prototyping ideas for the previous problem. Other than a few changes that I would like to make, epi-unpack is basically done; I’ll probably work on epi-verify next, while the others are reviewing the code for epi-unpack. One thing that distinguishes our Encapsulated Package Installer (EPI) system from PC-BSD’s PushButton Installers (PBI; formerly PcBsdInstallers) is that PBI is a static monolith from the undocumented garage; ours is knit atop a framework of UNIX programs, with standards and manuals to be shipped along with them ;).

I cannot lie: UNIX has affected my philosophies on software design, for the better.

Generally, I don’t discuss business or classified projects on my Live Journal as a matter of ethics, but since EPI is now public knowledge, I’m free to blog about its development. The same can’t be said of all things SAS or work related lol. Most likely more things will filter through about EPI, so I’ve created a `tag` for it. Over 3 years and 1500+ entries, and I have never really gotten into Live Journal’s tagging feature, but I have been contemplating it for the last few weeks.

The only way I can ever find my old entries is through Google or sequential search, neither of which is reliable; so utilizing memories and tags would be a darn good idea by now. Categorizing my thoughts, as always, remains a problem :=(+).

EPI, the facts.

Since it has been brought up recently, I’ve decided to air out the facts about the “Top secret community project” [sic] known as EPI. It is also my request, and being that I am an Admin here, one that I will enforce: that any comments about this go into a separate thread. The first person to start one has my blessing for collecting comments.

[color=blue]This post will be locked along with the thread; everyone shall respect this. A copy will also be retained elsewhere.[/color]

[i][u][b]
Project Status?
[/b][/u][/i]

Stalled (i.e. postponed until further notice), because of “Real life” taking priority for the developer.

4 people were involved in the project at its height, with provisions made to approach 2 or 3 others at a later date. In answer to some people’s questions: yes, Graedus and myself were involved; the “one guy” tasked with all the coding tasks was none other than myself. I enjoyed it greatly while off-work hours permitted.

[i][u][b]
EPI, what the heck is that?
[/b][/u][/i]

EPI is short for “Encapsulated Package Installer”. It is a method for integrating existing FreeBSD software management systems with an easier-to-use means of distributing third-party packages (like PBI), and the ability to integrate with any desktop environment or user interface system.

[i][u][b]
EPI Project Goals?
[/b][/u][/i]

[list=1]
[*]Remove the need for the user to deal with the “Package, Port, or PBI” question.
[*]Make creating “EPI” files a snap, with a minimum of fuss for the maintainer, and make the build trivial to automate (but not like PBI’s build system).
[*]Allow console and desktop environment agnostic installation, without inconvenience to maintainers.
[*]Create something that could be owned, managed, and operated by the community as a whole; not the PC-BSD developers: who proved to be incompetent and incapable with their mismanagement of PBI, at least in our (4 sets of) eyes.[/list]

It was intended that once the system was developed, it would be further refined by the community and someday replace the PBI system, becoming the [i]de facto[/i] standard way of managing software on PC-BSD. Likewise, it was also intended that once the EPI system matured, it would become the means by which PC-BSD itself would manage system software – instead of mucking with people’s ports.

While EPI is not my concept of what kind of package management system PC-BSD needs, it is our concept of what PBI should have been in the first place, and what [i]PBIs[/i] could have become if properly managed by PC-BSD’s developers.

[i][u][b]
How does it (EPI) work?
[/b][/u][/i]

First, a FreeBSD port is created for the given program; this has been done for most software that is worth running on FreeBSD, and should be done for anything else. There are approaching 21,000 programs in the ports tree, so much of that work is already done for us all.

Second, a maintainer writes out a description of the particulars. This basically amounts to stating the following (a hypothetical sketch of such a description follows the list):

[list]
[*]What port(s) does this EPI provide?
[*]What EPIs does this EPI depend on?
[*]Who are you?
[*]Special Needs Hook[/list]
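To make that concrete, such a description might look something like the sketch below; the file name, field names, and layout here are purely hypothetical, since the real format was never finalized:

[code]
# Firefox.epi-spec -- hypothetical example only, not a finalized format
provides:   www/firefox35
depends:    EPI Core Services, X Windows System, GTK+ Runtime
maintainer: Terry (signing key: terry.pub)
# optional hook for special needs, rarely required:
#epil:      firefox.epil
[/code]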

Once the maintainer states which ports are to be provided, for example “www/firefox35”, the build system automates the process from there on without maintainer intervention. Firefox would be fetched, built, and stripped down to the minimal required dependencies. This would all be done inside of a private jail on a build server, wherein there is nothing else to interfere with the process (that is, inside the jail).

The “Firefox EPI” would depend on several other EPIs, in this case the following ones: EPI Core Services (the EPI system tools), X Windows System, and GTK+ Runtime. Because of this, issues relating to X and GTK dependencies are removed from the Firefox EPI, creating a *much* smaller download and a more manageable interface for people who just want to install Firefox and have it just freaking work without trouble! Because of this design decision, unlike with PBI, dependencies are automated and can be checked; PBI does not support that. EPI’s way of doing it results in easier maintenance, more effective use of disk space, and more effective integration with FreeBSD. Another great perk is that it makes writing things like Flash or MPlayer plugin EPIs much less painful than with PBIs.

For security reasons the EPI file would be digitally signed during the creation process. Every maintainer has their own “Key” that is used for signing EPIs that they create. This allows a package to be traced back to its creator, who must manage their reputation within a “Web of Trust” distribution model.
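The exact signing tooling was never pinned down, but conceptually it is the same detached-signature dance you can already do with OpenSSL today; the key and file names below are only placeholders:

[code]
# the maintainer signs the EPI at build time (placeholder names)
$ openssl dgst -sha256 -sign maintainer.key -out firefox.epi.sig Mozilla_Firefox-3.5.3-i386.epi

# epi-install (or anyone else) verifies it against the maintainer's public key
$ openssl dgst -sha256 -verify maintainer.pub -signature firefox.epi.sig Mozilla_Firefox-3.5.3-i386.epi
[/code]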

In case of “Special Needs”, there is a special Encapsulated Package Installation Language, or “EPIL” for short. EPIL is a simple declaratory way of scripting the installation process. This is analogous to the various ‘PBI.*.sh’ Bourne Shell scripts used in PBI. [b]Unlike PBI, the EPIL system is designed for use by non-programmers and is rarely required[/b]. Everything that can be done for you will be done for you, in order to create the best possible encapsulation of work, minimize your hardship, and make life easier on both users and maintainers. By contrast, creating a PBI requires an understanding of UNIX shell scripting and programming, and employs a flawed Application Programming Interface, which usually results in poorly created PBIs and fewer maintainers. EPI solves this problem by applying sound software engineering practices, and makes creating an EPI a snap. [b]Under normal conditions the maintainer never has to write an EPIL script, and even then it is less trouble than writing a forum post[/b]. The maintainer has no need to worry whether the installation is text, graphical, attended, or unattended; all standard needs are done by magic; that is the massive opposite of traditional PBIs.
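For flavour only, a special-needs script was imagined to read something like the snippet below; the syntax is purely illustrative, since EPIL never got past the design stage:

[code]
# hypothetical EPIL, illustrative syntax only
on install:
    message "Setting up the browser plugin symlinks..."
    link lib/npapi/libflashplayer.so into browser-plugins
on remove:
    message "Plugin removed."
[/code]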

After creation, the EPI file makes its way to a sacred repository for evaluation; a way of downloading it is provided to the author. Trained people inspect the maintainer-serviceable parts, i.e. checking that there is no hidden ‘delete all files on your system’ kind of bug. Both those trained folk and regular but trusted people then test the individual EPI to make sure it works as advertised. A simple checklist is used to note that down correctly; reports shall be publicly posted and the maintainer notified.

If no show stoppers were found and the maintainer is in good standing with the community authority, their package is then hosted for public download in accordance with whatever community policy is deemed appropriate. The community website (think along the lines of the PBI Directory) would then host a download link and all necessary data, so that end users may download the created EPI.

If enough end users complain or have problems with EPIs created by a specific maintainer, that maintainer’s rights to use the community systems will be temporarily revoked (permanently if need be), and the maintainer’s “Key” will become untrusted by the EPI Community Authority – thus invalidating our trust in the safety of that maintainer’s EPIs for general consumption; individual end users have the ability to ignore the community’s decision and install those EPIs anyway.

An end user then downloads the EPI file; it could be from the Community Authority’s website, from a friend, or even direct from the third party’s website! (E.g. Adobe, KDE, Gnome, etc.)

The end user chooses the installation method: graphical or textual.

To install via graphical mode, simply double click the .epi file on your desktop and it will begin a graphical installation wizard. The wizard run is user serviceable, with a default based on your environment; i.e. Gnome & Xfce users get a GTK+ based wizard, KDE users get a Qt based wizard. Users and developers could create their own wizards.

To install via textual mode, simply run the installation program in the shell:

[code]
sample# epi-install ./Mozilla_Firefox-3.5.3-i386.epi
[/code]

Both methods invoke the same program, but with different arguments.

The epi-install program updates its understanding of “Trusted Keys” published by the Community Authority, or any other source the user chooses to trust; the user can even skip this step.

Assuming all has gone well, epi-install then unpacks Firefox accordingly, verifying the maintainer’s signature and the package’s integrity. If found, the compiled EPIL script is run – the user can choose not to run the script. Normally this is a moot point, because there shouldn’t be any script needed. Of course, the EPI’s installation is recorded in a database.

What the user sees depends on how it was run. In text mode they get a console-friendly way of doing the installation. In graphical mode, they get a GUI install wizard like PBI. Environment variables and command line switches are provided to override behaviour – for example, choosing to run the Qt wizard under Gnome. All this is so easy because the EPI maintainer was never arsed with dealing with it; it was done automatically for them.

Firefox is now installed, and the end user can run it from its location in $EPI_ROOT. The default $EPI_ROOT would likely be /usr/epi if adopted by PC-BSD. When “installed as a third party product” on FreeBSD or PC-BSD, the default $EPI_ROOT would likely be /usr/local/epi.

Our way of doing things would give both shell users and desktop users a fairly painless way of accessing Firefox, without favoritism to KDE or Gnome.

[i][u][b]
Ok, so how does this relate to PBI?
[/b][/u][/i]

PBIs are managed by the PC-BSD developers, and the people trusted with watching over the safety of end-users are either corrupt, derelict, or incompetent. [i]EPI[/i], would instead be placed into community hands, so that no one person or entity has total control.

As a format, how they work is very different. An EPI is a compressed archive containing program files and meta data; an external program located on the user’s machine is used to handle the installation procedure. This is how many package management systems designed for UNIX work; Microsoft’s own Windows Installer is not too far off either. APT, DPKG, RPM, and FreeBSD packages work this way as well. The PBI format, on the other hand, is a self extracting executable with an embedded archive containing additional meta data, program files, and an embedded installation wizard. The PBI system is dependent upon the FreeBSD version, KDE version, and the presence of system programs; PBI is written in C++ but done like a shell script. Internally, PBI is both ugly and non-UNIX like. [i]EPI[/i] instead provides a more platform and version independent way of doing things.

The format of PBI files, how they work, what they do, and how they are managed by the system is generally undocumented. [i]EPI[/i] would provide a totally documented system, making it easy for developers, system administrators, end users, and businesses. Heck, you could even create your own EPI system that is totally compatible – you can’t do that with PBI, unless you read a lot of bad code and cuddle up to a nasty API.

In order to make a PBI, you need to know quite a bit about shell scripting and do a lot of leg work that should be done for you automatically; the end result is that most PBI install scripts are bad, and even Kris Moore’s are shoddy. [url=http://sas-spidey01.livejournal.com/389068.html]I found an old PBI script that I wrote a while back[/url] that is done ‘properly’.

Because [i]EPI[/i] files are thoroughly checked at install time, the system tries to ensure that what you download is exactly what the maintainer created. By contrast, the PBI file you download is not guaranteed to be what the maintainer created; there is no safeguard against tampering with the files on the mirror – the user is on their own, without so much as a checksum of the actual .pbi file that was sent to the mirror!

[i]EPI[/i] has a simple and documented dependency model. If you don’t have the GTK+ Runtime EPI installed, the Firefox EPI should warn you. The EPI Core Services provides, as a contract, a set of ultra-common dependencies used by many programs. This reduces disk space waste and allows EPI to work across differing versions more easily. Our decisions would have created a more robust system than PBI, while minimizing dependency hell to the same level. The way PBI does things creates more work for maintainers and causes more interoperability/integration problems that the PC-BSD developers simply don’t care to address. PBIs with hidden or undocumented dependencies are also not uncommon, because the only ‘standard’ of what a PBI can depend on is the release notes for PC-BSD X.Y.Z and the PBI “Guidelines” as they have become, which used to be rules that were just often thrown in the trash can.

[i][u][b]
OK, OK, enough already, but who the heck are you Terry?
[/b][/u][/i]

I am a person who has used computers almost since he learned how to walk. Someone who loves software engineering, and shares his grandfather’s work ethic: if it has your name on it, then it has to be GOOD.

Several years ago, I encountered UNIX and embraced it with an open heart.

My first date with PC-BSD was 1.0RC1, a release candidate shipped in 2005. I have since run various versions of PC-BSD on home servers, desktops, and laptops. During the 7.0 Alpha testing cycle, before KDE4 had even entered the picture, I made the decision to transition my personal workstation to FreeBSD, and never looked back.

As for this forum, I joined to see if I could help people and learn a thing or two from my superiors. Even after they left, I remained, eventually becoming a Moderator and later a Forum Administrator at the request of Kris Moore – likely because I shouted loudest of all about the spam problems. My activity on the forums over the past ~two years has rubber-banded with my work schedule and the rest of living.

For a time, I created PBIs of programs that interested me, such as Blackbox, which was the first alternative to KDE to be had in PBI form. After a while, flaws in the PBI design and the developers’ disregard for their own rules caused me to “give up” on creating and maintaining PBIs. I have seen everything there is to be seen with PBI, down even to the point of Charles breaking the PBI rules, PBI developers publishing their own PBIs without testing, Kris Moore changing the rules after breaking them for Win4BSD, and Kris changing them back over community outcry (and increasingly lousy PBIs lol). Throughout it all, the process of getting PBIs published has made me sicker than watching a corrupt government at work.

I’ve generally kept a safe distance from PC-BSD development, which is why I never became involved with the development team; their actions over the years also give me no desire to “volunteer” my services.

When the question of whether PBIs could be created automatically was brought up many moons ago, it was shot down, most strongly by none other than the (then) host of the PBI Directory website (Charles). I supported the idea, and it was generally held in contempt as something “impossible”, only later to become a part of PC-BSD. Exploring it ahead of everyone else was actually how I learned much about shell scripting.

My skill set includes C, C++, Java, Perl, PHP, and Python, all the way through to parts of x86 assembly, Scheme, and Common Lisp. More task specific things such as SQL, HTML, LaTeX, sh, bash, batch/cmd, AWK, SED, and indeed, even [i]ed scripting[/i], are a part of my abilities.

I think no one will debate that I know a thing or two about that which I speak.

Well, never made it to the PS2, but made it to dinner and a good nap xD

Tomorrow I need to clean the newt’s tank, work on deploying the next site mock-up for Rouge, and finish more of today’s work. It shouldn’t take too long to have this site mock-up ready, so I’ll probably be working on my own stuff soon, which reminds me: I wanted to make a quick back-scratcher for taking notes.

The personal project I’m working on calls for a domain specific language, so I need to start writing the specification for it. Luckily the output is in a language I know well. All in all, it should be about as complex as implementing a bc that compiles to dc, which is a lot simpler than implementing bc in C, lol.
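For anyone who hasn’t played with the classic calculators: bc takes infix expressions while dc is RPN, so ‘compiling’ one to the other is mostly a matter of reordering tokens and emitting a print, e.g.:

$ echo '(1 + 2) * 3' | bc
9
$ echo '1 2 + 3 * p' | dc
9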

Frustration factor := (3.14159265 ^ 2)*(1024^64)

Since it seems to be too much to ask to be able to work on things in peace in this hell hole, I’ve closed my terminal for the time being.

What the hell do I have to do in order to get work done? Buy a rat fucking shotgun and take hostages?! No wait, that wouldn’t work; my family doesn’t give a flying fuck about anyone else. I sat through a five and a half hour project meeting… and despite the insane length of the meeting, I should have had the first milestone completed the day after it. Instead, fast forward almost two weeks… and the first phase of the damn thing isn’t even complete yet.

If I actually had decent conditions to work in, things would be COMPLETE by now. But no, my family can’t leave me alone to work in peace.

I think if I was about five years old again, I would wait for my family to be doing something… go take a large pot from the kitchen and a pair of equally large wooden spoons, then take a belt and make a sling for the pot (for ease of marching), and then walk back and forth across the room, banging the spoons on the pot while shouting:

“Now who’s working, now who’s working?”

As absolutely childish as doing that would be, I think that is about the only thing my family would understand… without having to take very drastic measures to get the point across (e.g. using pepper spray, a TASER, or even deadly force, etc.).

I fear, one day I’ll look back upon my memories of this place and cry, “Just another picture to burn”.

I spent about five and a half hours in a project meeting today. So maybe it is a good thing I didn’t have the second set of charts done… We did manage to settle on the important stuff needed before we could continue, along with a couple of good ideas coming from it. The only bad part was that it took so dang long to complete lol.