casual fun with the Perl profiler

call: dprofpp -ap shift.pl

There are three sets of data, each meant to represent a small, medium, or large set. Each set is a list of words: 5, 25, and 100 words long respectively. (Realistically, the elements would average between 2 and 5 words each.) For simplicity, N is 10.
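Something along these lines for the setup (my own sketch, not the actual shift.pl source):

#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the setup: the actual words don't matter for the timing,
# only the list lengths do.
my @small  = map { "word$_" } 1 .. 5;
my @medium = map { "word$_" } 1 .. 25;
my @large  = map { "word$_" } 1 .. 100;
my $N      = 10;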

There are 2 functions, xx and yy, representing different ways of solving the same problem: pretty printing the last N items of a given data set. In xx(), the set is reversed, then $#set -= N’d to clip all but the last N items, then reversed again to put it back into proper order. In yy() we avoid any reversals and just shift off the front of the list one element at a time until we reach N items left in the set. If the set contains fewer than N elements, no adjustment needs to be done.
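Roughly, the two approaches look like this, continuing the sketch above (the real shift.pl may differ in the details):

# xx(): reverse, clip the reversed list down to N with $#set -= ...,
# then reverse back into the original order.
sub xx {
    my @set = @_;
    @set = reverse @set;                # the last N items are now at the front
    $#set -= @set - $N if @set > $N;    # clip everything past the first N
    @set = reverse @set;                # restore the original order
    printf "  %s\n", $_ for @set;       # "pretty print"
}

# yy(): no reversals, just shift off the front until only N items remain.
sub yy {
    my @set = @_;
    for (my $extra = @set - $N; $extra > 0; $extra--) {
        shift @set;
    }
    printf "  %s\n", $_ for @set;
}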

Each function is called 3 times per iteration, once with each data set, over 3000 iterations (that’s 9000 calls to each function, or 3000 times with each set). The test was then executed 10 times.
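And the driver, as described above (again, just a sketch):

# Each iteration calls both subs once per data set; 3000 iterations
# works out to 9000 calls to each of xx() and yy().
for (1 .. 3000) {
    for my $set (\@small, \@medium, \@large) {
        xx(@$set);
        yy(@$set);
    }
}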

The thing’s so simple that it’s not important how long it takes to finish, but I’m interested in how big the difference is between for (expr1; expr2; expr3) { shift @list } and $#list -= expr, and how much those two reversals hurt.

Every time, yy() ran faster by at least half a second. I then ran tests with xx() doing only one reversal, and then no reversals at all, and yy() still beat it.

Now, out of further curiosity, let’s see how larger data sets fare. Each element now contains 3 words instead of 1, and N is now 43, with the data sets being 5, 25, 100, 250, 500, and 1000 elements long.

A new function, zz(), which is xx() without the reversals, is also executed during the tests. After running the tests for a short while, it seems that the $#set -= N truncation is a bit faster, by more than the cost of the reversals alone would explain.
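Continuing the sketch, zz() would be nothing but the truncation (without the reversals it keeps the first N elements rather than the last N, but for timing the $#set arithmetic itself that makes no difference):

# zz(): xx() with the two reversals stripped out, i.e. only the
# $#set truncation.
sub zz {
    my @set = @_;
    $#set -= @set - $N if @set > $N;    # clip the array down to N elements
    printf "  %s\n", $_ for @set;
}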

Here’s the new rundown:

$ do_test() {
> local T=10
> while [ $T -gt 0 ]; do
> N=$1 dprofpp -ap shift.pl | tail -n 7 >> dp
> T=$(($T - 1))
> done
> }
$ for NUM in `builtin echo "3\n10\n43\n51\n227\n"`; do do_test $NUM; done

The above (z)sh code will execute the test on shift.pl 10 times each with an N of 3, 10, 43, 51, and then 227, appending the report (3 subs = 4+3 = 7 lines, hence the tail -n 7) to the file ‘dp’ for post-CPU-meltdown review; otherwise we would have to wade through all the I/O the tests generate before dprofpp prints its report.

Yes, I’m too damn lazy to use command history, let alone retype the commands each time; why else would they have invented functions and loops 😛

About 15 minutes and 17 degrees Celsius later, some of the arithmetic involved finally caught up with my throbbing head.

Recap:

  • 5 tests, each test has a different value of N
  • 10 runs of each test, meaning 50 runs
  • each run examines the data sets 3000 times, for 150,000 examinations
  • each examination calls 3 functions once with each of 6 data sets, 18 function calls per examination.
  • The six data sets consist of lists of 5, 25, 100, 250, 500, and 1,000 elements; each element is 3 words long. So that’s 1,880 elements across the six sets, and 5,640 elements processed per examination

So that is what, maybe 2,700,000 function calls to &xx, &yy, and &zz, without counting the calls within those functions… and 846,000,000 list elements processed overall? After a little estimation based on the original data set’s run time, I stopped counting after the estimated execution time passed 8 hours * X on this old laptop. Hmm, how does that old saying go, curiosity fried the geek’s cooling fan? lol.
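If you want to check my arithmetic, here’s a quick back-of-the-envelope in Perl (my own numbers, not dprofpp output):

# Back-of-the-envelope check of the totals in the recap above.
my @sizes     = (5, 25, 100, 250, 500, 1000);
my $elements  = 0;
$elements    += $_ for @sizes;          # 1,880 elements across the six sets
my $per_exam  = 3 * $elements;          # 3 subs each walk every set: 5,640
my $exams     = 5 * 10 * 3000;          # 5 tests x 10 runs x 3000 iterations = 150,000
my $calls     = 18 * $exams;            # 2,700,000 calls to &xx, &yy, and &zz
my $processed = $per_exam * $exams;     # 846,000,000 list elements overall
print "$elements $per_exam $exams $calls $processed\n";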

I’m beginning to understand why some people’s workstations have like 32 ~ 64 GB of ECC RAM and symmetric multiprocessing (SMP) configurations to drool $! for, haha!