Monday, December 24, 2007

2007 Recreational Reading Summary

As a younger man, I was passionate about recreational reading. But once I started college I found I did less and less reading outside of the curriculum. Things got worse once I started my professional life in earnest. Keeping up with IT requires volumes of technical material. The explosion of self-published technical content in the form of blogs, white papers, and online documentation has reduced the number of books that I buy but has increased the amount of content that I read. Trying to assimilate all this technical content has led me to a very bad habit of scanning. I no longer enjoy reading because I'm not really reading. I'm scanning for information to solve a particular problem or trying to get to the root of how something is done. The last technical book that I can remember reading without scanning was Mastering Regular Expressions, 1st Edition, by Jeffrey Friedl. That was years ago. So it was within this context that, at the start of 2007, I decided I needed to return to recreational reading.

So far I've only managed to complete three non-tech books: (1) Who Moved My Cheese; (2) Rich Dad Poor Dad; and (3) Perfectly Reasonable Deviations from the Beaten Track: The Letters of Richard P. Feynman. The first two were not on my official reading list. My brother handed them to me after he finished reading them because he wanted to hear what I thought. They aren't the types of books I would select for myself because they fall under the category of "self help". I'm not a fan of "self help" books because they always tell the reader things the reader already knows. What's the fun in that? Nevertheless, I read them. They aren't bad books, and if you don't already know what they have to teach they are worth checking out. They are small enough (especially Who Moved My Cheese) that if you don't want to buy them you can read them in one or two sittings at your local library or bookstore.

I finished reading Perfectly Reasonable Deviations from the Beaten Track: The Letters of Richard P. Feynman moments before starting this blog entry. I've been reading it since March but couldn't finish it because of work and all the technical content related to work. I finally finished it because I'm sick. I have a vicious cold that has had me bedridden since yesterday. During patches of clarity I read. It is the first non-fiction work I've read for recreational purposes in at least a decade. It has no traditional narrative. It is a collection of letters sent and received by a man named Richard P. Feynman. Feynman is a renowned Nobel Prize-winning physicist who died in 1988. When I first started reading it I was creeped out because I felt like a third party reading this guy's personal mail. But by the end of the 3rd or 4th chapter I had settled into a first-person point-of-view and was, on occasion, surprised by the letters I wrote and received. This book is not an all-around crowd pleaser. If you are the type of person who would be interested in the life of one of the giants of physics, it won't disappoint. Otherwise, your mileage may vary.

Thursday, December 06, 2007

Finished Reading: Beautiful Code

Disclaimer: I'm not a professional book reviewer so my rating system is succinct and should be taken with a ginormous grain of salt.

I recently finished reading Beautiful Code, from O'Reilly. It sucked! Avoid it if you can.

IDEA-7+Glassfish, First Impression

It's been years since I've worked with Java EE in any meaningful way. Back then it was called J2EE and my application server of choice was Resin because it was blazingly fast and allowed me to bypass all the J2EE crud --like EJBs, deployment descriptors, war files, ear files, etc-- and just get stuff done. Truth be told, I've always disliked J2EE. It just always smacked of self-important (read as, Sun/Scott McNealy) bullshit to me. Sun and their "partners" (BEA, IBM, and others) managed to turn something as simple as serving dynamic content over HTTP into a multi-billion-dollar application server industry that hoodwinked a lot of people.

I made a conscious decision to avoid J2EE like raw broccoli when Caucho started transitioning from Resin 2 to 3. The transition was an abomination of galactic proportions. They completely redid the configuration system and did not provide any tools to move from the old version to the new one. And to guarantee that migration was a herculean effort, they provided documentation that was grossly incomplete and mostly inaccurate. But they didn't stop there. They were just warming up. The 3.0 release was beta-quality software at best but was billed as production ready. The word went out that development on the Resin 2 branch had stopped and all development effort was going to be on 3. So any existing open issues (bug reports) against 2 were null and void and would be addressed in the 3 release. That may sound reasonable but it presupposes that 3 is usable in a production capacity. It wasn't. I spent weeks chasing deadlocks and other concurrency issues in the Resin code. So if you were a user that was affected by Resin 2 bugs you were asked to move to Resin 3, and since Resin 3 had even more bugs you were just fuc*ed. The final insult was, while Resin 2 made the J2EE stuff optional, Resin 3 made it mandatory. I gave up on Resin 3, went live with 2.1.x, and never looked back. That was over 3 years ago.

I recently started dabbling w/ J2EE again in a limited capacity. I needed to provide an HTTP interface to a server application I am working on and there was no way in hell I was going to climb the whole J2EE mountain just to provide HTTP access to the app. The simplest way I found to provide HTTP support was to embed Jetty, and the simplest way to hook into Jetty's HTTP engine is via the Servlet interface. So though I'm using a servlet, I really don't consider it J2EE.
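
For illustration, here's a minimal sketch of what that embedding looks like with the Jetty 6 era API (the org.mortbay packages); HelloServlet is a hypothetical stand-in for the real application's HTTP interface, not my actual code:

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.jetty.Server;
import org.mortbay.jetty.servlet.Context;
import org.mortbay.jetty.servlet.ServletHolder;

public class EmbeddedHttp
{
    public static void main(String[] args) throws Exception
    {
        // No war files, no ear files, no deployment descriptors. Just HTTP on port 8080.
        Server server = new Server(8080);
        Context context = new Context(server, "/", Context.SESSIONS);
        context.addServlet(new ServletHolder(new HelloServlet()), "/*");
        server.start();
    }

    // Hypothetical servlet standing in for the real app's HTTP interface.
    public static class HelloServlet extends HttpServlet
    {
        protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException
        {
            res.setContentType("text/plain");
            res.getWriter().println("hello");
        }
    }
}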

As I said before, it's been approximately 3 years since I deployed my last J2EE app on Resin 2.1.x, and those deployments have just come due for upgrading. So the quest has been on to find an application server to replace Resin 2.1.x. Don't worry. This isn't going to turn into some long-winded diatribe about all the application servers on the market and their pros and cons and yada yada yada. The truth of the matter is I've only tried one: Glassfish. I downloaded and installed it yesterday, minutes after 8 PM. My first impression can be summed up in one word: Wow!

The installation and startup were an absolute breeze. No config files in sight. The web-based admin interface is simple and intuitive. I have yet to click on the help button for clarification on anything. There is also a command line interface that does everything the web-based interface does. This is extremely cool because it means configuration becomes scriptable and thus can be completely automated.

There were only 3 pain points in my journey from getting the software to deploying a test servlet (one that tests database connectivity). The first pain point was setting up the connection pool for the database. The JDBC driver for PostgreSQL isn't bundled with Glassfish. The reason it's a pain point is that the configuration screen lists PostgreSQL as one of its supported databases. It even prepopulates the Datasource Classname field with the correct PostgreSQL-specific classname. So as a n00b, seeing all this, my default assumption was that Glassfish comes with everything needed to communicate with the database. After a bit of head banging I finally turned to Google, where I learned that I simply needed to copy the PostgreSQL JDBC driver to ${INSTALL_DIR}/domains/domain1/lib and restart the server.
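
For the curious, the test servlet amounts to little more than a JNDI lookup of the pool plus a trivial query. Here's a minimal sketch, assuming a JDBC resource named "jdbc/postgres" (the resource name and query are illustrative, not necessarily what I used):

import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class DbTestServlet extends HttpServlet
{
    protected void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException
    {
        res.setContentType("text/plain");
        try
        {
            // Look up the connection pool that Glassfish manages for us.
            DataSource ds = (DataSource)new InitialContext().lookup("jdbc/postgres");
            Connection con = ds.getConnection();
            try
            {
                Statement stmt = con.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT version()");
                while (rs.next())
                    res.getWriter().println(rs.getString(1));
            }
            finally
            {
                con.close(); // returns the connection to the pool
            }
        }
        catch (Exception e)
        {
            throw new ServletException(e);
        }
    }
}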

The second pain point is not a Glassfish pain point but an IDE one. I've been using IntelliJ IDEA for years --I'm using IDEA 7-- and this was the first time I've tried to use its J2EE facilities. Pretty slick! Granted, my previous experience with this kind of thing is limited to emacs and nano on a Resin config file, so IDEA 7, to me, represents a major leap forward. I mention my bias because if this sort of thing is your day-to-day shtick, I hear Netbeans 6 provides an even better experience than IDEA 7 for Glassfish integration.

The first problem I ran into was understanding the difference between the local and remote configurations. It turns out local means IDEA manages the actual running of the application server. So if you already have it running, like I did, then local can't work with it. If you don't want IDEA to manage the running of the application server you have to use the remote configuration option. Remote seems limited to deployment only.

The third pain point is also an IDE one. This time I couldn't get deployment to work. This one actually pissed me off because the failure was silent on the part of IDEA. Without error messages how the hell am I supposed to know what's wrong? Fortunately, Glassfish is not the silent type. I nuked the Glassfish logs and restarted everything. After the deployment failed yet again I checked the log. This time there was a line in the log about a failed login attempt by the admin user. Finally, a clue. When I configured IDEA to work with Glassfish it prepopulated the username and password fields. My assumption was that it was grabbing the data from the same place that Glassfish stores it. Wrong! The other clue was that the number of asterisks in the password field was longer than the length of the new password --I changed the password from the default during the initial configuration of Glassfish--. Obviously, the problem was that the password IDEA was using was wrong. I manually entered the correct password and that solved the problem.

Other than those 3 minor issues IDEA+Glassfish has been a pleasure to use. I just hope Glassfish's performance lives up to the hype.

Tuesday, November 13, 2007

Of Blogs and Google Docs

This is my first post from Google Docs. I've been searching for a decent blog editor for years and have not been able to find anything I really like. Most of the problem stems from the fact that the pickings are pretty slim if you are a GNU/Linux user. But things may be turning around. While surfing doggdot this morning I came across this link. It lists five blog editors for GNU/Linux with Google Docs being the fifth. So I thought I would take it out for a spin and this entry is the test drive.

Thursday, November 08, 2007

QOTD, 8 Nov 2007

As the amount of RAM installed in systems grows, it would seem that memory pressure should reduce, but, much like salaries or hard disk space, usage grows to fill (or overflow) the available capacity.

--Jake Edge, November 7, 2007

Google, I'm Still in Love

God I love Google!

I've been trying to get to the Gentoo wiki since last night. But the forces that be have conspired against me. The evil Internet gremlins doth deny me [and everybody else, for that matter] access. 'Tis hopeless it seems. Or, it would have been hopeless if not for Google. I just searched for "gentoo wiki paludis", and Google has the Gentoo wiki page cached. Friggin' brilliant. Now I know this feature has been around forever but I just wanted to remind you that Google popularized it. If Google hadn't come along and shaken up search, down to its core, I would remain at the mercy of the evil gremlins.

Friday, October 26, 2007

KernelTrap.org

One of my favorite sites on the net is KernelTrap. Though KernelTrap describes itself as, "... a web community devoted to sharing the latest in kernel development news.", all of the heavy lifting is done by one person, Jeremy Andrews. So I would like to take this opportunity to say thank you to Jeremy for his tireless efforts at making KernelTrap a great site and one of my favorite destinations on the net.

The feature I use the most is on the home page and it's basically Jeremy summarizing and distilling the [essence of the] conversations that happen on many of the kernel development mailing lists. Anyone who is or has ever been a member of an extremely voluminous mailing list knows how noisy it can be, where the worst case scenario is an abysmally low signal-to-noise ratio. Plus it's no fun exploring the list after the fact because it becomes very tedious very fast, pointing and clicking your way through messages, trying to find something interesting. KernelTrap eliminates the noise and makes pointing and clicking fun again [or at least more productive]. It does this by organizing the different conversations from the different kernel development mailing lists into atoms.

An atom is simply a title and a summary of what the original thread/conversation was about, which includes quotes from the source. If the subject matter piques your interest and you are not satisfied by the summary, you can click on the title or the "read more" link to, (wait for it ...) read more! Reading more takes you to a single page that contains the individual messages that make up the original conversation, no pointing or clicking required, all you have to do is scroll and enjoy. There is even a comments section at the bottom of each entry. The comments don't actually link back to the original mailing list so you can't really use them as a mechanism for joining the conversation. The purpose they do serve [to me] is comic relief. Probably 99% of the comments posted are from people who have never written a lick of kernel code in their life and probably wouldn't know a pointer if it jumped up and poked them in the eye. Yet it doesn't stop them from complaining and passing judgment on the people who are actually involved in the conversation. I can't help but laugh.

Jokes aside, the reason I love KernelTrap is because it focuses on kernel development. And though I'm not a kernel developer, nor aspire to be one, the information provided is useful nonetheless. You see, the kernel is the most important piece of software that runs on your computer, because it is responsible for managing the resources that make up the computer (CPU, memory, disk, etc). So whether your computer is running 1 or 1,000 processes, or your network application is handling 1 or 1,000 connections, it's the kernel that is responsible for keeping things running smoothly, or at least running. The consequence of being responsible for the computer is that the kernel ends up being the most scalable piece of software on the computer. It is this feature of kernels that interests me. Because the lessons of scalable design and implementation, inherent in [good] kernels, aren't limited to kernel software. A lot of the lessons can be applied to user land software (my domain). So though the conversations may not tell you how things are implemented (the exception is the Linux Kernel Mailing List because patches [code] are included directly in the messages themselves), they can tell you why and who is doing it.

The newest KernelTrap feature is quotes. A quote is another type of atom that is simply a quote lifted from a larger conversation that is either insightful, funny, or both. My favorite for this week comes from Theo de Raadt of OpenBSD fame:

"You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."
— Theo de Raadt in an October 24th, 2007 message on the OpenBSD -misc mailing list.

So if you have never visited KernelTrap I highly recommend you take a look and if you are looking for a more Linux centric world view, LWN can't be beat.

Tuesday, October 23, 2007

7 REPSLM-C, Expanded

This post is a follow up to 7 Reasons Every Programmer Should Love Multi-Core and a direct response to this comment.

Maybe I should have put 6 before 4, because 6 makes the point that most of today's programs aren't written to take advantage of multi-core. So what exactly do I mean by take advantage? It seems you think I mean simply running today's GUI, client/server, and P2P apps as is on multi-core machines and expecting magic to happen. But that is not what I'm talking about.

Aside:

With some existing apps like Postfix, WebSphere, SJSDS, IntelliJ IDEA 7.0, PVF, and most Java bit torrent trackers/clients [just to name a few], magic can happen. While others require tuning (e.g. Apache, PostgreSQL, and many others). Most applications, especially desktop GUI apps, will require a major rewrite to take full advantage of multi-core machines.

What I'm talking about is programmers finding opportunities to exploit parallelism at every turn, which is what items 1-4 are about. Let's take something as mundane as sorting (e.g. merge sort, quicksort) as an example. Merge sort and quicksort are excellent use cases for applying a divide and conquer strategy. They consist of a partitioning step, a sorting step, and a combining step. Once partitioned, the partitions can be distributed across multiple threads [and thus multiple processors/cores/hardware-threads] and sorted in parallel. Some of you may say, "that's only 1 out of 3 steps, big deal." Others may take it even further and say, "1 out of 3. That means 2/3rds of the algorithm is sequential. Amdahl's Law at work buddy!" But what you would be overlooking is, [in the serial version] for a large enough dataset, the sorting step would dominate the runtime. So even though we have managed to only parallelize a single step we can still realize substantial runtime performance gains. This behavior is expressed quite eloquently by John L. Gustafson in his [short] essay, Reevaluating Amdahl's Law.
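
To make that concrete, here's a minimal sketch of the idea (my illustration, not code from any particular library) using Java 5's ExecutorService: sort the partitions in parallel, then merge them sequentially. A production version would tune the partition sizes and probably parallelize the merge as well:

import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSort
{
    public static void sort(final int[] a) throws Exception
    {
        int cpus = Runtime.getRuntime().availableProcessors();
        final int chunk = Math.max(1, a.length / cpus);
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        try
        {
            // Sorting step: each partition is sorted on its own thread.
            Future<?>[] tasks = new Future<?>[cpus];
            for (int n = 0; n < cpus; n++)
            {
                final int from = Math.min(a.length, n * chunk);
                final int to = (n == cpus - 1) ? a.length : Math.min(a.length, from + chunk);
                tasks[n] = pool.submit(new Runnable()
                {
                    public void run()
                    {
                        Arrays.sort(a, from, to);
                    }
                });
            }
            for (Future<?> t : tasks)
                t.get(); // wait for every partition to finish
        }
        finally
        {
            pool.shutdown();
        }

        // Combining step: sequentially merge the sorted runs, doubling the run width each pass.
        int[] src = a, dst = new int[a.length];
        for (int width = chunk; width < a.length; width *= 2)
        {
            for (int lo = 0; lo < a.length; lo += 2 * width)
            {
                int mid = Math.min(a.length, lo + width);
                int hi = Math.min(a.length, lo + 2 * width);
                int i = lo, j = mid, k = lo;
                while (i < mid && j < hi)
                    dst[k++] = src[i] <= src[j] ? src[i++] : src[j++];
                while (i < mid)
                    dst[k++] = src[i++];
                while (j < hi)
                    dst[k++] = src[j++];
            }
            int[] t = src;
            src = dst;
            dst = t;
        }
        if (src != a)
            System.arraycopy(src, 0, a, 0, a.length);
    }
}

Only the sorting step runs in parallel here, which is exactly the point: for a large enough dataset that step dominates the runtime, so the speedup is real even though the merge stays sequential.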

So what does all of this have to do w/ your comment? Let's start with your admonition about GUI applications taking advantage of multi-core, and I'll use boring old sorting to make my point.

It is sometime in the future and there is a guy named Bob. Bob's current computer just died (CPU burned out) and he goes out and buys a new one. Bob doesn't know or care about multi-core [or whatever the future marketing term is for [S]MP]. He just wants something affordable that will run all his applications. Nevertheless, his new machine is a 128-way box (it is the future after all), with tons of RAM. Bob takes his new machine home and fires it up. Bob keeps all his digital photographs and video on a 4 terabyte external storage array. He bought the original unit years ago before 32 terabyte hard drives came standard with PCs. You see, Bob's daughter is pregnant, is in her final trimester, and her birthday is just around the corner. Bob wants to make her a Blue-HDD-DVDDD-X2 disk containing stills and video footage of her life, starting before she was even born and running up to her current pregnancy. It begins with the ultrasound video of her in her mother's womb and ends with the ultrasound of his grandchild in his daughter's womb. So Bob fires up his [hypothetical] image manager and tells it to create a workspace containing all the images and videos on the storage array, sorted by date. It's almost 30 years worth of data. And though the image manager software is old, some programmer, long ago, wrote a sorting algorithm that would scale with the number of processors available to it. So Bob clicks a button and in less than 5 minutes 3.5 terabytes of data have been sorted and are ready to be manipulated. So what's the point? The point is it doesn't matter that "99%" of the CPU time was spent "waiting for some event", because when it mattered (when Bob clicked the button), all the available resources were employed to solve the user's problem efficiently, resulting in a great user experience. Now I know the example is contrived but the premise upon which it is based is real. If you look at most GUI applications of today, very few of them can handle multiple simultaneous events or even rapid-fire sequential events. In large part that's because most of the work (the action to be performed) happens on the same thread that is supposed to be listening for new events. Which is why the user interface freezes when the action to be performed requires disk or network access or is CPU bound. The classic example is loading a huge file into RAM from disk. Most GUI apps provide a progress meter and a cancel button, but once the I/O starts, clicking cancel doesn't actually do anything because the thread that's supposed to be processing mouse events is busy reading the file in from disk. So yes, GUI application programmers should Love Multi-Core!
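
The standard cure is to keep the event thread free by pushing the long-running work onto another thread. Here's a minimal, hand-rolled Swing sketch of the pattern (hypothetical names; loadNextBlock is a stand-in for a chunk of real I/O). The cancel button stays responsive because the loop runs off the event thread:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class ResponsiveLoad
{
    public static void main(String[] args)
    {
        final AtomicBoolean cancelled = new AtomicBoolean();
        final JButton cancel = new JButton("Cancel");
        cancel.addActionListener(new ActionListener()
        {
            public void actionPerformed(ActionEvent e)
            {
                cancelled.set(true); // runs on the event thread and returns immediately
            }
        });

        // The long-running "load" happens on a worker thread, not the event thread.
        new Thread(new Runnable()
        {
            public void run()
            {
                for (int i = 0; i < 1000 && !cancelled.get(); i++)
                    loadNextBlock();
                SwingUtilities.invokeLater(new Runnable()
                {
                    public void run()
                    {
                        cancel.setEnabled(false); // UI updates go back to the event thread
                    }
                });
            }
        }).start();

        JFrame frame = new JFrame("Loading...");
        frame.getContentPane().add(cancel);
        frame.pack();
        frame.setVisible(true);
    }

    // Stand-in for a chunk of disk or network I/O.
    private static void loadNextBlock()
    {
        try
        {
            Thread.sleep(10);
        }
        catch (InterruptedException ignored)
        {
        }
    }
}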

Client/Server and P2P are in the same boat in that they are both network applications. But they, like GUI and every other problem domain, can benefit from data decomposition driven parallelism (divide and conquer). I'm not going into great detail about how network applications benefit from multi-core because that subject has been beaten to death. I'll just say a couple things. The consensus is more processors equal more concurrent connections and/or reduced latency (users aren't waiting around as long for a free thread to become available to process their requests). Finally, multi-core affects vertical and horizontal scaling. Let's say you work at a web company and the majority of your web traffic is to static content on your web server (minimal contention between requests). Let us also assume that you have unlimited bandwidth. The web server machine is a 2-socket, quad-core-capable box but you only bought a single processor. A month passes and you got dugg and the blogosphere is abuzz about what you are selling. Customers are browsing and signing up in droves. Latency is climbing and connections are timing out. You overnight 2 quad-core CPUs and additional RAM. Latency drops to a respectable level and you just avoided buying, powering, and cooling a brand new machine that would have cost you 3x as much as you just spent on the CPUs and RAM. That is scaling vertically. If you were building a cluster (horizontal scaling), multi-core means you need fewer physical machines for the same amount of processing power. In other words, multi-core reduces the cost of horizontal scaling both in terms of dollars and latency. Access to RAM will always be faster than the network. So there is a lot less latency in performing the work locally --pushing it across the FSB, HyperTransport, etc, to multiple cores-- than pushing it out over the network and [eventually] pulling the results back. So yes, if you are coding or deploying network applications, P2P, client/server, or otherwise, you should Love Multi-Core!

Saturday, October 20, 2007

7 Reasons Every Programmer Should Love Multi-Core

  1. The technology is not new, it's old. It's just really cheap SMP, and the SMP domain (shared memory model) is a well-understood domain. So there are tons of resources (books, white papers, essays, blogs, etc) available to get you up to speed.

  2. Shared memory concurrency is challenging. It's guaranteed to grow your brain.

  3. Most programming languages [already] have language and/or API level support (threads) for multi-core, so you can get started right now (see the sketch after this list).

  4. There are a plethora of computing domains that benefit from increased parallelism. The following are just a few off the top of my head: GUI applications, client/server, p2p, games, search, simulations, AI. In other words, there won't be a shortage of interesting work to do in this space.

  5. Most programmers [and their managers] don't have a clue about concurrency so you can easily impress them with your skills/CNM (Concurrency Ninja Moves).

  6. The majority of today's programs aren't written with multi-core in mind so mastering concurrency programming means you won't be out of a job any time soon. Somebody has to write the multi-core aware version of all those apps.

  7. Since most programmers are clueless about concurrency, mastering it means you'll be smarter than millions of people. Being smarter than your [so called] peers is really satisfying.
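
As a token of item 3: Java has had threads since day one and a standard thread pool API since Java 5, so getting started really is just a few lines (a trivial sketch, obviously not a demonstration of Concurrency Ninja Moves):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HelloCores
{
    public static void main(String[] args)
    {
        // One worker per core; the tasks run concurrently with each other.
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        for (int i = 0; i < cpus; i++)
        {
            final int id = i;
            pool.execute(new Runnable()
            {
                public void run()
                {
                    System.out.println("hello from task " + id + " on " + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown();
    }
}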

Friday, October 19, 2007

Saved By Chappelle

The last three weeks have been hell. If it could go wrong it went wrong. It started with the death of my workstation followed quickly by a fender bender. But I'm not going to dwell too much on the fender bender because I got off easy. The short version is, the moron that ran into me didn't get beaten to a pulp, thus I'm not writing this from a prison cell (thank you Dave Chappelle). My car [miraculously] only sustained minor scratches to the rear bumper (you should have seen the other guy), which I'm not going to fix because it means repainting the bumper, which would make my car visually lopsided. That happened with my very first car [an '89 Honda CRX Civic]. The insurance company wouldn't pay for the whole thing to be painted and I didn't have the money to do it out of pocket. So the passenger side door and front panel were a different shade of yellow than the rest of the car. It drove me crazy. My current car is an '06 Ford Mustang GT, black. I [still] plan on tricking it out, so the rear bumper was going to go anyway. So it's pointless to paint the bumper now and suffer unnecessarily.

What really pissed me off about the accident was, (a) it was completely avoidable (I have no idea why people think a yellow light means "speed up") and (b) the car had just celebrated its first birthday. The thing that kept me from losing it was the "When Keeping it Real Goes Wrong" skits from Chappelle's Show. First there was contact; then there was blinding rage; then out of nowhere, Dave Chappelle. Weird! Network TV should run them as public service announcements. I think it would benefit Type A personalities.

Tuesday, September 18, 2007

[BUG] JE never stops logging ...

This entry is a partial repost of a message I posted to Oracle's Berkeley DB JE Forum. The forum software does not allow for the proper formatting of source code and I personally hate reading unformatted source code. Therefore, I have reposted it here so people like me can read the code right off the page.

The code

import java.io.File;
import java.lang.reflect.Field;
import java.util.List;
import java.util.Random;

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.Transaction;
import com.sleepycat.je.evictor.Evictor;

import it.unimi.dsi.fastutil.ints.IntOpenHashSet;

public class CrushPunyJE
{
    private static final int NUM_DOMAINS = 1000;

    private static final int RECIPIENTS_PER_DOMAIN = 1000;

    public static void main(String[] args) throws Exception
    {
        crush(new File(args[0]));
    }

    private static void crush(File envHome) throws Exception
    {
        MyDbI dbi = MyDbI.openToFill(envHome);
        Cursor queue = dbi.queue().openCursor(null, null);
        Cursor recipients = dbi.recipients.openCursor(null, null);
        Cursor unsent = dbi.unsent.openCursor(null, null);
        DatabaseEntry ename = new DatabaseEntry();
        DatabaseEntry dname = new DatabaseEntry();

        try
        {
            Random rand = new Random();
            byte[] buff = new byte[512];
            for (int a = 0; a < NUM_DOMAINS; a++)
            {
                rand.nextBytes(buff);
                dname.setData(buff);
                for (int z = 0; z < RECIPIENTS_PER_DOMAIN; z++)
                {
                    rand.nextBytes(buff);
                    ename.setData(buff);
                    if (OperationStatus.SUCCESS == unsent.putNoDupData(dname, ename))
                    {
                        recipients.put(dname, ename);
                        queue.put(dname, ename);
                    }
                    else
                        z--;
                }
            }
        }
        finally
        {
            for (Cursor c : new Cursor[]{recipients, unsent, queue})
            {
                try
                {
                    c.close();
                }
                catch (DatabaseException e)
                {
                    e.printStackTrace();
                }
            }
            dbi.close();
        }

        final MyDbI dbi2 = MyDbI.open(envHome);
        final Runnable runnable = new Runnable()
        {
            public void run()
            {
                Evictor evil = (Evictor)getField(getField(dbi2.env(), "environmentImpl"), "evictor");
                while (true)
                {
                    evil.runOrPause(true);
                    try
                    {
                        Thread.sleep(10);
                    }
                    catch (InterruptedException e)
                    {
                    }
                }
            }
        };
        Thread t = new Thread(runnable);
        t.setDaemon(true);
        t.setPriority(Thread.MAX_PRIORITY);
        t.start();
        dbi2.clearQueue().sync();
        dbi2.close();
    }

    private static Object getField(Object o, String fieldName)
    {
        Field field = getField(o.getClass(), fieldName);
        if (null == field)
            throw new AssertionError("Field '" + fieldName + "' not found.");
        else
        {
            try
            {
                return field.get(o);
            }
            catch (IllegalAccessException ex)
            {
                throw new AssertionError("IllegalAccessException thrown while accessing: ".concat(fieldName));
            }
        }
    }

    private static Field getField(Class co, String name)
    {
        for (Class stop = Object.class; co != stop; co = co.getSuperclass())
        {
            for (Field field : co.getDeclaredFields())
            {
                if (name.equals(field.getName()))
                {
                    field.setAccessible(true);
                    return field;
                }
            }
        }
        return null;
    }
    //====================================================================================================================//
    //====================================== Inner Class Definitions Start Here ==========================================//
    //====================================================================================================================//

    private static class MyDbI
    {
        /**
         * A small cache of prime numbers.
         */
        private static final IntOpenHashSet PRIMES = new IntOpenHashSet();

        /**
         * The maximum number of lock tables.
         */
        private static final int MAX_LOCK_TABLES = 523;

        /**
         * The databases.
         */
        public final Database recipients, unsent;

        /**
         * The delivery queue.
         */
        private static final String QUEUE_DB = "queue";

        /**
         * The database name of the unsent recipients.
         */
        private static final String UNSENT_DB = "unsent";

        /**
         * The database name for the recipient list database.
         */
        private static final String RECIP_DB = "recipients";

        private static final int CACHE_SIZE = 1024 << 10 << 4;

        public Database queue;

        /**
         * The database environment object.
         */
        private final Environment env;

        private final boolean dw, readonly;

        private MyDbI(File home, EnvironmentConfig ecfg, DatabaseConfig dbc) throws DatabaseException
        {
            dw = dbc.getDeferredWrite();
            readonly = dbc.getReadOnly();
            dbc.setSortedDuplicates(true);

            env = new Environment(home, ecfg);

            Transaction txn = ecfg.getTransactional() ? env.beginTransaction(null, null) : null;

            if (dbc.getAllowCreate())
            {
                queue = env.openDatabase(txn, QUEUE_DB, dbc);
                unsent = env.openDatabase(txn, UNSENT_DB, dbc);
                recipients = env.openDatabase(txn, RECIP_DB, dbc);
            }
            else
            {
                List<String> names = env.getDatabaseNames();
                queue = names.contains(QUEUE_DB) ? env.openDatabase(txn, QUEUE_DB, dbc) : null;
                unsent = names.contains(UNSENT_DB) ? env.openDatabase(txn, UNSENT_DB, dbc) : null;
                recipients = names.contains(RECIP_DB) ? env.openDatabase(txn, RECIP_DB, dbc) : null;
            }

            if (null != txn)
                txn.commit();
        }

        /**
         * @return The Environment object that backs this DbI.
         */
        public Environment env()
        {
            return env;
        }

        /**
         * Get the queue database.
         *
         * @return The queue database.
         */
        public synchronized Database queue()
        {
            return queue;
        }

        public synchronized MyDbI clearQueue() throws DatabaseException
        {
            DatabaseConfig config = queue.getConfig();
            queue.close();
            env.truncateDatabase(null, QUEUE_DB, false);
            queue = env.openDatabase(null, QUEUE_DB, config);
            return this;
        }

        /**
         * Synchronizes {@link #unsent} with {@link #queue}.
         *
         * @return this
         *
         * @throws DatabaseException If there is a database error.
         */
        public synchronized MyDbI sync() throws DatabaseException
        {
            if (readonly)
                throw new IllegalStateException("read-only mode");

            DatabaseEntry k = new DatabaseEntry();
            DatabaseEntry v = new DatabaseEntry();
            Cursor uc = unsent.openCursor(null, null);
            try
            {
                if (dw)
                {
                    while (OperationStatus.SUCCESS == uc.getNext(k, v, null))
                        if (OperationStatus.SUCCESS != queue.putNoDupData(null, k, v))
                            assert false : "Duplicate email addresses not allowed in queue.";
                }
                else
                {
                    boolean commit = false;
                    Transaction txn = env.beginTransaction(null, null);
                    try
                    {
                        Cursor qc = queue.openCursor(txn, null);
                        try
                        {
                            while (OperationStatus.SUCCESS == uc.getNext(k, v, null))
                                if (OperationStatus.SUCCESS != qc.putNoDupData(k, v))
                                    assert false : "Duplicate email addresses not allowed in queue.";
                            commit = true;
                        }
                        finally
                        {
                            qc.close();
                        }
                    }
                    finally
                    {
                        if (commit)
                            txn.commit();
                        else
                            txn.abort();
                    }
                }
            }
            finally
            {
                uc.close();
            }
            return this;
        }

        /**
         * Synchronizes {@link #unsent} with {@link #queue}.
         *
         * @return this
         *
         * @throws DatabaseException If there is a database error.
         */
        public synchronized MyDbI sync2() throws DatabaseException
        {
            if (readonly)
                throw new IllegalStateException("read-only mode");

            DatabaseEntry k = new DatabaseEntry();
            DatabaseEntry v = new DatabaseEntry();
            Cursor uc = unsent.openCursor(null, null);
            try
            {
                if (dw)
                {
                    while (OperationStatus.SUCCESS == uc.getNext(k, v, null))
                        if (OperationStatus.SUCCESS != queue.putNoDupData(null, k, v))
                            assert false : "Duplicate email addresses not allowed in queue.";
                }
                else
                {
                    boolean commit = false;
                    Transaction txn = env.beginTransaction(null, null);
                    try
                    {
                        Cursor qc = queue.openCursor(txn, null);
                        try
                        {
                            for (int x = 0; OperationStatus.SUCCESS == uc.getNext(k, v, null);)
                            {
                                if (OperationStatus.SUCCESS != qc.putNoDupData(k, v))
                                    assert false : "Duplicate email addresses not allowed in queue.";
                                commit = true;
                                if (++x == 1000)
                                {
                                    x = 0;
                                    commit = false;
                                    qc.close();
                                    txn.commit();
                                    txn = env.beginTransaction(null, null);
                                    qc = queue.openCursor(txn, null);
                                }
                            }
                        }
                        finally
                        {
                            qc.close();
                        }
                    }
                    finally
                    {
                        if (commit)
                            txn.commit();
                        else
                            txn.abort();
                    }
                }
            }
            finally
            {
                uc.close();
            }
            return this;
        }

        /**
         * Closes the databases and environment.
         */
        public synchronized void close()
        {
            try
            {
                for (Database db : new Database[]{queue, unsent, recipients})
                    db.close();
            }
            catch (DatabaseException e)
            {
                e.printStackTrace();
            }
            try
            {
                env.close();
            }
            catch (DatabaseException e)
            {
                e.printStackTrace();
            }
        }

        /**
         * Use when populating the databases.
         *
         * @param home The home directory of the blast.
         *
         * @return An environment optimized for single threaded write only access.
         *
         * @throws DatabaseException If there is a problem opening the databases or the database environment.
         */
        public static MyDbI openToFill(File home) throws DatabaseException
        {
            DatabaseConfig dcfg = new DatabaseConfig();
            dcfg.setAllowCreate(true);
            dcfg.setDeferredWrite(true);
            return new MyDbI(home, getFillConfig(), dcfg);
        }

        /**
         * Use for normal blast database access.
         *
         * @param home The home directory of the blast.
         *
         *
         * @throws DatabaseException If there is a problem opening the databases or the database environment.
         */
        public static MyDbI open(File home) throws DatabaseException
        {
            DatabaseConfig config = new DatabaseConfig();
            config.setAllowCreate(true);
            config.setTransactional(true);
            return new MyDbI(home, getDefaultConfig(), config);
        }

        /**
         * @return The number of lock tables based on the number of CPUs.
         */
        public static int getLockTableSize()
        {
            int cpus = Runtime.getRuntime().availableProcessors();
            if (cpus < 4)
                return 1;
            for (cpus = Math.min(MAX_LOCK_TABLES, cpus); cpus > 0 && !PRIMES.contains(cpus);)
                cpus--;
            return Math.max(1, cpus);
        }

        /**
         * Creates a new EnvironmentConfig object suitable for single threaded, write intensive access.
         *
         * @return An EnvironmentConfig object suitable for a single write only thread.
         */
        private static EnvironmentConfig getFillConfig()
        {
            EnvironmentConfig config = getNormalConfig();
            config.setCacheSize(1024 << 10 << 4);
            config.setAllowCreate(true);
            config.setLocking(false);
            return config;
        }

        /**
         * Creates a new com.sleepycat.dbi.EnvironmentConfig object suitable for normal data access patterns.
         *
         * @return An com.sleepycat.dbi.EnvironmentConfig object suitable for normal data access patterns.
         */
        private static EnvironmentConfig getNormalConfig()
        {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setConfigParam("je.log.faultReadSize", "4096");
            config.setConfigParam("je.lock.nLockTables", Integer.toString(getLockTableSize()));
            return config;
        }

        /**
         * Configures an environment for normal transactional access.
         *
         * @return A configuration for normal transactional access.
         */
        private static EnvironmentConfig getDefaultConfig()
        {
            EnvironmentConfig ecfg = getNormalConfig();
            ecfg.setCacheSize(CACHE_SIZE);
            ecfg.setTransactional(true);
            ecfg.setTxnNoSync(true);
            return ecfg;
        }

        static
        {
            PRIMES.add(2);
            PRIMES.add(3);
            PRIMES.add(5);
            PRIMES.add(7);
            PRIMES.add(11);
            PRIMES.add(13);
            PRIMES.add(17);
            PRIMES.add(19);
            PRIMES.add(23);
            PRIMES.add(29);
            PRIMES.add(31);
            PRIMES.add(37);
            PRIMES.add(41);
            PRIMES.add(43);
            PRIMES.add(47);
            PRIMES.add(53);
            PRIMES.add(59);
            PRIMES.add(61);
            PRIMES.add(67);
            PRIMES.add(71);
            PRIMES.add(73);
            PRIMES.add(79);
            PRIMES.add(83);
            PRIMES.add(89);
            PRIMES.add(97);
            PRIMES.add(101);
            PRIMES.add(103);
            PRIMES.add(107);
            PRIMES.add(109);
            PRIMES.add(113);
            PRIMES.add(127);
            PRIMES.add(131);
            PRIMES.add(137);
            PRIMES.add(139);
            PRIMES.add(149);
            PRIMES.add(151);
            PRIMES.add(157);
            PRIMES.add(163);
            PRIMES.add(167);
            PRIMES.add(173);
            PRIMES.add(179);
            PRIMES.add(181);
            PRIMES.add(191);
            PRIMES.add(193);
            PRIMES.add(197);
            PRIMES.add(199);
            PRIMES.add(229);
            PRIMES.add(241);
            PRIMES.add(271);
            PRIMES.add(283);
            PRIMES.add(313);
            PRIMES.add(349);
            PRIMES.add(421);
            PRIMES.add(433);
            PRIMES.add(463);
            PRIMES.add(523);
            PRIMES.trim();
        }
    }
}

Saturday, September 15, 2007

The Missing Minute

Sometime earlier today
A minute of mine went away
So I thought about main
But that thought was in vain
The minute was hiding okay?

I opened the profiler quick
And attach to the app in a nick
I collected a sample
It wasn't quite ample
The minute continued to tick.

I decided to give it a rest
In order to give it my best
I went for a drive
And in about five
My mind was finally unstressed.

But on the way home I found it
The place where the minute was grounded
It was caught in a latch
With a timer dispatch
But no signal handler around it.

Let this be a lesson to all
When debugging processes stall
Take a step back and maybe a nap
And the bug is likely to fall.

Thursday, July 12, 2007

Becoming a concurrency expert. Rule number not one, Relax.

The holy grail for a concurrency expert is wait-free code, and if you can't achieve that, lock-free code, because generally they are more scalable than algorithms that use locks. The previous link and this one describe some of the benefits of non-blocking synchronization but I'll give you a contrived example.

LF/WF algorithms can't deadlock! No locks, no deadlocks. Deadlocks are a bane to scalability because in the wild they are probabilistic events whose probability increases as concurrency increases. So you can imagine a situation where you have some code that runs smoothly on your old single processor system, then you upgrade to a dual core system and suddenly the code starts freezing every once in a while. You think to yourself, "gremlins" and continue along your merry way. Christmas comes early and you win a shiny new quad core box from some tech event in Atlanta. So you are thinking, "4x the processing power, 4x the performance, yeah!" But instead your program is freezing all the time. Simply killing it and restarting is not good enough anymore. You have a deadlock on your hands. You've gone from a single processor to a 4-way system and the scalability of the program has not followed suit. This is not atypical of a lot of Java programs in the wild, because though it may come as a shock to you (it sure as hell shocked me!), most Java programmers don't know spit about concurrency, even though Java has concurrency baked in from birth! And now that multi-core systems are becoming the norm, concurrency bugs that have laid dormant for years are waking up and stinging users.
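
For anyone who hasn't had the pleasure, here's the canonical recipe for the freeze described above (a contrived sketch, not code from any real app): two threads taking the same two locks in opposite order. On a single processor it might run for years; add real parallelism and the freeze is just a matter of time:

public class LockStep
{
    private static final Object A = new Object();
    private static final Object B = new Object();

    public static void main(String[] args)
    {
        new Thread(new Runnable()
        {
            public void run()
            {
                synchronized (A) // thread 1 takes A ...
                {
                    pause();
                    synchronized (B) // ... then waits forever for B
                    {
                    }
                }
            }
        }).start();

        synchronized (B) // the main thread takes B ...
        {
            pause();
            synchronized (A) // ... then waits forever for A
            {
            }
        }
    }

    private static void pause()
    {
        try
        {
            Thread.sleep(50);
        }
        catch (InterruptedException ignored)
        {
        }
    }
}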

It's possible, with a great deal of effort, to eliminate deadlocks in non-LF/WF code. So (a) given that writing LF/WF code is as hard as or harder than writing deadlock-free code and (b) you are willing to solve any deadlock problems in the non-LF/WF code, is LF/WF worth it? As with all questions relating to trade-offs, the answer is "it depends". In this case, it depends on how much throughput is enough. In the WF case you are guaranteed system-wide throughput with starvation freedom, while in the LF case you are still guaranteed system-wide throughput but with the possibility that individual threads may starve. The bottom line is, progress is always being made. There is no such guarantee with blocking synchronization.

Because of the complexity associated with LF/WF algorithms most programmers never tackle LF/WF head on. Contact with LF/WF algorithms and code comes in the form of using LF/WF data structures (e.g. java.util.concurrent.ConcurrentLinkedQueue). But it may surprise you that in your own code there may be opportunities to write LF/WF code.

Disclaimer:

I'm not advocating that everybody go through every line of code and try to make it LF/WF (though I do advocate going through every line of your code and making sure it's thread safe). You really, really need to have an extremely strong grasp of the Java Memory Model before you can even begin to think about writing LF/WF code, especially the happens-before rules.

You should limit your LF/WF tinkering to critical paths only. Critical paths are hot sections of code (code that is executed frequently). You need two tools to fix critical paths. Firstly, you need a Java profiler to tell you where the critical path is. The critical path is going to be the method call chain where the program spends the majority of its time while under load. The under load distinction is extremely important because if you take a server application as an example, and profile it when there is no load, the profiler is going to report that the app is spending most of its time in [something like] Socket.accept(), which doesn't tell you anything about the performance of the app. In your quest for the critical path the best any Java profiler can do is tell you which methods are consuming the most time. They cannot peer into the method and tell you which specific line of code is slow or if a lock is hot (a hot lock is one that is highly contended). This is where the second tool comes into play. You need a hardware profiler.

A hardware profiler differs from a Java profiler in that it shows events at the CPU level. Every modern CPU comes with all sorts of counters that enable programs to know what's going on inside the CPU. They can tell you things like cache hit/miss rates, stalls, lock acquisitions and releases, etc. Some operating systems come with hardware profilers baked right in. Solaris 10/OpenSolaris on SPARC is the gold standard when it comes to observability. mpstat, corestat, plockstat, and [the big daddy of them all] DTrace are some of the tools baked into Solaris 10/OpenSolaris that allow you to dig deep into the bowels of the system to figure out exactly what's going on. If you aren't running Solaris but Linux or Windows on AMD you can use AMD's CodeAnalyst Performance Analyzer. Finally, if you are running Linux or Solaris (Intel or AMD) you can use Sun Studio 12 to get at the data. All the hardware profiler tools I've mentioned are free and/or open source. So you have no excuse not to have at least one installed.

So here are the steps you've completed so far:

  1. Profiled the app under load.
  2. Identified the critical methods (hotspots).
  3. Tuned the critical method(s) as best you can.
  4. Repeat steps 1-3 until you hit the point of diminishing returns.

At this point if the throughput is where you want it you can stop. You didn't have to write a lick of LF/WF code. Good for you. But what happens if you want to see how far you can really push the system? You start really cranking up the number of threads (assuming you have available processors to execute them concurrently). You take a look at the system's CPU utilization and it's redlining. You need to make sure it's actually doing useful work. So now it's time to fire up the hardware profiler to see what's going on in the CPU.

Aside:

Sun Studio 12 does the best job, of the tools listed, of associating CPU events back to the source code line(s) that produced them.

So you fire it up and the first thing that jumps out at you is you have a smokin' hot lock along your critical path. If you can't reduce the granularity of the lock or its scope, going LF/WF may be your only option.

Aside:

It's entirely possible that you can neither reduce the granularity of the lock nor rewrite the critical section as LF/WF. At this point you are screwed. You are just going to have to buy another box (or virtualize) and spread the load. If you can't spread the load you are royally screwed so the best thing to do is degrade gracefully.

I've gone through all of that setup just so I could dump some code on you:

import java.net.InetAddress;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class RecipientLimits
{
    private static final long ONE_HOUR_MILLIS = TimeUnit.HOURS.toMillis(1);

    private static final long THREE_DAYS_MILLIS = TimeUnit.DAYS.toMillis(3);

    private static final long ONE_MONTH_MILLIS = TimeUnit.DAYS.toMillis(31);

    private final ConcurrentMap<InetAddress, Entry> hosts = new ConcurrentHashMap<InetAddress, Entry>();

    /**
     * Tracks the last time we pruned the table.
     */
    private final AtomicLong lastrun = new AtomicLong(System.currentTimeMillis());

    /**
     * Get the recipient limit for host.
     *
     * @param host The host.
     *
     * @return The maximum number of recipients the host will accept.
     */
    public int get(InetAddress host)
    {
        Entry e = hosts.get(host);
        if (null == e)
        {
            Entry t = hosts.putIfAbsent(host, e = get(host.getAddress()));
            if (null != t)
                e = t;
        }
        return e.limit();
    }

    /**
     * Confirms that host accepts limit recipients.
     *
     * @param host  The host.
     * @param limit The number of recipients that was accepted.
     */
    public void confirmed(InetAddress host, int limit)
    {
        Entry e = hosts.get(host);
        if (null != e)
            e.confirm(limit);
        prune();
    }

    /**
     * Indicates that host did not accept limit number of recipients.
     *
     * @param host  The host.
     * @param limit The limit.
     */
    public void denied(InetAddress host, int limit)
    {
        Entry e = hosts.get(host);
        if (null != e)
            e.decrement(limit);
        prune();
    }

    /**
     * Removes inactive hosts from the table.
     */
    private void prune()
    {
        long last = lastrun.get();
        long millis = System.currentTimeMillis();
        if (millis - last >= ONE_HOUR_MILLIS && lastrun.compareAndSet(last, millis))
        {
            for (Iterator<Entry> i = hosts.values().iterator(); i.hasNext();)
            {
                Entry e = i.next();
                if (0 == e.users.get() && millis - e.lastaccessed.get() >= THREE_DAYS_MILLIS)
                    i.remove();
            }
        }
    }

    /**
     * Look up host in the database.
     *
     * @param host The ip address of the host.
     *
     * @return A new Entry.
     */
    private Entry get(byte[] host)
    {
        //@todo don't forget to create an entry in the database if host does not already exist.
        return new Entry(1);
    }

    final class Entry
    {
        /**
         * The last time (milliseconds timestamp) this enty was accessed.
         */
        final AtomicLong lastaccessed;

        /**
         * The number of threads reading/writing this object.
         */
        final AtomicInteger users = new AtomicInteger();

        /**
         * Semaphore for updates.
         */
        private final AtomicInteger dflag = new AtomicInteger();

        /**
         * The recipient limit.
         */
        private final AtomicInteger limit;

        /**
         * Indicates when we've maxed out {@link #limit}.
         */
        private volatile boolean maxo;

        /**
         * The millisecond timestamp when we maxed out.
         *
         * This field is correctly synchronized because there is a happens-before edge created by the write to this
         * followed by a write to {@link #maxo} in {@link #decrement(int)} and then the read of {@link #maxo} followed
         * by the read of this in {@link #confirm(int)}.
         */
        private long maxtstamp;

        /**
         * Constructs a new Entry.
         *
         * @param limit The recipient limit.
         */
        Entry(int limit)
        {
            this.limit = new AtomicInteger(limit);
            lastaccessed = new AtomicLong(System.currentTimeMillis());
        }

        /**
         * @return The current limit.
         */
        int limit()
        {
            users.incrementAndGet();
            try
            {
                return limit.get();
            }
            finally
            {
                lastaccessed.compareAndSet(lastaccessed.get(), System.currentTimeMillis());
                users.decrementAndGet();
            }
        }

        /**
         * Confirm the limit.
         *
         * @param limit The value returned by {@link #limit()}.
         */
        void confirm(int limit)
        {
            users.incrementAndGet();
            try
            {
                if (0 == dflag.get()
                    && (!maxo || System.currentTimeMillis() - maxtstamp >= ONE_MONTH_MILLIS)
                    && this.limit.compareAndSet(limit, limit + 1))
                {
                    //@todo - talk to database
                    //@todo - use limit not limit + 1
                }
            }
            finally
            {
                lastaccessed.compareAndSet(lastaccessed.get(), System.currentTimeMillis());
                users.decrementAndGet();
            }
        }

        /**
         * Decrement the limit.
         *
         * @param x The value returned by {@link #limit()}.
         */
        void decrement(int x)
        {
            users.incrementAndGet();
            try
            {
                if (x > 1 && dflag.compareAndSet(0, x))
                {
                    try
                    {
                        if (limit.compareAndSet(x, x = x - 1))
                        {
                            boolean dbput = true;
                            if (dbput)
                            {
                                maxtstamp = System.currentTimeMillis();
                                maxo = true;
                            }
                        }
                    }
                    finally
                    {
                        dflag.set(0);
                    }
                }
            }
            finally
            {
                lastaccessed.compareAndSet(lastaccessed.get(), System.currentTimeMillis());
                users.decrementAndGet();
            }
        }
    }
}

Let's revisit the title of this post, "Becoming a concurrency expert. Rule number not one, Relax". I've emphasized relax because it is critically important to finding LF/WF opportunities in your own code. So what exactly does relaxing mean? The biggest thing it entails is realizing that you have very little control over the order in which threads execute and being OK with that. Because if you try to be draconian about the order in which things happen you will have to go single threaded or use locks. So once you've relaxed and let go of draconian ordering the only thing you have to worry about is ensuring that data races are benign. Notice I didn't say eliminate data races, I said "ensuring that data races are benign". The difference of course is relaxation. In LF/WF, data races are part of the design because at some point in the code you are going to need to do a CAS (e.g. java.util.concurrent.atomic.AtomicBoolean.compareAndSet(false, true), java.util.concurrent.atomic.AtomicInteger.incrementAndGet(), etc) and a CAS is a [CPU supported] race condition waiting to happen. CAS isn't the only place where it's okay to let a race condition go unchallenged. Anywhere you can prove that the consequence of a data race is benign is an opportunity to relax.
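
To make the CAS-as-benign-race point concrete, here's the classic retry loop (a textbook sketch, not code from the class above). Two threads can race on the read; the loser's CAS fails and it simply tries again, so the race is harmless by design:

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter
{
    private final AtomicInteger count = new AtomicInteger();

    /**
     * Lock-free increment: no locks, so no deadlocks, and system-wide progress
     * is guaranteed because somebody's CAS always succeeds.
     */
    public int increment()
    {
        for (;;)
        {
            int current = count.get(); // racy read; may be stale by the time we CAS
            int next = current + 1;
            if (count.compareAndSet(current, next))
                return next; // our view was still accurate; done
            // else another thread won the race; loop and retry
        }
    }
}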

Let me back up quickly and talk briefly about ordering. I hope you don't think I said that ordering is not important or that you have absolutely no control, because that is not true. Let me repeat it: what you don't have control over is when a thread will run. That's [ultimately] the responsibility of the operating system. What that translates into is: you don't control when something executes, only what executes.

The new JMM strengthened the guarantees of volatile to prevent the reordering of volatile reads/writes relative to non volatile fields. In other words, you can use volatile fields to force code to execute in a certain order without the use of the synchronized keyword, which also means you can use a single volatile field to safely publish multiple non volatile fields (an example of this is in the code above and is described in the JavaDoc comment for maxtstamp). This was not possible prior to Java 5. Before Java 5, if you wanted visibility guarantees for fields you had to (a) declare all of them volatile or (b) use a synchronized block.
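Here's a stripped down sketch of that piggybacking pattern, with made-up field names of my own (this is an illustration, not the Entry code):

public class Publisher
{
    private int payloadA;            // non volatile
    private long payloadB;           // non volatile
    private volatile boolean ready;  // the publishing field

    void publish(int a, long b)
    {
        payloadA = a;   // plain writes first...
        payloadB = b;
        ready = true;   // ...then the volatile write makes them visible
    }

    void consume()
    {
        if (ready)      // the volatile read creates the happens-before edge
        {
            // payloadA and payloadB are now safe to read
            System.out.println(payloadA + ", " + payloadB);
        }
    }
}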

So there is a lot you can accomplish given the new volatile semantics but you can't do everything. Specifically, you can't make binding decisions with volatiles. So what's a binding decision?

if (some_volatile_condition IS True)
{
  //Execute code under the assumption that some_volatile_condition is still True.
}
... is a binding decision, and in the absence of locking [before the read of some_volatile_condition] it is a bug, except of course in the case where the code being executed results in a benign race condition. Examples of non-binding decisions can be found in Entry.confirm(int) and Entry.decrement(int).
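And if you genuinely need a binding decision without a lock, restructure it so the test and the action are a single atomic step. A quick sketch of my own, using a hypothetical one-shot task:

import java.util.concurrent.atomic.AtomicBoolean;

public class OneShot
{
    private final AtomicBoolean claimed = new AtomicBoolean();

    void runOnce()
    {
        // Binding: the check and the state change are one CAS, so exactly
        // one thread can ever win. Compare the buggy volatile version:
        // if (!claimed) { claimed = true; doWork(); }
        if (claimed.compareAndSet(false, true))
        {
            doWork();
        }
    }

    private void doWork()
    {
        // the thing that must happen at most once
    }
}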

That's it for today. I'll pick the code apart in Part II. Thanx for stopping by.

Tuesday, June 26, 2007

A Developer's Journal: Solaris #11

CTRL+ALT+BACKSPACE has bitten me for the last time.

I've been trying to make OpenSolaris/JDS my work environment for the last week, and time and time again my desktop has restarted itself because of CTRL+ALT+BACKSPACE. 99% of the time I swear I didn't press the key combination! Fixing it solves one of the many things about JDS that annoy the crap out of me (I'll tell those stories another day).

The solution is extremely simple. Edit the xorg config file located at /etc/X11/xorg.conf and add the following line to the ServerFlags Section:

Option         "DontZap"  "true"
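If your xorg.conf doesn't already have a ServerFlags section, the whole thing ends up looking something like this (a sketch; any options already in that section stay put):

Section "ServerFlags"
    Option         "DontZap"  "true"
EndSection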
Save the file and hit CTRL+ALT+BACKSPACE to restart JDS for the last time.

I'm so elated I figured I would take a moment and send this one up. Thanx Alfred Peng.

Monday, June 18, 2007

A Developer's Journal: Solaris #10

I've solved the font problem. It was a font rendering issue. I have dual 20 inch LCD monitors and the default font rendering settings just weren't cutting it. A quick visit to the "Font" dialog box under "Preferences" made all the difference. The fuzz is gone. Yeah!

I've also finished configuring my bash environment à la Gentoo, so Solaris is starting to feel a lot less foreign to me than it did two days ago. The next step is installing and configuring IDEA and pulling down my current working set from a couple of Subversion repositories. I don't think the real benefits of Solaris are going to be evident to me without hacking on some code.

Saturday, June 16, 2007

A Developer's Journal: Solaris #9

Installation complete! Yeah!

Initial Reactions

The fonts look like crap! At this point, I'm not sure if the problem is the default set of fonts or if font rendering just sucks. Every bit of text everywhere is fuzzy. Text is my life! I am a programmer and sometimes blogger, after all. Solving this issue is at the top of my todo list.

The default configuration for the root user account is just plain insipid. The root user does not have a default HOME directory so every file and directory that gets automatically created by the shell or the desktop environment gets dumped to /. How absolutely, positively, retarded is that? Every Linux distro I have ever used creates a /root directory to store the root user's files. I just can't fathom the rationale behind not setting a decent default directory for root.

Now some of you may say, or be thinking, "you are not supposed to be using the root user account anyway". If you did say that, then you've obviously never installed Solaris 10, because if you had, you would know that the installer only ever prompts you for the root user's password. The install process forces you to wait until the installation is complete to create a normal user account. The problem is that on first reboot the only account you have to log in with is root, and the minute you log in, the desktop environment creates a host of hidden files and directories on the root filesystem for root. So you never get the opportunity not to use the root account. Like I said before, insipid, and that's putting it nicely.
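For what it's worth, giving root a proper home only takes a minute once you're in. Something like the following should do it (a sketch; Solaris' usermod may refuse to modify a user that's currently logged in, in which case editing root's entry in /etc/passwd by hand accomplishes the same thing):

mkdir /root
chmod 700 /root
/usr/sbin/usermod -d /root root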

The good news is most of my hardware works out of the box. The only thing that isn't working is my USB DVD-R/W. One more item for my todo list.

A Developer's Journal: Solaris #8

I am returning to the world of Solaris. My original foray into the land of (the) Sun didn't go so well. It was a try-and-buy 60 day trial of a T1000 that just didn't go as well as I had hoped. Some of the problems were Sun's fault, but most of them were mine. One of the big issues was proximity. The T1000 was hosted in an off site data center, which required several layers of indirection to get to the machine, and some things are just harder to do remotely. The bottom line is I didn't get the most out of the 60 days. But I really am interested in learning more about Solaris. DTrace is simply too compelling a technology to ignore, especially now that Solaris is open source. So almost 8 months later I'm revisiting the land of Sun.

This time around I'm doing things a lot closer to home. I've added a SATA RAID controller and four 35GB 10,000 RPM hard drives to my workstation. This will be Solaris' new home. I had to give up my CD-R/W to make room for a 3 disk enclosure, but I have an external USB DVD-R/W so no sweat.

A standard Solaris install is a bit more heavyweight than I'm interested in. Coming from the land of Gentoo, I'm used to, and prefer, installing things as I need them instead of the kitchen sink install that comes with standard Solaris 10. Plus, I'm a tinkerer. Fortunately for me, Sun saw me and my kind coming and has a distro tailor made for us: OpenSolaris Developer Edition. We get the latest and greatest Solaris kernel and the developer tools necessary to tinker, compile, and profile. It has a (relatively) recent GNOME based desktop, and Firefox is the default browser.

As soon as I'm done posting this I'm going to go burn my (downloading) distro to DVD and install it. Stay tuned.

Thursday, May 10, 2007

I'm A Real Programmer Because ...

David Miller says I'm a real programmer. Are you?

Wednesday, March 28, 2007

A Great Hot Dog

Hebrew National really does make the best hot dogs I've ever eaten. At this point, if it ain't Hebrew National I ain't eatin' it. After all, it's kosher, so if it's good enough for Jesus it's good enough for me. But then again, Jesus was something of a radical. Maybe he bucked Jewish law and ate non kosher beef. If he did it to spite the establishment, and he were still around, he would be missing out on a great hot dog.

If you are going to have a great hot dog you are going to need a great hot dog bun. Otherwise, what's the point? The best hot dog bun I've found is Martin's Long Roll Potato Rolls, which, according to the packaging, have that "famous Dutch taste".

My condiment choices are simple yet cosmopolitan: mayonnaise, pesto, sharp or extra sharp cheddar cheese, and ketchup (optional). Hellmann's is the only brand of mayonnaise I'll buy. As they say, "if you bring out the Hellmann's you bring out the best" [Those TV people ain't never lied!]. I've tried different brands of pesto and can't say I've found a favorite; they all bring something a little different. I'm currently using Classico brand pesto, in case you want a place to start. Cracker Barrel shredded sharp cheddar is my cheddar of choice. I also like Sargento, but Cracker Barrel has just a little more bite to it. Finally, we get to ketchup. I'm a Heinz man. Been that way since I was a boy. I'll try others when I'm outside the domicile and Heinz is not available, but never while I'm at home.

So now it's time to put it all together. Boil the hot dogs, cover the pot, turn the stove off, and immediately ...

  1. Spread mayo on half of the bun.
  2. Spread pesto on the other half.
  3. Lay down a light layer of cheese in the bun groove.
  4. Drop a hot dog in the groove.
  5. Add another layer of cheese on top.
  6. Let sit for 30-60 seconds.
  7. Enjoy!
Let me expand on step 6 briefly. The reason there is a step 6 is to allow the heat from the cooling hot dog to lightly melt the (refrigerated) cheese.

You may have noticed that the application of ketchup was not in any of the steps. That's because you have a choice to make. It's been my observation that ketchup weakens the impact of the cheese on the palate. If that affects you, you may want to skip the ketchup entirely. But since one hot dog is never enough, make two; then you can have it both ways.



P.S. I'm not Jewish. Not even a little bit.