
Beyond Dark Castle

I’ve a lot of fond memories from my youth that revolve around the Mac: getting invited over to a girl’s house in the second grade… to play Dark Castle on an early-model Mac; retrofitting my 3rd grade teacher’s Lisa to be Mac-compatible; getting pulled out of classes later on in elementary school to help the school principal with their Mac; learning how to do desktop publishing on the Macs at the neighborhood “Laser Layouts” shop; playing with one at the local software store, etc.

I could never afford one growing up, so it was always this magical machine I could only play with on borrowed time. And I guess the small piece of me that yearned for a Mac during my elementary school years still lives on in some form–I own a few vintage Macs, purchased a few years back.

Dark Castle and its sequel were among my favorite games on those old Macs, so it’s been great fun for me to relive a bit of the past and spend a few minutes playing the recently released remake: Return to Dark Castle. Beats firing up an emulator or obsolete hardware. Well, it’s easier, at least.

Google is Amazing

There is significant irony in Google’s search engine. Let me explain.

Have you considered how they can possibly index so much of the web so often and make it available coherently to the entire world and deliver individual results quickly and consistently? The Google engineering team deserves our immense respect for such an accomplishment. It is truly amazing.

Of course, the PageRank algorithm is an equally impressive feat of engineering–a core idea with millions of tweaks to deliver the remarkably accurate results we have come to expect.

And yet, when you use Google, the interface is amazingly simple and effective. We forget about all of the complexity underneath, and we are compelled to return to Google time and again because we have such a great experience with it.

As a software engineer by background, I’ve spent a great deal of time in my career pondering how to build scalable infrastructures, considering what languages to use to create my algorithms, and wiring together plumbing. Those are all important, but I’m thrilled that I also have the opportunity to work on the most important aspect of software creation: crafting an amazing user experience.

Does it sound arrogant to say that the user experience is that important? Yes, but when you consider the statement, isn’t it true? We can have amazing code and compelling server infrastructures, but if users don’t enjoy using our software, we’ve failed indeed. And if users do enjoy our software, does it matter if our code is crap or our infrastructure isn’t the best?

How many of the popular web properties and successful businesses of the world were built on famously hackish codebases? Facebook and MySpace come to mind, as does the wild popularity of PHP in general.


This reminds me of the Nintendo Wii. I have some friends in the video gaming industry. They can’t stand the Wii. They scoff that some people find it fun. They can’t get past the fact that it doesn’t have the horsepower of some of the other consoles. It’s the same as us scoffing at PHP.

Or is it? This is actually a bit different, because the hardware limitations of the Wii bleed into the interface. Games *are* limited. Cross-console games always look worse on the Wii, and even Wii-specific titles make a point of focusing on the interaction instead of the graphics.

It’s one thing to say that the user interface is more important than the hidden guts of a system, but that internal heart of the system affects the user interface, so the two are obviously quite related. I’m just saying we need to maintain our perspective and remember which is the tail, and which is the dog.

You know, when you start to focus on refining the user interface, you realize that it’s quite a bit of work. Graphical user interfaces, as distinct from text-based ones, are especially labor-intensive. If you’ve ported an old green-screen app to a GUI technology, you know what I’m talking about.

One of the reasons for the labor-intensiveness is simply that more is possible, and therefore, much more is expected. There’s nothing particularly complex about asking a GUI toolkit like Java Swing, JavaFX, Adobe Flex, or the browser to render a text box as compared to some text-based toolkit. But it’s a lot more complicated to add asynchronous data validation, to deal with concurrency in the interface, to report the progress of background asynchronous tasks, to create rich and complex tables to display summary data in, to deal with hundreds more choices for skinning the UI, and so forth.
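To make just one of those chores concrete, here’s a minimal sketch in plain JavaScript of asynchronous data validation (the function names are my own invention, not from any particular toolkit). Even this simple case has a concurrency wrinkle: a slow validation response for an old keystroke must not overwrite the result for a newer one.

```javascript
// Tag each validation request with a ticket so that a slow, stale
// response can't clobber the result of a more recent keystroke.
function makeAsyncValidator(validate, onResult) {
    var latest = 0;
    return function (value) {
        var ticket = ++latest;               // tag this request
        validate(value).then(function (ok) {
            if (ticket === latest) {         // ignore out-of-date responses
                onResult(value, ok);
            }
        });
    };
}

// Hypothetical usage, wiring it to a text box:
// var check = makeAsyncValidator(checkOnServer, markFieldValidity);
// field.oninput = function () { check(field.value); };
```

No text-based toolkit ever forced you to think about in-flight requests racing each other, which is exactly the point.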

There are all these details involved in getting it right. I love the term “craftsmanship” for describing a devotion to getting the details right in creative acts.

Alan Cooper, author of “The Inmates are Running the Asylum” and “About Face”, has talked about craftsmanship recently in the excellent talk “An Insurgency of Quality”:

[Craftsmanship] is all about quality–it’s all about getting it right, not to get it fast. It’s measured by quality, not speed. It’s a pure measurement, and a delightful one. Craftsmen do it over and over again until they get it right. In their training, they build things over and over so they get the experience they need to get it right.

But you know what? Most of us in the software industry work in IT departments. Our bosses don’t talk to us in these terms, do they? They usually talk to us in terms of “getting it done”, don’t they?

I think Joel Spolsky captures the way many in IT view craftsmanship:

Craftsmanship is, of course, incredibly expensive. The only way you can afford it is when you are developing software for a mass audience. Sorry, but internal HR applications developed at insurance companies are never going to reach this level of craftsmanship because there simply aren't enough users to spread the extra cost out. For a shrink-wrapped software company, though, craftsmanship is precisely what delights users and provides longstanding competitive advantage.

So does that mean those of us in IT are doomed to create crappy software for the rest of our careers? Well, in the same talk, Alan Cooper maintains that Joel is wrong:

[The distinction between commercial and I.T. applications is artificial.] The issue is that there are human beings who are using our products. Some of those human beings pay for the privilege of using our products, and some are paid to use our products. I.T. [workers have] chosen to work in an arena where people are paid to use our products, and it's amazing how that covers up a lot of interaction design sin. While real people will use your really bad product because they are paid to use it, if it is a good product with decent behavior, productivity will climb. You can walk into any organization and spot the SAP users–they are crying in a corner. You can’t tell me that that’s good for business.

There’s a lot more to say on this topic, but for now let’s agree that whoever you are and wherever you’re working, you can decide right now to care about the quality of what you do. Different cultures will tolerate different degrees of craftsmanship, and that’s okay. Do the best you can within your own constraints.

Firefox Logos

Four snapshots of the Firefox logo revision effort

By the way, one of the things I love about Mozilla is the level of craftsmanship that goes on there. For example, Alex Faaborg, one of our UI designers, has recently been driving a revision of the Firefox icon. He’s gone through fourteen iterations and still isn’t done, all for a bit of polish that isn’t immediately obvious. I can’t tell you the number of times I’ve seen him here at the office past midnight, working away on this and other details out of his love for the craft.

Going back to the Wii thing. It’s a great example of expectations. If you expect video games to look like this:

Crysis Screenshot

…then the Wii is going to let you down. On the other hand, if you expect people who play video games to look like this:

Video Gamers

…you may find your enjoyment of the Wii a pleasant surprise indeed.

It’s funny: even when my friends who hold the Wii in such contempt have fun playing it (usually with family), they still proclaim their hatred for it. They expect their video games to have amazing graphics, and even when such games deliver a valuable service (entertainment), the disappointment of missed expectations detracts from the experience. (Whereas those whose expectations are exceeded by the Wii just love it!)

The comedian Louis C. K. explores this area of our psychology in a popular video clip that’s gone viral on YouTube, Hulu, and others. It’s hilarious but it demonstrates something very true: our expectations for our life’s interactions are constantly on the rise–and we get very annoyed when our expectations are not met.

I think the first key to creating a compelling experience for your users, and to practicing effective craftsmanship in software, is to understand what their expectations are, and to meet or exceed them as often as possible.

What do you think?


Clearly Apple’s PR Skynet saw my recent whining about the MacBook Pro and engaged in virtual retribution, because now my iPhone is failing.

I used Apple’s excellent data migration program to copy my data from my old MacBook Pro to the new one. After the transfer, the first time I tried to sync my iPhone with iTunes on the new Mac, iTunes informed me that I could not activate the DRM because I’d used all five of my machine authorizations. I had already put the old MacBook Pro away, so I decided to deal with this later. I interrupted the sync-in-progress just as it started, to avoid issues with the third-party applications and music on my phone.

Things worked well for quite a while. Then, three days later, I noticed that when I tried to launch a third-party app (actually, Apple’s own Remote application), it crashed right away. In fact, every third-party app crashed right after launch, while the built-in apps continued to work.

I’ve since discovered this is a fairly common problem with iPhones. I’ve tried everything I could find:

  • rebooted the phone and hard-rebooted it (you know, holding down the front button and the top button) many times
  • authorized iTunes and synced; de-authorized iTunes and synced
  • downloaded new free third-party apps via iTunes and synced (to try to trigger the DRM to reset)
  • downloaded new apps via the iPhone
  • removed and re-downloaded several of my apps at random
  • wiped the iPhone and restored from a backup
  • removed all my apps via iTunes and synced (losing all my data, some of which wasn’t synced with other services and I’ve no idea how to restore), then added them back via iTunes and re-synced

And yet, after all this, the problem still persists.

I guess I’m off to the Apple store to fix the problem, but geez, I really don’t have time this week to deal with all this. Have I missed anything? I’ve scoured the forums and I seem to have tried everything.


I’ve been using Mac laptops as my primary machine for a few years now, going through 5 notebooks since 2004. Each time I upgraded, from PowerBook G4s to various MacBook Pros, I’ve always felt like I was making a step forward, gaining speed, useful new features, and so forth.

This week, I “upgraded” from a 1.5 year old 17″ MacBook Pro to a 15″ MacBook Pro. For the first time, I feel like I’ve taken a big step backwards.

(By switching from 17″ to 15″, I’m going from 1920×1200 resolution to 1440×900. I didn’t really want to make the switch from 17″ to 15″, but that’s another story and I obviously can’t fault the new notebook for this.)

The Trackpad

The biggest problem with the new notebook has been the trackpad.


Apple has added support for all kinds of gestures, which is interesting, but it turns out the trackpad registers gestures unintentionally far too often. The first thing I did with my new notebook was hack on a Keynote presentation, and I found objects constantly being rotated and resized by accident. I have a similar problem in web browsers, where I’m constantly and inadvertently changing the font size of the page.

The ill effects of the extreme sensitivity of the trackpad are made worse by how much bigger the trackpad is on this model, and by the fact they’ve removed the trackpad button. My hand is trained to rest my thumb on the bottom of the trackpad–where the button used to be–and this leads to many (but not nearly all) of the false gesture triggers.

After being constantly frustrated by this, I discovered that you cannot turn off gestures. What? Leave it to Apple to be utterly unconcerned with their user base: changing a primary input mechanism that had gone unchanged through nearly a decade of Apple laptops, and skipping the obvious accommodations that would help users adjust.

By reading through countless forum messages from other frustrated users, I discovered a third-party application that can disable the gestures.

The Display Connector

Next up on the list: there is no longer a DVI port on the laptop. Instead, you’ve got this mini-port that requires the use of an adapter. That’s right, unless you have the new-model 24″ Apple Cinema Display, you’ve got to buy a dongle to hook up the notebook to any display. And when you present on the road, you’ve got to have two dongles: one to adapt the mini-port to DVI, and another to adapt DVI to VGA (or a single mini-port-to-VGA adapter).

But the fun doesn’t stop there! It turns out if you have a 30″ Apple Cinema Display, like yours truly, you have to buy another $100 display adapter–one that’s fairly big and unwieldy. And once you do that, you’ll discover that there’s a known (unsolved) problem where you’ll get noise in the display: little flickering lines that appear all over the place. Apple’s forums are full of complaints about this; no solutions mentioned. Brings me back to the UHF days on the family TV growing up. In a bad way.

The Glossy Display

Due to popular demand, the 15″ MacBook Pro is no longer offered in matte–you must choose the glossy display. This means that under many common lighting conditions, you get to see yourself in the display clear as day! I guess the narcissistic set will enjoy this, but I find it extremely distracting, and it can make the display hard to read.

Wrrr, Weeeeeee, Wrrr

I had to do some late night work tonight, and as my wife was drifting off to sleep, I opened up the new MacBook Pro. As it always does on sleep and wake, the CD-ROM drive made a lovely loud repeated set of “Wrrrrrr, Weeeee, Wrrrrr” noises. This loud and obnoxious noise pulled her back fully alert and wondering what all the racket was. Folks will certainly notice any time you close or open the lid on this sucker.

Think Before You Leap

If you’ve got a MacBook Pro from the generation right before this new one, consider the downsides before upgrading. You may find (like me) that it’s not an upgrade at all.

(Update: My original posting-in-anger had a really cranky conclusion; I chilled it out a bit the next day.)

I’d like to give an update on an interface concept we’re exploring for Bespin. We call it “The Pie”.

The Pie

We’re hoping the Pie solves a couple of problems for us. Let me take a couple of steps back. Currently, Bespin has two screens, a “dashboard” and the editor itself.

Dashboard and Editor

We’ve not been happy with this arrangement.

You see, our original concept for the dashboard was that it would have all kinds of neato project statistics, showing information on your productivity, where you are spending most of your time editing, real-time information on others currently working in the same project, etc. You know, a dashboard.

But the “dashboard” in the Bespin 0.1 and 0.2 releases is just a file explorer. And it’s been the only way to open files, so Bespin users have to constantly go back and forth between the dashboard and the editor. Bleah.

Augmenting the Editor

Our first approach to solving this problem was bringing the dashboard into the editor:

Dashboard in Editor

While we developed a bunch of ideas for making this into a pretty good experience, we couldn’t get past the uneasy feeling that we were taking the first step towards this kind of an interface:


A typical Eclipse configuration. Note how small a space is left over for code
(the region in color; I made the rest monochrome) and how much of the interface is wasted on clutter.

One of our goals for Bespin is to keep it simple–while still providing advanced features. So the thought of winding up with an interface with so many knobs and dials gives us more than a few shudders. Plus, it’s just fugly.

Don’t get me wrong–I’m sure lots of folks like the traditional IDE clutter-up-your-world-with-panels style, and we want to support that. But we want to be absolutely sure that clutter doesn’t become the default, nor the required way to interact with Bespin in order to utilize most of its helpful features.

The Pie

As we discussed these issues, we started thinking more about a concept I’ve been pondering for a while for mobile devices: a predictive pop-up that groks HTML, CSS, and JavaScript grammars and, based on context, predicts with high accuracy a reduced set of words you’re likely to want to type next (outside of free-form text entry, of course). For example, given a blank line in JavaScript, we can predict that you’re likely to type one of “var”, “if”, “for”, “while”, and so forth. Obviously, there are lots of challenges involved in getting this right, and it may be unworkable entirely.
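To illustrate the basic idea (this is my own toy sketch, not Bespin code), even a naive lookup table keyed on the preceding token captures some of it:

```javascript
// Toy predictor: map the token before the cursor to a short list of
// likely next JavaScript keywords. A real implementation would need a
// grammar-aware model of the language, not a flat table.
var NEXT = {
    "":    ["var", "if", "for", "while", "function", "return"],
    "}":   ["else", "catch", "finally"],
    "=":   ["function", "new", "null", "true", "false"],
    "var": []  // an identifier comes next: free-form, so predict nothing
};

function predictNext(prevToken) {
    return NEXT.hasOwnProperty(prevToken) ? NEXT[prevToken] : [];
}
```

The hard part, of course, is everything this table punts on: nesting context, scope-aware identifiers, and knowing when to stay out of the way.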

We took this basic concept and re-imagined it in a more general-purpose application:

Pie Design

And, as of this morning, we’ve got some of this implemented in Bespin:

Bespin with the Pie Menu

I apologize for the dark colors; we’re still tweaking the details

The idea is that you can use the mouse (right-click) or the keyboard (CTRL-K) to bring up the pie, and then select the quadrant (up, down, left, right on the keyboard) you want, and a pop-up menu renders the content. You can also skip directly to the area you want with a direct keyboard short-cut (e.g., CTRL-J for the command-line); selecting it with the mouse just requires a gesture towards the desired icon.
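In code terms, the dispatch is simple; here’s a rough sketch (the slice names and key bindings are placeholders of mine, not Bespin’s actual layout):

```javascript
// Placeholder quadrant contents; the real pie's slices are still in flux.
var QUADRANTS = {
    up: "files", right: "reference", down: "commandline", left: "settings"
};
// Direct shortcuts skip the pie entirely (e.g., CTRL-J for the command
// line in the post above; plain "j" stands in for it here).
var SHORTCUTS = { j: "commandline" };

// Given a key pressed while the pie is up (or a direct shortcut),
// return which pop-up to open, or null if the key means nothing.
function dispatchKey(key) {
    if (QUADRANTS.hasOwnProperty(key)) return QUADRANTS[key];
    if (SHORTCUTS.hasOwnProperty(key)) return SHORTCUTS[key];
    return null;
}
```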

Why a Pie?

We’re intrigued by this concept for a few reasons:

  • On small screens (e.g., iPhone), the interface still works well. The pie would appear first, and once you make a selection, it disappears and is replaced by the pop-up. This is an easy adaptation to make, and we’re designing the contents of the pop-ups to scale to different sizes easily.
  • The pie will pop-up exactly where the mouse right-clicks, minimizing the effort required for the mouse to select items (see this demo from Jono for more detail on that point). For the keyboard, the pie will always appear in the bottom of the screen.
  • The number of pie pieces can expand, giving users a top-level short-cut mechanism that is easy to use with the keyboard and mouse
  • It’s different, a bit whimsical, and hopefully fun (will take some tweaking to find out for sure)

There’s lots more to discuss about finer details and interactions, but I’ll save those for another post once we start refining things a bit and getting feedback from others. (All of the details now are in flux; e.g., we’re not crazy about the individual icons in the pie yet).

Enter the Gratuitous Animation

But before I go, I wanted to show you a bit of fun we had with the pie. Another of our goals with Bespin is to make it fun, and so of course we can’t just have the pie menu appear. We designed a couple of different animation effects describing how it could come in and go out:

Pie Fx

Here’s how it turned out:

You can see a stand-alone, live version of the animation by clicking on the image below (but you’ll need a canvas-compatible browser, such as Firefox 3+, Safari 3+, Chrome, etc.):

Stand-alone Pie Demo

We’re pretty impressed by how quickly canvas animations render on Safari 4, Firefox 3, and Firefox 3.5 beta. Even when we run the animation on a huge window, we’re able to alpha-blend every pixel in the window to create a fade out effect on the window contents while rotating the pie into place. Nice job, browser graphics gurus! Thanks for making it so fast.

We did the animation by hijacking Dojo’s animation facility a touch:

var anim = dojo.fadeIn({
    // create a fake node because Dojo's Animation
    // expects to mutate DOM node properties
    node: {
        style: {}
    },

    duration: 500,

    // use one of Dojo's fancy easing functions
    easing: dojo.fx.easing.backOut,

    // Dojo supports an onAnimate() callback for each frame
    // of the animation and passes in values that it is setting
    // on the DOM node. We'll grab "opacity" and use it as a
    // general "progress" value.
    onAnimate: function(values) {
        var progress = values.opacity;
        renderPie(progress); // (our canvas code) redraw the pie for this frame
    }
});
anim.play();
As with everything we do here in Mozilla Labs, this is experimental–it may be we come crawling back to docking panels as our primary interface metaphor.

What do you think?

Yesterday, I posted some of my frustrations with how color display behavior got a bit wacky when I hooked up a non-Apple external monitor to my MacBook Pro.

Responses varied from, “I’ve seen this too and I feel your pain” to “Hey noob, don’t expect different monitors to display colors the same.” Thanks to all of you who have shared helpful suggestions on calibration and so forth.

But I failed to convey what was happening, and as I set about clarifying it, things got weirder.

So, here’s a PNG image that shows what I occasionally see: rendering what should be the same color differently on the same display:

Terminal's Different Colors?

I included this image in my original blog post, and “Lew Z” posted this comment:

When I mouseover your 2 terminal text examples, in DigitalColor Meter I get:

Red: 0
Green: 65535
Blue: 0

For *both*. As far as the OS knows, and what is displayed on your webpage, they are the same color. If you (and maybe I) perceive color differences between the two, it is a physiological issue of how the eyes and brain perceive color when surrounded by different colors (in this case, your window title bars).

This confused me greatly, as when I sampled the image, the colors came out as #00FF00 and #7EF41D. Fortunately, right next to me is an identical hardware configuration I can replicate this on: a MacBook Pro (15″ instead of 17″, but ordered within weeks of each other, same generation, etc.), the same OS X version, the same Dell display with the same color profile, and the same versions of the web browsers. So I pulled the image up on that machine and…

…the colors were identical!

It appears that the PNG has some color profile information in it that’s interfering here, so I saved out a GIF version of what I see on my system:

Terminal Color Differences

Obviously, the GIF format will dither the colors, etc. but it does convey that the colors are significantly different. On Dion’s system, the GIF appears exactly as it does on mine.

So here’s what has me profoundly confused:

Why would the same image display differently on the same hardware and the same software with the same settings? I understand why the entire image could appear uniformly different to the eye, but why does just part of the image change its actual content?

Anyone know?

Different colors

If the two images above look exactly the same to you, move along; this blog post doesn’t apply to you.

Note: I posted a follow-up to this going into a bit more detail on one angle of this.

For much of the workday, I use a Dell 24″ monitor hooked up to my Mac laptop. For color fidelity, it turns out this has been rather painful. For some reason, this setup causes me to experience all kinds of weird color glitches, such as the one at the head of this post. At first, I thought this was just colors rendering differently on the external monitor and the laptop’s internal display, but unfortunately, it’s more than that.

The same colors display differently on the same monitor under certain conditions. Here’s another fun example:


This is more than just really annoying. When working on Bespin recently, I discovered that the slice images I’ve made from our designer’s source files contain different color values than what he initially specified. At least, some of the slices do. The slices are in fact inconsistent due to this same problem. Argh!

I’ve checked OS X’s System Preferences and the Dell is using its own Color Profile; isn’t this the right thing for it to be using? Why am I getting this behavior?

My guess would be that a Carbon/Cocoa window, when displayed, uses the settings of the display on which it initially appears; when you move the window from display to display, either the application is responsible for detecting the event and responding to it, or OS X has bugs in managing the shifting settings.

Does anyone know how I can fix this problem? Maybe I just need to start working on Apple displays again… or limit myself to one monitor and close the laptop display at work.

UPDATE: Because several folks were confused about what exactly I was showing in the Terminal screenshot above, I replaced it with something that may illustrate the problem a bit more clearly. Look at the text in the Terminal graphic. See how the shade of green is different? This is not because of foreground/background windowing issues. The color green is different, even though it’s the same theme, etc. These are not screenshots from different displays sewn together; they are from the same display, so this isn’t to do with embedded color profiles in images, etc.

The Power of Lowered Expectations!

Recently Dion and I gave a talk at O’Reilly’s well-produced Web 2.0 Expo conference.

We messed up. Let me explain.

Last fall, on a lark, we wrote a quick program that would buzz at random intervals. We finished it right before walking on-stage to give a keynote at The Ajax Experience and ran it with the rule that whenever the buzzer sounded, we had to instantly switch speakers. Folks loved it, so on occasion we’ve been repeating the buzzer thing.

We did it at Web 2.0 Expo, but this time, the crowd was not amused. A sampling of the feedback on the conference site:

Thought [the talk] was great…except…hated the random buzzer bit. I can appreciate adding some fun…but…a little silly at first and eventually really irritating.

The random buzzer was really terrible, distracting and loud. It was funny for about 1 minute. Doing it for the whole presentation just didn’t make sense.

Maybe the volume was higher than it has been in times past? Maybe we had the maximum interval set too high? In any event, I went to apologize in the comment thread when I was presented with… the dreaded login:


Time to create my 501st Internet credential; but wait! They support OpenID!


I’ve been hearing lots about how I can use my existing Google credentials to log in to websites that support OpenID. I couldn’t wait to take advantage of that here. So I clicked on the “Use an OpenID to sign up” link and, with the magic of a cross-fade technique, saw this:

OpenID Login

I’ve no idea how this stuff works, so I clicked on the “Read more about OpenID” link; a pop-up window opened:


First thing I did was click on “Check against this list” to see if I already had an OpenID as I thought I might. Doh! Error:


No problem, URLs get mangled from time to time. This one seems to have an obvious problem:

I removed the extra forward-slash after “” and then got this page:

Login to Login

That’s right; to find out how to avoid creating a login for the O’Reilly site, I have to create a login for the OpenID wiki site. Of course.

The other links on the pop-up were equally useless and/or broken.

At this point I just went ahead and tried my Google login id:

Another error

Rats. I googled around a bit and found this page:

You Have OpenID!

Sweet! I have a account, so I tried that:

You are not you

OpenID, I hate you! Still, perhaps there’s light at the end of the tunnel.

Web 2.0 Expo and web2Open

Tomorrow, Dion and I are giving a talk at O’Reilly’s Web 2.0 Expo conference in San Francisco at the Moscone Center: “Web Developer Tools: How to Be Productive Building for the Web” (Wed. Apr 1, 10:50 am). While normally Web 2.0 Expo costs money to attend, our session is free; all you need do is register for the free web2Open program. As part of our session we’ll be releasing something small; we’d love to see you there and get your feedback.

Following the presentation, we’ll be hosting a web2Open session at 12:40 pm–also in Moscone–to host a discussion about the state of Developer Tools for the Open Web and explore their future. If you’ve an interest in the subject and find yourself in town, won’t you drop in?

Marc Andreessen

They say Marc Andreessen, co-founder of Netscape and co-author of the Mosaic browser, once said:

[An operating system] is just a bag of drivers.

People have been fantasizing about the web as an application platform for as long as we’ve had it. Nearly a decade later, we’re really just getting started at realizing this vision–of truly reproducing the power of traditional operating system APIs inside the browser.

While some have had this vision of browser-as-application-runtime since the beginning, most of us have traditionally viewed the browser as a web page renderer. It’s only been in the past few years that some have begun to push hard on changing this status quo. Google stands out in this group both with the creation of boundary-pushing “desktop-quality” applications like Gmail and in describing Google Chrome as an application run-time, not a page viewer. [1]

The Chrome Comic

Here in the Mozilla Developer Tools Lab, we’ve been pondering the various gaps in the tool-chain when you treat the browser as a serious, OS-grade application run-time. We’ll talk more about the landscape of tools and what’s available in a different post. In this one, we’d like to talk about one of the gaps we’ve found: memory tools.

The Memory Problem

It’s a rare application developer indeed who doesn’t wish their GUI to be “snappy”. In technical terms, Jakob Nielsen defines snappy as responding to user input within a tenth of a second. To put that in perspective, that’s shorter than it takes the average person to blink their eye.

Jakob Nielsen

If an application’s appetite for memory crosses over into gluttony, it can put a developer’s snappiness ambitions at risk. There are at least a couple of reasons why.

First, applications have a finite amount of memory available to them. When the operating system runs out of physical memory, a cool trick lets it substitute disk space for memory, but when this happens, performance hits the floor–hard drives, being mechanical, are orders of magnitude slower than memory.

While web applications don’t directly interact with the operating system to obtain memory, the browser does, both for its own internal functions and as a proxy for the appetites of the web applications it displays. As a web application’s memory consumption grows, so does the browser’s.

Therefore, if an individual web application’s memory needs grow sufficiently large, it can force the operating system to start dipping into disk space to provide sufficient memory, and when this happens, kiss any semblance of responsiveness goodbye.

Since there’s no way for a web application to know how much memory is available before this performance doomsday occurs, it’s good behavior to make your memory footprint as svelte as possible.

Garbage Collection and You

But there’s another, much more important reason why small web application memory footprints are good. It has to do with the way memory is handled in a browser. Like Java and pretty much any scripting language, JavaScript manages memory allocation for developers. This frees developers from having to deal with the tedious bookkeeping associated with manual memory management, but it comes at a cost.

That cost is embodied by the garbage collector. As a web application executes, it is constantly creating new objects, most of which are fairly transient–they are part of a transaction that has completed, like some short-lived jQuery objects created to look up DOM elements. These objects consume memory. Eventually, the web application has created enough objects, and is therefore consuming enough memory, that the collector needs to wade through all the objects to see which ones are no longer being used and therefore represent memory that can be released.
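To make the idea of transient objects concrete, here’s a small sketch (plain JavaScript rather than jQuery, with names invented for illustration): the intermediate objects created during one call become garbage as soon as the call completes.

```javascript
// Each call to formatRows allocates intermediate arrays and wrapper
// objects that live only for the duration of the call -- classic
// transient allocations that the collector will later reclaim.
function formatRows(records) {
  return records
    .filter(function (r) { return r.active; })                     // temporary array
    .map(function (r) { return { label: r.name.toUpperCase() }; }); // fresh wrappers
}

var rows = formatRows([
  { name: 'alpha', active: true },
  { name: 'beta', active: false }
]);
// Once formatRows returns, only `rows` remains reachable; the
// intermediate filter result is already garbage.
```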

This is where the performance implication comes in. To do its work, the collector stops the web application’s execution. Typically, this happens so fast that the user doesn’t notice. But when a web application creates lots and lots of objects, and these objects aren’t transient, the collector has a lot of work to do–it must go through all of these objects to ferret out the ones that are no longer used. This in turn results in delays that the user can perceive, impairing the application’s responsiveness.
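One common mitigation–my sketch here, not something the tool itself prescribes–is simply to allocate less in hot paths, for example by reusing a scratch object instead of creating a fresh one per iteration, so fewer objects pile up between collections:

```javascript
// Allocation-heavy version: a new wrapper object on every iteration,
// all of which the collector must eventually process.
function sumLengthsChurny(words) {
  var total = 0;
  for (var i = 0; i < words.length; i++) {
    var wrapper = { value: words[i] }; // fresh object each time
    total += wrapper.value.length;
  }
  return total;
}

// Reuse version: one scratch object allocated up front and mutated,
// producing far less garbage for the same result.
function sumLengthsReused(words) {
  var wrapper = { value: null }; // allocated once
  var total = 0;
  for (var i = 0; i < words.length; i++) {
    wrapper.value = words[i];
    total += wrapper.value.length;
  }
  return total;
}
```

Both functions compute the same answer; they differ only in how much work they hand the collector.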


To be clear, most web pages and web applications don’t push the browser’s memory limitations enough to cause performance problems related to either of the scenarios above. As stated at the outset, this blog entry is about those web applications that need to treat the browser as a high-performance run-time, which in the context of this entry means that they have much-larger-than-average memory requirements.

However, these issues apply to more than just those web apps that are designed to use large amounts of memory; they can also apply to long-running applications which, over time, gradually consume small amounts of memory until the footprint grows to be quite large. When an application consumes more memory than its designers intended, it is said to leak memory. [2]
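As a concrete example of a “leak” in this broader sense, consider a cache with no eviction policy (the names below are hypothetical): every entry stays reachable from the cache object, so its memory can never be reclaimed.

```javascript
// A mis-designed cache: entries are added but never evicted, so every
// cached string stays reachable -- and thus non-collectible -- forever.
var templateCache = {};

function renderTemplate(key) {
  if (!(key in templateCache)) {
    templateCache[key] = new Array(1024).join(key); // retained indefinitely
  }
  return templateCache[key];
}

// Over a long session, each distinct key grows the footprint a little;
// bounding the cache (e.g., with LRU eviction) would prevent the leak.
for (var i = 0; i < 100; i++) {
  renderTemplate('item-' + i);
}
```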

And this leads in turn to a third way in which memory can give the shaft to performance: when the browser itself leaks memory. It turns out that web browsers are created by mere mortals, and every so often those mortals have made mistakes which can trigger either of the two scenarios described above.

Diagnosing the Problem

So how do you as a developer go about troubleshooting these sorts of problems? Today, there’s really only one good way to do it: use the operating system’s tools. Unfortunately, this option doesn’t provide the right level of detail: you can either see how much memory the browser is consuming in aggregate (which is fine for telling you that your memory use is increasing, but doesn’t tell you why), or you can see which data structures in the browser itself are consuming the memory (which is fine if you understand the guts of the browser, but it’s pretty hard for anyone else to map this back to the web application they’ve developed).

What’s missing is a tool targeted at web developers that makes it easy to understand what’s happening with their application’s memory usage. We propose to create such a tool.

Start Small, Start Focused

Our plan is to start small and address two key needs that are presently unmet by any of the existing, developer-friendly, easy-to-use tools we’ve seen on any browser. These needs are:

  1. Understand the memory usage of an application
  2. Understand the garbage collector’s behavior

While here in the Developer Tools Lab we’re most interested in creating developer tools for the entire web community (i.e., not just Firefox users), this tool will need some pretty deep integration with the browser, so we’re going to start with Firefox (we sit close to the engineers who work on it).

We plan for the initial implementation of this tool to be simple. For memory usage, we want to introduce the ability to visualize the current set of non-collectible JavaScript objects at any point in time (i.e., the heap) and give you the ability to understand why those objects aren’t collectible (i.e., trace any object to a GC root). For the garbage collector, we want to give you a way to see when a collection starts and when it finishes, and thus understand how long it took.
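To illustrate what “tracing an object to a GC root” means, here’s a sketch (names invented for illustration): a large array stays non-collectible because a closure stored on a global still references it, and a heap tool would show the retaining path from the global root down to the array.

```javascript
// bigBuffer cannot be collected while this closure is reachable from a
// global: the retaining path is globalThis.onTick -> closure -> bigBuffer.
var bigBuffer = new Array(100000).fill(0);

globalThis.onTick = function () {
  return bigBuffer.length; // the closure keeps bigBuffer alive
};

// Severing the path (e.g., globalThis.onTick = null) is what would make
// bigBuffer collectible again -- exactly the kind of insight a heap
// visualization tool should surface.
```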

Help Us!

This is obviously a small step into a large world. Is it a good first step? What do you think we should do differently? We’d love to hear from you, and thanks for reading!

[1] Of course, Firefox does a fine job of acting as an application run-time; my point is that Google was the first to call out web applications as a distinct class of web content and to talk in terms of supporting these for their mainstream browser. Incidentally, Mozilla Labs’ Prism project sought to pioneer this idea years before.

[2] I’m using the term “leak” in a much more general way than is common in most developer communities. Traditionally, the term is applied to an application that allocates memory and then neglects to deallocate it when done. Because a language like JavaScript doesn’t allow developers to manually allocate or deallocate memory, it is impossible to leak at the JS level in this sense. But in my broader sense, any time a developer unintentionally creates memory footprint (e.g., by continuously storing objects in a hash in a mis-designed cache, etc.), I consider it a leak. This broader definition is borrowed from the Java community.
