Privileged to be a Mozillian

Mike Conley noticed a bug: a regression in a particular Firefox Nightly build that he was tracking down. It looked like this:

A time series plot with a noticeable regression at November 6

A pretty big difference… only there was a slight problem: there were no relevant changes between the two builds. Being the kind of developer he is, :mconley looked elsewhere and found a probe that was only included in builds starting November 16.

The plot showed him data starting from November 15.

He brought it up on irc.mozilla.org#telemetry. Roberto Vitillo was around and tried to reproduce, without success. For :mconley the regression was on November 5 and the data on the other probe started November 15. For :rvitillo the regression was on November 6 and the data started November 16. After ruling out addons, they assumed it was the dashboard’s fault and roped me into the discussion. This is what I had to say:

Hey, guess what's different between rvitillo and mconley? About 5 hours.

You see, :mconley is in the Toronto (as in Canada) Mozilla office, and Roberto is in the London (as in England) Mozilla office. There was a bug in how dates were being calculated that made the data display differently depending on your timezone. If you were on or east of the Prime Meridian, you got the right answer. West of it? Everything looked like it happened one day early.
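If you're curious what that class of bug looks like in code, here's a minimal Python sketch of the general failure mode (it isn't the dashboard's actual code): bucketing a UTC timestamp into a calendar day using the viewer's local timezone shifts the day for anyone west of the Prime Meridian.

```python
from datetime import datetime, timezone

# A timestamp shortly after midnight UTC on November 6.
ts = datetime(2016, 11, 6, 2, 0, tzinfo=timezone.utc).timestamp()

# Correct: bucket the timestamp into a day using UTC.
utc_day = datetime.fromtimestamp(ts, tz=timezone.utc).date()

# Buggy: bucket using the viewer's local timezone. In Toronto this moment
# is still the evening of November 5; in London it is already November 6.
local_day = datetime.fromtimestamp(ts).date()

print(utc_day)    # 2016-11-06 everywhere
print(local_day)  # 2016-11-05 in Toronto, 2016-11-06 in London
```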

I hammered out a quick fix, which means the dashboard is now correct… but in thinking back over this bug in a post-mortem-kind-of-way, I realized how beneficial working in a distributed team is.

Having team members in multiple timezones not only provided us with a quick test location for diagnosing and repairing the issue, it equipped us with the mindset to think of timezones as a problematic element in the first place. Working in a distributed fashion has conferred upon us a unique and advantageous set of tools, experiences, thought processes, and mechanisms that allow us to ship amazing software to hundreds of millions of users. You don’t get that from just any cube farm.

#justmozillathings

:chutten

Data Science is Hard – Case Study: How Do We Normalize Firefox Crashes?

When we use Firefox Crashes to determine the quality of a Firefox release, we don’t just use a count of the number of crashes:

A time series plot of raw crash counts for Firefox Aurora 51.0a2

We instead look at crashes divided by the number of thousands of hours Firefox was running normally:

A time series plot of Firefox Aurora’s crashes per thousand usage hours

I’ve explained this before as a way to account for how crash volumes can change depending on Firefox usage on that particular day, even though Firefox quality likely hasn’t changed.
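As a quick illustration of the arithmetic, here's a minimal Python sketch with entirely made-up numbers (these are not real Firefox figures):

```python
# Hypothetical daily figures for a single build (not real data).
crashes = [350, 360, 250, 240, 355]                         # raw crash counts per day
usage_hours = [110_000, 115_000, 80_000, 78_000, 112_000]   # hours Firefox ran that day

# Raw counts swing with how much Firefox is used (e.g. weekend dips),
# but crashes per thousand usage hours stays roughly flat, which is
# what we want from a measure of build quality.
rates = [c / (h / 1000) for c, h in zip(crashes, usage_hours)]
print([round(r, 2) for r in rates])  # ~3.1-3.2 crashes per 1000 usage hours each day
```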

But “thousands of usage hours” is one of many possible normalization denominators we could have chosen. To explain our choice, we’ll need to explore our options.

Fans of Are We Stable Yet? may be familiar with a crash rate normalized by “hundreds of Active Daily Instances (ADI)”. This is a valid denominator as Firefox usage does tend to scale linearly with the number of unique Firefox instances running that day. It is also very timely, as ADI comes to us from a server that Firefox instances contact at the beginning of their browsing sessions each day.

From across the fence, I am told that Google Chrome uses a crash rate normalized by “millions of pageloads”. This is a valid denominator as “loading a page” is one of the primary actions users take with their browsers. It is not any more or less timely than “thousands of usage hours” but with Google properties being primary page load destinations, this value could potentially be estimated server-side while waiting for user data to trickle in.

Denominators that would probably work, but that I haven’t heard of anyone using, include: the number of times the user opens the browser, the number of times the user scrolls, memory use, … generally, anything that increases at the same rate that crashes do on a given browser version could be used.

So why choose “thousands of usage hours”, when ADI comes in faster and pageloads are more closely related to actions users take in the browser?

Compared to ADI, thousands of usage hours has proven to be a more reasonable and stable measure. In crashes-per-100-ADI there are odd peaks and valleys that don’t reflect decreases or increases in build quality. And though crashes scale proportionally with the number of Firefox instances running, they scale more closely with how heavily those running instances are being used.

As for why we don’t use pageloads… Well, the first reason is that “thousands of usage hours” is something we already have kicking around. A proper count of pageloads is something we’re adding at present. It will take a while for users to start sending us these numbers, and a little development effort to get that number into the correct dataset for analysis. Then we will evaluate its suitability. It won’t be faster or slower than “thousands of usage hours” (since it will use the same reporting mechanism), but I have heard no compelling evidence that it will result in a more stable or reasonable measure. So I’ll do what I always try to do: let the data decide.

So, for the present, that leaves us with crashes per thousands of usage hours which, aside from latency issues we have yet to overcome, seems to be doing fairly well.

:chutten

Data Science is Hard – Case Study: What is a Firefox Crash?

In the past I’ve gone on at length about the challenge of getting timely data to determine Firefox release quality with respect to how often Firefox crashes. Comparatively I’ve spent essentially no time at all on what a crash actually is.

A crash (broadly) is what happens when a computer process encounters an error it cannot recover from. Since it cannot recover, the system it is running within ends the process abruptly.

Not all crashes are equal. Not all crashes mean the same thing to users and to release managers and to computer programmers.

If you are in the middle of drafting an email and the web page content suddenly goes blank and says “Sorry, this tab has crashed.” then that’s a big deal. It’s even worse if the entire browser disappears without warning.

But what if Firefox crashes, but only after it has mostly shut down? Everything’s been saved properly, but we didn’t clean up after ourselves well. This is a crash (technically), but does it really matter to a user?

What if the process that contains Flash crashes, and web advertisements stop working? It can be restarted without too much trouble, and no one likes ads, so is it really that bad of a thing?

And on top of these families of events, there are other horrible things that can happen to users, things we might want to call “crashes” even though they aren’t. For instance: what if the browser becomes completely unresponsive and the user has no recourse but to close it? The process didn’t encounter a fatal error, but that user’s situation is the same: something weird happened, and now their data is gone.

Generally speaking, I look at four classes of crash: Main Crashes (M), Content Crashes (C), Content Shutdown Crashes (S), and Plugin Crashes (P).

In my opinion, the most reliable indicator of Firefox’s stability and quality is M + C – S. In plain English, it is the sum of the events where the whole Browser goes poof or the Web Content inside the browser goes poof, ignoring the times when the Web Content goes poof after the user has decided to shut down the browser.

It doesn’t include Plugin crashes, as those are less obtrusive and more predicted by the plugin code, not Firefox code. It does include some events where Firefox became unresponsive (or “hangs” for short) and had to be terminated.
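As a minimal sketch of that arithmetic, with made-up counts and hypothetical names (these are not the actual Telemetry field names):

```python
def stability_measure(main_crashes, content_crashes, content_shutdown_crashes):
    """The M + C - S measure described above: whole-browser crashes plus
    web content crashes, minus the content crashes that happened during
    shutdown. Plugin crashes are deliberately not part of the sum."""
    return main_crashes + content_crashes - content_shutdown_crashes

# Hypothetical daily counts, just to show the shape of the calculation.
print(stability_measure(main_crashes=120,
                        content_crashes=800,
                        content_shutdown_crashes=300))  # -> 620
```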

This, to my mind, most accurately encompasses a measure of Firefox quality. If the number of these crashes goes up, that means there are more times where more users are having less fun with Firefox. If the number of these crashes goes down, that means there are fewer times that fewer people are having less fun with Firefox.

It doesn’t tell the whole story. What good is a not-crashing browser if it doesn’t scroll when you ask it to? What good is a stable piece of web content if half of it is missing because we don’t support it? What good is a Firefox that is open all the time if it takes twice as long to load the web pages you care about?

But it gives us one very important part of the Firefox Quality story, and that’s good enough for me.

:chutten


Data Science is Hard – Case Study: Latency of Firefox Crash Rates

Firefox crashes sometimes. This bothers users, so a large amount of time, money, and developer effort is devoted to keeping it from happening.

That’s why I like that image of Firefox Aurora’s crash rate from my post about Firefox’s release model. It clearly demonstrates the result of those efforts to reduce crash events:

A time series plot of Firefox Aurora’s crashes per thousand usage hours

So how do we measure crashes?

That picture I like so much comes from this dashboard I’m developing, and uses a very specific measure of both what a crash is, and what we normalize it by so we can use it as a measure of Firefox’s quality.

Specifically, we count the number of times Firefox or the web page content disappears from the user’s view without warning. Unfortunately, this simple count of crash events doesn’t give us a full picture of Firefox’s quality, unless you think Firefox is miraculously 30% less crashy on weekends:

A time series plot of raw crash counts for Firefox Aurora 51.0a2, dipping on weekends

So we need to normalize it based on some measure of how much Firefox is being used. We choose to normalize it by thousands of “usage hours” where a usage hour is one hour Firefox was open and running for a user without crashing.

Unfortunately, this choice of crashes per thousand usage hours as our metric, and how we collect data to support it, has its problems. Most significant amongst these problems is the delay between when a new build is released and when this measure can tell you if it is a good build or not.

Crashes tend to come in quickly. Generally speaking, when a user’s Firefox disappears out from under them, they are quick to restart it. This means this new Firefox instance is able to send us information about that crash usually within minutes of it happening. So for now, we can ignore the delay between a crash happening and our servers being told about it.

The second part is harder: when should users tell us that everything is fine?

We can introduce code into Firefox that would tell us every minute that nothing bad happened… but could you imagine the bandwidth costs? Even every hour might be too often. Presently we record this information when the user closes their browser (or if the user doesn’t close their browser, at the user’s local midnight).

The difference between the user experiencing an hour of un-crashing Firefox and that data being recorded is recording delay. This tends to not exceed 24 hours.

If the user shuts down their browser for the day, there isn’t an active Firefox instance to send us the data for collection. This means we have to wait for the next time the user starts up Firefox to send us their “usage hours” number. If this was a Friday’s record, it could easily take until Monday to be sent.

The difference between the data being recorded and the data being sent is the submission delay. This can take an arbitrary length of time, but we tend to see a decent chunk of the data within two days’ time.

This data is being sent in throughout each and every day. Somewhere at this very moment (or very soon) a user is starting up Firefox and that Firefox will send us some Telemetry. We have the facilities to calculate at any given time the usage hours and the crash numbers for each and every part of each and every day… but this would be a wasteful approach. Instead, a scheduled task performs an aggregation of crash figures and usage hour records per day. This happens once per day and the result is put in the CrashAggregates dataset.
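As a very rough sketch of what a daily aggregation like that could look like, here's some pandas with hypothetical column names; the real CrashAggregates job is more involved and presumably keys on more dimensions (channel, version, operating system, and so on):

```python
import pandas as pd

# Hypothetical submissions: one row per Telemetry ping (not the real schema).
pings = pd.DataFrame({
    "activity_date": ["2016-11-05", "2016-11-05", "2016-11-06"],
    "channel":       ["aurora",     "aurora",     "aurora"],
    "usage_hours":   [4.5,          2.0,          6.25],
    "main_crashes":  [0,            1,            0],
})

# One aggregate row per day and channel: total crashes and usage hours,
# plus the normalized rate a dashboard would plot.
daily = (pings.groupby(["activity_date", "channel"])
              [["usage_hours", "main_crashes"]]
              .sum()
              .reset_index())
daily["crashes_per_1k_hours"] = daily["main_crashes"] / (daily["usage_hours"] / 1000)
print(daily)
```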

The difference between a crash or usage hour record being submitted and it being present in this daily derived dataset is aggregation delay. This can be anywhere from 0 to 23 hours.

This dataset is stored in one format (parquet), but queried in another (prestodb fronted by re:dash). This migration task is performed once per day some time after the dataset is derived.

The difference between the aggregate dataset being derived and its appearance in the query interface is migration delay. This is roughly an hour or two.

Many queries run against this dataset and are scheduled sequentially or on an ad hoc basis. The query that supplies the data to the telemetry crash dashboard runs once per day at 2pm UTC.

The difference between the dataset being present in the query interface and the query running is query scheduling delay. This is about an hour.

This provides us with a handy equation:

latency = recording delay + submission delay + aggregation delay + migration delay + query scheduling delay

With typical values, we’re seeing:

latency = 6 hours + 24 hours + 12 hours + 1 hour + 1 hour

latency ≈ 44 hours, or roughly 2 days

And since submission delay is unbounded (and tends to be longer than 24 hours on weekends and over holidays), the latency is actually a range of probable values. We’re never really sure when we’ve heard from everyone.
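To make that arithmetic (and the “range of probable values” caveat) concrete, here's a tiny sketch using the typical figures above; the weekend submission delay is an arbitrary stand-in, since that delay is unbounded:

```python
# Typical delays, in hours, for the stages described above.
typical = {
    "recording":        6,
    "submission":       24,
    "aggregation":      12,
    "migration":        1,
    "query_scheduling": 1,
}

print(sum(typical.values()))  # 44 hours: roughly 2 days

# Submission delay is unbounded, so latency is really a range.
# Over a weekend it might be, say, 72 hours (an arbitrary example):
weekend = dict(typical, submission=72)
print(sum(weekend.values()))  # 92 hours: nearly 4 days
```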

So what’s to blame, and what can we do about it?

The design of Firefox’s Telemetry data reporting system is responsible for recording delay and submission delay: two of the worst offenders. submission delay could be radically improved if we devoted engineering resources to submitting Telemetry (both crash numbers and “usage hour” reports) without an active Firefox running (using, say, a small executable that runs as soon as Firefox crashes or closes). recording delay will probably not be adjusted very much as we don’t want to saturate our users’ bandwidth (or our own).

We can improve aggregation delay simply by running the aggregation, migration, and query multiple times a day, as information comes in. Proper scheduling infrastructure can remove all the non-processing overhead from migration delay and query scheduling delay, which can easily bring them down below a single hour, combined.

In conclusion, even given a clear and specific metric and a data collection mechanism with which to collect all the data necessary to measure it, there are still problems when you try to use it to make timely decisions. There are technical solutions to these technical problems, but they require a focused approach to improve the timeliness of reported data.

:chutten

All Aboard the Release Train!

I’m working on a dashboard (a word which here means a website that has plots on it) to show Firefox crashes per channel over time. The idea is to create a resource for Release Management to figure out if there’s something wrong with a given version of Firefox with enough lead time to then do something about it.

It’s entering its final form (awaiting further feedback from relman and others to polish it) and while poking around (let’s call it “testing”) I noticed a pattern:

A time series plot of Firefox Aurora’s crashes per thousand usage hours, with regular spikes

Each one of those spikes happens on a “merge day” when the Aurora channel (the channel powering Firefox Developer Edition) updates to the latest code from the Nightly channel (the channel powering Firefox Nightly). From then on, only stabilizing changes are merged so that when the next merge day comes, we have a nice, stable build to promote from Aurora to the Beta channel (the channel powering Firefox Beta). Beta undergoes further stabilization on a slower schedule (a new build every week or two instead of daily) to become a candidate for the Release channel (which powers Firefox).

For what it’s worth, this is called the Train Model. It allows us to ship stable and secure code with the latest features every six-to-eight weeks to hundreds of millions of users. It’s pretty neat.

And what that picture up there shows is that it works. The improvement is especially noticeable on the Aurora branch where we go from crashing 9-10 times for every thousand hours of use to 3-4 times for every thousand hours.

Now, the number and diversity of users on the Aurora branch is limited, so when the code rides the trains to Beta, the crash rate goes up. Suddenly code that seemed stable across the Aurora userbase is exposed to previously-unseen usage patterns and configurations of hardware and software and addons. This is one of the reasons why our pre-release users are so precious to us: they provide us with the early warnings we need to stabilize our code for the wild diversity of our users as well as the wild diversity of the Web.

If you’d like to poke around at the dashboard yourself, it’s over here. Eventually it’ll get merged into telemetry.mozilla.org when it’s been reviewed and formalized.

If you have any brilliant ideas of how to improve it, or find any mistakes or bugs, please comment on the tracking bug where the discussion is currently ongoing.

If you’d like to help in a different way, consider installing Firefox Beta. It should look and act almost exactly like your current Firefox, but with some pre-release goodies that you get to use before anyone else (like how Firefox Beta 50 lets you use Reader Mode when you want to print a web page, to skip all the unnecessary sidebars and ads). By doing this you are helping us ship the best Firefox we can.

:chutten

Firefox Windows XP Exit Plan


Last I reported, the future of Firefox’s Windows XP support was uncertain, even given long-standing plans for its removal.

With the filing of bug 1305453 and the accompanying discussion on firefox-dev, things are now much more certain. Firefox will (pending approval) be ending support for Windows XP and Windows Vista in Firefox 53 (scheduled release date: April 18, 2017).

Well, thanks for tuning in. I guess I can wrap up these posts and…

Okay, yes, you’re right. It isn’t that simple.

First, the day that Windows XP and Windows Vista users will cease getting Firefox updates is actually much later than April of 2017. Instead, those users will continue to receive security updates until April of 2018, because the version of Firefox 52 they’ll be getting is an Extended Support Release.

What is Firefox Extended Support Release (ESR)? It’s a version of Firefox for enterprises and other risk-averse users that receives security (and only security) updates for one year after initial release. This allows these change-weary users to still choose Firefox without having to consider how to support a major version release every six-to-eight weeks.

Windows XP and Vista users will be shunted from the normal roughly-six-weeks-per-version “Release” channel to the “ESR” channel for 52. New installs on Windows XP and Vista at that time will also be for ESR 52. This should ensure that our decreasing Windows XP+Vista userbase will be supported until they’ve finished diminishing into…

…well, okay that’s not simple either. In absolute terms, our Windows XP userbase has actually increased over the past six months or so. Some if not all of this is the end of the well-documented slump we see in user population over the Northern-hemisphere Summer (we’re now coming back up to Fall-Winter-Spring numbers). It is also possible that we’ve seen some ex-Chrome users fleeing Google’s drop of support from earlier this year.

Deseasonalized numbers for just WinXP users are hard to come by, so this is fairly speculative. One thing that’s for certain is that the diminishing Windows XP userbase trend I had previously observed (and was counting on seeing continue) is no longer in evidence.

So what happens if we reach April of 2018 and we still have millions and millions of Windows XP users still relying on Firefox to provide them with a safe way to navigate the increasingly-hostile environment of the Web?

No idea. I guess this means I’ll be continuing to blog about WinXP for a couple years yet.

:chutten

One-Year Moziversary

Holy crap, I totally forgot to note on the 19th that it had been a full year since I started working at Mozilla!

In that year I’ve done surprisingly little of what I was hired to do (coding in Gecko, writing webpages and dashboards for performance monitoring) and a whole lot more of stuff I’d never done before (data analysis and interpretation, data management, teaching, blogging, interviewing).

Which is pretty awesome, I have to say, even if I sorta-sleepwalked into some of these responsibilities.

Highlights include hosting a talk at MozLondon (Death-Defying Stats), running… three? iterations of Telemetry Onboarding (including a complete rewrite), countless phone screens for positions within and outside of the team, being a team lead for just long enough to help my two reports leave the team (one to another team, the other to another company :S), and becoming (somehow) a voice for Windows XP Firefox users (expect another blog post in that series, real soon).

For my New MozYear Resolutions, I resolve to blog more (I’ve certainly slacked on the blogging in the latter half of the year), think more, and mentor more. We’ll see how that goes.

Anyway, here’s to another year!

:chutten