When do All Firefox Users Update?

Last time we talked about updates I wrote about everything that goes into an individual Firefox user’s ability to update to a new release. We looked into how often Firefox checks for updates, and how we sometimes lie and say that there isn’t an update even after release day.

But how does that translate to a population?

Well, let’s look at some pictures. First, the number of “update” pings we received from users during the recent Firefox 61 release:

update_ping_volume_release61

This is a real-time look at how many updates were received or installed by Firefox users each minute. There is a plateau’s edge on June 26th shortly after the update went live (around 10am PDT), and then a drop almost exactly 24 hours later when we turned updates off. This plot isn’t the best for looking into the mechanics of how this works, since it shows the volume of all types of “update” pings from all versions: it includes users finally installing Firefox Quantum 57 from last November as well as users being granted the fresh update for Firefox 61.

Now that it’s been a week we can look at our derived dataset of “update” pings and get a more nuanced view (actually, latency on this dataset is much lower than a week, but it’s been at least a week). First, here’s the same graph, but filtered to look at only those clients who are letting us know they have the Firefox 61 update (“update” ping, reason: “ready”) or that they have already updated and are running Firefox 61 for the first time after the update (“update” ping, reason: “success”):

update_ping_volume_61only_wide

First thing to notice is how closely the two graphs line up. This shows how, during an update, the volume of “update” pings is dominated by those users who are updating to the most recent version.

And it’s also nice validation that we’re looking at the same data historically that we were in real time.

To step into the mechanics, let’s break the graph into its two constituent parts: the users reporting that they’ve received the update (reason: “ready”) and the users reporting that they’re now running the updated Firefox (reason: “success”).

update_ping_volume_stacked

update_ping_volume_unstacked

The first graph shows the two lines stacked for maximum similarity to the graphs above. The second unstacks the two so we can examine them individually.
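For the curious, here’s a rough sketch of how a breakdown like this might be produced with pandas. It assumes the derived “update” ping dataset has been exported somewhere readable; the file name and column names (target_version, submission_timestamp, reason) are illustrative, not the actual schema.

```python
import pandas as pd

# Hypothetical export of the derived "update" ping dataset; file and column
# names are illustrative stand-ins, not the real schema.
pings = pd.read_parquet("update_pings.parquet")

# Keep only pings telling us about the Firefox 61 update.
fx61 = pings[pings["target_version"].str.startswith("61")]

# Count pings per hour, split by reason ("ready" vs "success").
per_hour = (
    fx61.groupby([pd.Grouper(key="submission_timestamp", freq="H"), "reason"])
        .size()
        .unstack("reason")        # one column per reason
        .fillna(0)
)

per_hour.plot()                   # unstacked: compare the two lines directly
per_hour.plot.area(stacked=True)  # stacked: matches the combined-volume view
```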

It is now much clearer to see how and when we turned updates on and off during this release. We turned them on June 26, off June 27, then on again June 28. The blue line also shows us some other features: the Canada Day weekend of lower activity on June 30 and July 1, and even time-of-day effects, with our sharpest peaks at 6-8am and 12-2pm (EDT) and a noticeable hook at 1am.

(( That first peak is mostly made up of countries in the Central European Timezone, UTC+1 (e.g. Germany, France, Poland). Central Europe’s peak is so sharp because Firefox users, like the populations of European countries, are concentrated mostly in that one timezone. The second peak is North and South America (e.g. United States, Brazil). It is broader because of how many timezones the Americas span (6), and because populations are dense in the Eastern and Pacific Timezones, which are 3 hours apart (UTC-5 to UTC-8). The noticeable hook is China and Indonesia. China has one timezone for its entire population (UTC+8), and Indonesia has three. ))

This blue line shows us how far we got in the last post: delivering the updates to the user.

The red line shows why that’s only part of the story, and why we need to look at populations in addition to just an individual user.

Ultimately we want to know how quickly we can reach our users with updated code. We want, on release day, to get our fancy new bells and whistles out to a population of users small enough that if something goes wrong we’re impacting as few of them as possible, but big enough that we can consider their experience representative of what the whole Firefox user population would experience were we to release to all of them.

To do this we have two levers: we can change how frequently Firefox asks for updates, and we can change how many of the Firefox installs that ask for updates actually get one right away. That’s about it.

So what happens if we change Firefox to check for updates every 6 hours instead of every 12? Well, that would ensure more users check for updates during the periods when updates are being offered, and it would increase the likelihood that a given user’s Firefox asks for an update while we’re offering it. It would raise the blue line a bit in those first hours.

What if we change the percentage of update requests that result in an offer? We could tune up or down the number of users who are offered the updates. That would also raise the blue line in those first hours. We could offer the update to more users, faster.
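To make the effect of these two levers concrete, here’s a back-of-envelope sketch with made-up numbers. It assumes update checks are spread uniformly through the day and that every profile is active, which real users certainly are not, so treat it as an illustration rather than a forecast.

```python
# Rough back-of-envelope (made-up numbers) for how the two levers combine.
# Assumes checks are spread uniformly and every profile is active.
def fraction_ready(hours_since_release, check_interval_hours, offer_rate):
    """Approximate fraction of profiles that have the update 'ready'."""
    # Expected number of update checks a profile has made since release...
    checks = hours_since_release / check_interval_hours
    # ...each of which results in an offer with probability offer_rate.
    return 1 - (1 - offer_rate) ** checks

# 12 hours after release, 25% offer rate:
print(fraction_ready(12, 12, 0.25))  # ~0.25 with 12-hour checks
print(fraction_ready(12, 6, 0.25))   # ~0.44 with 6-hour checks (blue line rises)
# Same check interval, doubled offer rate:
print(fraction_ready(12, 12, 0.50))  # ~0.50
```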

But neither of these things would necessarily increase the speed at which we hit that Goldilocks number of users that is both big enough to be representative, and small enough to be prudent. Why? The red line is why. There is a delay between a Firefox having an update and a user restarting it to take advantage of it.

Users who have an update won’t install it immediately (see how the red line is lower than the blue except when we turn updates off completely), and even if we turn updates off it doesn’t stop users who have the update from installing it (see how the red line continues even when the blue line is floored).

Even with our current practices of serving updates, more users receive the update than install it for at least the first week after release.

If we want to accelerate getting updated code to users, we need to control the red line. Which we don’t. And likely can’t.

I mean, we can try. When an update has been pending for long enough, you get a little arrow on your Firefox menu:

update_arrow.png

If you leave it even longer, we provide a larger piece of UI: a doorhanger.

restartDoorhanger

We can tune how long it takes to show these. We can show the doorhanger immediately when the update is ready, asking the user to stop what they’re doing and–

windows_update_blue

windows_update

windows_update_blue_lg

…maybe we should just wait until users update their own selves, and just offer some mild encouragement if they’ve put it off for, say, four days? We’ll give the user eight days before we show them anything as invasive as the doorhanger. And if they dismiss that doorhanger, we’ll just not show it again and trust they’ll (eventually) restart their browser.

…if only because Windows restarted their whole machines when they weren’t looking.

(( If you want your updates faster and more frequent, may I suggest installing Firefox Beta (updates every week) or Firefox Nightly (updates twice a day)? ))

This means that if the question is “When do all Firefox users update?” the answer is, essentially, “When they want to.” We can try and serve the updates faster, or try to encourage them to restart their browsers sooner… but when all is said and done our users will restart their browsers when they want to restart their browsers, and not a moment before.

Maybe in the future we’ll be able to smoothly update Firefox when we detect the user isn’t using it and when all the information they have entered can be restored. Maybe in the future we’ll be able to update seamlessly by porting the running state from instance to instance. But for now, given our current capabilities, we don’t want to dictate when a user has to restart their browser.

…but what if we could ship new features more quickly to an even more controlled segment of the release population… and do it without restarting the user’s browser? Wouldn’t that be even better than updates?

Well, we might have answers for that… but that’s a subject for another time.

:chutten


Perplexing Graphs: The Case of the 0KB Virtual Memory Allocations

Every Monday and Thursday around 3pm I check dev-telemetry-alerts to see if there have been any changes detected in the distribution of any of the 1500-or-so pieces of anonymous usage statistics we record in Firefox using Firefox Telemetry.

This past Monday there was one. It was a little odd.

Generally, when you’re measuring continuous variables (timings, memory allocations…) you don’t see too many of the same value. Sure, there are common values (2GB of physical memory, for instance), but generally you don’t suddenly see a quarter of all reports become 0.

That was weird.

So I did what I always do when I find an alert that no one’s responded to, and triaged it. Mostly this involves looking at it on telemetry.mozilla.org to see if it was still happening, whether it was caused by a change in submission volumes (could be that we’re suddenly hearing from a lot more users, and they all report just “0”, for example), or whether it was limited to a single operating system or architecture:

windowsVSIZE

Hello, Windows.

windowsx64VSIZE

Specifically: hello Windows 64-bit.

With these clues, :erahm was able to highlight for me a bug that might have contributed to this sudden change: enabling Control Flow Guard on Windows builds.

Control Flow Guard (CFG) is a feature of Windows 8.1 (Update 3) and 10 that inserts some runtime checks into your binary to ensure you only make sensible jumps. This protects against certain exploits where attackers force a binary to jump into strange places in the running program, causing Bad Things to happen.

I had no idea how a control flow integrity feature would result in 0-size virtual memory allowances, but when :erahm gives you a hint, you take it. I commented on the bug.

Luckily, I was taken seriously, so a new bug was filed and :tjr looked into it almost immediately. The most important clue came from :dmajor who had the smartest money in the room, and crucial help from :ted who was able to reproduce the bug.

It turns out that turning CFG on made our Virtual Memory allowances jump above two terabytes.

Now, to head off “Firefox iz eatang ur RAM!!!!111eleven” commentary: this is CFG’s fault, not ours. (Also: Virtual Memory isn’t RAM.)

In order to determine what parts of a binary are valid “indirect jump targets”, Windows needs to keep track of them all, and do so performantly enough that the jumps can still happen at speed. Windows does this by maintaining a map with a bit per possible jump location. The bit is 1 if it is a valid location to jump to, and 0 if it is not. On each indirect jump, Windows checks the bit for the jump location and interrupts the process if it was about to jump to a forbidden place.
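As a toy illustration of the idea (and emphatically not Windows’ actual data structure), the check amounts to something like this:

```python
# Toy sketch: one bit per possible jump target, consulted before every
# indirect jump. Not Windows' actual implementation.
valid_targets = bytearray(1024)   # toy bitmap covering 8192 addresses

def mark_valid(addr):
    valid_targets[addr // 8] |= 1 << (addr % 8)

def check_indirect_jump(addr):
    if not (valid_targets[addr // 8] >> (addr % 8)) & 1:
        raise RuntimeError("CFG violation: jump to a forbidden address")

mark_valid(0x40)
check_indirect_jump(0x40)   # marked valid: execution continues
check_indirect_jump(0x41)   # not marked valid: raises
```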

When running this on a 64-bit machine, this bitmap gets… big. Really big. Two Terabytes big. And that’s using an optimized way of storing data about the jump availability of up to 2^64 (18 quintillion) addresses. Windows puts this in the process’ storage allocations for its own recordkeeping reasons, which means that every 64-bit process with CFG enabled (on CFG-aware Windows versions (8.1 Update 3 and 10)) has a 2TB virtual memory allocation.
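One plausible back-of-envelope for where a number that size comes from, assuming the bitmap only needs to cover the 128TB of user-mode address space at roughly one bit for every 8 bytes of address space (my assumption for illustration, not a statement of Windows’ exact bookkeeping):

```python
# Back-of-envelope only: assumes 128 TB of user-mode address space and
# roughly one bit of bitmap per 8-byte granule of that space.
user_address_space = 128 * 2**40     # 128 TB, in bytes
bits = user_address_space // 8       # one bit per 8-byte granule
bitmap_bytes = bits // 8
print(bitmap_bytes / 2**40)          # 2.0, i.e. about two terabytes of virtual memory
```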

So. We have an abnormally-large value for Virtual Memory. How does that become 0?

Well, those of you with CS backgrounds (or who clicked on the “smartest money” link a few paragraphs back) will be thinking about the word “overflow”.

And you’d be wrong. Ish.

The raw number :ted was seeing was 2201166503936. This is the number of bytes in his virtual memory allocation, and it is a few powers of two above what we can fit in 32 bits. However, we report the number of kilobytes. The number of kilobytes is 2149576664, well underneath 2^32, which we all know (*eyeroll*) is 4294967296. So instead of a number about 512x too big to fit, we get one that can fit almost twice over.

Welll….

So we’re left with a number that should fit, being recorded as 0. So I tried some things and, sure enough, recording the number 2149576664 into any histogram did indeed record as 0. I filed a new bug.

Then I tried numbers plus or minus 1 around :ted’s magic number. They became zeros. I tried recording 2^31 + 1. Zero. I tried recording 2^32 – 1. Zero.

With a sinking feeling in my gut, I then tried recording 2^32 + 1. I got my overflow. It recorded as 1. 2^32 + 2 recorded as 2. And so on.

All numbers between 2^31 and 2^32 were being recorded as 0.

sensibleError

In a sensible language like Rust, assigning an unsigned value to a signed variable isn’t something you can do accidentally. You almost never want to do it, so why make it easy? And let’s make sure to warn the code author that they’re probably making a mistake while we’re at it.

In C++, however, you can silently convert from unsigned to signed. For values between 0 and 2^31 this doesn’t matter. For values between 2^31 and 2^32, this means you can turn a large positive number into a negative number somewhere between -2^31 and -1. Silently.

Telemetry Histograms don’t record negatives. We clamp them to 0. But something in our code was coercing our fancy unsigned 32-bit integer to a signed one before it was clamped to 0. And it was doing it silently. Because C++.
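Here’s a small sketch that re-creates the effect with :ted’s actual value, using Python’s ctypes to mimic what the silent C++ conversion does to a 32-bit signed integer:

```python
import ctypes

vsize_kb = 2201166503936 // 1024   # :ted's vsize in kilobytes: 2149576664
print(vsize_kb > 2**31)            # True: in the danger zone below 2**32

# Mimic C++'s silent unsigned -> signed conversion of a 32-bit value.
as_signed = ctypes.c_int32(vsize_kb).value
print(as_signed)                   # -2145390632

# Histograms clamp negative values to 0, so the sample is recorded as... 0.
print(max(as_signed, 0))           # 0
```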

Now that we’ve found the problem, fixed the problem, and documented the problem, we are collecting data about the data[citation] we may have lost because of the problem.

But to get there I had to receive an automated alert (which I had to manually check), split the data against available populations, become incredibly lucky and run it by :erahm who had an idea of what it might be, find a team willing to take me seriously, and then do battle with silent type coercion in a language that really should know better.

All in a day’s work, I guess?

:chutten

Firefox Telemetry Use Counters: Over-estimating usage, now fixed

Firefox Telemetry records the usage of certain web features via a mechanism called Use Counters. Essentially, for every document that Firefox loads, we record a “false” if the document didn’t use a counted feature, and a “true” if the document did use that counted feature.

(( We technically count it when the documents are destroyed, not loaded, since a document could use a feature at any time during its lifetime. We also count top-level documents (pages) separately from the count of all documents (including iframes), so we can see if it is the pages that users load that are using a feature or if it’s the subdocuments that the page author loads on the user’s behalf that are contributing the counts. ))

To save space, we decided to count the number of documents once, and the number of “true” values in each use counter. This saved users from having to tell us they didn’t use any of Feature 1, Feature 2, Feature 5, Feature 7, … the “no-use” use counters. They could just tell us which features they did see used, and we could work out the rest.

Only, we got it wrong.

The server-side adjustment of the counts took every use counter we were told about, and filled in the “false” values. A simple fix.

But it didn’t add in the “no-use” use counters. Users who didn’t see a feature used at all weren’t having their “false” values counted.

This led us to under-count the number of “false” values (since we only counted “falses” from users who had at least one “true”), which led us to overestimate the usage of features.
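A toy illustration, with made-up numbers, of how the missing “no-use” counters inflate the apparent usage rate:

```python
# Made-up numbers: three users, their total documents, and how many of those
# documents used the counted feature.
users = [
    {"docs": 100, "feature_true": 5},   # saw the feature used
    {"docs": 200, "feature_true": 0},   # never saw it: reported no counter at all
    {"docs": 300, "feature_true": 0},   # never saw it either
]

total_docs = sum(u["docs"] for u in users)
total_true = sum(u["feature_true"] for u in users)

# Buggy: only count "false" values from users who reported the counter at all.
buggy_false = sum(u["docs"] - u["feature_true"] for u in users if u["feature_true"] > 0)
print(total_true / (total_true + buggy_false))   # 0.05, i.e. 5% "usage"

# Fixed: every loaded document that didn't use the feature counts as a "false".
fixed_false = total_docs - total_true
print(total_true / (total_true + fixed_false))   # ~0.0083, well under 1%
```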

Of all the errors to have, this one was probably the most benign. In failing in the “overestimate” direction we didn’t incorrectly remove features that were being used more than measured… but we may have kept some features that we could have removed, costing Mozilla time and energy for their maintenance.

Once we detected the fault, we started addressing it. First, we started educating people whenever the topic came up in email and bugzilla. Second, :gfritzsche added a fancy Use Counter Dashboard that did a client-side adjustment using the correct “true” and “false” values for a given population.

Third, and finally, we fixed the server-side aggregator service to serve the correct values for all data, current and historical.

And that brings us to today: Use Counters are fixed! Please use them, they’re kind of cool.

:chutten

Before

After (4B more samples)

Anatomy of a Firefox Update

Alessio (:Dexter) recently landed a new ping for Firefox 56: the “update” ping with reason “ready”. It lets us know when a client’s Firefox has downloaded and installed an update and is only waiting for the user to restart the browser for the update to take effect.

In Firefox 57 he added a second reason for the “update” ping: reason “success”. This lets us know when the user’s started their newly-updated Firefox.

I thought I might as well see what sort of information we could glean from this new data, using the recent shipping of the new Firefox Quantum Beta as a case study.

This is exploratory work and you know what that means[citation needed]: Lots of pretty graphs!

First: the data we knew before the “update” ping: Nothing.

Well, nothing specific. We would know when a given client would use a newly-released build because their Telemetry pings would suddenly have the new version number in them. Whenever the user got around to sending them to us.

We do have data about installs, though. Our stub installer lets us know how and when installs are downloaded and applied. We compile those notifications into a dataset called download_stats. (for anyone who’s interested: this particular data collection isn’t Telemetry. These data points are packaged and sent in different ways.) Its data looks like this:

Screenshot-2017-9-29 Recent Beta Downloads.png

Whoops. Well that ain’t good.

On the left we have the tailing edge of users continuing to download installs for Firefox Beta 56 at the rate of 50-150 per hour… and then only a trace level of Firefox Beta 57 after the build was pushed.

It turns out that the stub installer notifications were being rejected as malformed. Luckily we kept the malformed reports around so that after we fixed the problem we could backfill the dataset:

Screenshot-2017-10-4 Recent Beta Downloads

Now that’s better. We can see up to 4000 installs per hour of users migrating to Beta 57, with distinct time-of-day effects. Perfectly cromulent, though the volume seems a little low.

But that’s installs, not updates.

What do we get with “update” pings? Well, for one, we can run queries rather quickly. Querying “main” pings to find the one where a user switched versions requires sifting through terabytes of data. The query below took two minutes to run:

Screenshot-2017-10-3 Users Updating to Firefox Quantum Beta 57(1)

The red line is update/ready: the number of pings we received in that hour telling us that the user had downloaded an update to Beta 57 and it was ready to go. The blue line is update/success: the number of pings we received that hour telling us the user had started their new Firefox Quantum Beta instance.

And here it is per-minute, just because we can:

Screenshot-2017-10-3 Users Updating to Firefox Quantum Beta 57(2).png

September 30 and October 1 were the weekend. As such, we’d expect their volumes to be lower than the weekdays surrounding them. However, looking at the per-minute graph for update/ready (red), why is Friday the 29th the same height as Saturday the 30th? Fridays are usually noticeably busier than Saturdays.

Friday was Navaratri in India (one of our largest markets for Beta), but that’s a multi-day festival that started on the Wednesday (and other sources of client data show only a 15% or so dip in user activity on that date in India), so it’s unlikely to have caused a single day’s dip. Friday wasn’t a holiday at all in any of our other larger markets. There weren’t any problems with the updater or “update” ping ingestion. There haven’t been any dataset failures that would explain it. So what gives?

It turns out that Friday’s numbers weren’t low: Saturday’s were high. In order to improve the stability of what was going to become the Firefox 56 release, we began on the 26th to offer updates to the new Firefox Quantum Beta to only half of updating Firefox Beta users. To the other half we offered an update to the Firefox 56 Release Candidate.

What is a Release Candidate? Well, for Firefox it is the stabilized, optimized, rebuilt, rebranded version of Firefox that is just about ready to ship to our release population. It is the last chance we have to catch things before it reaches hundreds of millions of users.

It wasn’t until late on the 29th that we opened the floodgates and let the rest of the Beta users update to Beta 57. This contributed to a higher than expected update volume on the 30th, allowing the Saturday numbers to be nearly as voluminous as the Friday ones. You can actually see exactly when we made the change: there’s a sharp jump in the red line late on September 29 that you can see clearly on both “update”-ping-derived plots.

That’s something we wouldn’t see in “main” pings: they only report what version the user is running, not what version they downloaded and when. And that’s not all we get.

The “update”-ping-fueled graphs have two lines, which naturally piques my curiosity about how they might relate to each other. Visually, the update/ready line (red) is almost always higher than the update/success line (blue). This means that we have more clients downloading and installing updates than we have clients restarting into the updated browser in those intervals. We can count these clients by subtracting the blue line from the red and summing over time:

Screenshot-2017-10-3 Outstanding Updates for Users Updating to Firefox Quantum Beta 57
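As a minimal sketch of that subtraction (with toy per-hour counts standing in for the real query results):

```python
import pandas as pd

# Toy per-hour counts of "update" pings by reason; in practice these come
# from counting pings per hour as in the graphs above.
hours = pd.date_range("2017-09-26", periods=6, freq="H")
ready   = pd.Series([1000, 1200, 900, 800, 700, 600], index=hours)
success = pd.Series([ 200,  500, 600, 600, 550, 500], index=hours)

# Clients who got the update in an hour, minus clients who started running it
# in that hour, summed over time: clients still sitting on a pending update.
outstanding = (ready - success).cumsum()
print(outstanding)
```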

There are, as of the time I was drafting this post, about one half of one million Beta clients who have the new Firefox Quantum Beta… but haven’t run it yet.

Given the delicious quantity of improvements in the new Firefox Quantum Beta, they’re in for a pleasant surprise when they do.

And you can join in, if you’d like.

:chutten

(NOTE: earlier revisions of this post erroneously said download_stats counted updater notifications. It counts stub installer notifications. I have reworded the post to correct for this error. Many thanks to :ddurst for catching that)

Two Days, or How Long Until The Data Is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).
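As a sketch of how those two pieces add up for a single ping (the timestamps here are made up, and the names are illustrative rather than the actual ping schema):

```python
from datetime import datetime

# Illustrative timestamps only; not the real ping fields.
event_time    = datetime(2017, 9, 5, 23, 50)  # user opens a tab
ping_created  = datetime(2017, 9, 6,  0,  0)  # ping assembled at local midnight
ping_received = datetime(2017, 9, 6,  8, 30)  # server receives it the next morning

recording_delay  = ping_created - event_time      # 0:10:00
submission_delay = ping_received - ping_created   # 8:30:00
client_delay     = recording_delay + submission_delay
print(recording_delay, submission_delay, client_delay)
```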

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you couldn’t tell on the day itself. All the tabs people opened late at night wouldn’t even be in pings yet, and anyone who put their computer to sleep wouldn’t send their pings until they woke their computer on the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.

How do we know this? We measured it:

Screenshot-2017-9-12 Client "main" Ping Delay for Latest Version(1).png

(Remember what I said about Labour Day? That’s the exceptional case on beta 56)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

 

The Photonization of about:telemetry

This summer I mentored :flyingrub for a Google Summer of Code project to redesign about:telemetry. You can read his Project Submission Document here.

Background

Google Summer of Code is a program funded by Google to pay students worldwide to contribute in meaningful ways to open source projects.

about:telemetry is a piece of Firefox’s UI that allows users to inspect the anonymous usage data we collect to improve Firefox. For instance, we look at the maximum number of tabs our users have open during a session (someone or several someones have more than one thousand tabs open!). If you open up a tab in Firefox and type in about:telemetry (then press Enter), you’ll see the interface we provide for users to examine their own data.

Mozilla is committed to putting users in control of their data. about:telemetry is a part of that.

Then

When :flyingrub started work on about:telemetry, it looked like this (Firefox 55):

oldAboutTelemetry

It was… functional. Mostly it was intended to be used by developers to ensure that data collection changes to Firefox actually changed the data that was collected. It didn’t look like part of Firefox. It didn’t look like any other about: page (browse to about:about to see a list of about: pages). It didn’t look like much of anything.

Now

After a few months of polishing and tweaking and input from UX, it looks like this (Firefox Nightly 57):

newAboutTelemetry

Well that’s different, isn’t it?

It has been redesigned to follow the Photon Design System so that it matches how Firefox 57 looks. It has been reorganized into more functional groups, has a new top-level search, and dozens of small tweaks to usability and visibility so you can see more of your data at once and get to it faster.

newAboutTelemetry-histograms.png

Soon

Just because Google Summer of Code is done doesn’t mean about:telemetry is done. Work on about:telemetry continues… and if you know some HTML, CSS, and JavaScript you can help out! Just pick a bug from the “Depends on” list here, and post a comment asking if you can help out. We’ll be right with you to help get you started. (Though you may wish to read this first, since it is more comprehensive than this blog post.)

Even if you can’t or don’t want to help out, you can sneak a peek at the new design by downloading and using Firefox Nightly. It is blazing fast with a slick new design and comes with excellent new features to help it be your agent on the Web.

We expect :flyingrub will continue to contribute to Firefox (as his studies allow, of course. He is a student, and his studies should be first priority now that GSoC is done), and we thank him very much for all of his good work this Summer.

:chutten

Data Science is Hard: Client Delays

Delays suck, but unmeasured delays suck more. So let’s measure them.

I’ve previously talked about delays as they relate to crash pings. This time we’re looking at the core of Firefox Telemetry data collection: the “main” ping. We’ll be looking at a 10% sample of all “main” pings submitted on Tuesday, January 10th[1].

In my previous post on delays, I defined five types of delay: recording, submission, aggregation, migration, and query scheduling. This post is about delays on the client side of the equation, so we’ll be focusing on the first two: recording, and submission.

Recording Delay

How long does it take from something happening, to having a record of it happening? We count HTTP response codes (as one does), so how much time passes from that first HTTP response to the time when that response’s code is packaged into a ping to be sent to our servers?

output_20_1

This is a Cumulative Distribution Function, or CDF. The ones in this post show you what proportion (0% – 100%) of the “main” pings we’re looking at arrived with data that falls within a certain timeframe (0 – 96 hours). So in this case, look at the red, “aurora”-branch line. It crosses the 0.9 line on the y-axis at about 8 on the x-axis. This means 90% of the pings had a recording delay of 8 hours or less.

Which is fantastic news, especially since every other channel (release and beta vying for fastest speeds) gets more of its pings in even faster. 90% of release pings have a recording delay of at most 4 hours.

And notice that shelf at 24 hours, where every curve basically jumps to 100%? If users leave their browsers open for longer than a day, we cut a fresh ping at midnight. Glad to see evidence that it’s working.

All in all it shows that we can expect recording delays of under 30min for most pings across all channels. This is not a huge source of delay.
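If you’re curious how a curve like this is built and read, here’s a small sketch using a made-up delay distribution; the “where does it cross 0.9?” question is just asking for the 90th percentile:

```python
import numpy as np

# Empirical CDF of recording delays, with toy data (in hours) standing in
# for the real per-ping delays.
delays = np.random.exponential(scale=3.0, size=100_000)  # made-up distribution

xs = np.sort(delays)
ys = np.arange(1, len(xs) + 1) / len(xs)   # proportion of pings with delay <= x

# Where the curve crosses y = 0.9 is simply the 90th percentile of the delays.
idx = np.searchsorted(ys, 0.9)
print(xs[idx], np.percentile(delays, 90))  # both ~6.9 hours for this toy data
```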

Submission Delay

With all that data finally part of a “main” ping, how long before the servers are told? For now, Telemetry has to wait for the user to restart their Firefox before it is able to send its pings. How long can that take?

output_23_1

Ouch.

Now we see aurora is no longer the slowest, and has submission delays very similar to release’s submission delays.  The laggard is now beta… and I really can’t figure out why. If Beta users are leaving their browsers open longer, we’d expect to see them be on the slower side of the “Recording Delay CDF” plot. If Beta users are leaving their browser closed longer, we’d expect them to show up lower on Engagement Ratio plots (which they don’t).

A mystery.

Not a mystery is that nightly has the fastest submission times. It receives updates every day so users have an incentive to restart their browsers often.

Comparing Submission Delay to Recording Delay, you can see how this is where we’re focusing most of our “Get More Data, Faster” attentions. If we wait for 90% of “main” pings to arrive, then we have to wait at least 17 hours for nightly data, 28 hours for release and aurora… and over 96 hours for beta.

And that’s just Submission Delay. What if we measured the full client -> server delay for data?

Combined Client Delay

output_27_1

With things the way they were on 2017-01-10, to get 90% of “main” pings we need to wait a minimum of 22 hours (nightly) and a maximum of… you know what, I don’t even know. I can’t tell where beta might cross the 0.9 line, but it certainly isn’t within 96 hours.

If we limit ourselves to 80% we’re back to a much more palatable 11 hours (nightly) to 27 hours (beta). But that’s still pretty horrendous.

I’m afraid things are actually even worse than I’m making them out to be. We rely on getting counts out of “main” pings. To count something, you need to count every single individual something. This means we need 100% of these pings, or as near as we can get. Even nightly pings take longer than 96 hours to get us more than 95% of the way there.

What do we use “main” pings to count? Amongst other things, “usage hours” or “how long has Firefox been open”. This is imperative to normalizing crash information properly so we can determine the health and stability of a release.

As you can imagine, we’re interested in knowing this as fast as possible. And as things stood a couple of Tuesdays ago, we have a lot of room for improvement.

For now, expect more analyses like this one (and more blog posts like this one) examining how slowly or quickly we can possibly get our data from the users who generate it to the Mozillians who use it to improve Firefox.

:chutten

[1]: Why did I look at pings from 2017-01-10? It was a recent Tuesday (less weekend effect) well after Gregorian New Year’s Day, well before Chinese New Year’s Day, and even a decent distance from Epiphany. Also, 01-10 is a mirror, which I thought was neat.