When do All Firefox Users Update?

Last time we talked about updates I wrote about what goes into an individual Firefox user’s ability to update to a new release. We looked into how often Firefox checks for updates, and how we sometimes lie and say that there isn’t an update even after release day.

But how does that translate to a population?

Well, let’s look at some pictures. First, the number of “update” pings we received from users during the recent Firefox 61 release:

(Figure: “update” ping volume during the Firefox 61 release)

This is a real-time look at how many updates were received or installed by Firefox users each minute. There is a plateau’s edge on June 26th shortly after the update went live (around 10am PDT), and then a drop almost exactly 24 hours later when we turned updates off. This plot isn’t the best for examining the mechanics of how this works, though, since it shows the volume of all types of “update” pings from all versions: it includes users finally installing Firefox Quantum 57 from last November as well as users being granted the fresh update to Firefox 61.

Now that it’s been a week we can look at our derived dataset of “update” pings and get a more nuanced view (actually, latency on this dataset is much lower than a week, but it’s been at least a week). First, here’s the same graph, but filtered to look at only clients who are letting us know they have the Firefox 61 update (“update” ping, reason: “ready”) or they have already updated and are running Firefox 61 for the first time after update (“update” ping, reason: “success”):

(Figure: “update” ping volume, Firefox 61 only)
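
(( If you’re curious what that filtering looks like in practice, here’s a minimal sketch in PySpark. The dataset path and the column names — update_parquet, reason, target_version, submission_timestamp — are my guesses for illustration, not the real schema. ))

```python
# Rough sketch only: count "update" pings per hour for clients reporting on the
# Firefox 61 update, split by ping reason. Path and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
updates = spark.read.parquet("s3://telemetry-parquet/update_parquet/v1/")

fx61 = (
    updates
    .where(F.col("target_version").startswith("61"))   # only the Firefox 61 update
    .where(F.col("reason").isin("ready", "success"))    # received it vs. now running it
    .withColumn("hour", F.date_trunc("hour", F.col("submission_timestamp")))
)

# Ping volume per hour, split by reason: the two lines in the graphs below.
fx61.groupBy("hour", "reason").count().orderBy("hour").show()
```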

First thing to notice is how closely the two graphs line up. This shows how, during an update, the volume of “update” pings is dominated by those users who are updating to the most recent version.

And it’s also nice validation that we’re looking at the same data historically that we were in real time.

To step into the mechanics, let’s break the graph into its two constituent parts: the users reporting that they’ve received the update (reason: “ready”) and the users reporting that they’re now running the updated Firefox (reason: “success”).

(Figures: “update” ping volume by reason, stacked and unstacked)

The first graph shows the two lines stacked for maximum similarity to the graphs above. The second unstacks the two so we can examine them individually.

It is now much clearer to see how and when we turned updates on and off during this release. We turned them on June 26, off June 27, then on again June 28. The blue line also shows us some other features: the Canada Day weekend of lower activity June 30 and July 1, and even time-of-day effects where our sharpest peaks are (EDT) 6-8am, 12-2pm, and a noticeable hook at 1am.

(( That first peak is mostly made up of countries in the Central European Timezone UTC+1 (e.g. Germany, France, Poland). Central Europe’s peak is so sharp because Firefox users, like the populations of European countries, are concentrated mostly in that one timezone. The second peak is North and South America (e.g. United States, Brazil). It is broader because of how many timezones the Americas span (6) and how populations are dense in the Eastern Timezone and Pacific Timezone which are 3 hours apart (UTC-5 to UTC-8). The noticeable hook is China and Indonesia. China has one timezone for its entire population (UTC+8), and Indonesia has three. ))

This blue line shows us how far we got in the last post: delivering the updates to the user.

The red line shows why that’s only part of the story, and why we need to look at populations in addition to just an individual user.

Ultimately we want to know how quickly we can reach our users with updated code. We want, on release day, to get our fancy new bells and whistles out to a population of users small enough that if something goes wrong we’re impacting the fewest number of them, but big enough that we can consider their experience representative of what the whole Firefox user population would experience were we to release to all of them.

To do this we have two levers: we can change how frequently Firefox asks for updates, and we can change how many of the Firefox installs that ask for updates actually get them right away. That’s about it.

So what happens if we change Firefox to check for updates every 6 hours instead of every 12? Well, that would ensure more users check for updates during the periods when updates are being offered. It would also increase the likelihood of a given user being offered the update when their Firefox asks for one. It would raise the blue line a bit in those first hours.

What if we change the percentage of update requests that result in offers? We could tune up or down the number of users who are offered the updates. That would also raise the blue line in those first hours. We could offer the update to more users faster.

But neither of these things would necessarily increase the speed at which we hit that Goldilocks number of users that is both big enough to be representative, and small enough to be prudent. Why? The red line is why. There is a delay between a Firefox having an update and a user restarting it to take advantage of it.

Users who have an update won’t install it immediately (see how the red line is lower than the blue except when we turn updates off completely), and even if we turn updates off it doesn’t stop users who have the update from installing it (see how the red line continues even when the blue line is floored).

Even with our current practices of serving updates, more users receive the update than install it for at least the first week after release.

If we want to accelerate users getting updated code, we need to control the red line. Which we don’t. And likely can’t.

I mean, we can try. When an update has been pending for long enough, you get a little arrow on your Firefox menu:

(Image: the update arrow on the Firefox menu button)

If you leave it even longer, we provide a larger piece of UI: a doorhanger.

(Image: the restart doorhanger)

We can tune how long it takes to show these. We can show the doorhanger immediately when the update is ready, asking the user to stop what they’re doing and–

(Images: Windows Update screens)

…maybe we should just wait until users update their own selves, and just offer some mild encouragement if they’ve put it off for, say, four days? We’ll give the user eight days before we show them anything as invasive as the doorhanger. And if they dismiss that doorhanger, we’ll just not show it again and trust they’ll (eventually) restart their browser.

…if only because Windows restarted their whole machines when they weren’t looking.

(( If you want your updates faster and more frequent, may I suggest installing Firefox Beta (updates every week) or Firefox Nightly (updates twice a day)? ))

This means that if the question is “When do all Firefox users update?” the answer is, essentially, “When they want to.” We can try and serve the updates faster, or try to encourage them to restart their browsers sooner… but when all is said and done our users will restart their browsers when they want to restart their browsers, and not a moment before.

Maybe in the future we’ll be able to smoothly update Firefox when we detect the user isn’t using it and when all the information they have entered can be restored. Maybe in the future we’ll be able to update seamlessly by porting the running state from instance to instance. But for now, given our current capabilities, we don’t want to dictate when a user has to restart their browser.

…but what if we could ship new features more quickly to an even more controlled segment of the release population… and do it without restarting the user’s browser? Wouldn’t that be even better than updates?

Well, we might have answers for that… but that’s a subject for another time.

:chutten

Faster Event Telemetry with “event” Pings

Event Telemetry is the means by which we can send ordered interaction data from Firefox users back to Mozilla where we can use it to make product decisions.

For example, we know from a histogram that the most popular way of opening the Developer Tools in Firefox Beta 62 is by the shortcut key (Ctrl+Shift+I). And it’s nice to see that the number of times the JavaScript Debugger was opened was roughly 1/10th of the number of times the shortcut key was used.

…but are these connected? If so, how?

And the JavaScript Profiler is opened only half as often as the Debugger. Why? Isn’t it easy to find that panel from the Debugger? Are users going there directly from the DOM view or is it easier to find from the Debugger?

To determine what parts of Firefox our users are having trouble finding or using, we often need to know the order things happen. That’s where Event Telemetry comes into play: we timestamp things that happen all over the browser so we can see what happens and in what order (and a little bit of how long it took to happen).

Event Telemetry isn’t new: it’s been around for about two years now. And for those two years it has been piggy-backing on the workhorse of the Firefox Telemetry system: the “main” ping.

The “main” ping carries a lot of information and is usually sent once per time you close your Firefox (or once per day, whichever comes first). As such, Event Telemetry was constrained in how it was able to report this ordered data. It takes two whole days to get 95% of it (because that’s how long it takes us to get “main” pings), and it isn’t allowed to send more than one thousand events per process (lest it balloon the size of the “main” ping, causing problems).

This makes the data slow, and possibly incomplete.

With the landing of bug 1460595 in Firefox Nightly 63 last week, Event Telemetry now has its own ping: the “event” ping.

The “event” ping maintains the same 1000-events-per-process-per-ping limit as the “main” ping, but can send pings as frequently as one ping every ten minutes. Typically, though, it waits a full hour before sending, as there isn’t any rush. A maximum delay of an hour still makes for low-latency data, and a minimum delay of ten minutes is unlikely to be overrun by event recordings, which means we should get all of the events.
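
Here’s one plausible reading of that send policy, sketched in Python. This is illustrative pseudologic using the numbers from this post, not the actual scheduler code in Firefox.

```python
# Illustrative sketch of the "event" ping send policy described above; the real
# implementation lives in Firefox's Telemetry code, not here.
MIN_DELAY_S = 10 * 60       # don't send more often than every ten minutes
MAX_DELAY_S = 60 * 60       # don't sit on events for more than an hour
MAX_EVENTS_PER_PROCESS = 1000

def should_send(seconds_since_last_ping, events_recorded_in_process):
    if seconds_since_last_ping < MIN_DELAY_S:
        return False                                   # too soon, keep batching
    if events_recorded_in_process >= MAX_EVENTS_PER_PROCESS:
        return True                                    # buffer is full, send now
    return seconds_since_last_ping >= MAX_DELAY_S      # otherwise wait the hour out

assert should_send(5 * 60, 1000) is False   # (assumed) even a full buffer respects the minimum
assert should_send(20 * 60, 1000) is True   # full buffer past the ten-minute minimum
assert should_send(61 * 60, 3) is True      # an hour has passed, send what we have
```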

This means it takes less time to receive data that is more likely to be complete. This in turn means we can use less of it to get our answers. And it means more efficiency in our decision-making process, which is important when you’re competing against giants.

If you use Event Telemetry to answer your questions with data, now you can look forward to being able to do so faster and with less worry about losing data along the way.

And if you don’t use Event Telemetry to answer your questions, maybe now would be a good time to start.

The “event” ping landed in Firefox Nightly 63 (build id 20180627100027) and I hope to have it uplifted to Firefox Beta 62 in the coming days.

Thanks to :sunahsuh for her excellent work reviewing the proposal and in getting the data into the derived datasets so they can be easily queried, and further thanks to the Data Team for their support.

:chutten

Some More Very Satisfying Graphs

I guess I just really like graphs that step downwards:

(Figure: Telemetry Budget Forecasting, average ping size)

Earlier this week :mreid noticed that our Nightly population suddenly started sending us, on average, 150 fewer kilobytes (uncompressed) of data per ping. And they started doing this in the middle of the previous week.

Step 1 was to panic that we were missing information. However, no one had complained yet and we can usually count on things that break to break loudly, so we cautiously-optimistically put our panic away.

Step 2 was to see if the number of pings changed. It could be we were being flooded with twice as many pings at half the size, for the same volume. This was not the case:

(Figure: Telemetry Budget Forecasting, ping counts)

Step 3 was to do some code archaeology to try and determine the “culprit” change that was checked into Firefox and resulted in us sending so much less data. We quickly hit upon the removal of BrowserUITelemetry and that was that.

…except… when I went to thank :Standard8 for removing BrowserUITelemetry and saving us and our users so much bandwidth, he was confused. To the best of his knowledge, BrowserUITelemetry was already not being sent. And then I remembered that, indeed, back in March :janerik had been responsible for stopping many things like BrowserUITelemetry from being sent (since they were unmaintained and unused).

So I fired up an analysis notebook and started poking to see if I could find out what parts of the payload had suddenly decreased in size. Eventually, I generated a plot that showed quite clearly that it was the keyedHistograms section that had decreased so radically.

(Figure: main ping size by payload section, from the analysis notebook)
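
The analysis itself boils down to something like the sketch below. The real thing ran in a Spark notebook over the full dataset; here I’m assuming `pings` is simply an iterable of already-parsed ping dictionaries, which is an approximation, not the notebook’s actual code.

```python
import json
from collections import defaultdict

# Rough sketch: for each "main" ping, measure how many bytes each top-level
# payload section contributes when re-serialized, then average per section.
def section_sizes(ping):
    payload = ping.get("payload", {})
    return {section: len(json.dumps(value)) for section, value in payload.items()}

def mean_section_sizes(pings):
    totals, count = defaultdict(int), 0
    for ping in pings:
        for section, size in section_sizes(ping).items():
            totals[section] += size
        count += 1
    return {section: total / count for section, total in totals.items()}

# before = mean_section_sizes(pings_before_the_drop)
# after = mean_section_sizes(pings_after_the_drop)
# ...then plot before vs. after: keyedHistograms is the section that shrank.
```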

Around the same time :janerik found the culprit in the list of changes that went into the build: we are no longer sending a couple of incredibly-verbose keyed histograms because their information is now much more readily available in profiles.

The power of cleaning up old code: removing 150kb from the average “main” ping sent multiple times per day by each and every Firefox Nightly user.

Very satisfying.

:chutten

Perplexing Graphs: The Case of the 0KB Virtual Memory Allocations

Every Monday and Thursday around 3pm I check dev-telemetry-alerts to see if there have been any changes detected in the distribution of any of the 1500-or-so pieces of anonymous usage statistics we record in Firefox using Firefox Telemetry.

This past Monday there was one. It was a little odd.

(Figure: the alerted distribution)

Generally, when you’re measuring continuous variables (timings, memory allocations…) you don’t see too many of the same value. Sure, there are common values (2GB of physical memory, for instance), but generally you don’t suddenly see a quarter of all reports become 0.

That was weird.

So I did what I always do when I find an alert that no one’s responded to, and triaged it. Mostly this involves looking at it on telemetry.mozilla.org to see if it was still happening, whether it was caused by a change in submission volumes (could be that we’re suddenly hearing from a lot more users, and they all report just “0”, for example), or whether it was limited to a single operating system or architecture:

(Figure: the virtual memory size distribution, Windows only)

Hello, Windows.

(Figure: the virtual memory size distribution, 64-bit Windows only)

Specifically: hello Windows 64-bit.
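
(( That kind of split is just a groupby. A rough sketch, assuming a pandas DataFrame of individual reports with “os”, “arch”, and “vsize_kb” columns; those names are mine, not the real schema. ))

```python
import pandas as pd

# Illustrative triage sketch: what fraction of virtual-memory-size reports are
# exactly zero, split by OS and architecture?
def zero_fraction_by_platform(reports: pd.DataFrame) -> pd.Series:
    return (
        reports.assign(is_zero=reports["vsize_kb"] == 0)
               .groupby(["os", "arch"])["is_zero"]
               .mean()
               .sort_values(ascending=False)
    )

# zero_fraction_by_platform(reports)
# -> Windows / x86-64 floats to the top, matching the plots above.
```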

With these clues, :erahm was able to highlight for me a bug that might have contributed to this sudden change: enabling Control Flow Guard on Windows builds.

Control Flow Guard (CFG) is a feature of Windows 8.1 (Update 3) and 10 that inserts some runtime checks into your binary to ensure you only make sensible jumps. This protects against certain exploits where attackers force a binary to jump into strange places in the running program, causing Bad Things to happen.

I had no idea how a control flow integrity feature would result in 0-size virtual memory allowances, but when :erahm gives you a hint, you take it. I commented on the bug.

Luckily, I was taken seriously, so a new bug was filed and :tjr looked into it almost immediately. The most important clue came from :dmajor who had the smartest money in the room, and crucial help from :ted who was able to reproduce the bug.

It turns out that turning CFG on made our Virtual Memory allowances jump above two terabytes.

Now, to head off “Firefox iz eatang ur RAM!!!!111eleven” commentary: this is CFG’s fault, not ours. (Also: Virtual Memory isn’t RAM.)

In order to determine what parts of a binary are valid “indirect jump targets”, Windows needs to keep track of them all, and do so performantly enough that the jumps can still happen at speed. Windows does this by maintaining a map with a bit per possible jump location. The bit is 1 if it is a valid location to jump to, and 0 if it is not. On each indirect jump, Windows checks the bit for the jump location and interrupts the process if it was about to jump to a forbidden place.

When running this on a 64-bit machine, this bitmap gets… big. Really big. Two Terabytes big. And that’s using an optimized way of storing data about the jump availability of up to 2^64 (18 quintillion) addresses. Windows puts this in the process’ storage allocations for its own recordkeeping reasons, which means that every 64-bit process with CFG enabled (on CFG-aware Windows versions (8.1 Update 3 and 10)) has a 2TB virtual memory allocation.
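
Back-of-the-envelope, a bitmap at that scale lands in the terabytes. For example (and this is my own rough arithmetic under assumed parameters, not Windows’ actual CFG layout), one bit for every 16-byte-aligned potential jump target across a 256TB address range works out to exactly 2TB:

```python
# Rough arithmetic only -- not Windows' actual CFG bitmap encoding, just a
# demonstration of the scale. Assume one bit per 16-byte-aligned potential
# jump target across a 256 TB (2**48 byte) address range:
address_range_bytes = 2 ** 48          # 256 TB of virtual address space (assumed)
bytes_per_slot = 16                    # one possible jump target per 16 bytes (assumed)
bitmap_bytes = (address_range_bytes // bytes_per_slot) // 8

print(bitmap_bytes)                    # 2199023255552
print(bitmap_bytes / 2 ** 40)          # 2.0 -- two terabytes, the order of magnitude observed
```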

So. We have an abnormally-large value for Virtual Memory. How does that become 0?

Well, those of you with CS backgrounds (or who clicked on the “smartest money” link a few paragraphs back), will be thinking about the word “overflow”.

And you’d be wrong. Ish.

The raw number :ted was seeing was 2201166503936. This is the number of bytes in his virtual memory allocation, and it is a few powers of two above what we can fit in 32 bits. However, we report the number of kilobytes. The number of kilobytes is 2149576664, well underneath the maximum value you can store in an unsigned 32-bit integer, which we all know (*eyeroll*) is 4294967295. So instead of a number about 512x too big to fit, we get one that can fit almost twice over.
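
Just to spell the arithmetic out:

```python
vsize_bytes = 2_201_166_503_936        # the raw value :ted observed
vsize_kb = vsize_bytes // 1024         # Telemetry reports kilobytes

print(vsize_kb)                        # 2149576664
print(vsize_kb < 2 ** 32)              # True  -- fits in an unsigned 32-bit integer
print(vsize_kb > 2 ** 31)              # True  -- but not in a signed one (foreshadowing)
```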

Welll….

So we’re left with a number that should fit, being recorded as 0. So I tried some things and, sure enough, recording the number 2149576664 into any histogram did indeed record as 0. I filed a new bug.

Then I tried numbers plus or minus 1 around :ted’s magic number. They became zeros. I tried recording 2^31 + 1. Zero. I tried recording 2^32 - 1. Zero.

With a sinking feeling in my gut, I then tried recording 2^32 + 1. I got my overflow. It recorded as 1. 2^32 + 2 recorded as 2. And so on.

All numbers between 2^31 and 2^32 were being recorded as 0.

sensibleError

In a sensible language like Rust, assigning an unsigned value to a signed variable isn’t something you can do accidentally. You almost never want to do it, so why make it easy? And let’s make sure to warn the code author that they’re probably making a mistake while we’re at it.

In C++, however, you can silently convert from unsigned to signed. For values between 0 and 2^31 this doesn’t matter. For values between 2^31 and 2^32, this means you can turn a large positive number into a negative number somewhere between -2^31 and -1. Silently.

Telemetry Histograms don’t record negatives. We clamp them to 0. But something in our code was coercing our fancy unsigned 32-bit integer to a signed one before it was clamped to 0. And it was doing it silently. Because C++.
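
You can reproduce the effect without any C++ at all. Here’s a small illustration (not the actual Telemetry code): reinterpret the same value as a signed 32-bit integer, then clamp negatives to zero.

```python
import struct

# Illustration of the failure mode described above, not the real accumulation code:
# silently reinterpret an unsigned 32-bit value as signed, then clamp negatives to 0.
def record(value_u32: int) -> int:
    as_signed = struct.unpack("<i", struct.pack("<I", value_u32))[0]  # silent u32 -> i32
    return max(as_signed, 0)                                          # clamp negatives to 0

print(record(2149576664))    # 0 -- :ted's value lands between 2**31 and 2**32
print(record(2 ** 31 - 1))   # 2147483647 -- anything below 2**31 survives intact
print(record(2 ** 32 - 1))   # 0 -- everything from 2**31 up to 2**32 is lost
```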

Now that we’ve found the problem, fixed the problem, and documented the problem, we are collecting data about the data[citation] we may have lost because of the problem.

But to get there I had to receive an automated alert (which I had to manually check), split the data against available populations, become incredibly lucky and run it by :erahm who had an idea of what it might be, find a team willing to take me seriously, and then do battle with silent type coercion in a language that really should know better.

All in a day’s work, I guess?

:chutten

Firefox Telemetry Use Counters: Over-estimating usage, now fixed

Firefox Telemetry records the usage of certain web features via a mechanism called Use Counters. Essentially, for every document that Firefox loads, we record a “false” if the document didn’t use a counted feature, and a “true” if the document did use that counted feature.

(( We technically count it when the documents are destroyed, not loaded, since a document could use a feature at any time during its lifetime. We also count top-level documents (pages) separately from the count of all documents (including iframes), so we can see if it is the pages that users load that are using a feature or if it’s the subdocuments that the page author loads on the user’s behalf that are contributing the counts. ))

To save space, we decided to count the number of documents once, and the number of “true” values in each use counter. This saved users from having to tell us they didn’t use any of Feature 1, Feature 2, Feature 5, Feature 7, … the “no-use” use counters. They could just tell us which features they did see used, and we could work out the rest.

Only, we got it wrong.

The server-side adjustment of the counts took every use counter we were told about and filled in the “false” values: a simple calculation.

But it didn’t add in the “no-use” use counters. Users who didn’t see a feature used at all weren’t having their “false” values counted.

This led us to under-count the number of “false” values (since we only counted “falses” from users who had at least one “true”), which led us to overestimate the usage of features.
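
Here’s a toy illustration of the mistake, with made-up numbers:

```python
# Toy illustration of the aggregation bug with made-up numbers, not real data.
# Each client reports how many documents it loaded and how many used the feature;
# clients that never saw the feature used don't mention it at all.
clients = [
    {"docs": 100, "feature_true": 5},   # saw the feature used 5 times
    {"docs": 200, "feature_true": 1},   # saw it used once
    {"docs": 700},                      # never saw it used: no use counter reported
]

total_docs = sum(c["docs"] for c in clients)
total_true = sum(c.get("feature_true", 0) for c in clients)

# Buggy adjustment: only fill in "false" values for clients that reported the counter.
buggy_false = sum(c["docs"] - c["feature_true"] for c in clients if "feature_true" in c)
print(total_true / (total_true + buggy_false))    # 0.02  -- 2% of documents, overestimated

# Correct adjustment: every document that didn't use the feature is a "false".
correct_false = total_docs - total_true
print(total_true / (total_true + correct_false))  # 0.006 -- 0.6% of documents
```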

Of all the errors to have, this one was probably the more benign. Failing in the “overestimate” direction means we never made a feature look less used than it really was, so we didn’t incorrectly remove any features that were in active use… but we may have kept some features that we could have removed, costing Mozilla time and energy for their maintenance.

Once we detected the fault, we started addressing it. First, we started educating people whenever the topic came up in email and bugzilla. Second, :gfritzsche added a fancy Use Counter Dashboard that did a client-side adjustment using the correct “true” and “false” values for a given population.

Third, and finally, we fixed the server-side aggregator service to serve the correct values for all data, current and historical.

And that brings us to today: Use Counters are fixed! Please use them, they’re kind of cool.

:chutten

(Figures: Before, and After (4B more samples))

Data Science is Hard: What’s in a Dashboard

(Image: the dashboard a couple of weeks after launch. The data is fake, don’t get excited.)

Firefox Quantum is here! Please do give it a go. We have been working really hard on it for quite some time, now. We’re very proud of what we’ve achieved.

To show Mozillians how the release is progressing, and show off a little about what cool things we can learn from the data Telemetry collects, we’ve built a few internal dashboards. The Data Team dashboard shows new user count, uptake, usage, install success, pages visited, and session hours (as seen above, with faked data). If you visit one of our Mozilla Offices, you may see it on the big monitors in the common areas.

The dashboard doesn’t look like much: six plots and a little writing. What’s the big deal?

Well, doing things right involved quite a lot more than just one person whipping something together overnight:

1. Meetings for this dashboard started on Hallowe’en, two weeks before launch. Each meeting had between eight and fourteen attendees and ran for its full half-hour allotment each time.

2. In addition there were several one-off meetings: with Comms (internal and external) to make sure we weren’t putting our foot in our mouth, with Data ops to make sure we weren’t depending on datasets that would go down at the wrong moment, with other teams with other dashboards to make sure we weren’t stealing anyone’s thunder, and with SVPs and C-levels to make sure we had a final sign-off.

3. Outside of meetings we spent hours and hours on dashboard design and development, query construction and review, discussion after discussion after discussion…

4. To say nothing of all the bikeshedding.

It’s hard to do things right. It’s hard to do even the simplest things, sometimes. But that’s the job. And Mozilla seems to be pretty good at it.

One last plug: if you want to nudge these graphs a little higher, download and install and use and enjoy the new Firefox Quantum. And maybe encourage others to do the same?

:chutten

Anatomy of a Firefox Update

Alessio (:Dexter) recently landed a new ping for Firefox 56: the “update” ping with reason “ready”. It lets us know when a client’s Firefox has downloaded and installed an update and is only waiting for the user to restart the browser for the update to take effect.

In Firefox 57 he added a second reason for the “update” ping: reason “success”. This lets us know when the user’s started their newly-updated Firefox.

I thought I might as well see what sort of information we could glean from this new data, using the recent shipping of the new Firefox Quantum Beta as a case study.

This is exploratory work and you know what that means[citation needed]: Lots of pretty graphs!

First: the data we knew before the “update” ping: Nothing.

Well, nothing specific. We would know when a given client would use a newly-released build because their Telemetry pings would suddenly have the new version number in them. Whenever the user got around to sending them to us.

We do have data about installs, though. Our stub installer lets us know how and when installs are downloaded and applied. We compile those notifications into a dataset called download_stats. (For anyone who’s interested: this particular data collection isn’t Telemetry. These data points are packaged and sent in different ways.) Its data looks like this:

(Figure: Recent Beta Downloads)

Whoops. Well that ain’t good.

On the left we have the tailing edge of users continuing to download installs for Firefox Beta 56 at the rate of 50-150 per hour… and then only a trace level of Firefox Beta 57 after the build was pushed.

It turns out that the stub installer notifications were being rejected as malformed. Luckily we kept the malformed reports around so that after we fixed the problem we could backfill the dataset:

(Figure: Recent Beta Downloads, backfilled)

Now that’s better. We can see up to 4000 installs per hour of users migrating to Beta 57, with distinct time-of-day effects. Perfectly cromulent, though the volume seems a little low.

But that’s installs, not updates.

What do we get with “update” pings? Well, for one, we can run queries rather quickly. Querying “main” pings to find the one where a user switched versions requires sifting through terabytes of data. The query below took two minutes to run:

(Figure: Users Updating to Firefox Quantum Beta 57, per hour)

The red line is update/ready: the number of pings we received in that hour telling us that the user had downloaded an update to Beta 57 and it was ready to go. The blue line is update/success: the number of pings we received that hour telling us the user had started their new Firefox Quantum Beta instance.

And here it is per-minute, just because we can:

(Figure: Users Updating to Firefox Quantum Beta 57, per minute)

September 30 and October 1 were the weekend. As such, we’d expect their volumes to be lower than the weekdays surrounding them. However, looking at the per-minute graph for update/ready (red), why is Friday the 29th the same height as Saturday the 30th? Fridays are usually noticeably busier than Saturdays.

Friday was Navaratri in India (one of our largest markets for Beta) but that’s a multi-day festival that started on the Wednesday (and other sources for client data show only a 15% or so dip in user activity on that date in India), so it’s unlikely to have caused a single day’s dip. Friday wasn’t a holiday at all in any of our other larger markets. There weren’t any problems with the updater or “update” ping ingestion. There haven’t been any dataset failures that would explain it. So what gives?

It turns out that Friday’s numbers weren’t low: Saturday’s were high. In order to improve the stability of what was going to become the Firefox 56 release we began on the 26th to offer updates to the new Firefox Quantum Beta to only half of updating Firefox Beta users. To the other half we offered an update to the Firefox 56 Release Candidate.

What is a Release Candidate? Well, for Firefox it is the stabilized, optimized, rebuilt, rebranded version of Firefox that is just about ready to ship to our release population. It is the last chance we have to catch things before it reaches hundreds of millions of users.

It wasn’t until late on the 29th that we opened the floodgates and let the rest of the Beta users update to Beta 57. This contributed to a higher than expected update volume on the 30th, allowing the Saturday numbers to be nearly as voluminous as the Friday ones. You can actually see exactly when we made the change: there’s a sharp jump in the red line late on September 29 that you can see clearly on both “update”-ping-derived plots.

That’s something we wouldn’t see in “main” pings: they only report what version the user is running, not what version they downloaded and when. And that’s not all we get.

The “update”-ping-fueled graphs have two lines. This rather abruptly piques my curiosity about how they might relate to each other. Visually, the update/ready line (red) is almost always higher than the update/success line (blue). This means that we have more clients downloading and installing updates than we have clients restarting into the updated browser in those intervals. We can count these clients by subtracting the blue line from the red and summing over time:

(Figure: Outstanding Updates for Users Updating to Firefox Quantum Beta 57)
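
That back-of-the-envelope calculation is just a cumulative difference. A sketch, assuming a pandas DataFrame of hourly ping counts with one column per reason (the column names are mine, not the dataset’s):

```python
import pandas as pd

# Sketch of the "outstanding updates" calculation: the running total of clients
# that have told us the update is ready minus those that have restarted into it.
# `per_hour` is assumed to be indexed by hour with "ready" and "success" columns.
def outstanding_updates(per_hour: pd.DataFrame) -> pd.Series:
    return (per_hour["ready"] - per_hour["success"]).cumsum()

# outstanding_updates(per_hour).plot()
# -> the cumulative count plotted above
```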

There are, as of the time I was drafting this post, about one half of one million Beta clients who have the new Firefox Quantum Beta… but haven’t run it yet.

Given the delicious quantity of improvements in the new Firefox Quantum Beta, they’re in for a pleasant surprise when they do.

And you can join in, if you’d like.

:chutten

(NOTE: earlier revisions of this post erroneously said download_stats counted updater notifications. It counts stub installer notifications. I have reworded the post to correct for this error. Many thanks to :ddurst for catching that)