Two Days, or How Long Until The Data Is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).
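
To make those two pieces concrete, here’s a minimal sketch of how you might compute them from per-ping timestamps. The column names (`event_time`, `ping_creation_time`, `server_receive_time`) are illustrative, not the real Telemetry schema:

```python
import pandas as pd

# Illustrative columns; real Telemetry pings use different field names.
pings = pd.DataFrame({
    "event_time":          pd.to_datetime(["2017-09-05 22:10", "2017-09-05 23:50"]),
    "ping_creation_time":  pd.to_datetime(["2017-09-05 23:00", "2017-09-06 08:30"]),
    "server_receive_time": pd.to_datetime(["2017-09-06 09:15", "2017-09-06 08:31"]),
})

# Recording delay: from the user doing something to that something being in a ping.
pings["recording_delay"] = pings["ping_creation_time"] - pings["event_time"]

# Submission delay: from the ping being ready-for-transport to Mozilla receiving it.
pings["submission_delay"] = pings["server_receive_time"] - pings["ping_creation_time"]

# Total client delay is the sum of the two.
pings["client_delay"] = pings["recording_delay"] + pings["submission_delay"]
print(pings[["recording_delay", "submission_delay", "client_delay"]])
```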

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you can’t tell on the day itself. All the tabs people open late at night won’t even be in pings yet, and anyone who puts their computer to sleep won’t send their pings until they wake it on the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.

How do we know this? We measured it:

[Figure: Client “main” Ping Delay for Latest Version]

(Remember what I said about Labour Day? That’s the exceptional case on beta 56)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

The Photonization of about:telemetry

This summer I mentored :flyingrub for a Google Summer of Code project to redesign about:telemetry. You can read his Project Submission Document here.

Background

Google Summer of Code is a program funded by Google to pay students worldwide to contribute in meaningful ways to open source projects.

about:telemetry is a piece of Firefox’s UI that allows users to inspect the anonymous usage data we collect to improve Firefox. For instance, we look at the maximum number of tabs our users have open during a session (someone or several someones have more than one thousand tabs open!). If you open up a tab in Firefox and type in about:telemetry (then press Enter), you’ll see the interface we provide for users to examine their own data.

Mozilla is committed to putting users in control of their data. about:telemetry is a part of that.

Then

When :flyingrub started work on about:telemetry, it looked like this (Firefox 55):

[Screenshot: about:telemetry in Firefox 55]

It was… functional. Mostly it was intended to be used by developers to ensure that data collection changes to Firefox actually changed the data that was collected. It didn’t look like part of Firefox. It didn’t look like any other about: page (browse to about:about to see a list of about: pages). It didn’t look like much of anything.

Now

After a few months of polishing and tweaking and input from UX, it looks like this (Firefox Nightly 57):

[Screenshot: the redesigned about:telemetry in Firefox Nightly 57]

Well that’s different, isn’t it?

It has been redesigned to follow the Photon Design System so that it matches how Firefox 57 looks. It has been reorganized into more functional groups, given a new top-level search, and treated to dozens of small tweaks to usability and visibility so you can see more of your data at once and get to it faster.

[Screenshot: the histograms section of the new about:telemetry]

Soon

Just because Google Summer of Code is done doesn’t mean about:telemetry is done. Work on about:telemetry continues… and if you know some HTML, CSS, and JavaScript you can help out! Just pick a bug from the “Depends on” list here, and post a comment asking if you can help out. We’ll be right with you to help get you started. (Though you may wish to read this first, since it is more comprehensive than this blog post.)

Even if you can’t or don’t want to help out, you can sneak a peek at the new design by downloading and using Firefox Nightly. It is blazing fast with a slick new design and comes with excellent new features to help it be your agent on the Web.

We expect :flyingrub will continue to contribute to Firefox (as his studies allow, of course. He is a student, and his studies should be first priority now that GSoC is done), and we thank him very much for all of his good work this Summer.

:chutten

Another Advantage of Decreasing Data Latency: Flatter Graphs

I’ve muttered before about how difficult it can be to measure application crashes. The most important lesson is that you can’t just count the number of crashes, you must normalize it by some “usage” value in order to determine whether a crashy day is because the application got crashier or because the application was just being used more.

Thus you have a numerator (number of crashes) and a denominator (some proxy of application usage) to determine the crash rate: crashes-per-use.

The current dominant denominator for Firefox is “thousand hours that Firefox is open,” or “kilo-usage-hours (kuh).”
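
As a toy example of why the normalization matters (the numbers here are invented for illustration), here’s how a raw count can mislead while the rate doesn’t:

```python
# Invented numbers: the raw crash count looks worse on Saturday...
days = {
    "Friday":   {"main_crashes": 1200, "usage_hours": 1_500_000},
    "Saturday": {"main_crashes": 1500, "usage_hours": 2_000_000},
}

for day, d in days.items():
    kuh = d["usage_hours"] / 1000     # kilo-usage-hours
    rate = d["main_crashes"] / kuh    # crashes per kilo-usage-hour
    print(f"{day}: {rate:.2f} crashes per kilo-usage-hour")

# ...but per kilo-usage-hour Saturday is actually *less* crashy (0.75 vs 0.80):
# Firefox wasn't crashier that day, it was just being used more.
```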

The biggest problem we’ve been facing lately is that our numerator (number of crashes) comes in at a different rate and time than our denominator (kilo-usage-hours), due to the former being transmitted nearly-immediately via “crash” ping and the latter being transmitted occasionally via “main” ping.

With pingsender now sending most “main” pings as soon as they’re created, our client submission delay for “main” pings is now roughly in line with the client submission delay of “crash” pings.

What does this mean? Well, look at this graph from https://telemetry.mozilla.org/crashes:

[Figure: Crash Rates (Telemetry)]

This is the Firefox Beta Main Crash Rate (number of main process crashes on Firefox Beta divided by the number of thousands of hours users had Firefox Beta running) over the past three months or so. The spike in the middle is when we switched from Firefox Beta 54 to Firefox Beta 55. (Most of that spike is a measuring artefact due to a delay between a beta being available and people installing it. Feel free to ignore it for our purposes.)

On the left, in the Beta 54 data, there is a seven-day cycle where Sunday is the lowest point and Saturday is the highest.

On the right in the Beta 55 data, there is no seven-day cycle. The rate is flat. (It is a little high, but flat. Feel free to ignore its height for our purposes.)

This is because sending “main” pings with pingsender is behaviour that ships in Firefox 55. Starting with 55, instead of having most of our denominator data (usage hours) coming in one day late due to “main” ping delay, we have that data in-sync with the numerator data (main crashes), resulting in a flat rate.

You can see it in the difference between Firefox ESR 52 (yellow) and Beta 55 (green) in the kusage_hours graph also on https://telemetry.mozilla.org/crashes:

[Figure: kusage_hours, Crash Rates (Telemetry)]

On the left, before Firefox Beta 55’s release, they were both in sync with each other, but one day behind the crash counts. On the right, after Beta 55’s release, notice that Beta 55’s cycle is now one day ahead of ESR 52’s.

This results in still more graphs that are quite satisfying. To me at least.

It also, somewhat more importantly, now makes the crash rate graph less time-variable. This reduces cognitive load on people looking at the graphs for explanations of what Firefox users experience in the wild. Decision-makers looking at these graphs no longer need to mentally subtract from the graph for Saturday numbers, adding that back in somehow for Sundays (and conducting more subtle adjustments through the week).

Now the rate is just the rate. And any change is much more likely to mean a change in crashiness, not some odd day-of-week measurement you can ignore.

I’m not making these graphs to have them ignored.

(many thanks to :philipp for noticing this effect and forcing me to explain it)

:chutten

Latency Improvements, or, Yet Another Satisfying Graph

This is the third in my ongoing series of posts containing satisfying graphs.

Today’s feature: a plot of the mean and 95th percentile submission delays of “main” pings received by Firefox Telemetry from users running Firefox Beta.

[Figure: Beta “main” Ping Submission Delay in hours (mean, 95th %ile)]

We went from receiving 95% of pings after about, say, 130 hours (or 5.5 days) down to getting them within about 55 hours (2 days and change). And the numbers will continue to fall as more beta users get the modern beta builds with lower latency ping sending thanks to pingsender.
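
The measurement itself is conceptually straightforward. Here’s a sketch of the idea with made-up data; the column names are illustrative and the real analysis runs over the actual Telemetry datasets:

```python
import pandas as pd

# Hypothetical frame: one row per received Beta "main" ping.
pings = pd.DataFrame({
    "ping_creation_time":  pd.to_datetime(["2017-07-10 03:00", "2017-07-10 04:00", "2017-07-11 01:00"]),
    "server_receive_time": pd.to_datetime(["2017-07-10 05:00", "2017-07-12 10:00", "2017-07-11 01:30"]),
})

delay_hours = (
    (pings["server_receive_time"] - pings["ping_creation_time"])
    .dt.total_seconds() / 3600.0
)

# Mean and 95th-percentile submission delay, per day of receipt.
by_day = delay_hours.groupby(pings["server_receive_time"].dt.date)
print(by_day.agg(mean="mean", p95=lambda s: s.quantile(0.95)))
```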

What does this mean? This means that you should no longer have to wait a week to get a decently-rigorous count of data that comes in via “main” pings (which is most of our data). Instead, you only have to wait a couple of days.

Some teams were using the rule-of-thumb of ten (10) days before counting anything that came in from “main” pings. We should be able to reduce that significantly.

How significantly? Time, and data, will tell. This quarter I’m looking into what guarantees we might be able to extend about our data quality, which includes timeliness… so stay tuned.

For a more rigorous take on this, partake in any of :Dexter’s recent reports on RTMO. He’s been tracking the latency improvements and possible increases in duplicate ping rates as these changes have ridden the trains towards release. He’s blogged about it if you want all the rigour but none of the Python.

:chutten

FINE PRINT: Yes, due to how these graphs work they will always look better towards the end because the really delayed stuff hasn’t reached us yet. However, even by the standards of the pre-pingsender mean and 95th percentiles we are far enough after the massive improvement for it to be exceedingly unlikely to change much as more data is received. By the post-pingsender standards, it is almost impossible. So there.

FINER PRINT: These figures include adjustments for client clocks having skewed relative to server clocks. Time is a really hard problem even on a single computer, and trying to reconcile it between many computers separated by oceans both literal and metaphorical is the subject of several dissertations and, likely, therapy sessions. As I mentioned above, for rigour and detail about this and other aspects, see RTMO.
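
For the curious, here’s a very rough sketch of the kind of correction involved, assuming the ping carries a client-reported send time we can compare against our server clock (the real adjustment, described in the RTMO reports, is more involved):

```python
from datetime import datetime

def corrected_submission_delay(creation_client, sent_client, received_server):
    """Estimate submission delay, correcting for client clock skew.

    creation_client:  when the client says it created the ping
    sent_client:      when the client says it sent the ping
    received_server:  when our server actually received it

    Assumes network transit time is small compared to the skew being corrected.
    """
    skew = sent_client - received_server          # positive if the client clock runs fast
    corrected_creation = creation_client - skew   # creation time on the server's clock
    return received_server - corrected_creation

# Hypothetical ping from a client whose clock is two hours fast.
delay = corrected_submission_delay(
    creation_client=datetime(2017, 7, 12, 15, 59, 50),
    sent_client=datetime(2017, 7, 12, 16, 0, 0),
    received_server=datetime(2017, 7, 12, 14, 0, 5),
)
print(delay)   # 0:00:10 -- not the negative two hours a naive subtraction would give
```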

Apple Didn’t Kill BlackBerry

It was Oracle.

And I don’t mean “an Oracle” in the allegorical way Shakespeare had it, where it was Macbeth’s prophecy-fueled hubris that incited the incidents (though it is pretty easy to cast anything in the mobile space as a reimagining of the Scottish Play). I mean the company Oracle was the primary agent of the downfall of the company then known as Research in Motion.

And they probably didn’t even mean to do it.

To be clear: this is my theory, these are my opinions, and all of it’s based on what I can remember from nearly a decade ago.

At the end of June 2007, Apple released the iPhone in the US. It was an odd little device. It didn’t have apps or GPS or 3G (wouldn’t have any of those until July 2008), it was only available on the AT&T network (a one-year exclusivity agreement), and it didn’t have copy-paste (that took until June 2009).

Worst of all, it didn’t even run Java.

Java was incredibly important in the 2000s. It was the only language both powerful enough on the day’s mobile hardware to be useful and sandboxed enough from that hardware to be safe to run.

And the iPhone didn’t have it! In fact, in the release of the SDK agreement in 2008, Apple excluded Java (and browser engines like Firefox’s Gecko) by disallowing the running of interpreted code.

It is understandable, then, that the executives in Research in Motion weren’t too worried. The press immediately called the iPhone a BlackBerry Killer… but they’d done that for the Motorola Q9H, the Nokia E61i, and the Samsung BlackJack. (You don’t have to feel bad if you’ve never heard of them. I only know they exist because I worked for BlackBerry starting in June 2008.)

I remember a poorly-chroma-keyed presentation featuring then-CTO David Yach commanding a starship that destroyed each of these devices in turn with our phasers of device portfolio depth, photon torpedoes of enterprise connectivity, and warp factor BlackBerry OS 4.6. Clearly we could deal with this Apple upstart the way we dealt with the others: by continuing to be excellent at what we did.

Still, a new competitor is still a new competitor. Measures had to be taken.

Especially when, in November of 2007, it was pretty clear that Google had stepped into the ring with the announcement of Android.

Android was the scarier competitor. Google was a well-known software giant and they had an audacious plan to marry their software expertise (and incredible buying, hiring, and lawyering power) with chipsets, handsets, and carrier reach from dozens of companies including Qualcomm, Motorola, and T-Mobile.

The Android announcements exploded across the boardrooms of RIM’s Waterloo campus.

But with competition comes opportunity.

You see, Android ran Java. Well, code written in Java could run on Android. And this meant they had the hearts and minds of every mobile developer in the then-nascent app ecosystem. All Google had to do was not call it Java: that let them expose a far more recent version of Java’s own APIs than BlackBerry was allowed, atop a high-performance non-Java virtual machine called Dalvik.

BlackBerry couldn’t match this due to the terms of its license agreement, while Google didn’t even need to pay Sun Microsystems (Java’s owner) a per-device license fee.

Quickly, a plan was hatched: Project Highlander (no, I’m not joking). It was going to be the one platform for all BlackBerry devices that was going to allow us to wield the sword of the Katana filesystem (still not joking) and defeat our enemies. Yes, even the execs were dorks at RIM in early 2009.

Specifically, RIM was going to adopt a new operating system for our mobile devices that would run Dalvik, allowing us not only to finally evolve past the evolutionary barriers Sun had refused to lift from in front of BlackBerry Java… but also to eat Google’s lunch at the same time. No matter how much money Google poured into app development for Android, we would reap the benefit through Highlander’s Android compatibility.

By essentially joining Google in the platform war against the increasingly-worrisome growth of Apple, we would be able to punch above our weight in the US. And by not running Android, we could keep our security clearance and be sold places Google couldn’t reach.

It was to be perfect: the radio core running RIM’s low-power, high-performance Nessus OS talking over secure hardware to the real-time QNX OS atop which would be running an Android-compatible Dalvik VM managing the applications RIM’s developers had written in the language they had spent years mastering: Java. With the separation of the radio and application cores we were even planning how to cut a deal with mobile carriers to only certify the radio core so we’d be free to update the user-facing parts of the OS without having to go through their lengthy, costly, irritating process.

A pity it never happened.

RIM’s end properly began on April 20, 2009, when Oracle announced it had agreed to purchase Sun Microsystems, maker of Java.

Oracle, it was joked, was a tech company where the size of its Legal department outstripped that of the rest of its business units… combined.

Even I, a lowly grunt working on the BlackBerry Browser, knew what was going to happen next.

After Oracle completed its acquisition of Sun it took only seven months for them to file suit against Google over Android’s use of Java.

These two events held monumental importance for Research in Motion:

Oracle had bought Sun, which meant there was now effectively zero chance of a new version of mobile Java that would allow BlackBerry Java to innovate within the terms of RIM’s license to Sun.

Oracle had sued Google, which meant RIM would be squashed like a bug under the litigious might of Sun’s new master if it tried to pave its own not-Android way to its own, modern Java-alike.

All of RIM’s application engineers had lived and breathed Java for years. And now that expertise was to be sequestered, squandered, and then removed.

While Java-based BlackBerry 6 and 7 devices continued to keep the lights on under steadily decreasing order volumes, the BlackBerry PlayBook was announced, delayed, released, and scrapped. The PlayBook was such a good example of a cautionary tale that BlackBerry 10 required an extra year of development to rewrite most of the things it got wrong.

Under that extra year of pressure-cooker development, BlackBerry 10 bristled with ideas. This was a problem. Instead of evolving with patient direction, adding innovation step-by-step, guiding users over the years from 2009 to BlackBerry 10’s release in 2013, all of the pent up ideas of user interaction, user experience paradigms, and content-first design landed in users’ laps all at once.

This led to confusion, which led to frustration, which led to devices being returned.

BlackBerry 10 couldn’t sell, and with users’ last good graces spent, the company, suddenly renamed BlackBerry, just couldn’t find something it could release that consumers would want to buy.

Mass layoffs, begun during the extra year of BlackBerry 10 development with the removal of entire teams of Java developers, continued as the company tried to resize itself to fit its market. Handset prices increased to sweeten fallen margins. Developers shuffled over to the Enterprise business unit, where BlackBerry was still paying bonuses and achieving sales targets.

The millions of handsets sold and billions of dollars of revenue were gone. And yet, despite finding itself beneath the footfalls of fighting giants, BlackBerry was not dead — is still not dead.

Its future may not lie with smartphones, but when I left BlackBerry in late 2015, having myself survived many layoffs and reorganizations, I left with the opinion that it does indeed have a future.

Maybe it’ll focus on its enterprise deployments and niche device releases.

Maybe it’ll find a product millions of consumers will need.

Maybe it’ll be bought by Oracle.

:chutten

Data Science is Hard: Anomalies Part 3

So what do you do when you have a duplicate data problem and it just keeps getting worse?

You detect and discard.

Specifically, since we already have a few billion copies of pings with identical document ids (which are extremely-unlikely to collide), there is no benefit to continue storing them. So what we do is write a short report about what the incoming duplicate looked like (so that we can continue to analyze trends in duplicate submissions), then toss out the data without even parsing it.
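
A minimal sketch of the idea, not the actual ingestion code (which lives in the data pipeline and has to be rather more scalable than an in-memory set):

```python
seen_ids = set()          # the real pipeline uses something far more
duplicate_log = []        # scalable and persistent than an in-memory set

def handle_incoming(doc_id, raw_payload, submission_date):
    """Return the payload if its document id is new; log and drop it otherwise."""
    if doc_id in seen_ids:
        # Write a short report about the duplicate so we can keep analyzing
        # trends in duplicate submissions, then toss it without parsing.
        duplicate_log.append({
            "document_id": doc_id,
            "submission_date": submission_date,
            "payload_size": len(raw_payload),
        })
        return None
    seen_ids.add(doc_id)
    return raw_payload     # hand off to the normal parse-and-store path

# Two submissions with the same document id: only the first survives.
handle_incoming("aaaa-1111", b"{...}", "20170901")
handle_incoming("aaaa-1111", b"{...}", "20170902")
print(len(duplicate_log))   # 1
```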

As before, I’ll leave finding out the time the change went live as an exercise for the reader:

[Figure]

:chutten

Data Science is Hard: Anomalies Part 2

Apparently this is one of those problems that jumps two orders of magnitude if you ignore it:

[Figure: Aurora 51 ping submissions over time]

Since last time we’ve noticed that the vast majority of these incoming pings are duplicates. I don’t mean that they look similar, I mean that they are absolutely identical down to their supposedly-unique document ids.

How could this happen?

Well, with a minimum of speculation we can assume that however these Firefox instances are being distributed, they are being distributed with full copies of the original profile data directory. This would contain not only the user’s configuration information, but also copies of all as-yet-unsent pings. Once the distributed Firefox instance was started in its new home, it would submit these pending pings, which would explain why they are all duplicated: the distributor copy-pasta’d them.

So if we want to learn anything about the population of machines that are actually running these instances, we need to ignore all of these duplicate pings. So I took my analysis from last time and tweaked it.
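
The tweak is conceptually simple. A sketch in pandas with made-up data (the real analysis ran against the Telemetry ping tables, and `documentId` here stands in for the ping’s document id field):

```python
import pandas as pd

# Hypothetical frame of Aurora 51 pings with a documentId column.
pings = pd.DataFrame({
    "documentId":    ["d1", "d1", "d1", "d2", "d3"],
    "cpu_speed_ghz": [2.4, 2.4, 2.4, 3.5, 2.4],
})

# Keep only the first copy of each document id; everything else is a duplicate.
unique_pings = pings.drop_duplicates(subset="documentId", keep="first")
print(len(pings), "pings,", len(unique_pings), "unique")   # 5 pings, 3 unique
```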

First off, to demonstrate just how much of the traffic spike we see is the same fifteen duplicate pings, here is a graph of ping volume vs unique ping volume:

[Figure: ping volume vs. unique ping volume]

The count of non-duplicated pings is minuscule. We can conclude from this that most of these distributed Firefox instances rarely get the opportunity to send more than one ping. (If they did, we’d see many more unique pings created on their new hosts.)

What can we say about these unique pings?

Besides how infrequent they are? They come from instances that all have the same Random Agent Spoofer addon that we saw in the original analysis. None of them are set as the user’s default browser. The hosts are most likely to have a 2.4GHz or 3.5GHz CPU. The hosts come from a geographically-diverse spread of areas, with a peculiarly popular cluster in Montreal (maybe they like the bagels; I know I do).

All of the pings come from computers running Windows XP. I wish I were more surprised by this, but it really does turn out that running software over a decade past its best-before date is a bad idea.

Also of note: the length of time the browser is open for is far too short (60-75s mostly) for a human to get anything done with it:

[Figure: distribution of session lengths]

(Telemetry needs 60s after Firefox starts up in order to send a ping so it’s possible that there are browsing sessions that are shorter than a minute that we aren’t seeing.)

What can/should be done about these pings?

These pings are coming in at a rate far exceeding what the entire Aurora 51 population had when it was an active release. Yet, Aurora 51 hasn’t been an active release for six months and Aurora itself is going away.

As such, though its volume seems to continue to increase, this anomaly is less and less likely to cause us real problems day-to-day. These pings are unlikely to accidentally corrupt a meaningful analysis or mis-scale a plot.

And with our duplicate detector identifying these pings as they come in, it isn’t clear that this actually poses an analysis risk at all.

So, should we do anything about this?

Well, it is approaching release-channel levels of volume per day, submitted evenly at all hours instead of in the hump-backed periodic wave that our population usually generates:

[Figure: duplicate “main” pings from Aurora 51, by hour]

Hundreds of duplicates detected every minute means nearly a million pings a day. We can handle it (in the above plot I turned off release, whose low points coincide with aurora’s high points), but should we?

Maybe for Mozilla’s server budget’s sake we should shut down this data after all. There’s no point in receiving yet another billion copies of the exact same document. The only things that differ are the submission timestamp and submitting IP address.

Another point: it is unlikely that these hosts are participating in this distribution of their free will. The rate of growth, the length of sessions, the geographic spread, and the time of day the duplicates arrive at our servers strongly suggest that it isn’t humans who are operating these Firefox installs. Maybe for the health of these hosts on the Internet we should consider some way to hotpatch these wayward instances into quiescence.

I don’t know what we (mozilla) should do. Heck, I don’t even know what we can do.

I’ll bring this up on fhr-dev and see if we’ll just leave this alone, waiting for people to shut off their Windows XP machines… or if we can come up with something we can (and should) do.

:chutten