Distributed Teams: On the non-Universality of “Not it!”

I’ve surprisingly not written a lot over here about working on a distributed team in a distributed organization. Mozilla is about 60% people who work in MoLos (office workers) and 40% people who don’t (remotees). My team is 50/50: I’m remote near Toronto, one works from his home in Italy, and the other two sit in the Berlin office most days.

If I expand to encompass one extra level of organization, I work with people in Toronto, San Francisco, Poland, Iowa, Illinois, more Berlin, Nova Scotia… well, you get the idea. For the past two and a half years I’ve been working with people from all over the world and I have been learning how that’s different from the rather-more-monocultural experience I had working in offices for the previous 8-10 years.

So today when I shouted “Not it!” into the IRC channel in response to the dawning realization that someone would have to investigate and take ownership of some test bustage, I followed it up within the minute with a cultural note:

09:35 <chutten> (Actually, that's a cultural thing that may need
 explanation. As kids, usually at summer sleep-away camp, if there
 is an undesirable thing that needs to be done by one person in
 the cabin the last person to say "Not it" is "It" and thus, has
 to do the undesirable thing.)

“Not it” is cultural. I think. I’ve been able to find surprisingly little about its origins in the usual places. It seems to share some heritage with the game of Tag. It seems to be an evolution of the game “Nose Goes,” but it’s hard to tell exactly where any of it started. Wikipedia can’t find an origin earlier than the 1979 Canadian film “Meatballs” where the nose game was already assumed to be a part of camp life.

Regardless of origin, I can’t assume it’s shared amongst my team. Thus my little note. Lucky for me, they seem to enjoy learning things like this. Even luckier, they enjoy sharing back. For instance, :gfritzsche once said his thumbs were pressed that we’d get something done by week’s end… what?

There were at least two things I didn’t understand about that: the first was what he meant, the second was how one pressed one’s thumbs. I mean, do you put them in your fist and squeeze, or do you press them on the outside of your fist and pretend you’re having a Thumb War (yet another cultural artefact)?

First, it means hoping for good luck. Second, it’s with thumbs inside your fist, not outside. I’m very lucky there’s a similar behaviour and expression that I’m already familiar with (“fingers crossed”). This will not always be the case, and it won’t ever be an even exchange…

All four of my team members speak the language I spoke at home while I was growing up. A lot of my culture is exported by the US via Hollywood, embedding it into the brains of the people with whom I work. I have a huge head-start on understanding and being understood, and I need to remain mindful of it.

Hopefully I’ll be able to catch some of my more egregious references before I need to explain camp songs, cringe-worthy 90s slang, or just how many hours I spent in a minivan with my siblings looking for the letter X on a license plate.

Or, then again, maybe those explanations are just part of being a distributed team?

:chutten


Mozilla All-Hands Tips

All Hands Austin, December 2017, Mitchell Baker presenting. (Photo used under CC BY-NC-SA 2.0)

Twice a year, Mozilla gathers employees, volunteers, and assorted hangers-on in a single place to have a week of planning, working, and socializing. Being as distributed an organization as we are, it’s a bit rare to get enough of us in a single place to generate the kind of cross-talk and beneficial synergistic happenstances that help us work smarter and move in more-or-less the same direction. These are our All Hands events.

They’re a Pretty Big Deal(tm).

So here you are, individual contributor or manager, staff or volunteer, veteran or first-timer. With all these Big Plans, what are we littler folk to do to not become overwhelmed?

I have some tips.

Before You Go

Set up a mail folder/label for relevant email: You’ll be getting some email with details about where you should be, what you should be doing, and when. Organizing these into one place is helpful for reference, so come up with a label (maybe “201807-sanfran” or “mozsf2” or “fogzilla” or something) and organize those emails as they come in.

Act on those emails immediately: If they contain instructions or an announcement that bookings or registration is now open… then do that thing right then. Do not file the email and forget. Do the thing while you are looking at that email. Only then should you file that email and get back to where you were in your brain. If you absolutely can’t just then (have to synchronize with family or what-have-you), put a calendar reminder in that repeats every weekday until you handle it.

Do not upgrade Nightly: You’re running Nightly, right? You’ll be travelling through a land of uncertain connectivity, and the last thing you want is to spend it downloading a multi-MB Nightly update that might have accidentally disabled Captive Portal Detection. If it works, keep your Nightly build until you’re certain you have the bandwidth to download a new one. If all else fails, keep it until you get back.

Make sure your laptop is in shape: My laptop is often neglected in favour of its Desktop comrade: updates may be pending, credentials may have expired, the source code checkouts might be weeks old, and there may have even been a new version of Mozilla Build released since the last time I tried to compile Firefox. With luck, while at an All Hands you won’t have to compile Gecko on a laptop in your hotel… but we make our own luck, we who are prepared. Prepare your laptop.

Prepare your family: If you don’t live alone, you’ll have non-Mozilla prepwork to do. Spouse and kids or roommates and pets, there are lifeforms who normally expect to see you that won’t. Clear the family schedule for the week you’re gone, and do as much preparation ahead of time as you can. Laundry, meal planning, groceries, sitters, dog walkers, even lawn services are things you can arrange to lighten the load that your absence will place on those around you. Even if you’re bringing them with you.

While You’re There

Do not fear missing out: You will not be able to attend both Boardgame Night and your team dinner. There will be karaoke parties you won’t get to, or be invited to. This is fine. This is expected. This is unavoidable when you have so many people disorganizing so many things simultaneously. So don’t fret about it. Prioritize.

Say no: Speaking of prioritizing: prioritize for yourself. You may very well be operating as a Level 100 You for hours at a time. So many people to talk to, so many talks and social events to organize, deliver, and attend… No. You don’t have to stay the entire length of the party. You don’t even have to go. If you feel yourself fading, get out while you have the strength. Regroup. Find a quiet corner or go to sleep early… At my first All Hands, I napped on both Wednesday and Thursday. And I wasn’t even in a different timezone. It really helped.

Wash your hands: Lots. Before meals. After meals. You’ll be talking, working, eating, and otherwise hanging out with a thousand of your closest coworkers. It’s probably your best bet for not catching mozflu, and it’s definitely your best bet to not transmit it.

After You’re Back

Consider taking a day: Generally speaking you’ll be flying back on Saturday and returning to work on Monday. Depending on distance to travel, available flight times, and cancellations, this may result in only a few hours between stumbling through your door and stumbling back to work. Consider booking that Monday off (or, honestly, if your trip back was heinous, don’t even book it off. Just take it. Get some sleep. Work can wait until Tuesday.)

Check in: If you live with family, you haven’t seen them for a week. Even if you brought them with you, you’ve been in meetings and talks and stuff most hours. Check in with them. Get up to speed on what’s been happening in their lives while you’ve been away.

Get excited for the next one: Even immediately back from an All Hands, it’s still only six months to the next one. Take stock of what you liked and what you didn’t like about this one. Rest up, and try not to get impatient :)

:chutten

(( Great minds think alike, because Seburo recently wrote a Wiki article covering even more excellent tips for All Hands events. Check that out, too! ))

TIL: Feature Detection in Windows using GetProcAddress

In JavaScript, if you want to use a function that was introduced only in certain versions of browsers, you use Feature Detection. For example, you can ask “Hey, browser, do you have a function called `includes` on Array?” If the browser has it, you use it; and if it doesn’t, you either get along without it or load your own implementation.

It turns out that this same concept can be (and, in Firefox, is) done with Windows APIs.

Firefox for Windows is built against the Windows 10 SDK. This means the compiler knows the API calls and type definitions for all sorts of wondrous modern features like toast notifications and enumerating graphics adapters in a specific order.

However, as of writing, Firefox for Windows supports Windows 7 and up. What would happen if Firefox tried to use those fancy new Windows 10 features when running on Windows 7?

Well, at compile time (when Mozilla builds Firefox), it knows everything it needs to about the sizes and names of things used in the new features thanks to the SDK. At runtime (when a user runs Firefox), it needs to ask Windows at what address exactly all of those fancy new features live so that it can use them.

If Firefox can’t find a feature it expects to be there, it won’t start. We want Firefox to start, though, and we want to use the new features when available. So how do we both use the new feature (if it’s there) and not (if it’s not)?

Windows provides an API called GetProcAddress that allows the running program to perform some Feature Detection. It is asking Windows “Hey, so I’m looking for the address of this fancy new feature named FancyNewAPI. Do you know where that is?”. Windows will either reply “No, sorry”, at which point you work around it, or “Yes, it’s over at address X”, at which point you convert address X into a function pointer that takes the same number and types of arguments that the documentation says it takes, and then instruct your program to jump into it and start executing.
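Here’s roughly what that pattern looks like in C++. This is a sketch, not Firefox’s actual code; I’m using GetSystemTimePreciseAsFileTime as the stand-in “fancy new feature” because it’s a real kernel32 function that only exists on Windows 8 and later:

```cpp
#include <windows.h>

// GetSystemTimePreciseAsFileTime only exists on Windows 8+, so a binary
// that must also run on Windows 7 can't just call it by name: the loader
// would fail to resolve the import and the program wouldn't start.
typedef VOID(WINAPI* GetSystemTimePreciseAsFileTimeT)(LPFILETIME);

void GetBestTimestamp(FILETIME* aResult) {
  // kernel32.dll is loaded into every process, so GetModuleHandleW is
  // enough here; for other DLLs you'd call LoadLibraryW first.
  HMODULE kernel32 = ::GetModuleHandleW(L"kernel32.dll");
  auto precise = reinterpret_cast<GetSystemTimePreciseAsFileTimeT>(
      ::GetProcAddress(kernel32, "GetSystemTimePreciseAsFileTime"));
  if (precise) {
    precise(aResult);  // "Yes, it's over at address X": use the new API
  } else {
    ::GetSystemTimeAsFileTime(aResult);  // "No, sorry": fall back
  }
}
```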

We use this in Firefox to detect gamepad input modules, cancelable synchronous IO, display density measurements, and a whole bunch of graphics and media acceleration stuff.

And today (well, yesterday at this point) I learned about it. And now so have you.

:chutten

–edited to remove incorrect note that GetProcAddress started in WinXP– :aklotz noted that GetProcAddress has been around since ancient times, MSDN just periodically updates its “Minimum Supported Release” fields to drop older versions.

Perplexing Graphs: The Case of the 0KB Virtual Memory Allocations

Every Monday and Thursday around 3pm I check dev-telemetry-alerts to see if there have been any changes detected in the distribution of any of the 1500-or-so pieces of anonymous usage statistics we record in Firefox using Firefox Telemetry.

This past Monday there was one. It was a little odd.

Generally, when you’re measuring continuous variables (timings, memory allocations…) you don’t see too many of the same value. Sure, there are common values (2GB of physical memory, for instance), but generally you don’t suddenly see a quarter of all reports become 0.

That was weird.

So I did what I always do when I find an alert that no one’s responded to, and triaged it. Mostly this involves looking at it on telemetry.mozilla.org to see if it was still happening, whether it was caused by a change in submission volumes (could be that we’re suddenly hearing from a lot more users, and they all report just “0”, for example), or whether it was limited to a single operating system or architecture:

(Graph: VSIZE distribution, split by operating system)

Hello, Windows.

(Graph: VSIZE distribution on Windows, split by architecture)

Specifically: hello Windows 64-bit.

With these clues, :erahm was able to highlight for me a bug that might have contributed to this sudden change: enabling Control Flow Guard on Windows builds.

Control Flow Guard (CFG) is a feature of Windows 8.1 (Update 3) and 10 that inserts some runtime checks into your binary to ensure you only make sensible jumps. This protects against certain exploits where attackers force a binary to jump into strange places in the running program, causing Bad Things to happen.

I had no idea how a control flow integrity feature would result in 0-size virtual memory allowances, but when :erahm gives you a hint, you take it. I commented on the bug.

Luckily, I was taken seriously, so a new bug was filed and :tjr looked into it almost immediately. The most important clue came from :dmajor who had the smartest money in the room, and crucial help from :ted who was able to reproduce the bug.

It turns out that turning CFG on made our Virtual Memory allowances jump above two terabytes.

Now, to head off “Firefox iz eatang ur RAM!!!!111eleven” commentary: this is CFG’s fault, not ours. (Also: Virtual Memory isn’t RAM.)

In order to determine what parts of a binary are valid “indirect jump targets”, Windows needs to keep track of them all, and check them quickly enough that indirect jumps can still happen at speed. Windows does this by maintaining a map with a bit per possible jump location. The bit is 1 if it is a valid location to jump to, and 0 if it is not. On each indirect jump, Windows checks the bit for the jump location and interrupts the process if it was about to jump to a forbidden place.

When running this on a 64-bit machine, this bitmap gets… big. Really big. Two Terabytes big. And that’s using an optimized way of storing data about the jump availability of up to 2^64 (18 quintillion) addresses. Windows puts this in the process’ storage allocations for its own recordkeeping reasons, which means that every 64-bit process with CFG enabled (on CFG-aware Windows versions (8.1 Update 3 and 10)) has a 2TB virtual memory allocation.
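To make that bookkeeping concrete, here’s a sketch of the kind of check involved. The granularity is an assumption for illustration only (one bit per 16-byte-aligned slot over a 2^48-byte address space); the real CFG bitmap layout is a Windows internal and is cleverer than this. But even this naive version shows where the terabytes come from:

```cpp
#include <cstdint>

// Hypothetical layout: one bit per 16-byte-aligned slot of address space.
// The actual CFG bitmap encoding is a Windows implementation detail.
bool IsValidIndirectJumpTarget(const uint8_t* aBitmap, uintptr_t aTarget) {
  uintptr_t slot = aTarget >> 4;       // which 16-byte slot?
  uintptr_t byteIndex = slot >> 3;     // 8 slot-bits per bitmap byte
  uint8_t mask = uint8_t(1) << (slot & 7);
  return (aBitmap[byteIndex] & mask) != 0;  // 0 bit => forbidden target
}

// Even at this coarse granularity, covering a 2^48-byte address space
// takes 2^48 / 16 / 8 = 2^41 bytes of bitmap: two terabytes.
```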

So. We have an abnormally-large value for Virtual Memory. How does that become 0?

Well, those of you with CS backgrounds (or who clicked on the “smartest money” link a few paragraphs back) will be thinking about the word “overflow”.

And you’d be wrong. Ish.

The raw number :ted was seeing was 2201166503936. That’s the number of bytes in his virtual memory allocation, and it’s a few powers of two above what we can fit in 32 bits. However, we report the number of kilobytes. The number of kilobytes is 2149576664, well underneath the maximum number you can store in an unsigned 32-bit integer, which we all know (*eyeroll*) is 4294967295. So instead of a number about 512x too big to fit, we get one that can fit almost twice over.

Welll….

So we’re left with a number that should fit, being recorded as 0. So I tried some things and, sure enough, recording the number 2149576664 into any histogram did indeed record as 0. I filed a new bug.

Then I tried numbers plus or minus 1 around :ted’s magic number. They became zeros. I tried recording 2^31 + 1. Zero. I tried recording 2^32 – 1. Zero.

With a sinking feeling in my gut, I then tried recording 2^32 + 1. I got my overflow. It recorded as 1. 2^32 + 2 recorded as 2. And so on.

All numbers between 2^31 and 2^32 were being recorded as 0.

(Screenshot: a sensible compiler error)

In a sensible language like Rust, assigning an unsigned value to a signed variable isn’t something you can do accidentally. You almost never want to do it, so why make it easy? And let’s make sure to warn the code author that they’re probably making a mistake while we’re at it.

In C++, however, you can silently convert from unsigned to signed. For values between 0 and 2^31 this doesn’t matter. For values between 2^31 and 2^32, this means you can turn a large positive number into a negative number somewhere between -2^31 and -1. Silently.

Telemetry Histograms don’t record negatives. We clamp them to 0. But something in our code was coercing our fancy unsigned 32-bit integer to a signed one before it was clamped to 0. And it was doing it silently. Because C++.

Now that we’ve found the problem, fixed the problem, and documented the problem we are collecting data about the data[citation] we may have lost because of the problem.

But to get there I had to receive an automated alert (which I had to manually check), split the data against available populations, become incredibly lucky and run it by :erahm who had an idea of what it might be, find a team willing to take me seriously, and then do battle with silent type coercion in a language that really should know better.

All in a day’s work, I guess?

:chutten

Two Days, or How Long Until The Data Is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure than 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you couldn’t tell on the day itself. All the tabs people open late at night won’t even be in pings, and anyone who puts their computer to sleep won’t send their pings until they wake their computer in the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.

How do we know this? We measured it:

(Graph: client “main” ping delay for the latest version)

(Remember what I said about Labour Day? That’s the exceptional case on beta 56)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

 

Data Science is Hard: Anomalies Part 3

So what do you do when you have a duplicate data problem and it just keeps getting worse?

You detect and discard.

Specifically, since we already have a few billion copies of pings with identical document ids (which are extremely unlikely to collide by chance), there is no benefit to continuing to store them. So what we do is write a short report about what the incoming duplicate looked like (so that we can continue to analyze trends in duplicate submissions), then toss out the data without even parsing it.
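A sketch of the idea (the real pipeline isn’t C++, and it tracks seen ids rather more scalably than an in-memory set, but the shape is the same):

```cpp
#include <cstdio>
#include <string>
#include <unordered_set>

struct Ping {
  std::string documentId;  // supposed to be unique per ping
  std::string payload;
};

class DedupingReceiver {
  std::unordered_set<std::string> mSeenIds;

 public:
  void Receive(const Ping& aPing) {
    // insert().second is false if the id was already present.
    if (!mSeenIds.insert(aPing.documentId).second) {
      // Duplicate: note what it looked like so we can keep analyzing
      // duplicate-submission trends, then drop it without parsing.
      printf("duplicate: %s\n", aPing.documentId.c_str());
      return;
    }
    Parse(aPing.payload);  // only unique pings pay the parsing cost
  }

 private:
  void Parse(const std::string&) { /* the expensive work, elided */ }
};
```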

As before, I’ll leave finding out the time the change went live as an exercise for the reader:newplot(1)

:chutten

Data Science is Hard: Anomalies Part 2

Apparently this is one of those problems that jumps two orders of magnitude if you ignore it:

(Graph: Aurora 51 ping submission volume over time)

Since last time we’ve noticed that the vast majority of these incoming pings are duplicate. I don’t mean that they look similar, I mean that they are absolutely identical down to their supposedly-unique document ids.

How could this happen?

Well, with a minimum of speculation we can assume that however these Firefox instances are being distributed, they are being distributed with full copies of the original profile data directory. This would contain not only the user’s configuration information, but also copies of all as-yet-unsent pings. Once the distributed Firefox instance was started in its new home, it would submit these pending pings, which would explain why they are all duplicated: the distributor copy-pasta’d them.

So if we want to learn anything about the population of machines that are actually running these instances, we need to ignore all of these duplicate pings. So I took my analysis from last time and tweaked it.

First off, to demonstrate just how much of the traffic spike we see is the same fifteen duplicate pings, here is a graph of ping volume vs unique ping volume:

(Graph: total ping volume vs. unique ping volume)

The count of non-duplicated pings is minuscule. We can conclude from this that most of these distributed Firefox instances rarely get the opportunity to send more than one ping. (Because if they did, we’d see many more unique pings created on their new hosts)

What can we say about these unique pings?

Besides how infrequent they are? They come from instances that all have the same Random Agent Spoofer addon that we saw in the original analysis. None of them are set as the user’s default browser. The hosts are most likely to have a 2.4GHz or 3.5GHz CPU. The hosts come from a geographically-diverse spread of areas, with a peculiarly-popular cluster in Montreal (maybe they like the bagels. I know I do).

All of the pings come from computers running Windows XP. I wish I were more surprised by this, but it really does turn out that running software over a decade past its best-before date is a bad idea.

Also of note: the length of time the browser is open for is far too short (60-75s mostly) for a human to get anything done with it:

(Graph: distribution of browser session lengths)

(Telemetry needs 60s after Firefox starts up in order to send a ping so it’s possible that there are browsing sessions that are shorter than a minute that we aren’t seeing.)

What can/should be done about these pings?

These pings are coming in at a rate far exceeding what the entire Aurora 51 population had when it was an active release. Yet, Aurora 51 hasn’t been an active release for six months and Aurora itself is going away.

As such, though its volume seems to continue to increase, this anomaly is less and less likely to cause us real problems day-to-day. These pings are unlikely to accidentally corrupt a meaningful analysis or mis-scale a plot.

And with our duplicate detector identifying these pings as they come in, it isn’t clear that this actually poses an analysis risk at all.

So, should we do anything about this?

Well, it is approaching release-channel-levels of volume per-day, submitted evenly at all hours instead of in the hump-backed periodic wave that our population usually generates:

(Graph: duplicate “main” pings detected per minute, by channel)

Hundreds of duplicates detected every minute means nearly a million pings a day. We can handle it (in the above plot I turned off release, whose low points coincide with aurora’s high points), but should we?

Maybe for Mozilla’s server budget’s sake we should shut down this data after all. There’s no point in receiving yet another billion copies of the exact same document. The only things that differ are the submission timestamp and submitting IP address.

Another point: it is unlikely that these hosts are participating in this distribution of their free will. The rate of growth, the length of sessions, the geographic spread, and the time of day the duplicates arrive at our servers strongly suggest that it isn’t humans who are operating these Firefox installs. Maybe for the health of these hosts on the Internet we should consider some way to hotpatch these wayward instances into quiescence.

I don’t know what we (mozilla) should do. Heck, I don’t even know what we can do.

I’ll bring this up on fhr-dev and see if we’ll just leave this alone, waiting for people to shut off their Windows XP machines… or if we can come up with something we can (and should) do.

:chutten