> > Our early analysis of performance data suggests that engineers who either joined Meta in-person and then transferred to remote or remained in-person performed better on average than people who joined remotely. This analysis also shows that engineers earlier in their career perform better on average when they work in-person with teammates at least three days a week. This requires further study, but our hypothesis is that it is still easier to build trust in person and that those relationships help us work more effectively.
This is meaningless without knowing how productivity is being measured. It's very possible that said metric inherently favors in-person collaboration over remote collaboration - which would then skew the results of any further analysis of that metric.
This is also meaningless without knowing exactly what's being compared. I wouldn't be surprised if 3 in-person days a week was indeed more productive than 0, but what about 2? 1? 1/2? 1/30? 1/365.24? There's a lot of numbers between 0 and 3, and coming to any conclusions without understanding the actual shape of that graph is folly.
This is also meaningless without information on the wide range of in-person cultures or the wide range of remote cultures. These preliminary results could readily come about from having a remote culture that's more dysfunctional than other remote cultures and/or an in-person culture that's less dysfunctional than other in-person cultures. I've personally experienced both ranges more-or-less in full over the course of my career - as well as varying combinations of the two in hybrid environments.
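As a toy illustration of the 'shape of that graph' point - a minimal sketch with invented numbers, where a hypothetical productivity curve peaks somewhere between the two points actually being compared:

```python
# Hypothetical mean "productivity" score per in-office days/week.
# All numbers are invented purely for illustration.
productivity = {0: 70, 1: 85, 2: 80, 3: 75}

# Comparing only the endpoints suggests "3 days beats 0 days"...
endpoints_only = {d: productivity[d] for d in (0, 3)}
print(endpoints_only)  # {0: 70, 3: 75}

# ...while the actual (invented) peak sits in between.
best = max(productivity, key=productivity.get)
print(best, productivity[best])  # 1 85
```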
Why does there need to be a metric? People with management experience can decide these things based on intuition. It's practically impossible to quantify anything in this industry, but 'data-driven decision-making' is fetishized to the extreme. Ultimately it just becomes another buzzword and method for people with little or no experience to override people with experience.
Phrasing the whole discussion in terms of 'metrics' to justify the common-sense conclusion that junior people need more direct supervision is clearly sugarcoating it for the junior recipients for whom the message is intended.
> Phrasing the whole discussion in terms of 'metrics' to justify the common-sense conclusion that junior people need more direct supervision is clearly sugarcoating it for the junior recipients for whom the message is intended.
This line of reasoning will never not make my blood boil. "Well of course it's not data driven, but we have to pretend it is for the juniors while the seniors know the truth" - what absolute hogwash. Say what you mean, or I'll assume you believe what you say. You don't get to have it both ways. They said it's based on data, so you don't get to pretend it was intuition (on top of which, the people exercising the "intuition" have a vested interest in WFO).
> common-sense conclusion that junior people need more direct supervision is clearly sugarcoating it for the junior recipients for whom the message is intended.
I see this being repeated ad nauseam. I don't really buy it at all. I have seen junior employees who performed and evolved fine in remote settings, on teams that had effective remote collaboration.
IMO, this is a lie from people who really miss working from offices, grasping at straws to muddy the conversation with some kind of plausible deniability.
I believe it might be predicated on some mythical junior engineer who would excel in person but find themselves completely helpless out of earshot of a senior engineer.
Those who want to figure stuff out will do it. That’s largely what this job is.
> People with management experience can decide these things based on intuition. It's practically impossible to quantify anything in this industry, but 'data-driven decision-making' is fetishized to the extreme.
The answer to "it's practically impossible to quantify anything in this industry" is not "therefore we should just give up and rely on managerial intuition with zero accountability or measurement", but rather "therefore we should improve our ability to quantify things and pair that quantification and measurement with human judgment and oversight".
I agree, and I’ve played Goodhart's game enough to be fucked off with it all. Sure, track bug counts etc., but keep that stuff as lightweight as possible on the day-to-day.
Most business decisions, including this one, are made on incomplete data. Expecting a level of scientific rigor such that conclusions are unassailable is not realistic, as science moves slowly on high-certainty and business moves quickly on low-moderate certainty.
If you don’t like the conclusion your company has reached, it’s going to be a lot faster and more fruitful to change companies than to change their mind.
I love remote overall; I think it needs some amount of in-person to be most effective. I don’t blame companies for making decisions on imperfect [not “meaningless”] information as they try to figure out how to compete.
> Expecting a level of scientific rigor such that conclusions are unassailable is not realistic
A big chunk of the commenter community here is ready to hand the reins of this decision-making to AI, simply because a data-driven decision must be the correct one. No need to look too deep at the completeness of that data or the methodology under which it was collected and analyzed.
I ain't saying that the data/metrics need to be complete. I'm saying that the data's/metrics' incompleteness ought to be transparently documented if one expects to gain any useful insights from them or from conclusions derived from them.
> This is meaningless without knowing how productivity is being measured. It's very possible that said metric inherently favors in-person collaboration over remote collaboration - which would then skew the results of any further analysis of that metric.
Why is it supposed to be problematic for the metric to favor or skew towards one over the other? The goal of the metric should be to measure something the company wants to optimize, not to show that all proposals for optimizing that something are equally adept at doing so. It's possible they are measuring productivity incorrectly, but whether or not the measure favors a method isn't a signal for that.
I agree the other information is needed for the rest of us to consume the findings meaningfully though.
> Why is it supposed to be problematic for the metric to favor or skew towards one over the other?
It's akin to asking whether a weightlifter or a marathon runner is more athletic. If I'm measuring athleticism by one's max jogging distance, that'll disproportionately favor the runner. If I'm measuring athleticism by one's single-rep max deadlift, that'll disproportionately favor the weightlifter.
This ain't problematic per se; I might have good reason for measuring athleticism by a skewed metric (for example, because I'm the coach of a track & field team and therefore need runners more than I do weightlifters). However, if that was the case then I'd probably be explicit about that when presenting my findings rather than simply using running distance as a proxy for general athleticism.
Likewise, Facebook might have good reasons to use metrics that inherently prefer in-person work and/or exclude metrics that inherently prefer remote work, but without knowing what those metrics even are (let alone why they were chosen), it's impossible to say whether findings based on those metrics are actually applicable to any organization other than Meta as it exists at this very moment.
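To make the analogy concrete, here's a minimal sketch (invented numbers, hypothetical proxies) showing that the choice of proxy metric alone decides who comes out 'most athletic':

```python
# Two hypothetical athletes scored on two made-up proxy metrics.
athletes = {
    "runner":       {"max_jog_km": 42.0, "max_deadlift_kg": 90.0},
    "weightlifter": {"max_jog_km": 3.0,  "max_deadlift_kg": 260.0},
}

# Each proxy crowns a different "most athletic" candidate.
by_jog = max(athletes, key=lambda a: athletes[a]["max_jog_km"])
by_lift = max(athletes, key=lambda a: athletes[a]["max_deadlift_kg"])
print("by jogging distance:", by_jog)  # runner
print("by deadlift:", by_lift)         # weightlifter
```

A productivity metric could be doing the same kind of winner-picking between remote and in-person work, which is why knowing what was actually measured matters.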
I guess the relation I'm failing to see in this argument is why skew or no skew is supposed to be a determining factor in whether or not the metric is to be applicable outside of Meta in the first place. Say the metric weren't skewed: you still don't know it's applicable outside Meta, because you don't know what it was measuring, only that what was measured didn't favor either remote or on-site work. Say you knew what was measured was skewed: you still don't know if it's an applicable measurement outside of Meta. The point being, while you do need to understand what was measured in order to apply it to other companies, that doesn't mean the skew is what decides whether the metric is relevant. What the metric measures is solely what's relevant; the skew is just a property of the measurement, which can be present or absent regardless of whether the measurement is applicable.
Taking this to your mention of athleticism, consider two examples. The first is measuring athleticism by lowest BMI, which skews towards runners over weightlifters. Being skewed towards runners doesn't mean lowest BMI is a sensible metric for a coach to pick runners with; it just means it's an even worse way to pick weightlifters. Similarly, measuring athleticism by how many basketball shots one can make in 10 minutes may not skew towards either, but that still doesn't mean it's a good way to evaluate a runner. The only relevant question is whether the metric is one you want to optimize for. That metric may or may not be skewed towards a certain disposition, but either way, skew isn't how you decide whether the metric is one you want to optimize for.
> I guess the relation I'm failing to see in this argument is why skew or no skew is supposed to be a determining factor in whether or not the metric is to be applicable outside of Meta in the first place.
And I guess what I'm saying is that it ain't a determining factor in and of itself, but rather a tool to discern whether it is indeed applicable outside (or, for that matter, inside) Meta. If a metric does skew toward a specific form of "athleticism" or "productivity" at the expense of others, then it's worth asking whether there's a particular reason for using it vs. it being arbitrarily selected.
And if a metric doesn't skew, you still need to ask the same question anyway, because whether the metric is arbitrary or not doesn't depend on skew; it depends on the applicability.