I feel the same way. There’s a shift that happens at around 80 people where not everyone rows in the same direction. Incentives diverge because not everyone “lives and dies” together or by the same metric. By the time you reach bigco status, this is so ingrained that work becomes a series of iterated prisoner’s dilemmas.
80-100: my theory is that, as humans who lived for hundreds of thousands of years in small hunter-gatherer groups, we have a natural limit to the number of interpersonal relationships we can manage on a one-on-one level.
Any level of social complexity beyond that has to rely on systems/traditions rather than direct interpersonal relationships.
Catering to the masses means catering to system 1 thinking. System 1 thinking is extraordinarily cheap compared to system 2. When a movie has good cover art, an alluring trailer, and one name you've heard of before, it is good enough so long as you don't engage system 2 thinking. The same can be said for your Domino's argument: picking a good pizza place takes a lot of thought (deep dish vs. New York style, delivery vs. pickup, price point, etc.). Domino's is just there, in-app, and cheap.
System 2 thinking compounds. Once you've really tried great pizza, studied film, felt good product design, drunk good wine, and so on, it is hard to go back. Even when operating in system 1, once you know what makes things good, you can just feel the lack of quality. This is what some people term "snobbishness," because it can lead to turning one's nose up at something that's good enough to the untrained eye.
The minimum quality bar is a great measure of a society's system 2 quotient. The more deep thought, focus, and experience a culture has, the higher its quality bar. For instance, as a community becomes wealthier there are more Shake Shacks rather than Burger Kings, because with more money people have more free time to experience good food, leading to a system 1 preference for a higher quality bar. I'd love to see how this plays out across different communities and cultures.
My understanding was that System 1/System 2 thinking is unproven conjecture[1] that can't even be replicated[2]. It would be unwise to analyse behaviour using this framework.
I don't want to argue the basis of system 1/system 2 as described in [1], because my takeaway is more about whether they interoperate at times of decision making. The point I'm making is that system 2 is a far more costly ("effortful," in the article's terms) mechanism of decision making.
As organisms, we avoid higher-effort, higher-cost actions when they're unnecessary. An untrained lower-cost decision process (IR1 in the article, or System 1 in my usage) will result in not caring about quality. A trained lower-cost decision process will use heuristics to bias toward higher quality.
Respectfully, I don't think you took away the correct implications. Specifically in the implications section of [1]:
"The key to effective intuitive decision making, though, is to learn to better calibrate one’s confidence in the intuitive response (i.e., to develop more refined meta-thinking skills) and to be willing to expand search strategies in lower confidence situations or based on novel information."
and
"Relatedly, it also means we should stop assuming that more conscious and effortful decision-making is necessarily better than more heuristically-driven intuitive decision-making."
I would say that while the article makes very interesting objections to the S1/S2 framework, its objection is that the two are far more intertwined than the framework suggests. However, the article still very clearly agrees that S1 is lower cost than S2.
> most notably that many of the properties attributed to System 1 and System 2 don’t actually line up with the evidence, that dual-process theories are largely unfalsifiable, and that most of the claimed support for them is “confirmation bias at work”
The article absolutely does not agree that S1 is lower cost than S2, as the article does not agree that S2 exists at all.
I see, so this may be semantics then, as the article agrees with intuitive decision making. I think I see where we're saying the same things. I will consider replacing my terminology in the future, thank you!
My personal theory (which is also baseless speculation) is that we use intuition to consider the decision pipeline closed and the matter settled. We keep at it until it feels right.
In this representation, "system 1" is simply an early pipeline decision, where one intuitively feels that it is the correct decision immediately. And if a satisfying decision doesn't come up, we keep looping over the decision, adding more factors, until we finally find the factors that make our intuition agree with it and close the matter. The longer we try to find a satisfactory decision, the more factors we try out, and therefore, someone came up with "system 2", but I see "system 2" as a particularly bad misrepresentation: it is still the same system looping, we are just staying in it longer.
The source of my theory is the interesting effect of a broken intuition: OCD sufferers are unable to break from this cycle, and even when intellectually satisfied with a conclusion, they perceive their brains as "stuck" in the question.
So fundamentally, I agree with your general idea: intuition plays a major role in this system, and when it breaks, people get paralyzed in it, no matter how good the decision is intellectually. My only point is that there is no division of systems. It's one single subsystem, integrated with many others, forming a single black-box entity. The fast/slow thinking framework is a misrepresentation that doesn't really help one understand people's behaviors. It's a bad map.
What mechanism would make it possible to enforce non-paywalled, non-authenticated access to public web pages? This is a classic "tragedy of the commons" type of issue.
The AI companies are signing deals with large media and publishing companies to get access to data without the threat of legal action. But nobody is going to voluntarily make deals with millions of personal blogs, vintage car forums, local book clubs, etc., and set up a micropayment system.
Any attempt to force some kind of micropayment or "prove you are not a robot" system will add a lot of friction for actual users and will be easily circumvented. If you are LinkedIn and can devote a large portion of your R&D budget to this, you can maybe get it to work. But if you're running a blog on stamp collecting, you probably will not.
Years ago I was building a search engine from scratch (back when that was a viable business plan). I was responsible for the crawler.
I built it using a distributed set of 10 machines, each able to make ~1k requests per second. I generally distributed domains as disparately as possible to decrease the load on any one machine.
Inevitably I'd end up crashing someone's site even though we respected robots.txt, rate limited, etc. I still remember the angry mail we'd get and how hard we tried to respect it.
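To give a rough sense of the approach, here's a minimal sketch of domain sharding plus a per-domain politeness delay (the hashing scheme and the 1-second delay are illustrative assumptions, not the original design):

```python
import time
import hashlib
from urllib.parse import urlparse

NUM_MACHINES = 10        # crawler fleet size, from the setup above
PER_DOMAIN_DELAY = 1.0   # assumed politeness window between hits to one domain

def machine_for(url: str) -> int:
    """Shard by domain so all URLs on a site land on one machine,
    which can then enforce a single per-domain rate limit."""
    domain = urlparse(url).netloc
    digest = hashlib.md5(domain.encode()).hexdigest()
    return int(digest, 16) % NUM_MACHINES

class PolitenessLimiter:
    """Track the last fetch time per domain and sleep if we're too eager."""
    def __init__(self, delay: float = PER_DOMAIN_DELAY):
        self.delay = delay
        self.last_fetch: dict[str, float] = {}

    def wait(self, url: str) -> None:
        domain = urlparse(url).netloc
        elapsed = time.monotonic() - self.last_fetch.get(domain, float("-inf"))
        if elapsed < self.delay:
            time.sleep(self.delay - elapsed)
        self.last_fetch[domain] = time.monotonic()
```

Even with both of these in place, a shared host or an underprovisioned origin can still fall over, which is how the angry mail happens.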
Debrief | 2 Founding Engineers | San Francisco, Remote | Full-time | Visa sponsorship available | $130k-$170k, 0.4-1%
Hello HN, I'm Ned, cofounder of Debrief (YC W21). Our mission is to improve the future of work with asynchronous video.
Our current solution helps organizations create, collaborate on, and manage asynchronous videos via AI-driven transcription and search.
You can learn more at our careers page: https://www.getdebrief.com/careers or reach out to me directly at ned@getdebrief.com. Please let me know what interests you about the job!
This is so interesting! Yesterday while driving I noticed an unbranded sign saying "high speed internet" with an arbitrary local phone number. I presumed it could be a reseller, but my mind started turning on how I could create an ISP and what that process looks like. The next morning, I see this.
I don't think I would execute on this personally because of the support required; the spreadsheet takes the building into account but less so the maintaining. I would struggle with being hated the way people fume at ISPs when their service is impacted.
I'm certain this type of data is tracked on most devices. On an optimistic view, it drives a better customer experience: AMZN can capture trends in the data to discover things like "oh people change the page forward too frequently accidentally" and track down root causes. On a pessimistic view, it can help target you based on how long you spend reading specific content and which passages you highlight as a reader.
Generally, every product I am aware of tracks interaction-based data, such as where someone clicks or taps and the context they are in. Consider things like `utm` parameters, which are appended to most links people click to record the context of the click and what was clicked.
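As a concrete illustration, here's how those parameters look and how trivially they parse out (the URL and values here are made up; the `utm_*` names are the standard convention):

```python
from urllib.parse import urlparse, parse_qs

# A made-up link of the kind marketing/analytics tools generate.
url = "https://example.com/article?utm_source=newsletter&utm_medium=email&utm_campaign=launch"

params = parse_qs(urlparse(url).query)
context = {k: v[0] for k, v in params.items() if k.startswith("utm_")}
print(context)
# {'utm_source': 'newsletter', 'utm_medium': 'email', 'utm_campaign': 'launch'}
```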
I do not see this as sinister. I imagine somewhere in settings one can turn this feature off but I don't know for sure.
* Disclaimer: I am currently employed at a subsidiary of Amazon. These views are my own.
> "oh people change the page forward too frequently accidentally"
Would you need to log every page turn, with every book, time, and date, for something like that though? Wouldn't that be a more specific event, like "turns page forward, turns page back within x seconds"? This sounds less like a legitimate use case and more like "we don't know what we might use this data for, but it's better to have it and not need it than to need it and not have it ... who knows, maybe we can deduce some profile from knowing how quickly the user read through that chapter in that book".
This assumes the client can keep state more reasonably than a server can piece it together. A stateful event is definitely more descriptive, but it is likely more lossy.
From what I've seen, tracking simple events and then piecing them together en masse tends to be the significantly more common approach.
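To make that concrete, here's a toy sketch of stitching raw, stateless page-turn events into the "accidental turn" signal after the fact (the event shape and the 2-second window are assumptions for illustration):

```python
from datetime import datetime, timedelta

# Raw, stateless events as a client might log them: (timestamp, direction).
events = [
    (datetime(2021, 1, 1, 9, 0, 0), "forward"),
    (datetime(2021, 1, 1, 9, 0, 1), "back"),     # quick undo: likely accidental
    (datetime(2021, 1, 1, 9, 5, 0), "forward"),
]

WINDOW = timedelta(seconds=2)  # assumed threshold for an "accidental" turn

def accidental_turns(events):
    """Find forward turns immediately undone by a back turn within WINDOW."""
    return [
        (t1, t2)
        for (t1, d1), (t2, d2) in zip(events, events[1:])
        if d1 == "forward" and d2 == "back" and t2 - t1 <= WINDOW
    ]

print(accidental_turns(events))  # one hit: the 9:00:00 -> 9:00:01 pair
```

The point is that the raw events answer this question and any future one, whereas a purpose-built "accidental turn" event only answers the question someone thought to ask up front.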
Sure, I mean, it's also easier because you don't have to know the questions you might want answered.
Given the very private nature of the data ("he read Marx and Mao, and read some sections carefully!"), vacuuming up as much as possible doesn't sound like a good idea. Add to that the almost chronic inability of large corporations to protect data, and they really should start treating data collection as a liability rather than an opportunity.
I couldn't agree more. I think our entire tech interview system exists to find people who fit the role you defined above. I wrote a post on this describing my thoughts [1].
What's strange is that so many companies brag about having "The Best" technical minds working for them. They pride themselves on their engineers' pedigrees. However, once they hire "The Best," they put a ton of process in place for those engineers to go through. Effectively, this is a mechanism for predictability: it lets managers know more precisely when things will be done.
I don't see this changing in the industry too quickly, simply because once a company gets past the founding stage, it hires engineers who can close tickets and work "effectively" rather than creatively.
Reminds me a lot of OS X photo albums. Pretty cool idea! Curious about memory and performance constraints for larger GIFs, as presumably all frames must be cached.
Currently, there are some limitations on frame count, as we are trying to use resources efficiently. However, a Vine video can be converted without any problems.
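For a rough sense of why frame count matters: decoded frames are typically held uncompressed, so memory grows linearly with the number of frames. A back-of-envelope sketch, assuming the usual 4 bytes per RGBA pixel (the clip dimensions are illustrative):

```python
def decoded_frames_mb(width: int, height: int, frames: int, bytes_per_px: int = 4) -> float:
    """Approximate memory needed to cache every decoded frame of an animation."""
    return width * height * bytes_per_px * frames / (1024 ** 2)

# A Vine-sized clip: 480x480 at ~30 fps for 6 seconds -> ~180 frames.
print(f"{decoded_frames_mb(480, 480, 180):.0f} MB")  # ~158 MB
```

That's why a short Vine converts comfortably while an unbounded frame count would not.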