Aren't age limits already in the ToS of most social media platforms? If the parents/children break the ToS their accounts should be deleted and their emails or even IPs banned.
I don't really see why we need more government involvement here. It's just going to be ham-fisted and create unintended consequences like the kids in Australia having to use adult YouTube because they can't have a kids account anymore.
I agree that all the AI doomerism is silly (by which I mean those that are concerned about some Terminator-style machine uprising, the economic issues are quite real).
But it's clear the LLMs have some real value. Even if we always need a human in the loop to catch hallucinations, they can still massively reduce the amount of human labour required for many tasks.
NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.
Those aren’t mutually exclusive; something can be both useful and a con.
When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.
LLMs are useful for many things, but they’re also not nearly as beneficial and powerful as they’re being sold as. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other diseases, eradicate poverty, solve the housing crisis, fix democracy… Those claims are bullshit, so the con description applies.
Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
It would make every marketing department and basically every startup founder conmen too.
While I don’t completely disagree with that framing, it’s not really helpful.
Slogans are not promises, they are vague feelings. In the case of Coca-Cola, I know someone who might literally agree with the happiness part of it (though I certainly wouldn’t).
The promises of Theranos and LLMs are concrete measurable things we can evaluate and report where they succeed, fall short, or are lies.
Sure, but equating Theranos and LLMs seems a bit disingenuous.
Theranos was an outright scam that never produced any results, whereas LLMs might not have (yet?) lived up to all the marketing promises (you might call them slogans?) they made, but they definitely provide some real, measurable value.
These are not independent hypotheses. If (b) is true it decreases the possibility that (a) is true and vice versa.
The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people who then report overinflated metrics on the usefulness of the stuff they just bought (people don’t like to believe they were conned; cognitive dissonance).
In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of his product’s usefulness are heavily biased.
That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.
That could also apply to LLMs, that there would be a hard wall that the current approach can’t breach.
The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.
The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall, because LLMs are good at in-context learning, which does many of the same things, and they can do things like set up fine-tuning runs on themselves via a CLI.
I do think though that lack of online learning is a bigger drawback than a lot of people believe, because it can often be hidden/obfuscated by training for the benchmarks, basically.
This becomes very visible when you compare performance on more specialized tasks that LLMs were not trained for specifically, e.g. playing games like Pokemon or Factorio: General purpose LLMs are lagging behind a lot in those compared to humans.
But it's only a matter of time until we solve this IMO.
Hallucinations are IMO a hard wall. They have gotten slightly better over the years but you still get random results that may or may not be true, or rather, are in a range between 0-100% true, depending on which part of the answer you look at.
OpenAI's o3 was SOTA, and valued by its users for its high performance on hard tasks - while also being an absolute hallucination monster due to one of OpenAI's RLVR oopsies. You'd never know whether it's brilliant or completely full of shit at any given moment in time. People still used o3 because it was well worth it.
So clearly, hallucinations do not stop AI usage - or even necessarily undermine AI performance.
And if the bar you have to clear is "human performance", rather than something like "SQL database", then the bar isn't that high. See: the notorious unreliability of eyewitness testimonies.
Humans avoid hallucinations better than LLMs do - not because they're fundamentally superior, but because they get a lot of meta-knowledge "for free" as a part of their training process.
LLMs get very little meta-knowledge in pre-training, and little skill in using what they have. Doesn't mean you can't train them to be more reliable - there are pipelines for that already. It just makes it hard.
The wall is training data. Yes, we can generate more and more post-training examples. No, we can never make enough. And there are diminishing returns to that process.
I didn’t say that is the case, I said it could be. Do you understand the difference?
And if it is the case, it doesn’t immediately follow that we would know right now what exactly the wall would be. Often you have to hit it first. There are quite a few possible candidates.
And there could be a teapot in an orbit around the Sun. Do we have any evidence for that being the case though?
So far, there's a distinct lack of "wall" to be seen - and a lot of the proposed "fundamental" limitations of LLMs were discovered to be bogus with interpretability techniques, or surpassed with better scaffolding and better training.
He’s not saying there is a hard wall; he’s saying there’s a point where we’ll need new techniques or technologies, not just refinements of the current one. Less a hard barrier like the speed of light than an innovation barrier, like needing synthetic ammonia to produce industrial amounts of fertilizer to support increasing crop yields.
Pole-vaulting records improve incrementally too, and there is a finite distance left to the moon. Without deep understanding, experience, and numbers to back up the opinion, any progress seems about to reach arbitrary goals.
AI doomerism was sold by the AI companies as some sort of "learn it or you'll fall behind". But they didn't think it through, and now AI is widely seen as a bad thing by the general public (except programmers who think they can deliver slop faster). Who would be buying a $200/month sub when they get laid off? I am not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it keeps burning money at this rate.
The employer buys the AI subscription, not the employee. An employee that sends company code to an external AI is somebody looking for trouble.
In the case of contractors, the contractors buy the subscription but they need authorization to give access to the code. That's obvious if the code is the customer's property, but there might be NDAs even if the contractor owns the code.
If companies have very few employees, AI companies are expecting regular people to pay for AI access. But who would pay $200/month for the thing that took their job? With the cut-the-employees strategy, the AI companies also lose much more in revenue.
I disagree with this perspective. Human labour is mostly inefficiency from habitual repetition from experience. LLMs tend not to improve that. They look like they do but instead train the user into replacing the repetition with machine repetition.
We had an "essential" reporting function in the business which was done in Excel. All SMEs seem to have little pockets of this. Hours were spent automating the task with VBA to no avail. Then LLMs came in after the CTO became obsessed with it and it got hit with that hammer. This is four iterations of the same job: manual, Excel, Excel+VBA, Excel+CoPilot. 15 years this went on.
No one actually bothered to understand the reason the work was being done and the LLM did not have any context. This was being emailed weekly to a distribution list with no subscribers, as the last one had left the company 14 years ago. No one knew, cared, or even thought about it.
And I see the same in all areas LLMs are used. They are merely pasting over incompetence, bad engineering designs, poor abstractions and low knowledge situations. Literally no one cares about this as long as the work gets done and the world keeps spinning. No one really wants to make anything better, just do the bad stuff faster. If that's where something is useful, then we have fucked up.
Another one. I need to make a form to store some stuff in a database so I can do some analytics on it later. The discussion starts with how we can approach it with ReactJS+microservices+kubernetes. That isn't the problem I need solving. People have been completely blinded on what a problem is and how to get rid of it efficiently.
I don't think that's in any doubt. Even beyond programming, IMO especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?
NFTs were cheap to produce, and that cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.
This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.
All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.
At this point the "trick" is to scare white collar knowledge workers into submission with low pay and high workload with the assumption that AI can do some of the work.
And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase the output in a much more holistic way. Scare the workers and you end up with the spaghetti of everyone merging their crappy LLM-enhanced code.
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.
The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.
The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.
> even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks
This sounds like a lot of the work engineers do as well. We're not perfect at it (though execs aren't either), but the work you produce is expected to survive long term; that's why we spend time accounting for edge cases and so on.
Case in point; the popularity of docker/containerization. "It works on my machine" is generally fine in the short term, you can replicate the conditions of the local machine relatively easily, but doing that again and again becomes a problem, so we prepare for that (a long-horizon task) by using containers.
Some management would be cut when the time comes. Execs, on the other hand, are not there for the work; they are in place due to personal relationships, so they're impossible to fire. If you think someone like, say, Satya Nadella can't be replaced by a bot which takes different input streams and then makes decisions, then you are joking. Even his recent end-of-2025 letter was mostly written by AI.
If an AI exec reliably outperformed meatbag execs while demanding less $$$, many boards would consider that an upgrade. Why gamble on getting a rare high performance super-CEO when you can get a reliable "good enough"?
The problem is: we don't have an AI exec that would outperform a meatbag exec on average, let alone reliably. Yet.
Yeah. Obviously. Duh. That's why we keep doing it.
Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing down onto a thing I surely would have missed myself.
If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a ratio of 1:10 or more. And API prices at all major providers represent selling the model with a good profit margin.
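Back-of-envelope, with a hypothetical hourly rate (my assumption, not a quoted figure):

    # Rough cost comparison: API token cost vs. developer time saved.
    # The hourly rate is an assumed, illustrative number.
    api_cost_usd = 3.0          # approximate token cost of the debugging session
    hours_saved = 10            # estimated time saved
    assumed_hourly_rate = 80.0  # hypothetical rate for a developer who can do this

    labour_cost = hours_saved * assumed_hourly_rate   # 800.0
    print(f"Labour cost avoided: ${labour_cost:.0f}")
    print(f"Ratio vs. API cost: {labour_cost / api_cost_usd:.0f}x")  # ~267x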
So could the company hiring you to do that work fire you and just use Opus instead? If no, then you cannot compare an engineers salary to what Opus costs, because the engineer is needed anyway.
> And API costs at all major providers represent selling the model with a good profit margin.
Though we don't know for certain, this is likely false. At best, it's looking like break even, but if you look at Anthropic, they cap their API spend at just $5,000 a month, which sounds like a stop loss. If it were making a good profit, they'd have no reason to have a stop loss (and certainly not that low).
> Yeah. Obviously. Duh. That's why we keep doing it.
I don't think so. I think what is promised is what keeps spend on it so high. I'd imagine that if all the major AI companies were to come out and say "this is it, we've gone as far as we can", investment would likely dry up.
But now instead of spending 10 hours working on that, he can go and work on something else that would otherwise have required another engineer.
It's not going to mean they can employ 0 engineers, but maybe they can employ 4 instead of 5 - and a 20% reduction in workforce across the industry is still a massive change.
That's assuming a near 100% success rate from the agent, meaning it's not something he needs to supervise at all. It also assumes that the agent is able to take on the task completely, meaning he can go do something else which would normally occupy the time of another engineer, rather than simply doing something else within the same task (from the sounds of things, it was helping with debugging, not necessarily actually solving the bug). Finally, and most importantly, the 20% reduction in workforce assumes it can do this consistently well across any task. Saving 10h on one task is very different from saving 10h on every task.
Assuming all the stars align though and all these things come true, a 20% reduction in workforce costs is significant, but again, you have to compare that to the cost of investment, which is reported to be close to a trillion. They'll want to see returns on that investment, and I'm not sure a 20% cut (which, as above, is looking like a best case scenario) in workforce lives up to that.
> it can still massively reduce the amount of human labour required for many tasks.
I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. Maybe it also automates away tasks from junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Although I need to see the numbers before I believe even that.
I have a suspicion that AI is not increasing productivity by any meaningful metric which couldn’t be increased by much much much cheaper and easier means.
Yeah, but we also haven't seen what making actually decent music or movies or whatever with AI will look like. Maybe it simply won't be possible and there will not be a market for it.
But if it is possible it's probably going to be a lot more involved than just '"video of cute cartoon cat, Pixar style" into a prompt'.
Though relatively old in the AI world (2023), it's still quite interesting.
In case you can't access the article, the prompt used is:
> 35mm, 1990s action film still, close-up of a bearded man browsing for bottles inside a liquor store. WATCH OUT BEHIND YOU!!! (background action occurs)…a white benz truck crashes through a store window, exploding into the background…broken glass flies everywhere, flaming debris sparkles light the neon night, 90s CGI, gritty realism
I felt a lot safer when I was a young grad than now that I have kids to support and I can't just up and move to wherever the best job opportunity is or live off lentils to save money or whatever.
Yeah, kids change the landscape a lot. On the other hand, if you don't have any personal ties, it's easier to grab opportunities, but you are unlikely to build any kind of social network when chasing jobs all over the country/world.
Either way, there is very little to no path toward "family + place to live + stable job" model.
When I was single with no kids, I felt pretty comfortable leaving a good job to join a startup. I took a 50% pay cut to join when the risk seemed high, but the reward also seemed high.
It paid off for me, but who knows if I would have taken that leap later in life.
There must be "dozens of us" with this fear right now. I'm kinda surprised there isn't a rapid growing place for us to discuss this... (Youtube, X account, Discord place..)
I don't have a college degree either. I am about 50. I have never been unemployed and have had high paying software dev jobs my entire adult life. Your claim that the lack of degree is the only thing holding you back is very much incorrect.
I suspect the problem is elsewhere and you are unwilling or uncomfortable to discuss it.
I'm confused as to why someone who freely admits they have been broke & unemployed for 15 years feels they are qualified to provide "advice", make critical judgement calls about others and brag about their awesomeness.
>> My actual accomplishments in the world of computing ... are the stuff of legends
> "going back to school" to learn what I already know pretty damn well already, given that I've been programming since I was 8
It's small consolation if sitting in a classroom is something you truly hate, but the guys who are programming pros before they go into a CS program are very often the ones who do really well and get the most out of it.
I created my first Linux from scratch when I was a freshman in college in a third world country (not India). Fast forward a few years, and I now write Linux kernel code for a living. Not sure what you did wrong, bud, to end up miserable like this.
Protip: When you consistently present yourself as somebody with a massively inflated ego who will be a constant pain to interact with, nobody's going to hire you, skills or not.
I left high school with average results and immediately got a job as a junior web developer, and I’m nothing special. I feel there must be more to this story… You don’t come off very well in your post, I imagine it could be the same in person and perhaps therein lies the issue?
> There is MUCH you still have to learn about life.
This response, along with your OP, it’s so pretentious and condescending. It seems you feel that you’re superior to everyone intellectually. I assume that you hold the same attitude in person and this is not helping your situation.
The irony is that I’ve done exactly this. I tried to start a business in my early 20’s and failed dramatically. I stopped developing altogether for a decade while I did minimum wage jobs and struggled to find a career. I started developing again in my early 30’s and half a decade later I’m running a software business.
You may well be intelligent but severely lacking in other necessary areas. It seems it is you who has much to learn.
I'm on the flip side of this - not exactly young but no dependants, which is making me a little bit less nervous. Seems like the next 20 years will be a wild ride & it doesn't seem optional, so let's go I guess.
True. This is one of the best arguments for not having kids. I could never imagine putting myself in that uncertain situation. Much better to reduce those risks, and focus on yourself.
Having kids is a personal choice. The stress of having to support them is real and it might mean, at times, you sacrifice more than you would have without kids.
It's been entirely worth it for me and I cannot imagine my life without kids. But it's a deeply personal choice and I am not buying or selling the idea. I would just say nobody is ever ready and the fears around having them probably are more irrational than rational. But not wanting them because of how it might change your own life is a completely valid reason to not have kids.
> the fears around having them probably are more irrational than rational
My $0.02 is that if anything, the fears people have about how much their lives would be transformed are significantly lacking, and a lot of the "it's not so bad" advice is post-hoc rationalization. I mean, it's evolutionarily excellent that we humans choose to have kids, but it's very rational to be afraid and to postpone or even fully reject this on an individual basis. And as an industry and as a society, we should probably do a lot more to support parents of young children.
Ya, this is a fair callout. I meant more the fears around being a bad parent. If anything, people experiencing those fears will be fine parents because they've got the consideration to already be thinking about doing a good job for their newly born.
I mean it's useful for some things, mainly as a complement to Stack Overflow or Google.
But the hallucination problem is pretty bad, I've had it recommend books that don't actually exist etc.
When using it for studying languages I've seen it make silly mistakes and then get stuck in the typical "You're absolutely right!" loop, the same when I've asked it about how to do something with a particular Python library that turns out not to be possible with that library.
But it seems the LLM is unable to just tell me it's not possible so instead goes round and round in loops generating code that doesn't work.
So yeah, it has some uses but it feels a long way off of the revolutionary panacea they are selling it as, and the issues like hallucinations are so innate to how the LLMs function that it may not be possible to solve them.
> One change that’s likely to please almost everyone is a reduction in Europe’s ubiquitous cookie banners and pop-ups. Under the new proposal, some “non-risk” cookies won’t trigger pop-ups at all, and users would be able to control others from central browser controls that apply to websites broadly.
Truly non-risk cookies were already exempt from the cookie banner. In fact, the obnoxious consent-forcing cookie banners are themselves in violation of the law. It's ironic that instead of enforcement we dumb it all down for the data grabbers. And most of them non-European to boot, so clearly this is amazing for the EU tech ecosystem.
There's the confusion about whether ePD (which is all cookies even functional ones) was superseded by GDPR or whether it wasn't and both rules apply. Personally I think common sense is that GDPR replaced ePD or at least its cookie banner rule, but I'm also not a company with billions of euros to sue.
How can you comply with the current requirements without cookie banners? Why would EU governments use cookie banners if they are just nonsense meant to degrade approval of GDPR?
EU law requires you to use cookie banners if your website contains cookies that are not required for it to work. Common examples of such cookies are those used by third-party analytics, tracking, and advertising services.
[...] we find cookie banners quite irritating, so we decided to look for a solution. After a brief search, we found one: just don’t use any non-essential cookies. Pretty simple, really.
When I open this link I'm greeted with the cookies banner
"We use optional cookies to improve your experience on our websites and to display personalized advertising based on your online activity. If you reject optional cookies, only cookies necessary to provide you the services listed above will be used. You may change your selection on which cookies to accept by clicking "Manage Cookies" at the bottom of the page to change your selection. This selection is maintained for 180 days. Please review your selections regularly. "
By not tracking and not setting any third-party cookies. Just using strictly functional cookies is fine; put a disclaimer somewhere in the footer and explain them, as those are already allowed and cannot be disabled anyway.
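To make that concrete, here is a minimal sketch of a site that sets only a strictly necessary, first-party session cookie (assuming Flask; the route and cookie name are illustrative, not taken from any site discussed here):

    # Sketch: first-party, functional-only session cookie, no trackers,
    # so no consent banner should be needed. Names are illustrative.
    from flask import Flask, make_response, request
    import secrets

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("Hello")
        if "session_id" not in request.cookies:
            resp.set_cookie(
                "session_id",
                secrets.token_urlsafe(16),
                httponly=True,   # not readable from JavaScript
                secure=True,     # only sent over HTTPS
                samesite="Lax",  # not sent on cross-site requests
            )
        return resp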
The EU's own government websites are polluted with cookie banners. They couldn't even figure out how to comply with their own laws except to just spam the user with cookie consent forms.
By not putting a billion trackers on your site and also by not using dark patterns. The idea was a simple yes or no. It became: "yes or click through these 1000 trackers" or "yes or pay". The problem is that it became normal to just collect and hoard data about everyone.
Again, then why does the EU do this? Clearly it's not simply about eroding confidence in GDPR if the EU is literally doing it themselves.
Besides, you seem to be confusing something.
GDPR requires explicit explanation of each cookie, including these 1000s of trackers. It in no way bans these. This is just GDPR working as intended - some people want to have 1000s of trackers and GDPR makes them explain each one with a permission.
Maybe it would be nice to not have so many trackers. Maybe the EU should ban trackers. Maybe consumers should care about granular cookie permissions and stop using websites that have 1000s of them because it's annoying as fuck. But some companies do prefer to have these trackers, and GDPR requires them to confront the user with the details and a control.
No. You asked How can you comply with the current requirements without cookie banners? Not How can you have trackers and comply with the current requirements without cookie banners? And don't use dark patterns would have answered this question as well.
>No. You asked How can you comply with the current requirements without cookie banners?
Within the context of the discussion of whether it's malicious compliance or a natural consequence of the law. Obviously you could have a website with 0 cookies, but that's not the world we live in. Maybe you were hoping GDPR would have the side effect of people using fewer cookies? It in no way requires that though.
I mean just think of it this way. Company A uses Scary Dark Pattern. EU makes regulation requiring information and consent from user for companies that use Scary Dark Pattern. Company A adds information and consent about Scary Dark Pattern.
Where is the malicious compliance? The EU never made tracker cookies or cookies over some amount illegal.
> Within the context of the discussion of if its malicious compliance or a natural consequence of the law.
You ignored that I said "don't use dark patterns" would have answered the question you meant to ask.
> Obviously you could have a website with 0 cookies but thats not the world we live in. Maybe you were hoping GDPR would have the side effect of people using less cookies?
We were discussing trackers. Not cookies.
> I mean just think of it this way. Company A uses Scary Dark Pattern. EU makes regulation requiring information and consent from user for companies that use Scary Dark Pattern. Company A adds information and consent about Scary Dark Pattern.
I will not think of it using an unnecessary and incorrect analogy. And writing things like Scary Dark Pattern is childish and shows bad faith.
> Where is the malicious compliance? The EU never made tracker cookies or cookies over some amount illegal.
The malicious compliance is the dark patterns you ignored. Rejecting cookies was much more complicated than accepting them. Users were pressured to consent by constantly repeating banners. The “optimal user experience” and “accept and close” labels were misleading. These were in fact ruled not to be compliance.[1] But the companies knew it was malicious and thought it was compliance.
Ignoring Do Not Track or Global Privacy Control and presenting a cookie banner is a dark pattern as well.
They generally don't, because you don't need banners to store cookies that you need to store to have a working site.
In other words, if you see cookie banner, somebody is asking to store/track stuff about you that's not really needed.
Cookie banners were invented by the market as a loophole to continue dark patterns and bad practices. The EU is catching flak because it's extremely hard to legislate against explicit bad actors abusing loopholes in new technology.
But yeah, blame EU.
And before you go all "but my analytics is needed to get 1% more conversion on my webshop": if you have to convince me to buy your product by making the BUY button 10% larger and pulsate rainbow colors because your A/B test told you so, I will happily include that in the category "dark patterns".
Let's not deceive ourselves -- first-party analytics are much, much harder to set up, and far fewer people are trained on other analytics platforms.
They're also inherently less trustworthy when it comes to valuations and due diligence, since you could falsify historical data yourself, which you can't do with Google.
The regulation is only concerned with cookies that are not required to provide the service. It makes no differentiation between first party and third party - if you use cookies for anything optional (like analytics) you need consent. So you can have third party non-cookie analytics for example without a banner.
Do you know an analytics service that actually does this? I've seen a bunch of "consentless" analytics solutions that seem to be violating GDPR one way or another because they use the IP address as an identifier (or as part of one).
Can you actually do meaningful analytics without the banner at all? You need to identify the endpoint to deduplicate web page interactions and this isn't covered under essential use afaik. I think this means you need consent though I don't know if this covered under GDPR or ePrivacy or one of the other myriad of regulations on this.
So take the IP, browser agent, your domain name and some other browser identifiers, stick them together and run them through SHA3-256, now you have a hash you can use for deduplication. You can even send this hash to a 3rd party service.
Or assign the user an anonymous session cookie that lasts an hour but contains nothing but a random GUID.
Or simply pipe your log output through a service that computes stats of accessed endpoints.
I think this scheme still requires consent since you are processing pseudo anonymous identifiers that fall under personal information without the essential function basis. Hashing is considered insufficient under the GDPR iirc. Have you asked a lawyer about this?
> You need to identify the endpoint to deduplicate web page
You can deduplicate but you cannot store or transmit this identity information. The derived stats are fine as long as it’s aggregated in such a way that preserves anonymity
No one needs to deduplicate over a longer period than a few minutes, or a single session. If you need that, then you're doing something shady. If a user visits your site, clicks a few things, leaves and comes back two hours later, you don't need know if it's the same person or not. The goal of analytics is to see how people in general use your website, not how an individual person use your website.
So just take the IP address, browser details, your domain name, and a random ID you stick in a 30-minute session cookie. Hash it together. Now you have a token valid for 30 minutes you can use for deduplication but no way of tying it back to a particular user (after 30 minutes). And yes, if the user changes browser preferences, then they will get a new hash, but who cares? Not rocket science.
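A minimal sketch of that scheme, purely as an illustration of what is being described (whether it actually avoids the consent requirement is exactly what the reply below disputes):

    # Sketch of the described deduplication token: hash IP + browser details +
    # domain + a short-lived random session ID. Illustrative only; not a claim
    # of GDPR compliance.
    import hashlib
    import secrets

    def dedup_token(ip: str, user_agent: str, domain: str, session_id: str) -> str:
        raw = "|".join([ip, user_agent, domain, session_id])
        return hashlib.sha3_256(raw.encode("utf-8")).hexdigest()

    # session_id would live in a cookie that expires after ~30 minutes,
    # e.g. secrets.token_hex(16); once it rotates, the hash changes too.
    token = dedup_token("203.0.113.7", "Mozilla/5.0 ...", "example.com",
                        secrets.token_hex(16))
    print(token)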
> No one needs to deduplicate over a longer period than a few minutes, or a single session. If you need that, then you're doing something shady. If a user visits your site, clicks a few things, leaves and comes back two hours later, you don't need know if it's the same person or not.
Sure you do if for example you want to know how many unique users browse your site per day or month. Which is one of the most commonly requested and used metrics.
> So just take IP address, browser details, your domain name, and a random ID you stick in a 30 minute session cookie.
That looks a lot like a unique identifier which does require a user's consent and a cookie banner.
> Now you have token valid for 30 minutes you can use for deduplication but no way of tying it back to particular user (after 30 minutes)
The EU Court of Justice has ruled in the past that hashed personal data is still personal data.
> And yes, if the user changes browser preferences, then they will get a new hash, but who cares?
It will also happen after 30 minutes have passed which will happen all the time.
> Not rocket science.
And yet your solution is illegal according to the GDPR and still does not fulfil the basic requirement of returning the number of unique users per day or month.
In terms of whether the ubiquity of cookie banners is malicious compliance or an inevitable consequence of GDPR, it doesn't matter if trackers are good or necessary. GDPR doesn't ban them. So having them and getting consent is just a normal consequence.
We can say, "Wouldn't it have been nice if the bad UX of all these cookies organically led to the death of trackers," but it didn't. And now proponents of GDPR are blaming companies for following GDPR. This comes from confusing the actual law with a desired side effect that didn't materialize.
> And now proponents of GDPR are blaming companies for following GDPR.
Not really, proponents of GDPR are aware that GDPR explicitly blocking trackers would be extremely hard as there is a significant gray area where cookies can be useful but non-essential, so you'd have to define very specifically what constitutes a tracker or do a blanket ban and hurt legitimate use-cases. Both are bad.
For some reason, though, when loopholes in laws meant to make the world a better place are found and abused for profit, people think this is somehow the law-making body's mistake, rather than each individual profit-seeking, loophole-abusing entity being the problematic and blameworthy actor.
I never understand why, I guess you work somewhere that makes money off of this.
This. I don't know why there's a heavy overlap between the "GDPR didn't go far enough" people and people who haven't actually read the GDPR. I'd think they would overlap a lot with people who actually read it.
I don't think you actually need a cookie for that, technically. But I take your point.
What about trackers which they want to set immediately on page load? Just separate prompts for each seems worse than 1 condensed view. You might say "but trackers suck - I don't care about supporting a good UX for them" and it would be hard to disagree. But I'm making the point that it's not malicious compliance. It would be great if people didn't use trackers, but that is the status quo and GDPR didn't make them illegal. Simply operating as normal plus new GDPR compliance clearly isn't malicious. The reality is that cookie banners everywhere were an inevitable consequence of GDPR.
> But I'm making the point that its not malicious compliance.
It’s totally technically feasible to have a non-blocking opt-in box.
But sites effectively make a legally mandated opt-in dialog into an opt-out dialog by making it block the site. Blocking the page loading until the banner is dismissed is definitely malicious, and arguably not compliant at all.
And let's not get started on all the sites where the banner is just a non-functional smoke screen.
But some companies prefer to have trackers. They are required by GDPR to explain each cookie and offer a control for permissions. They probably had trackers before GDPR too. So how is that malicious compliance? They are just operating how they did before except now they are observing GDPR.
It sounds like maybe you just want them to ban trackers. Or for people to care more about trackers and stop using websites with trackers (thereby driving down trackers) Great. Those are all great. But none of them happened and none of that is dictated by GDPR.
You can have first-party trackers. That is not so hard. Every site unto itself is a first-party tracker, and if your developers can't build it, there are open-source solutions available to self-host.
First-party solutions still require consent, since the analytics banners are also there to enable processing of personal information in the first place (at the most primitive level, the IP address).
I can already hear big tech explaining to me, that not sending the do-not-track header must mean do-track obviously, and that I am wrong, when I complain about missing consent. And I can already see the people who have been gaslit sufficiently to believe this stuff.
Oh, but you see, this can impossibly be interpreted as not consenting to our specific tracking. Surely users mostly clicked this accidentally. I mean why would they block our tracking? It's all for a better user experience... /s
_Company goes on to put tutorial about disabling the do-not-track header on their website._
It worked to highlight the insane amount of tracking every fucking website does. Unfortunately it didn’t stop it. A browser setting letting me reject everything by default will be a better implementation. But this implementation only failed because almost every website owner wants to track your every move and share those moves with about 50 different other trackers and doesn’t want to be better.
I used to use an extension that let me whitelist which sites could set cookies (which was pretty much those I wanted to login to). I had to stop using it because I had to allow the cookie preference cookies on too many sites.
You can fix that. I use an extension called "I don't care about cookies" that clicks "yes" to all cookies on all websites, and I use another extension* that doesn't allow any cookies to be set unless I whitelist the site, and I can do this finely even e.g. to the point where I accept a cookie from one page to get to the next page, then drop it, and drop the entire site from even that whitelist when I leave the page, setting this all with a couple of clicks.
* Sadly the second is unmaintained, and lets localStorage stuff through. There are other extensions that have to be called in (I still need to hide referers and other things anyway.) https://addons.mozilla.org/en-US/firefox/addon/forget_me_not.... I have the simultaneous desire to take the extension over or fork it, and the desire not to get more involved with the sinking ship which is Firefox. Especially with the way they treat extension developers.
The only thing that works well for me is using an extension that automatically gives permissions and another that auto-deletes cookies when I close the tab.
The problem with Ublock etc. is that just blocking breaks quite a lot of sites.
The website wouldn’t inform you about which cookies are doing what. You wouldn’t have a basis to decide on which cookies you want because they are useful versus which you don’t because they track you. You also wouldn’t be informed when functional cookies suddenly turn into tracking cookies a week later.
The whole point of the consent popups is to inform the user about what is going on. Without legislation, you wouldn’t get that information.
Because it's not like the browser has two thousand cookies per website, it only has one and then they share your data with the two thousand partners server-side. The government absolutely needs to be involved.
To begin with that isn't true, because the worst offenders are third party cookies, since they can track the user between websites, but then you can block them independently of the first party cookies.
Then you have the problem that if they are using a single cookie, you now can't block it because you need it to be set so it stops showing you the damn cookie banner every time, but meanwhile there is no good way for the user or the government to be able to tell what they're doing with the data on the back end anyway. So now you have to let them set the cookie and hope they're not breaking a law where it's hard to detect violations, instead of blocking the cookie on every site where it has no apparent utility to you.
But the real question is, why does this have anything to do with cookies to begin with? If you want to ban data sharing or whatever then who cares whether it involves cookies or not? If they set a cookie and sell your data that's bad but if they're fingerprinting your browser and do it then it's all good?
Sometimes laws are dumb simply because the people drafting them were bad at it.
> If you want to ban data sharing or whatever then who cares whether it involves cookies or not?
Nobody. The law bans tracking and data sharing, not cookies specifically. People have just simplified it to "oh, cookies" and ignore that this law bans tracking.
> The law bans tracking and data sharing, not cookies specifically.
From what I understand it specifically regards storing data on the user's device as something different, and then cookies do that so cookies are different.
Again. You could literally try and read the law. After all, it's only been around for 9 years.
--- start quote ---
(1) The protection of natural persons in relation to the processing of personal data is a fundamental right.
...
(6) Rapid technological developments and globalisation have brought new challenges for the protection of personal data. The scale of the collection and sharing of personal data has increased significantly. Technology allows both private companies and public authorities to make use of personal data on an unprecedented scale in order to pursue their activities. Natural persons increasingly make personal information available publicly and globally.
...
(14) The protection afforded by this Regulation should apply to natural persons, whatever their nationality or place of residence, in relation to the processing of their personal data.
...
(15) In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used. The protection of natural persons should apply to the processing of personal data by automated means, as well as to manual processing, if the personal data are contained or are intended to be contained in a filing system.
...
(26) The principles of data protection should apply to any information concerning an identified or identifiable natural person.
...
(32) Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject's agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement. This could include ticking a box when visiting an internet website, choosing technical settings for information society services or another statement or conduct which clearly indicates in this context the data subject's acceptance of the proposed processing of his or her personal data. Silence, pre-ticked boxes or inactivity should not therefore constitute consent. Consent should cover all processing activities carried out for the same purpose or purposes. When the processing has multiple purposes, consent should be given for all of them. If the data subject's consent is to be given following a request by electronic means, the request must be clear, concise and not unnecessarily disruptive to the use of the service for which it is provided.
You keep saying to read the law, but did you? "The law literally doesn't talk about cookies." It does:
> (30) Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags.
That is why: "In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used."
That it also applies to things "such as" RFID tags isn't really that interesting. The salient part is identifiers. Because fingerprinting turns that into a mess.
Is your browser user agent string an "identifier"? It generally isn't unique, and requiring explicit consent to process it would cause a lot of trouble, but that and a few other things you could say the same thing about are collectively enough to be uniquely identifying.
Which is something different which they apparently hadn't considered and it's not clear how it's supposed to work. Do they become an identifier as soon as you have enough of them to uniquely identify someone? How do you even know when that threshold is passed? Does it require you to actually use them as an identifier, or is it enough just to have them because then they could be used retroactively? What if you provide a non-identifying subset of them to a third party in another jurisdiction who collects others from someone else and then combines them without explicitly notifying you?
> The EPR was supposed to be passed in 2018 at the same time as the GDPR came into force. The EU obviously missed that goal, but there are drafts of the document online, and it is scheduled to be finalized sometime this year even though there is no still date for when it will be implemented. The EPR promises to address browser fingerprinting in ways that are similar to cookies, create more robust protections for metadata, and take into account new methods of communication, like WhatsApp.
If the thing they failed to pass promises to do something additional, doesn't that imply that the thing they did pass doesn't already do it?
And I mean, just look at this:
> Strictly necessary cookies — These cookies are essential for you to browse the website and use its features, such as accessing secure areas of the site. Cookies that allow web shops to hold your items in your cart while you are shopping online are an example of strictly necessary cookies. These cookies will generally be first-party session cookies. While it is not required to obtain consent for these cookies, what they do and why they are necessary should be explained to the user.
> Preferences cookies — Also known as “functionality cookies,” these cookies allow a website to remember choices you have made in the past, like what language you prefer, what region you would like weather reports for, or what your user name and password are so you can automatically log in.
So you don't need consent for a shopping cart cookie, which is basically a login to a numbered account with no password, but if you want to do an actual "stay logged in with no password" or just not forget the user's preferred language now you supposedly need an annoying cookie banner even if you're not selling the data or otherwise doing anything objectionable with it. It's rubbish.
> but if you want to do an actual "stay logged in with no password"
Wouldn't that be a session cookie (which is a strictly necessary cookie for accessing a secure area) with no expiration?
> or just not forget the user's preferred language
Why would you store the language preference client side anyhow? Isn't a better place the user profile on the server? I use the same language for the same site no matter the device I am logged in from.
> Wouldn't that be a session cookie (which is a strictly necessary cookie for accessing a secure area) with no expiration?
The gdpr.eu website literally says that a cookie that allows the website to remember "what your user name and password are so you can automatically log in" is a functional cookie rather than a strictly necessary cookie.
> Why would you store the language preference client site anyhow?
You're not storing the language preference in the cookie, you're storing a cookie that identifies the user so that the server can remember their language preference.
Consider the two possible ways that this can work: 1) if the cookie identifies the user then using it for anything outside of the "strictly necessary" category requires the cookie banner, or 2) if the cookie is used for any strictly necessary purpose then you can set the cookie even if you're also using it for other purposes, in which case anyone can set a strictly necessary cookie and then also use the same cookie to do as much tracking as they want without your consent.
Both of these are asinine because if it's the first one they're putting things like remembering your language preference outside of the strictly necessary category and requiring the dumb cookie banner for that, but if it's the second one the law is totally pointless.
> The gdpr.eu website literally says that a cookie that allows the website to remember "what your user name and password are so you can automatically log in" is a functional cookie rather than a strictly necessary cookie.
But one row before it mentions "such as accessing secure areas of the site.". If the secure cookie has 12 months validity, this is basically a different way to implement "remember username/password".
Besides, all my browsers (Firefox, Chrome) remember the usernames and passwords for all the sites I access, so why are we even talking about this? Is Safari that bad that it doesn't remember your user/password (no experience with that one)?
> You're not storing the language preference in the cookie, you're storing a cookie that identifies the user
Ok, I agree that for sites without a username / password that will not work. On the other hand, personally I rarely end up on any site that is not in a language that I can read, and on top of that the browser has a language preference: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/l... . So, in practice, I think there are extremely few cases where a site requires a language cookie for an unauthenticated user.
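As an illustration, a minimal sketch of choosing the language from the browser's Accept-Language header instead of a cookie (assuming Flask/Werkzeug; the supported-language list is made up for the example):

    # Sketch: serve the page in the visitor's preferred language without
    # storing anything on their device, using the Accept-Language header.
    from flask import Flask, request

    app = Flask(__name__)
    SUPPORTED = ["en", "de", "fr"]  # illustrative list

    @app.route("/")
    def index():
        # best_match() picks the supported language the browser ranks highest.
        lang = request.accept_languages.best_match(SUPPORTED) or "en"
        return f"Serving page in: {lang}"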
> But one row before it mentions "such as accessing secure areas of the site."
Which could be read as allowing session cookies but not ones that allow you to save your login if you come back later. But it's also kind of confusing/ambiguous, which is another problem -- if people don't know what to do then what are they going to do? Cookie banners everywhere, because it's safer.
> Ok, I agree that for sites without username / password that will not work.
How would it work differently for sites with a username and password? The login cookie would still identify the user and would still be used to remember the language preference.
> allow you to save your login if you come back later.
Again, is there any browser nowadays that doesn't save the login? I don't know of any personally, but I do not know all of them. And if there are, how much market share do they have? (If I myself built a browser tomorrow without that functionality, that can't be an argument that the legislation is wrong...)
> How would it work differently for sites with a username and password?
Generally for sites where you use a username, the site will load several pieces of information from the server to display (e.g. your full name to write "Hello Mister X", etc.). In the same request you can have the user preferences (theme/language/etc.), and the local JavaScript uses them to do whatever it needs to do. Even with a cookie, there needs to be some JavaScript to take some actions, so no difference.
Or you could just redirect via a URL that has the user preferences once he logged in (ex: after site knows you are the correct user it will redirect you to https://mysite.com?lang=en&theme=dark)
There are many technical solutions; not sure why everybody is so crazy about cookies (oh, maybe they're thinking of the food! Yummy).
Actually it often is a separate cookie per tracker because that's convenient for the trackers. But the only reason they don't put in the effort to do it the way you said is that browsers don't have the feature to block individual cookies. If they did, they would.
Some browsers like Midori do the sensible thing and ask you for every cookie, whether you actually want to have it. Cookie dialogs are then entirely redundant. You can click accept all in the website, and reject all in the browser.
Not all cookies are bad for the user, for instance the one that keeps you logged in or stores the session id. Those kind were never banned in the first place.
Blocking cookies locally doesn't allow you to easily discriminate between tracking and functional cookies. And even if the browser had a UI for accepting or rejecting each cookie, they're not named such that a normal user could figure out which are important for not breaking the website, and which are just for tracking purposes.
By passing a law that says "website providers must disambiguate" this situation can be improved.
If there's no regulation, nothing stops a website from telling hundreds of third-party entities about your visit. No amount of fiddling with browser settings and extensions will prevent a keen website operator from contributing to tracking you (at least on ip/household level) by colluding with data brokers via the back-end.
> Yet, some how the vast majority of HN comments defend the cookie banners saying if you don't do anything "bad" then you don't need the banners.
There are a LOT of shades of gray when it comes to website tracking and HN commenters refuse to deal with nuance.
Imagine running a store, and then I ask you how many customers you had yesterday and what they are looking at. "I don't watch the visitors - it's unnecessary and invasive". When in fact, having a general idea what your customers are looking for or doing in your store is pretty essential for running your business.
Obviously, this is different than taking the customer's picture and trading it with the store across the street.
When it comes to websites and cookie use, the GDPR treated both behaviors identically.
Realistically, you want to know things like, how many users who looked at something made a purchase in the next 3 days? Is that going up or down after a recent change we made?
Many necessary business analytics require tracking and aggregating the behavior of individual users. You can't do that with server logs.
> Many necessary business analytics require tracking and aggregating the behavior of individual users.
Businesses existed before tracking individuals was practical. Wanting something does not make it necessary.
> Realistically, you want to know things like, how many users who looked at something made a purchase in the next 3 days? Is that going up or down after a recent change we made?
Metrics like this provided little benefit that sales figures did not already, in my experience. And tracking might be acceptable if it stopped there.
Many people want to do many things; the problem is whether we agree as a society that it is OK, considering all the implications.
I personally find the commercial targeting extremely poor. I look for things to buy and I get stupid ads which don't fit, or I bought the things and still bombarded with the ad for the same thing.
But data collection can be used for far more nefarious purposes, like political manipulation (already happening). So yes, I am willing to give up some percentage points in optimizing the commercial and advertisement process (for your example, wait 2 weeks and check the actual sales volume difference) to prevent other issues.
This isn't even about ads. It's just about basic business metrics.
And no, you can't just "wait 2 weeks and check for the actual sales volume difference". The example I gave requires individual anonymized tracking. Pretty much anything that has to do with correlations in customer behavior requires individual tracking. And that's how businesses improve.
Also, it's not just giving up "some percentage points". There are a huge number of small businesses that can only exist because Facebook ads work so well in targeting very precise customer segments who would never know about their product otherwise. Targeted advertising does actually work, and you'd be putting tons of small business owners out of work if you got rid of it.
Maybe what you say is correct, but without a reference it can also be an opinion influenced by your domain of activity.
What I see, though, is many shops closing because more and more people buy online. What I hear is people buying crap from Amazon and throwing it away quickly, or using fast fashion from the likes of Shein. Neither seems like a great outcome to me.
I did a cursory look and found this: https://www.pewresearch.org/short-reads/2024/04/22/a-look-at... , which says "The number of high-propensity business applications – those that are highly likely to turn into businesses with payrolls – remained relatively stable between 2009 and 2019". For me this does not support the idea of a "huge number" of businesses that only exist thanks to Facebook (business exits have also grown over the period; more data at https://data-explorer.oecd.org/), but of course this is an interpretation.
Okay, and why do you need to share whatever info you collect with thousands of random data "partners" if it's just for you to keep track of whatever made up thing you say you need to track? Because in reality that's what GDPR exposed, that random ecomm website selling socks or whatever is sharing everything they know about you with a billion random companies for some unknowable reason.
The funny part is that many banners are already not required today. But there has been much propaganda by adtech around it, to rile people up against tracking protections and promote their own "solutions". That's the reason you see the same 3-5 cookie banners all around the web. Already today, websites that use purely technical cookies would not actually need any banner at all.
Yes. I don't think you should have to show a popup to track the user's language preferences, whether they want a header toggled on or off, or other such harmless preferences. Yet, the EU ePrivacy directive (separately from the GDPR) really does require popups to inform users of these "cookies".
No it doesn't. A website's own preferences fall under the "necessary for site functionality" exception.
Besides how many sites actually have this as the only reason for cookies? Every time I get a new cookie banner I check it and there's always lots of data shared with "trusted partners". Even sites of companies that purely make money off their own products and services and shouldn't need to sell data. Businesses are just addicted to it.
The only provision I like is that they may only ask once every 6 months. However personally I wish that they'd make it a requirement to honour the do not track flag and never ask anything in that case. The common argument that browsers turn it on by default doesn't matter in the EU because tracking should be opt-in here anyway so this is expected behaviour. The browsers would quickly bring the flag back if it actually serves a purpose.
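As a rough illustration of what honouring that flag could look like, here is a minimal client-side sketch in TypeScript. The consent-banner function is a hypothetical stub, and the Global Privacy Control check is an assumption about a newer, related signal:

```typescript
// Check the user's opt-out signals before showing any consent UI or loading
// trackers. Both properties are read defensively since they are non-standard.
function userOptedOutOfTracking(): boolean {
  const nav = navigator as Navigator & {
    doNotTrack?: string | null;
    globalPrivacyControl?: boolean;
  };
  return nav.doNotTrack === "1" || nav.globalPrivacyControl === true;
}

function showConsentBanner(): void {
  // hypothetical: render the site's consent dialog
}

if (userOptedOutOfTracking()) {
  // Treat the signal as "reject all": no trackers loaded, and no banner either.
} else {
  showConsentBanner();
}
```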
I would, on the other hand, ask whether I should really have to set my "preferred language" on every device I log in from?! Why not store it server-side (not to mention, why not use the browser's language setting to start with)?
I do agree with you that most of the cookies we talk about are not at all "preference cookie"...
The issue was the hundreds of tracking cookies, and that websites would use dark patterns or simply not offer a "no to all" button at all (which is against the law, btw).
Most websites do. not. need. cookies.
It's all about tracking and surveillance to show you different prices on airbnb and booking.com to maximise their profits.
I think that most websites need cookies. I have a website with short stories. It lets you set font size and dark/bright theme, nothing special. Do I want to store your settings on server? No, why should I waste my resources? Just store it in your browser! Cookies are perfect for that. Do I know your settings? No, I don't, I don't care. I set a cookie, JS reads it and changes something on client. No tracking at all. Cookies are perfect for that. People just abuse them like everything else, that's the problem, not cookies.
And BTW, because I don't care about your cookies, I don't need to bother you with a cookie banner. It's that easy.
Also, if I were to implement user management for whatever reason, I would still NOT NEED to show the banner. ONLY if I shared the info with a third party would I need one. The rules are simple, yet the ways people bend them are very creative.
A cookie is something that is sent to the server, by design - that's their whole point! So if the only part of your code that needs them lives on the client, cookies are the wrong mechanism for that - use localStorage instead.
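A minimal sketch of that approach (TypeScript; the key names and CSS class are made up for illustration), where the preferences stay in the browser and are never sent with requests:

```typescript
// Store reading preferences locally instead of in cookies: nothing is attached
// to HTTP requests, so the server never sees these values.
const THEME_KEY = "reader-theme";
const FONT_KEY = "reader-font-size";

function savePreferences(theme: "dark" | "light", fontSizePx: number): void {
  localStorage.setItem(THEME_KEY, theme);
  localStorage.setItem(FONT_KEY, String(fontSizePx));
}

function applyPreferences(): void {
  const theme = localStorage.getItem(THEME_KEY) ?? "light";
  const fontSize = Number(localStorage.getItem(FONT_KEY) ?? "16");
  document.body.classList.toggle("dark", theme === "dark");
  document.body.style.fontSize = `${fontSize}px`;
}

applyPreferences();
```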
You do not need cookies for either of these. CSS can follow browser preferences, and browsers can change font sizes with zoom.
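And for the theme specifically, a minimal sketch of reading the browser's own preference without storing anything at all (plain CSS can do the same via a prefers-color-scheme media query):

```typescript
// Follow the browser/OS dark-mode preference; no cookie or localStorage entry.
const prefersDark: boolean =
  window.matchMedia("(prefers-color-scheme: dark)").matches;

document.body.classList.toggle("dark", prefersDark);
```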
I am not sure these cookies are covered by the regulations. No personal data, so not covered by GDPR. They might be covered by the ePrivacy directive (the "cookie law").
Unfortunately, because these types of preferences (font size, dark/light mode theme) are "non-essential", you are required to inform users about them using a cookie banner, per EU ePrivacy directive (the one that predates the GDPR). So if you don't use a cookie banner in this case, you are not in compliance.
That's not true. You can use those cookies, you just need to explain them somewhere on the site. No opt in required.
I talked with our then national information law official (fun fact: the same person is currently president of our country). The rule of thumb is that if you're not using your users' personal data to pay for other people's services (e.g. Google Analytics) or putting actual personal data in the cookies, you're generally fine without the banner.
Further, if you're a small shop or individual acting in good faith and somehow still violate the law, they will issue a warning first so you can fix the issue. Only blatant violations by people who should've known better get a fine instantly (that is the practice here, anyway; I assumed that was the agreement between EU information officers).
All websites need cookies, at least for functionality and for analytics. We aren't living in the mid-1990s when websites were being operated for free by university departments or major megacorps in a closed system. The cookie law screwed all the small businesses and individuals who needed to be able to earn money to run their websites. It crippled everyone but big megacorps, who just pay the fines and go ahead with violating everyone's privacy.
Functional cookies are fine. Even analytics is fine if you're using your own (though said analytics must also comply with GDPR personal data retention rules).
What is not fine is giving away your users' personal data to pay for your analytics bill.
The implementers of the banners did it in the most annoying way, so most users just accept all instead of rejecting all (because the button to reject all was hidden or not there at all). Check the Steam store for example: their banner is non-intrusive and you can clearly reject or accept all in one click.
The law wasn't poorly written, most websites just don't follow the law. Yes, they're doing illegal things, but it turns out enforcement is weak so the lawbreaking is so ubiquitous that people think it's the fault of the law itself.
> [...] most websites just don't follow the law. Yes, they're doing illegal things, but it turns out enforcement is weak so the lawbreaking is so ubiquitous [...]
I just checked the major institutional EU websites listed here[0], and every single one (e.g., [1][2][3]) had a different annoying massive cookie banner. In fact, I was impressed I couldn't find a single EU government website without a massive cookie banner.
I don't know if it is because law enforcement is so weak (or if the law itself is at fault, or something else). But it seems like something is not right (either with your argument or with the EU), given that the EU government itself engages in this "lawbreaking" (as you define it) on every single one of its own major institutional websites.
The potential reason you brought up, "law enforcement is just weak", seems like the biggest possible roast of the EU regulatory environment (which is why I don't believe it to be the real reason), given that not only do they fail to enforce it against third parties (which would be at least somewhat understandable), they cannot even enforce it on their own first-party websites (i.e., they don't even try to follow their own rules).
What do you mean? The original post mentions 1000 cookies and no button to reject them. The sites you mention have only two buttons (accept/reject), so they are following the law and not engaging in dark patterns.
That is unfortunate; the EU could well present itself as an example of how things can be done right. But incompetence and/or indifference, plus a lack of IT talent willing to work for the public sector, is also a thing in politics. It's a lost opportunity for sure.
> Attempts at "compliance" made the web browsing experience worse.
Malicious compliance made the web browsing experience worse. That and deliberately not complying by as much as sites thought they could get away with, which is increasing as it becomes more obvious enforcement just isn't there.
Because the issue is due to a failure in the law: the failure to require sites to honour the browser's "do not track" setting, which would avoid the need for these annoying pop-ups in the first place.
I'm convinced there's a psyop on this site when it comes to GDPR, and I'm only half-joking. If people would bother to read those intrusive banners, they'd notice that their info is being harvested and shared with hundreds, even thousands of "partners". In what universe is this something we should be okay with? Why exactly does some random ecommerce site need to harvest my data and share it with a bajillion "partners" of theirs? Why are we okay with that?
I hate that the psychotic data harvesting assholes behind all these dark patterns emerged victorious by just straight up lying to people and deluding them into thinking GDPR was the issue, and not them and their shitty dark pattern banners
The main problem there is soaring housing costs which have nothing to do with technology and everything to do with extremely restrictive planning regulations that make it impossible for the housing supply to keep up with population growth.
Yeah, like imagine if the LLM's don't advance that much, the agentic stuff doesn't really take off etc.
Even in this conservative case, ChatGPT could seriously erode Google Search revenues. That alone would be a massive disruption and Google wants to ensure they end up as the Google in that scenario and not the Lycos, AltaVista, AskJeeves etc. etc.
But what Google is doing is like what Firefox did when Chrome came out. Panicking.
Panicking, and therefore making horrible design and product choices.
Google has made their main search engine output utter and complete junk. It's just terrible. If they didn't have 'web' search, I'd never be able to use it.
In almost every search for the last month, normal search results in horrible matches. Switch to web? Bam! First result.
Not web? The same perfect result might be 3 or 4 pages deep. If that.
(I am comparing web results in both cases, and ignoring the also broken 80% of the pages of AI junk.)
In an attempt to compete, they're literally driving people to use ChatGPT for search in droves.
They could compete, and do so without this panicky disaster of a response.
Exclusives aren't consumer-friendly but they shift boxes. Everyone knows if you want to play a Mario game you need a Nintendo.
The exclusives ship has sailed for the Xbox now so the best they can do is try to compete with the new Steam Machine with what will essentially be a PC and allow all storefronts.
It seems Valve has gone for an entry-level machine while Xbox is going for a premium one so it'll be interesting to see how it all pans out.
Entry-level gaming PC is still quite high up there on the performance scale compared to consoles. They haven't announced a price yet but it'll hopefully be similar to current consoles on the lower storage model. Anything higher will put it in range with existing prebuilt gaming PCs.
It will be interesting to see how the market will determine whether subjective “fun” is the same in an entry-level versus a premium experience. Short of some ego boosting element, the experience is likely the same.