The most boneheaded decision, from my perspective, was their taking over the + prefix in Google search (which used to filter for results containing a term verbatim). That just positioned G+ as my enemy and gave me a strong desire for it to die. Unfortunately, they didn't bring the prefix back even after it did die. Quotes around a term do something similar, but I am still angry.
It's a dimension of neglect. If I run a service advertising itself as preventing people from harming themselves or each other (e.g. a mental health institution), then it would be criminally negligent of me to not limit people's access to sharp knives.
That is an excellent point. In my recent coursework at Penn State, there were guardrails around cheating via Honorlock. I'm guessing a motivated student could find ways around it, but the system was better than trusting students to do the right thing.
I would argue that "knowledge" is an almost meaningless concept on its own. What assessments measure is a more complex form of "competency", and the competency of being able to write an essay on a topic is different from the competency of passing an MCQ quiz about it and both are different from being able to apply it in the field.
I don't have a clear solution, other than to have the assessments depend on what we're preparing people for. As an extreme example, I don't care how good of an essay a surgeon or anesthesiologist can write if they can't apply that under pressure.
> AI will probably be smarter than any single human next year.
Arguably that's already so. There's no clear single dimension for "smart"; even within the exact sciences, I wouldn't know how to judge e.g. "Who was smarter, Einstein or von Neumann?". But for any particular "smarts competition", especially a time-limited one, I'd expect Claude Opus 4.5 and Gemini 3 Pro to get higher scores than any single human.
Hear me out: let's say that generating a new and better compression algorithm is something that might take a dedicated researcher about a year of their life, and that person is being paid to work on it, in the industry or via a grant. Is there anyone who has been running Claude Code instances for a human-year in a loop with the instruction to try different approaches until it has a better compression algorithm?
I have yet to see an "AI doesn't impress me" comment that added anything to the discussion. Yes, there is always going to be a state of the art, and things that are as yet beyond the state of the art.
The "bubble" is in the financial investment, not in the technology. AI won't disappear after the bubble bursts, just like the web didn't disappear after 2000. If anything, bursting the financial bubble will most likely encourage researchers to experiment more, trying a larger range of cheaper approaches, and do more fundamental engineering rather than just scaling.
AI is here to stay, and the only thing that can stop it at this stage is a Butlerian jihad.
I maintain that the web today is not what people thought it would be in 1998. The tech has its uses; it's just not what the snake oil sellers are making it out to be. And talking about a Butlerian jihad is borderline snake oil selling itself.
Borg logic consists of framing matters of choice as "inevitable". As long as those with power convince everyone that technological implementation is "inevitable", people will passively accept their self-serving and destructive technological mastery of the world.
The framing allows the rest of us to get ourselves off the hook. "We didn't have a choice! It was INEVITABLE!"
But history shows that it is inevitable. Can you give me an example of a single useful technology that humans ever stopped developing because of its negative externalities?
> "We didn't have a choice! It was INEVITABLE!"
There is no "we". You can call it the tragedy of the commons, or Moloch, or whatever you want, but I don't see how you can convince every single developer and financial sponsor on the planet to stop using and developing this (clearly very useful) tech. And as long as you can't, it's socially inevitable.
If you want a practice run, see if you can stop everyone in the world from smoking tobacco, which is so much more clearly detrimental. If you manage that, you might have a small chance at stopping implementation of AI.
> see if you can stop everyone in the world from smoking tobacco
This is a logical fallacy, I think: nobody needs to stop tobacco full-stop. We have been extremely successful at making it less and less incentivized and less used over time, which is the goal...
Why would you spend $200 a day on Opus if you can pay that for a month via the highest tier Claude Max subscription? Are you using the API in some special way?
The $200/month plan doesn't have limits either. They now have an overage fee you can pay in Claude Code, so once you've expended your rate-limited token allowance you can keep working and pay for the extra tokens out of an additional cash reserve you've set up.
> The $200/month plan doesn't have limits either... once you've expended your rate limited token allowance... pay for the extra tokens out of an additional cash reserve you've set up
You're absolutely right! Limited token allowance for $200/month is actually unlimited tokens when paying for extra from a cash reserve which is also unlimited, of course.
I think you may have misunderstood something here.
When paying for Claude Max even at $200/month there are limits - you have a limit to the number of tokens you can use per five hour period, and if you run out of that you may have to wait an hour for the reset.
You COULD instead use an API key and avoid that limit and reset, but that would end up costing you significantly more since the $200/month plan represents such a big discount on API costs.
As of a few weeks ago there's a third option: pay for the $200/month plan, but allow it to charge you extra for tokens when you reach those limits. That gives you the discount but means your work isn't interrupted.
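The trade-off described above can be sketched as back-of-the-envelope arithmetic. All the per-token prices and the included allowance below are hypothetical placeholders I made up for illustration, not Anthropic's actual rates; the only number taken from the thread is the $200/month subscription fee.

```python
# Hypothetical comparison: raw API billing vs. flat subscription + overage.
# Prices and the included allowance are placeholder assumptions, NOT real rates.

API_PRICE_PER_MTOK = 25.0        # $/million tokens via raw API (assumed)
SUBSCRIPTION = 200.0             # $/month flat plan (from the thread)
INCLUDED_MTOK = 50.0             # million tokens included in the plan (assumed)
OVERAGE_PRICE_PER_MTOK = 20.0    # discounted overage rate past the limit (assumed)

def monthly_cost_api(mtok_used: float) -> float:
    """Cost if every token is bought at the raw API rate."""
    return mtok_used * API_PRICE_PER_MTOK

def monthly_cost_plan(mtok_used: float) -> float:
    """Flat subscription, plus overage beyond the included allowance."""
    overage = max(0.0, mtok_used - INCLUDED_MTOK)
    return SUBSCRIPTION + overage * OVERAGE_PRICE_PER_MTOK

for mtok in (5, 50, 100):
    print(f"{mtok:>4} Mtok: API ${monthly_cost_api(mtok):,.0f} "
          f"vs plan ${monthly_cost_plan(mtok):,.0f}")
```

The shape of the result is the point, not the numbers: light users overpay on a flat plan, heavy users come out far ahead, and the overage option just extends the discounted curve past the included allowance instead of cutting you off.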
Thank you for the explanation, but I did fully understand that that was what you were saying.
What I don't fully understand is how you can characterize that as "not limited" with a straight face; then again, I can't see your face so maybe you weren't straight faced as you wrote it in the first place.
Hopefully you could see my well-meaning smile in the "absolutely right" opening, but apparently that's no longer common, so I can understand your confusion: https://absolutelyright.lol/ indicates Opus 4.5 has had it RLHF'd away.
When I said "not limited" I meant "no longer limits your usage with a hard stop when you run out of tokens for a five-hour period, like it did until a few weeks ago".
That's why I said "not limited" as opposed to "unlimited" - a subtle difference in word choice, I'll give you that.