That sounds like the market at work? Government doesn’t control what private companies stock, so it seems they’ve gotten some signal that the majority of their customers prefer energy-efficient products. If you’re a non-mainstream consumer, things are always going to be harder for you.
When you insert a giant labeling bureaucracy in the middle of it and then point to consumers doing what they would have done anyway, then no, it’s not “the market at work”.
It’s a bit like making a Department of Deliciousness, and “voluntarily” labeling every cookie sold as part of the SweetStar program. My goodness…people like certified delicious products! Let’s hire more people…
While we’re at it, why not let manufacturers reintroduce lead into paint and toys and let consumers choose what they want there too? The problem is that “consumer choice” is frequently a shield for amoral companies to take advantage of information asymmetry to externalize problems onto individuals. Individual consumers do not have time to deeply research every purchase they make and so it is not reasonable to expect them to handle these things themselves. Instead we have the Hobbesian contract where it is much more efficient to empower a government to centralize the handling of these common goals. It’s not smart or edgy to argue for the “free hand of the market” in these one-off topics, because none of these decisions are made in a vacuum but rather are part of a continuum of choices that the governed are mostly happy with (no such safety regime can ever be perfect).
I think there’s a difference in that this is about as good as LLM code is going to get in terms of code quality (as opposed to capability a la agentic functionality). LLM output can only be as good as its training data, and the proliferation of public LLM-generated code will only serve as a further anchor in future training. Humans, on the other hand, ideally learn and improve with each code review, and if they won’t, you can replace them (to put it harshly).
I did not get the feeling that the author was against AI, but rather was bemoaning that students were using it to avoid learning. Philosophy is a good example of a subject where the knowledge is a means to developing your own cohesive principles. You don’t have to ever evolve your principles beyond their organic development, but why even bother taking a philosophy class at that point?
The ideal philosophy class is probably Socratic, with direct conversation between teacher and student. But this is inefficient, so colleges settled on essays instead, where some of that conversation happened with the student themself as they worked through a comprehensive argument, and the teacher got to “efficiently” interject through either feedback or grading. This also resulted in asymmetric effort, though, and AI is good at narrowing effort dynamics like that.
The author’s point was that the student’s effort isn’t a competition against the teacher to minmax a final grade but rather part of developing their thinking, so your “day of reckoning” seems to be cheering for students (and maybe people) to progressively offload more of their _thinking_ (not just their tasks) to AI? I’d argue that’s a bleak future indeed.
Where I disagree with the author is in worrying about devaluing a college degree. It shouldn’t be necessary for many career paths, and AI will make it increasingly equivalent to having existed in some town for 4 years (in its current incarnation). I’m all for that day of reckoning, where the students going to university want to be there for the sake of learning and not for credentialing. Most everyone else will get to fast-forward their professional lives.
You would have to grade every user on every knowledge axis though. Just because someone is an expert in software doesn’t mean you should believe their takes on medicine, no matter how good faith their model interactions appear. I’d argue that coming up with an automated way to determine the objective truthfulness of information would be among the greatest creations of humanity (basically “solving” philosophy), so this isn’t a small task.
I've been thinking about how this happens with human cognitive development. There's a constant reinforcement mechanism that simply compares one's predicted reality with actual reality. The machines lack an authoritative reality.
If we had to grade the truthiness of data sources, our sight and other main senses would probably be #1. Some gossip we heard from a 6-year-old would be near the bottom.
We know how to grade these data sources based on longitudinal experience, and they are graded on multiple axes. For instance, Angela is wrong about most facts but always right about matters of the heart.
Of course. Each user input would be compared with other user input and with data already in the model. Only legitimate, cross-referenced data could be used outright. Other data could still be used but marked as “possibly controversial”. A good model should know that controversial data exists too and should distinguish it from the proper scientific data on each topic.
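Something like this rough sketch (names, fields, and thresholds are all made up, purely to illustrate the cross-referencing idea):

    # Hypothetical sketch: compare a user claim against what is already
    # corroborated; keep cross-referenced data as trusted and keep the rest,
    # but flag it as possibly controversial.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        topic: str
        text: str
        corroborating_sources: int    # independent users/datasets that agree
        conflicts_with_corpus: bool   # contradicts the existing vetted data?

    def label_claim(claim: Claim, min_corroboration: int = 3) -> str:
        """Crude triage: corroborated and consistent -> trusted; everything
        else is still usable, but carries a 'possibly controversial' marker."""
        if claim.corroborating_sources >= min_corroboration and not claim.conflicts_with_corpus:
            return "trusted"
        return "possibly controversial"

    print(label_claim(Claim("geology", "basalt is volcanic rock", 12, False)))  # trusted
    print(label_claim(Claim("vaccines", "vaccines cause X", 1, True)))          # possibly controversial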
Users can be adversarial to the “truth” (to the extent it exists) without being adversarial in intent.
Dinosaur bones are either 65-million-year-old remnants of ancient creatures or decoys planted by a God during a 7-day creation, and a large proportion of humans earnestly believe either take. Choosing which of these to believe involves a higher-level decision about fundamental worldviews. This is an extreme example, but incorporating “honest” human feedback on vaccines, dark matter, and countless other topics won’t lead to de facto improvements.
I guess to put it another way: experts don’t learn from the masses. The average human isn’t an expert in anything, so incorporating the average feedback will pull a model away from expertise (imagine asking 100 people to give you grammar advice). You’d instead want to identify expert advice, but that’s impossible to do from looking at the advice itself without giving in to a confirmation-bias spiral. Humans use meta-signals like credentialing to augment their perception of received information, yet I doubt we’ll be having people upload their CV during signup to a chat service.
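As a toy illustration of why the weighting matters (all data here is made up): averaging everyone’s feedback follows the crowd, while weighting by a credential-style trust score follows the few experts.

    # Made-up grammar feedback: (suggested correction, credential weight).
    feedback = [
        ("towards who", 0.1),   # casual users
        ("towards who", 0.1),
        ("towards who", 0.1),
        ("toward whom", 0.9),   # the lone expert
    ]

    def aggregate(items, weighted):
        scores = {}
        for answer, weight in items:
            scores[answer] = scores.get(answer, 0.0) + (weight if weighted else 1.0)
        return max(scores, key=scores.get)

    print(aggregate(feedback, weighted=False))  # "towards who"  - crowd average
    print(aggregate(feedback, weighted=True))   # "toward whom"  - expert-weighted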
And at the cutting edge level of expertise, the only real “knowledgeable” counterparties are the physical systems of reality themselves. I’m curious how takeoff is possible for a brain in a bottle that can’t test and verify any of its own conjectures. It can continually extrapolate down chains of thought, but that’s most likely to just carry and amplify errors.
Dirac’s prediction of antimatter came from purely mathematical reasoning, before any experimental evidence existed. Testing and verifying conjectures requires the ability to extrapolate beyond known data rather than from it, and the ability to discard false leads based on theoretical reasoning rather than statistical confidence.
All of this is possible in a bottle, but laughably far beyond our current capabilities.
This is a good take. What models seem to be poor at is backing out of their own thinking once they’ve gone down a path, even when they can test.
If you let a model write code, test it, identify bugs, and fix them, you get an increasingly obtuse and complex code base where errors happen more often. The more it iterates, the worse it gets.
At the end of the day, written human language is a poor way of describing software. Even to a model. The code is the description.
At the moment we describe solutions we want to see to the models and they aren't that smart about translating that to an unambiguous form.
We are a long way off from describing the problems and asking for a solution. Even when the model can test and iterate.
Same way corporations do it: they hire humans and other companies to do things. Organisations already have a mind of their own, with more drive to survive than an LLM.
Yeah, the solution isn’t divorcing risk (as communicated by cost) from reality. If the concern is usurious insurance rates, that’s where things like profit caps and other regulations come in. Society should want people to have fair insurance rates but not necessarily cheap rates.
Profit caps are a bad idea in general, but they are an especially terrible fit for companies insuring against tail risks, because you need to eke out a small profit for years or decades to hedge against the black swans with massive costs. The 2017 and 2018 wildfires wiped out _25 years_ of insurance company profits, for example: if you had said in 2013, "hey, these guys have made 20 straight years of profits, we need caps to control costs", you'd have left them insolvent against the fires.
This is all a moot point though: you cannot force companies to offer insurance. If regulations prevent them from offering policies at a profit, they just leave. Which is exactly what is happening in California (and Florida): every company is bailing out and refusing to renew policies.
It’s all in the nuance. Currently the insurance companies have too much moral hazard, as they are able to extract profits during the “good” years (like Allstate’s recent $3B stock buyback) and then deny or default during disasters. A cap on extracted profits could still allow companies to take in more than they spend and save the surplus to prepare for major catastrophes; they wouldn’t have to simply disburse those funds back to policyholders. I’m sure that idea would need more refinement, but my overall point was that our regulations should directly target the incentives we actually care about. And we have to rely more on regulation in these situations because the market can’t properly price the risk of companies disappearing during major payout events.
I’d really argue that for-profit insurance companies are a bad idea in general, but that’s a higher-level debate. There’s an interesting idea where governments handle all disaster-related insurance and are then also able to take a more comprehensive approach to management (though that’d be hard to trust in the current US political climate).
Profit caps are not the same as disallowing profit. They make sure insurance payouts are fair given the insurance premiums. Distributing “profits” back to shareholders to the point that the insurance company cannot honor policies is a disingenuous use of funds for an insurance company. You seem to think profit cap = no profit, which is not the case. It means shareholder ROI cannot take precedence over the insured’s.
Profit caps in general are a bad idea, and should only be considered in a near complete absence of competition.
In insurance the problem is even worse, because you can’t compute what a reasonable profit cap is. Because of tail risks, you often see insurance companies making a profit of $1b each year for 30 years, then suffering a loss of $40b. Looked at during the typical year you might conclude the profits are excessive, but over a long term it might become apparent that the average profit is actually zero or even negative.
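Worked out with the made-up numbers above (not real company data), the long-run picture flips sign:

    # Hypothetical numbers from the example: $1b profit per year for 30 years,
    # then a single $40b catastrophe loss in year 31 (all in $ billions).
    annual_results = [1.0] * 30 + [-40.0]

    total = sum(annual_results)              # -10.0: a net loss over 31 years
    average = total / len(annual_results)    # about -0.32 per year

    print(f"total: {total:.1f}b, average per year: {average:.2f}b")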
As is often the case, more competition and better competition policy is the solution.
Except they end up paying most of that out in stock buybacks and dividends each year, then the state has to bail them out for tens of billions after 40 years anyway, either directly by taking on the liabilities or by bailing out the homeowners after the insurance company goes bankrupt.
Insurance is an industry with great cash flow. They should be able to keep any profits they make off of investing the premiums, but not the premiums themselves. The incentives just do not line up: they siphon off the money and scream about overregulation before they need to get bailed out.
Do you have an example of the government bailing out an insurance company that couldn’t pay claims?
Or an insurance company that went bankrupt?
Insurance companies are already highly regulated (especially in CA). There are regulations around how much money has to be held in reserves to pay claims. There are regulations around which investments reserves can be held in.
Hell in CA, there are regulations around how premiums can actually increase and a mechanism for returning “excess premiums” back to policy holders.
In fact those regulations are one of the reasons insurers are leaving CA. They can’t increase premiums sufficiently to cover risk.
Yes. There were 6 insurance companies that went bankrupt in Florida in 2022. I am surprised you didn't know insurance companies go bankrupt all the time due to mismanagement.
Shouldn't competition take care of usurious rates in a relatively free and working market? That is, people will move to cheaper offerings, which are likely close to the real price.
Or coming to temporary clarity? Things like the “culture wars” are distractions pushed by the elites to keep the lower classes fighting amongst themselves and not their true enemy. But extractive robber barons are the real problem behind everyone’s life getting worse all the time, and for a brief moment everyone has seen that and been in alignment.