>Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?
If you still need a rich person to pass the test, then the test is working as intended. Whether person A is rich or person A is backed by a rich sponsor is not a material difference for the test. You are hinging too much on minute details of the analogy.
In the real world, your riches can be sponsored by someone else, but for whatever intelligence task we envision, if the machine is taking it then the machine is taking it.
>Couldn’t he give away his last dollar but pretend he’s just going to another casino?
Again, if you have $10,000 that you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.
>> Again, if you have $10,000 that you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.
So, most people can't get $1 Trillion to build a machine that fools people into thinking it's intelligent. That's probably also not a trick that will ever be repeated.
Did you pay for that AI to work for you? How much would you be willing to pay for it?
Tractors undoubtedly increase farmer efficiency, the evidence is clear to see, even when accounting for all costs necessary to design and produce tractors. There’s even room for farmers and tractor manufacturers to generate economic profit.
> Tractors undoubtedly increase farmer efficiency, the evidence is clear to see, even when accounting for all costs necessary to design and produce tractors. There’s even room for farmers and tractor manufacturers to generate economic profit.
Aren't small independent farms struggling because of debt loads and really small returns?
Is this what AI will do as well? Enable large players to consolidate while anything bigger than hobby work becomes very difficult and expensive for individuals.
Sample size of one, but I pay $20 a month for ChatGPT and I feel I get much more than that in value from it.
I would honestly probably be willing to pay $75-100 for ChatGPT if it came down to it. I feel it makes some tedious jobs a lot less horrible and in turn pays for itself.
Google does: Google Search and Maps have gotten objectively worse over the past two years. At least in Canada.
I have 38 locations in my (huge) city saved in Google maps and it breaks when I ask it to find a way from point A to point B. Works fine when logged out.
Maps also puts traffic signals where there are none, and when finding the shortest path it stopped giving weight to traffic signals. So it could route you via a city's main street instead of the freeway because it's 2 km shorter, ignoring the 9 traffic signals that waste 15 minutes (see the toy sketch after this comment). Apple Maps works fine.
Google Search on the web has adopted bad UX, and clicking a map result or a shopping item has a noticeable delay. Also, the right-click "Open in new tab" option is gone.
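To make the weighting point concrete, here is a toy sketch (my own invented numbers and cost functions, nothing from Google's actual router) of how dropping a per-signal penalty from the route cost flips which road looks "best":

```python
# Toy comparison, not real routing code: invented numbers mirroring the
# anecdote (main street ~2 km shorter, but 9 signals costing ~15 minutes).

SIGNAL_PENALTY_MIN = 15 / 9  # assumed average wait per traffic signal, in minutes

routes = {
    "main street": {"distance_km": 10, "drive_time_min": 20, "signals": 9},
    "freeway": {"distance_km": 12, "drive_time_min": 14, "signals": 0},
}

def distance_cost(route):
    # Cost that only weights distance and ignores signals entirely.
    return route["distance_km"]

def time_cost(route):
    # Cost that adds a per-signal delay on top of the drive time.
    return route["drive_time_min"] + route["signals"] * SIGNAL_PENALTY_MIN

best_by_distance = min(routes, key=lambda name: distance_cost(routes[name]))
best_by_time = min(routes, key=lambda name: time_cost(routes[name]))

print(best_by_distance)  # main street: 2 km shorter on paper
print(best_by_time)      # freeway: ~14 min vs ~35 min once signals count
```

Same roads either way; the only thing that changes is the edge weight, and that alone is enough to produce the routing behaviour described above.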
Are you saying all these "enshittification" changes are deliberate?
To play devil's advocate, ABC doesn't have a right to execute a corporate merger and this is what was threatened by the FCC. I don't know what the courts would think of this kind of argument and unfortunately we will probably not find out.
Regardless of that, it certainly seems like some kind of corruption.
There's more to it than that. It's better for the longevity of the components to be shielded, and the noise it gives off could bother you in your home, in terms of Wi-Fi, Bluetooth, etc. interference. I practice electric guitar at home and I don't want an unshielded computer near me when I'm doing that.
Actually, the entire theme of Yes Minister, one of the best parodies of how government is run, is that not a single important decision or discussion is had in a public forum. Many episodes involve burying particularly incriminating official records.
Not only that, when learning business, one comment made was:
Decisions are not made in meetings; they are made in discussions before the meetings. Going into a meeting thinking that your comments will change things is naive.
From that, the thing to be learnt is that you have to have off-the-record meetings first to convince the powers that be.
Now at least some of these meetings are recorded via WhatsApp and leak; before, they never were.
Also see how IBM and Oracle get business: they take the senior C-level managers out to lunch or golf and persuade them. They don't bother talking to the people who could evaluate whether it was a good deal technically.
The idea that models are transformative is debatable. Works with copyright are the thing that imbues the model with value. If that statement isn’t true, then they can just exclude those works and nothing is lost, right?
Also, half the problem isn't distribution, it's how those works were acquired. Even if you suppose models are transformative, you can't just download stuff from piratebay. Buy copies, scan them, rip them, etc.
It's super not cool that billion-dollar VC companies can just do that.
> In Monday's order, Senior U.S. District Judge William Alsup supported Anthropic's argument, stating the company's use of books by the plaintiffs to train their AI model was acceptable.
"The training use was a fair use," he wrote. "The use of the books at issue to train Claude and its precursors was exceedingly transformative."
I agree it is debatable, but it is not so clear-cut that it is _not_ transformative when a judge has ruled that it is.
> The idea that models are transformative is debatable. Works with copyright are the thing that imbues the model with value. If that statement isn’t true, then they can just exclude those works and nothing is lost, right?
I don't follow.
For one, I believe all works have a copyright status (under US jurisdiction; this of course differs per jurisdiction, although there are international IP laws), some just have extremely permissive licenses. Models rely on a wide range of works, some with permissive, some with restrictive licensing. I'd imagine Wikipedia and StackOverflow are pretty important resources for these models, for example, and both are licensed under CC BY-SA 4.0, a permissive license.
Second, even though that makes your claim false, dropping restrictively licensed works would of course make a dent, I'm pretty sure, though how big a dent I don't know. I don't see why this would be a surprise: restrictively licensed works do contribute value, but not all of the value. So their removal would take away some of the value, but not all of it. It's not binary.
And finally, I'm not sure these aspects solely or even primarily determine whether these models are legally transformative. But then I'm also not a lawyer, and the law is a moving target, so what do I know. I'd imagine it's less legal transformativeness and more colloquial transformativeness you're concerned about anyhow, but then these are not necessarily the best aspects to interrogate either.
Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?
Couldn’t he give away his last dollar but pretend he’s just going to another casino?
Observing someone's behavior in Vegas is just looking at a proxy for wealth, not the actual wealth.