It’s far easier to just pirate (nearly all?) GOG games. Like there are torrents with big chunks of their entire store on them, and I’ve seen allusions to an unofficial “store” that just has all(?) their games on it for free. I doubt many people are abusing the refund system because going through those steps is more work than piracy.
Sadly they don't do regional pricing at all, so the Steam price is almost half the GOG price, maybe even lower. But yeah, if you can afford GOG, it's better due to no DRM.
But you still don't have autonomous flying, even though the case is much simpler than driving: take off, ascend, cruise, land.
It isn't easy to fix autonomous driving, and not because the problems haven't been identified. Sometimes two conflicting scenarios can arise on the road where, no matter how good the autonomous system is, it won't be enough.
Though I agree that having a different kind of human in the loop instead will not make it any safer.
> But you still don't have autonomous flying, even though the case is much simpler than driving: take off, ascend, cruise, land.
Flying is actually a lot more complicated than just driving. When you're driving you can "just come to a stop". When you're flying... you can't. And a hell of a lot can go wrong.
In any case, we do have autonomous flying. They're called drones. There are even prototypes that ferry humans around.
Being unable to abort a flight with a moment's notice does add complication, but not so much that flying is "a lot more complicated" than driving. The baseline for cars is very hard. And cars also face significant trouble when stopping. A hell of a lot can go wrong with either.
It was a bit unclear from my statement before, but that's the point: something that feels easy is actually much more complicated than it looks. Think weather, runway conditions, aircraft condition, wind speed/direction, ongoing incidents at the airport, etc. Managing all those scenarios is not easy.
Similar things apply in driving too, especially with obstacles and emergencies: floods, the recent sinkhole in Bangkok, etc.
Flying is the “easy” part. There’s a lot more wood behind the arrow for a safe flight. The pilot is (an important) part of an integrated system. The aviation industry looks at everything from the pilot to the supplier of lightbulbs.
With a car, deferred or shoddy maintenance is highly probable and low impact. With an aircraft, if a mechanic torques a bolt wrong, 400 people are dead.
At least one reason for intentionally not having fully autonomous flying is that you want the human pilots to keep their skills sharp (so they are available in case of an emergency).
The Deck can run The Witcher 3 and MH: World decently (maybe with some hiccups and lower graphics settings). It shouldn't be a big problem to make games run on the Steam Deck (ignoring controller support, since that's a separate matter).
The same company that has a weird dichotomy: sometimes it uses industry-standard data formats, while its own media storage standards sometimes succeed (U-matic, Betacam, Professional Disc, Video8/Hi8) and sometimes fail (Betamax, Memory Stick). And sometimes its products accept industry-standard formats alongside proprietary ones (I have an older prosumer Sony camera that supports SD, Memory Stick Pro Duo, and a proprietary flash format that is mostly used to record two formats at once, and they made plenty of VHS VCRs).
That's highly inconvenient; most people won't bother with it.
The ~1% certainly will, though, and black-market apps and jailbroken OSes will rise.
Did you know that Proton was developed as a countermeasure against possible vendor lock-in by Microsoft? It was already anticipated that sooner or later Microsoft would want that cut.
We're in late-stage capitalism, where enshittification occurs at an alarming rate.
Yeah, I was gonna say the same thing. So in a base-6 counting system primes must be very intuitive to spot. Expanding it out to base 12 also shows that the primes always fall into 4 specific rows.
It's similar to how in base 10 all primes above 5 must have last digit 1, 3, 7, or 9. But it works slightly better in base 6 because fewer digits are candidates (2/6 ≈ 33% instead of 4/10 = 40%).
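Both last-digit claims are easy to check empirically. A minimal Python sketch using plain trial division (the helper name `is_prime` is just made up here):

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [n for n in range(6, 10_000) if is_prime(n)]

# In base 10, every prime above 5 ends in 1, 3, 7, or 9 (4 of 10 digits).
assert all(p % 10 in (1, 3, 7, 9) for p in primes)

# In base 6, only two last digits survive: 1 and 5 (2 of 6 digits).
assert all(p % 6 in (1, 5) for p in primes)
```

The asserts pass because any other last digit would force divisibility by 2 or 5 (in base 10) or by 2 or 3 (in base 6).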
Yep and you can also just keep going with this to get to the Prime Number Theorem.
If you just consider odd numbers you know that at best only half of numbers can be prime. We've ruled out 1/2 of numbers.
If you then multiply 2 × 3 to get 6, you can state that all prime numbers above 6 are of the form 6n + [1 or 5]; everything else is a multiple of 2 or 3. We've now ruled out 1/3 of the numbers in the half that remained from above, leaving 1/2 × 2/3 = 1/3 of numbers as possible primes (you could write this in unsimplified form as 2/6 to match the count above).
If you then multiply 2 × 3 × 5 to get 30, you can state that all primes above 30 are of the form 30n + [1, 7, 11, 13, 17, 19, 23, 29]; the rest are multiples of 2, 3, or 5. You've now ruled out another 1/5 of the numbers from the remaining 1/3 above, leaving 1/3 × 4/5 = 4/15 of numbers as possible primes (or 8/30 if you don't simplify the fraction, to more clearly match what we counted above).
If you continue this you get a running product of (1 − 1/p) over all the primes so far. This is Euler's product formula, whose general form ∏(1 − p^(−s)) is the reciprocal of the famous Riemann zeta function, 1/ζ(s). By Mertens' theorem this product shrinks like a constant times 1/ln(n), which lines up with the prime density given by the Prime Number Theorem https://en.wikipedia.org/wiki/Prime_number_theorem#Non-vanis... and you can intuitively understand it from what I've written above.
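The residue counts in the steps above can be verified mechanically. A small Python sketch (`allowed_residues` is a name invented here) that enumerates the surviving residues for each "wheel" and checks that their fraction equals the product of (1 − 1/p):

```python
from math import prod

def allowed_residues(ps):
    """Residues mod prod(ps) not divisible by any prime in ps."""
    m = prod(ps)
    return [r for r in range(m) if all(r % p != 0 for p in ps)]

# 6n + [1, 5]: 2 of 6 residues survive -> (1 - 1/2)(1 - 1/3) = 1/3
assert allowed_residues([2, 3]) == [1, 5]

# 30n + [1, 7, 11, 13, 17, 19, 23, 29]: 8 of 30 survive
wheel30 = allowed_residues([2, 3, 5])
assert wheel30 == [1, 7, 11, 13, 17, 19, 23, 29]

# The surviving fraction matches (1 - 1/2)(1 - 1/3)(1 - 1/5) = 4/15
assert abs(len(wheel30) / 30 - (1/2) * (2/3) * (4/5)) < 1e-12
```

Each additional prime in the wheel multiplies the surviving fraction by another (1 − 1/p), which is exactly the running product described above.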
FWIW these patterns in prime numbers, or more specifically the gaps where numbers can't possibly be prime, are extremely well understood. They were first documented by Eratosthenes in BC times, who used the above to quickly find new large prime numbers. While it's fun to look at patterns in primes and any enthusiasm should be encouraged, I will take a moment to point out that mathematicians occasionally deal with lay people who think they've stumbled on some revelatory new thing by observing these well-known patterns in primes. There's a myth that "there are no patterns in primes". But... that isn't true at all. We know there are patterns. It's the basis for prime number theory. It's been known for a few thousand years now.
LLMs need significant optimization, or we need significant improvements in computing power while keeping energy costs the same. It's similar to smartphones: at the start they weren't feasible because of limited computing power, and now we have ones that can rival 2000s notebooks.
LLMs are too trivial to be expensive
EDIT: I phrased that wrongly. What I mean is that the use cases for LLMs are trivial things, so they shouldn't be expensive to operate.
Looking up a project on github, downloading it and using it can give you 10000 lines of perfectly working code for free.
Also, when I use Cursor I have to watch it like a hawk or it deletes random bits of code that are needed or adds in extra code to repair imaginary issues. A good example was that I used it to write a function that inverted the axis on some data that I wanted to present differently, and then added that call into one of the functions generating the data I needed.
Of course, somewhere in the pipeline it added the call into every data generating function. Cue a very confused 20 minutes a week later when I was re-running some experiments.
Are you seriously comparing downloading static code from github with bespoke code generated for your specific problem? LLMs don't keep you from coding, they assist it. Sometimes the output works, sometimes it doesn't (on first or multiple tries). Dismissing the entire approach because it's not perfect yet is shortsighted.
Cheaper models might be around $0.01 per request, and that's not subsidized: we see a lot of different providers serving open-source models with quality similar to proprietary ones. On-device generation is also an option now.
For $1 I'm talking about Claude Opus 4. I doubt it's subsidized - it's already much more expensive than the open models.
Thousands of lines of perfectly working code? Did you verify that yourself?
Last time I tried, it produced slop, and I was extremely detailed in my prompt.
Well, recently Cursor took heat for raising prices and having opaque usage limits, while Anthropic's Claude was reported to have gotten worse due to optimization. IMO the current LLMs are not sustainable, and prices are expected to increase sooner or later.
Personally, until models comparable to Sonnet 3.5 can be run locally on a mid-range setup, people need to be wary that the price of LLMs can skyrocket.
You can already run a large LLM (on the order of Sonnet 3.5) locally on CPU with 128 GB of RAM, which costs under 300 USD and can be partly offset by swap space. Obviously response speed is going to be slower, but I can't imagine people paying much more than 20 USD just to avoid waiting 30-60 seconds longer for a response.
And obviously consumer hardware is already being more optimized for running models locally.
Imagine telling a person from five years ago that the programs that would basically solve NLP, perform better than experts at many tasks and are hard not to anthropomorphize accidentally are actually "trivial". Good luck with that.
There is a load-bearing “basically” in this statement about the chat bots that just told me that the number of dogs granted forklift certification in 2023 is 8,472.
Sure, maybe "solving NLP" is too great a claim to make. It is still not at all ordinary: beforehand we could not answer referential questions algorithmically, we could not extract information from plain text into custom schemas of structured data, and context-aware machine translation was really unheard of. Nowadays LLMs can do most of these tasks better than most humans in most scenarios. Many of the NLP questions I find interesting reduce to questions of the explainability of LLMs.
"Hard not to anthropomorphize accidentally" is a you problem.
I'm unhappy every time I look in my inbox, as it's a constant reminder there are people (increasingly, scripts and LLMs!) prepared to straight-up lie to me if it means they can take my money or get me to click on a link that's a trap.
Are you anthropomorphizing that, too? You're not gonna last a day.
I didn't mean typical chatbot output, these are luckily still fairly recognizable due to stylistic preferences learned during fine-tuning. I mean actual base model output. Take a SOTA base model and give it the first two paragraphs of some longer text you wrote, and I would bet on many people being unable to distinguish your continuation from the model's autoregressive guesses.
It still doesn't pass the Turing test, and it's not close. Five-years-ago me would be impressed, but still adamant that this is not AI, nor is it on the path to AI.
Calling LLMs trivial is a new one. Yeah, just consume all of the information on the internet and encode it into a statistical model. Trivial, a child could do it /s