How about the unabashed raiding of the Commons? Scraping any and all websites over and over so fast that it kills small servers? Meddling in the government, lobbying for regulatory capture. Buying up enough future RAM production to quintuple prices for consumers and lock out competitors. All the copyright stuff. The shift from nonprofit to for-profit, the whole "open" part of OpenAI.
Or perhaps the extremely explicit promise to put all of us out of work forever.
Have you actually heard anything at all about this company from the real world?
I sort of thought that too; we've known it can cause myocarditis for years, and it was the basis for people avoiding the mRNA vaccine (i.e. the "clot shot"). Still, I'm happy to see people continuing to research this instead of just avoiding politically tense fields altogether.
It's not just a human thing; people who study wolves find they maintain surprisingly strict borders between different packs, and this behavior continues through a lot of other mammals and even some smaller animals like certain birds and insects.
I had that thought too; the author claims Tor suffers from use-after-free bugs, but like, really? A 20-year-old code base with a massive, supportive following suffers from basic memory issues that can often be caught with linters or dynamic analysis? Surely there are development issues I'm not aware of, but I wasn't really sold by the article.
That can easily happen in C programs. Some edge case doesn't really happen unless you specifically craft inputs for it, or even sequences of inputs, and simply no one was aware of it for those 20 years, or no one tried to exploit that specific part of the code. With C you are never safe, unless you somehow proved the code to be free of such bugs.
This is a bit of an exaggeration. Many types of bugs can also happen with Rust, and if you use unsafe or have some dependency that uses it, then memory safety bugs too. At the same time, it is possible to reduce the probability and impact of such bugs in C code in a reasonable way even without formal verification.
Does every discussion of Rust and C need this recurring subthread? It is approaching Groundhog Day levels of repetition. Yes, `unsafe` code can have memory safety bugs. Yes, `unsafe` doesn't automatically mean the code is unsound. Yes, a sufficiently advanced C developer can have a sufficiently advanced C codebase with a reduced probability of bugs that might even be better than an insufficiently advanced Rust developer's. Yes, regular safe Rust isn't the same as formal verification. Yes, Rust doesn't catch every bug.
On the other hand, most developers have no need to write `unsafe` Rust. The same tools used for static and dynamic analysis of C codebases are available for Rust (ASan and friends), and it is a good idea to use them, plus Miri, when writing `unsafe`.
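As a concrete example, here's a tiny toy sketch (mine, obviously not from any real codebase) of the kind of `unsafe` bug that `cargo miri run` flags as undefined behaviour even though a normal build may appear to work fine:

```rust
fn main() {
    // The Vec is dropped at the end of this block, so `ptr` dangles.
    let ptr = {
        let v = vec![1u8, 2, 3];
        v.as_ptr()
    };
    // Use-after-free: a normal build may happily print something here,
    // but Miri reports the read from freed memory as undefined behaviour.
    let first = unsafe { *ptr };
    println!("{first}");
}
```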
The reason I'm replying is that "the impact of Rust on memory safety" is a conversation that always gets such an outsized amount of ink that it drains focus away from other things. I would argue that sum types and exhaustive pattern matching are way more important for minimizing logic bugs, even if they aren't enough. You can still have a directory traversal bug letting a remote service write outside of the directory your local service meant to allow. You can still have TOCTOU bugs. You can still have DoS attacks if your codebase doesn't handle all cases carefully. Race conditions can still happen. Specific libraries might be able to protect from some of these, and library reuse increases the likelihood of these being handled sanely (but doesn't ensure it). Rust doesn't protect against every potential logic error. It never claimed to, and arguing against that is arguing against a strawman. What safe Rust does claim is no memory safety bugs, no data races, and language and tooling features to model your business logic and deal with complexity.
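To make the sum-types point above concrete, a generic little example (mine, not from the article) of what exhaustive matching buys you:

```rust
// Adding a new variant later breaks the build at every `match` that forgot to
// handle it, instead of silently falling through at runtime.
enum Request {
    Read { path: String },
    Write { path: String, data: Vec<u8> },
    Shutdown,
}

fn describe(req: &Request) -> String {
    match req {
        Request::Read { path } => format!("read {path}"),
        Request::Write { path, data } => format!("write {} bytes to {path}", data.len()),
        Request::Shutdown => "shutdown".to_string(),
        // Adding e.g. `Request::Delete { .. }` to the enum would make this
        // match a compile error until the new case is handled explicitly.
    }
}

fn main() {
    println!("{}", describe(&Request::Shutdown));
}
```

That's a logic-bug guardrail that has nothing to do with the borrow checker.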
I might make such comments as long as others continue to make statements about Rust vs. C which I think are exaggerated. As long as people make such statements, it is obviously not a strawman.
I felt the same way when I read the bold part that says "But that C codebase is an issue", so I quickly checked the public vulnerability databases and couldn't find a single serious vulnerability in the past 7 years.
Admittedly I stopped after going through a bunch of useless stuff related to CVE-2017-8823 (which was initially reported as remotely exploitable with no proof at all).
I went through the tor repository (not vidalia though) and read a bunch of conversations about some of the memory-related bugs, but none of those were exploitable either (exploitable as in remote execution, not a DoS), and most of the (not so many) bugs were actually logic bugs.
I really don't care what they decide to do with their project and honestly anything that can potentially improve the security of such a system is fine by me but I really think they're doing themselves and the language a disservice by communicating the way they do.
Also, as a side note, even with a C codebase there is SO MUCH you could (and should) do to minimize the impact of a vulnerability, so the fact that some choose to present a rewrite in a different language as the only fix is not even funny.
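As one example of what I mean, a rough Linux-only sketch (uids are made up, error handling trimmed) of the classic privilege-separation move that bounds the blast radius of a memory bug:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static void drop_privileges(const char *jail_dir, uid_t uid, gid_t gid) {
    /* Lock the process into an empty directory so a compromised parser
     * cannot wander the filesystem. Requires starting as root. */
    if (chroot(jail_dir) != 0 || chdir("/") != 0) {
        perror("chroot");
        exit(EXIT_FAILURE);
    }
    /* Order matters: drop the group first, then the user. */
    if (setgid(gid) != 0 || setuid(uid) != 0) {
        perror("drop ids");
        exit(EXIT_FAILURE);
    }
}

int main(void) {
    /* Hypothetical unprivileged uid/gid ("nobody"); a real daemon would
     * look these up instead of hard-coding them. */
    drop_privileges("/var/empty", 65534, 65534);
    /* ... only now touch untrusted input, with far less to lose ... */
    puts("running unprivileged");
    return 0;
}
```

Layer seccomp filters, hardening flags, and process separation on top of that and a single memory bug no longer hands over the whole box.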
And of course, "impossible to refactor" is just very deep in bullshit territory. "More fun to write new code" would probably be more honest, and the Rust proponents created a marketing narrative that allows them to do this while pretending (and probably also believing themselves) to be doing a good thing.
If a codebase is being maintained and extended, it's not all code with 20 years of testing.
Every change you make could be violating some assumption made elsewhere, maybe even 20 years ago, and subtly break code at a distance. C's type system doesn't carry much information, and is hostile to static analysis, which makes changes in large codebases difficult, laborious, and risky.
Rust is a linter and static analyzer cranked up to maximum. The whole language has been designed around having the information necessary for static analysis easily available and reliable. Rust is built around disallowing or containing coding patterns that create dead ends for static analysis. C never had this focus, so even trivial checks devolve into whole-program analysis and quickly hit undecidability (e.g. in Rust, whenever you have a &mut reference, you know for sure that it's valid, non-null, initialized, that no other code anywhere can mutate it at the same time, and that no other thread will even look at it. In C, when you have a pointer to an object, eh, good luck!)
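A tiny sketch of what that `&mut` guarantee looks like in practice:

```rust
fn add(total: &mut i64, x: i64) {
    // `total` is guaranteed valid, non-null, initialized, and exclusively ours.
    *total += x;
}

fn main() {
    let mut total = 0i64;
    let exclusive = &mut total;
    // let peek = &total; // rejected: `total` is already mutably borrowed,
    //                    // so no other code can observe it mid-update
    add(exclusive, 41);
    add(exclusive, 1);
    println!("{total}"); // the mutable borrow has ended, so reading is fine again
}
```

The C equivalent with a plain `int64_t *` gives the callee none of those guarantees without whole-program reasoning.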
Grok 3 and 4 scored at the bottom, only above gpt-4o, which I find interesting, because there was such a big pushback on reddit when they got rid of 4o due to people having emotional attachments to the model. Interestingly, the newest models (like gemini 2.5 and gpt 5) did the best.
They completely revolutionized laptop processors, were the first to put meaningful health data in watches, and created the first good bluetooth earbuds, but I guess they don't do things anymore.
> They completely revolutionized laptop processors
Tough love: no, they didn't. 99.9% of consumers simply can't detect a performance difference between an M4 Air and a junky Asus box (and the ones who can will announce that games run much better on the windows shipwreck!), and while the Air has a huge power delta, no one cares because the windows thing still lasts for 6+ hours.
Apple absolutely ran ahead of the industry technically, by a shocking amount. But in a commoditized field that isn't sensitive to quality metrics, that doesn't generate sales.
There's a reason why the iPhone remains the dominant product but macs are stuck at like 9% market share, and it's not the technology base, which is basically the same between them.
Laptops are done, basically. It's like arguing about brands of kitchen ranges: sure, there are differences, but they all cook just fine.
> Tough love: no, they didn't. 99.9% of consumers simply can't detect a performance difference between an M4 Air and a junky Asus box (and the ones who can will announce that games run much better on the windows shipwreck!), and while the Air has a huge power delta, no one cares because the windows thing still lasts for 6+ hours.
This is wildly, comically untrue in my experience: all of the normal people I know loooooove how fast it is and charging only a few times a week. It was only the people who self-identify as PC users who said otherwise, much like the Ford guys who used to say Toyotas were junk rather than admit their preferred brand was facing tough competition.
Your "normal people" are mac owners, and your other group is "PC users". You're measuring the 0.1%! (Which, fine, is probably more like 15% or whatever. Still not a representative sample.) You're likely also only sampling US consumers, or even Californians, and so missing an awful lot of the market.
Again, real normal people can't tell the difference. They don't care. And that's why they aren't buying macs. The clear ground truth is that Macintosh is a lagging brand with poor ROI and no market share growth over more than a decade. The challenge is explaining why this is true despite winning all the technical comparisons and being based on the same hardware stack as the world-beating iOS devices.
My answer is, again, "users don't care because the laptop market is commoditized so they'll pick the value product". You apparently think it's because "users are just too dumb to buy the good stuff". Historically that analysis has tended to kill more companies than it saves.
> Your "normal people" are mac owners, and your other group is "PC users"
No. Remember that Apple sells devices other than Macs: they were all non-IT people who liked their iPhones and figured they'd try a Mac for their next laptop, and liked it. One thing to remember is that Windows is a lot less dominant when you're looking at what people buy themselves as opposed to what an enterprise IT department picked out. There are a ton of kids who start with ChromeOS or iPads, got a console for gaming, and don't feel any special attraction to Windows since everything they care about works on both platforms.
> You apparently think it's because "users are just too dumb to buy the good stuff".
Huh? Beyond being insulting, this is simply wrong. My position is that people actually do consider fast, silent, and multi-day battery life as desirable. That’s not the only factor in a buying decision, of course, but it seems really weird not to acknowledge it after the entire PC industry has spent years in a panic trying to catch up.
Best I can tell you're arguing that 9% market share by units sold is some kind of failure. Now go look at who has the highest market share by revenue. Hint: it's a fruit company.
This whole take might make sense if Apple didn’t double their laptop market share from like 10% to 20% when the M1 series came out, which actually happened.
That's kind of a weird one because the PC market has notably regressed there over the past few years. Other than the Surface Pro 12 there've been no fanless PC laptops released since 2022-ish, when there used to be dozens.
On a technical basis, fanless PC laptops released now would be better than the ones in 2022, simply because the 2022 crop relied on a moribund set of CPUs (Snapdragon SQ1, Amber Lake, etc.). You could release a lineup now that would be broadly competitive with at least the M1, but it doesn't seem to be a market segment that PC OEMs are interested in.
Right, so, a K-12 education-oriented PC with an Intel N-series chip, about 1/3 as fast as what you get with an M4 (or worse).
When I asked my snarky question I was really talking about "fanless laptops that someone would actually want to use and get some serious use out of."
The regression of the PC market happened because the PC makers didn't see the ARM train coming from a million miles away and just sat there and did nothing. They saw smartphones performing many times more efficiently than PCs and shrugged it off.
Meanwhile, Apple's laptop marketshare has purportedly doubled from 10% to 20% or perhaps even higher since the M1 lineup was released.
I say this as someone who actually moved away from Apple systems to a Linux laptop. Don't get me wrong, modern Intel and AMD systems are actually impressively efficient and can offer somewhat competitive experiences, but the MacBook Air as an every-person's machine is really tough to beat (consider also that you could get a MacBook Air M2 for $650 during the most recent Black Friday sales, and you'd have a really damn hard time finding any sort of PC hardware that's anywhere near as nice, never mind matching it on performance/battery life).
Yeah, like we're in agreement about the current state of the market, I just don't think it has to be that way. The Surface Pro 12 is fanless, so presumably anyone else could make a fanless Snapdragon laptop if they wanted to. (My daily driver work laptop is Windows-on-ARM, and most everything works pretty well on it.)
I believe the whole Vivobook Go line is fanless, actually.
But again, the point isn't to get into a shouting match over whose proxied anatomy is largest. It's to try to explain why the market as a whole doesn't move the way you think it should. And it's clearly not about fans.
I love the idea of using Go for games, but goroutines and channels aren't really low-enough latency to be used in games. In particular, ebiten, one of the largest Go game engines, doesn't use a single goroutine under the hood, presumably for this very reason. Attempting to use channels and such in my own project (https://thomashansen.xyz/blog/ebiten-and-go.html) left me with about 70% CPU utilization that I couldn't get rid of.
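For anyone curious, a rough micro-benchmark sketch (mine, not from the blog post, and not how ebiten structures its loop; numbers vary wildly by machine) of why a per-frame channel round-trip hurts compared to a plain call:

```go
package main

import (
	"fmt"
	"time"
)

func updateDirect(state *int) { *state++ }

func main() {
	const frames = 1_000_000

	// Direct call in the game loop: no scheduler or synchronization involved.
	state := 0
	start := time.Now()
	for i := 0; i < frames; i++ {
		updateDirect(&state)
	}
	fmt.Println("direct:  ", time.Since(start), "state =", state)

	// Channel round-trip per frame: hand the update to a worker goroutine,
	// then block on the reply, paying for two channel operations every frame.
	req := make(chan struct{})
	resp := make(chan int)
	go func() {
		s := 0
		for range req {
			s++
			resp <- s
		}
	}()
	start = time.Now()
	last := 0
	for i := 0; i < frames; i++ {
		req <- struct{}{}
		last = <-resp
	}
	close(req)
	fmt.Println("channels:", time.Since(start), "state =", last)
}
```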
A legitimate point: there are lots of performance and fine-grained changes you can make, and it's a simple, common language many people use. Perhaps we could realize some of these benefits from a simple, fast language.
> Or hell, why not do it in x86 assembly?
A terrible take imo. This would be impossible to debug, and it's complex enough that you likely won't see any performance improvement from writing it in assembly. It's also not portable, meaning you'd have to rewrite it for every architecture and OS you want to target.
I think there's an argument that if machines are writing code, they should write in a machine-optimized language. But even accepting that logic, I don't want to spend a bunch of time and money writing for multiple architectures, or debugging assembly when things go wrong.
If the boosters are correct about the trajectory of llm performance, these objections do not hold.
Debugging machine code is only bad because of poor tooling. Surely if vibe coding to machine code works we should be able to vibe code better debuggers. Portability is a non issue because the llm would have full semantic knowledge of the problem and would generate optimal, or at least nearly optimal, machine code for any known machine. This would be better, faster and cheaper than having the llm target an intermediate language, like c or rust. Moreover, they would have the ability to self-debug and fix their own bugs with minimal to no human intervention.
I don't think there is widespread understanding of how bloated and inefficient most real world compilers (and build systems) are, burning huge amounts of unnecessary energy to translate high level code, written by humans who have their own energy requirements, to machine code. It seems highly plausible to me that better llms could generate better machine code for less total energy expenditure (and in theory cost) than the human + compiler pair.
Of course I do not believe that any of the existing models are capable of doing this today, but I do not have enough expertise to make any claims for or against the possibility that the models can reach this level.
It's cool to see how quickly the industry has been shifting to F Prime; cFS is great but it's run its course. Unfortunately F Prime isn't quite as advanced as we may want (in terms of data throughput or modern features), but it is a great step forward in an industry that's often scared or unable to produce open source software.
I'm just generally impressed by their community efforts vs cFS. They really do their development on GitHub: not just a place to push out source, but issues, PRs, and discussions.
FPrime has some rough edges, and it takes a bit to get used to, but the team's interest in continually improving the core for a wide range of users is the differentiator.
What are you talking about? I haven't heard anything negative about them other than generic "things are changing" grumbling