ActorNightly's comments

>affordability

This 1000%.

Electric cars are supposed to be simple. Give me something in the shape of a Civic, with the engine replaced by a motor and a battery good for 150 miles, and sell it for $10-12k new. Don't even need an entertainment cluster; give me a place to put a tablet or a phone and just have a Bluetooth speaker.

Instead, we are getting these boutique, expensive vehicles packed full of tech, but in the end they still fundamentally suck as cars compared to gas alternatives, especially hybrids. I got a Prius Prime for my wife last year, and the car is way better than any EV on the market in terms of usability. Driving to work and back can be done entirely in EV mode, and then when you wanna go somewhere farther, you can keep the car above 80 mph easily and get there faster without worrying about where to charge.


> Electric cars are supposed to be simple. Give me something in the shape of a Civic, with the engine replaced by a motor and a battery good for 150 miles, and sell it for $10-12k new. Don't even need an entertainment cluster; give me a place to put a tablet or a phone and just have a Bluetooth speaker.

I think this is more or less the pitch behind Slate (https://www.slate.auto/en), though it's more of a truck/SUV form factor.


Also the Dacia Spring is exactly that.

Slate is nowhere near cheap. $27k base with hand-crank windows? No thanks.

Unfortunately federal standards now require the backup camera, so the entertainment cluster comes along basically for free from that.

> Don't even need an entertainment cluster; give me a place to put a tablet or a phone and just have a Bluetooth speaker.

Illegal - a backup camera is required. Speakers probably too, for alerts. Also, you are super naive if you think that's where the actual cost is.


Most of the cost is in development and prototyping that the company has to make up for, followed by the battery, as those have a roughly set $/kWh price (at a ballpark $100/kWh, a 50 kWh pack alone is around $5,000).

This is why the rest of the car has to be an already proven platform that is cheap to make.


Good. Let this version of internet be locked down and censored.

If people care enough, they will build a new internet.


Guifi.net and the rest of the meshnets. Also Yggdrasil, not for anonymity but for availability.

When we were growing up, the internet was for smart people. Chat rooms and video games were for "nerds"; the "cool" people all hung out in person.

When someone wanted to do something counter-culture (e.g. the *chan websites), there was actually a shared interest behind it. People would spend time making content and actually doing things on the web.

These days, the internet is so ubiquitous that the majority of users are simply consumers. There is no drive to build anything. Modern-day kids aren't going to spend time figuring out how to get around social media bans with technology, because most internet users simply don't care enough to organize and build something.


>And this is further normalising the government making decisions about speech where they have every incentive and tendency to shut down people who tell inconvenient and important truths.

You really should think about how idiotic this libertarian talking point is.

It would be valid if you had a populace that was educated (implying that when people heard the inconvenient truths, they could parse fact from fiction and not be ideologically driven), combined with a tyrannical government in power that was afraid of the general populace knowing that information and starting a revolt.

This situation is pretty much impossible. How could an educated populace elect that government in the first place? If the population was dumb and elected a fascist government (e.g. the USA), they would just ignore anyone speaking inconvenient truths (e.g. how MAGA is blind to all the stuff that is going on).

Secondly, information dissemination is pretty much impossible to stop these days with everyone being on the internet all the time.

The only people who complain about government silencing them these days are racists who wanna push some racist or "anti-woke" narrative, or the brainrotted people like anti-vaxxers. Because in their mind, they live in this false reality where they believe that everyone is brainwashed by the evil government and they are the actually "woke" ones.


> The agent forgets to free memory just like a human would and has to go back and fix it later.

I highly recommend people learn how to write their own agents. It's really not that hard. You can do it with any LLM, even ones that run locally.

E.g. you can automate things like checking for memory freeing.
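As a sketch of what that automation can look like: the loop below is illustrative only; `call_llm` is a placeholder for whatever chat API you use (a hosted provider, a local ollama server, etc.), and the leak pre-check is deliberately crude.

```python
# Minimal agent-loop sketch: run a cheap deterministic check, and if it
# fails, ask the model to patch the code, then re-check. `call_llm` is a
# stand-in for any chat completion API.
import re

def call_llm(prompt: str) -> str:
    # Placeholder: in practice, send `prompt` to your model's chat endpoint.
    raise NotImplementedError

def unfreed_allocs(c_source: str) -> int:
    """Crude pre-check: number of malloc() calls minus free() calls."""
    mallocs = len(re.findall(r"\bmalloc\s*\(", c_source))
    frees = len(re.findall(r"\bfree\s*\(", c_source))
    return mallocs - frees

def review_memory(c_source: str, llm=call_llm, max_rounds: int = 3) -> str:
    """Loop: if the pre-check fails, ask the model for a corrected version."""
    for _ in range(max_rounds):
        if unfreed_allocs(c_source) <= 0:
            return c_source
        c_source = llm(
            "Every malloc() below must have a matching free(). "
            "Return the corrected C source only:\n" + c_source)
    return c_source
```

A real agent would replace the regex check with cppcheck or valgrind output fed back into the prompt; the loop structure stays the same.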


Why would I want to have an extra thing to maintain, on top of having to manually review, debug, and write tests for a language I don't like that much?

You don't have to maintain it. LLMs are really good at following directions.

I have a custom agent that takes Python code, translates it to C, does a refactoring pass to include a mempool implementation (so that memory is allocated once at the start of the program, and chunks are grabbed out of the mempool instead of calling malloc), runs cppcheck, uploads to a container, and runs it under valgrind.

Been using it since ChatGPT3 - the only updates I made were API changes to call different providers. Doesn't use any agent/MCP/tools machinery either, pure chat.
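Purely for illustration, the mempool idea mentioned above (allocate one buffer up front, hand out chunks, and reset in one shot instead of per-object frees) can be sketched like this; the actual refactor described would of course emit C, and this toy bump-allocator is my own example, not the commenter's code:

```python
# Toy bump-allocator sketch of a mempool: one upfront buffer, allocations
# carve sequential chunks out of it, and "freeing" is a single reset.
class MemPool:
    def __init__(self, size: int):
        self.buf = bytearray(size)   # the single upfront allocation
        self.offset = 0              # next free byte

    def alloc(self, n: int) -> memoryview:
        """Hand out the next n-byte chunk, or fail if the pool is spent."""
        if self.offset + n > len(self.buf):
            raise MemoryError("pool exhausted")
        view = memoryview(self.buf)[self.offset:self.offset + n]
        self.offset += n
        return view

    def reset(self) -> None:
        """Reclaim everything at once; old views simply get reused."""
        self.offset = 0
```

This is the simplest mempool flavor; it fits programs where all allocations share one lifetime, which is exactly the case the sibling comment questions for general-purpose code.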


There's always going to be some maintenance, at the very least the API changes for providers you mentioned, and then there's still the reviews and testing of the C.

A mempool seems very much like a DIY implementation of malloc, unless you have fixed-size allocations or something else that would make things different; not sure why I'd want that in the general case.

For "non hacker style" production code it just seems like a lot of extra steps.


> E.g. you can automate things like checking for memory freeing.

Or, if you don't need to use C (e.g. for FFI or platform compatibility reasons), you could use a language with a compiler that does it for you.


Right, a lot of the promise of AI can be (and has been) achieved with better tool design. If we get AI to start writing assembly or machine code, as some people want it to, we're going to have the same problems with AI writing in those languages as we did when humans had to use them raw. We invented new languages because we didn't find those old ones expressive enough, so I don't exactly understand the idea that LLMs will have a better time expressing themselves in those languages. The AI forgetting to free memory in C and having to go back and correct itself is a perfect example of this. We invented new tools so we wouldn't have to do that anymore, and they work. Now we are going backwards, and building giant AI datacenters that suck up all the RAM in the world just to make up for lost ground? Weak.

> We invented new languages because we didn't find those old ones expressive enough

Not quite. It's not about being expressive enough to define algorithms; it's about simplification, organization, and avoidance of repetition. We invented new languages to automate a lot of the work that programmers had to do in a lower-level language.

C abstracts away handling memory addresses and setting up stack frames like you would in assembly.

Rust makes handling memory more restrictive so you don't run into issues.

Java abstracts away memory management completely, freeing you up to design algorithms without worrying about memory leaks (although apparently you do have to worry about whether your log statements can execute arbitrary code).

JavaScript and Python abstract away type declarations through dynamic interpretation.

Likewise, OOP/typing, functional programming, and other styles were introduced for better organization.

LLMs are right in line with this. There is no difference between you using a compiler to compile a program, vs a sufficiently advanced LLM writing said compiler and using it to compile your program, vs an LLM compiling the program directly with agentic loops for accuracy.

Once we get past the hype of big LLMs, the next chapter is gonna be much smaller, specialized LLMs with architectures that are more deterministic than probabilistic, which are gonna replace a lot of tools. The future of programming will be you defining code in a high-level language like Python; the LLM will then be able to infer a lot of the information (for example, the task of finding how variables relate to each other is right in line with what transformers do) just from the code, and do things like auto-infer types, write template code, then adapt it to the specific needs.

In fact, CPUs already do this to a certain extent - modern branch predictors are basically miniature neural networks.
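As a toy illustration of that claim: published perceptron branch predictors keep a weight per history bit and predict from a weighted sum. The sketch below is a generic single-branch version of that idea (parameters and the training-threshold value are illustrative, not any specific CPU's design):

```python
# Toy perceptron branch predictor: one weight per global-history bit
# (outcomes encoded as +1 taken / -1 not taken), predict taken when the
# weighted sum is non-negative, and train only when wrong or unconfident.
class PerceptronPredictor:
    def __init__(self, history_len: int = 8, threshold: int = 16):
        self.w = [0] * (history_len + 1)   # w[0] is the bias weight
        self.hist = [1] * history_len      # recent outcomes, +1/-1
        self.threshold = threshold         # confidence margin for training

    def _sum(self) -> int:
        return self.w[0] + sum(w * h for w, h in zip(self.w[1:], self.hist))

    def predict(self) -> bool:
        return self._sum() >= 0

    def update(self, taken: bool) -> None:
        """Perceptron update rule, then shift the outcome into history."""
        t = 1 if taken else -1
        s = self._sum()
        if (s >= 0) != taken or abs(s) <= self.threshold:
            self.w[0] += t
            for i, h in enumerate(self.hist):
                self.w[i + 1] += t * h
        self.hist = self.hist[1:] + [t]
```

On a strictly alternating taken/not-taken branch, the weight on the most recent history bit quickly goes negative and the predictor becomes near-perfect, which is the kind of pattern-learning the comment is gesturing at.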


I use rust. The compiler is my agent.

Or to quote Rick and Morty, “that’s just rust with extra steps!”


On a related note, I've always regarded Python as the best IDE for writing C. :)

Replace memory with one of the dozen common issues the Rust compiler does nothing for, like deadlocks.

Well, the case would still stand, wouldn't it? Unless C is free of these dozen common issues.

Sure. Or you can let the language do that for you and spend your tokens on something else. Like, do you want your LLM to generate LLVM bytecode? It could, right? But why wouldn't you let the compiler do that?

Unless I'm writing something like code for a video game in a game engine that uses C++, most of the stuff I need C for is compartmentalized enough that it's much faster to have an LLM write it.

For example, the last C code I wrote was TCP over ethernet, bypassing the IP layer, so I can be connected to the VPN while still being able to access local machines on my network.

If I'm writing it in Rust, I have to do a lot of research, think about code structure, and so on. With LLMs, it took me an hour to write, and that's with no memory leaks or other safety issues.


Interesting. I find that Claude 4.5 has a ridiculous amount of knowledge and “I don’t know how to do that in Rust” is exactly what it’s good at. Also, have you tried just modifying your route table?

>Also, have you tried just modifying your route table?

The problem is that I want to run VNC from my home computer to the server on my work Mac, so I can just access everything from one screen and m+b combo without having to use a USB switch and a second monitor. With the VPN up, it basically just does not allow any inbound connections.

So I run a localhost tunnel: it's a generic ethernet listener that basically takes data, initiates a connection to localhost from localhost, and proxies the data. On my desktop side it's the same thing, just in reverse.
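The proxy half of that setup can be sketched over plain TCP (ports, the one-connection structure, and the use of TCP sockets here are all illustrative; the setup described above carries the data over raw ethernet frames instead):

```python
# Sketch of a localhost-to-localhost proxy: accept one client, open a
# second connection to a local target service, and pump bytes both ways.
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then close dst."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def serve_once(listen_port: int, target_port: int, host: str = "127.0.0.1") -> None:
    """Accept a single client and proxy it to the local target service."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, listen_port))
        srv.listen(1)
        client, _ = srv.accept()
    target = socket.create_connection((host, target_port))
    # One direction in a background thread, the other in this thread.
    threading.Thread(target=pump, args=(client, target), daemon=True).start()
    pump(target, client)
```

A real tunnel would loop on accept and frame the data for whatever transport sits underneath; the byte-pumping core is the same.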


Do you have any good starting points? For example, if someone had an ollama or lm studio daemon running where would they go from that point?

I'm surprised that there are no Rust headlines.

There was one: "100% rust Linux kernel upstreamed"

Yes (with caveats)

In today's world, web-based exploits are pretty rare. The only time you really see them is with fully proprietary systems like iPhones, because the software stack on those is all intertwined between kernel code and user code, and things like sending a text message with some formatted characters can lead to reboots of phones. But even then, gaining a full command-line shell or stealing secrets is either impossible due to the attack surface, or requires the phone to be in a specific state, like freshly factory reset.

The only real danger is chains of trust being compromised, as in an attacker managing to insert malicious code into an already trusted app that uses these exploits.

On a side note, I get a kick out of reading HN comments about exploitation and hacking. I think people firmly believe that with enough time, a hacker can figure out how to basically take over your phone given any exploit, no matter what it is.


Oh, but they will, given enough time.

Remember Kevin Mitnick's most successful approach, social engineering :)


CVE records are public. All info is there.

Search CVE numbers.

https://www.cve.org/CVERecord?id=CVE-2025-48633

Basically, just like most things these days, it's all just local privilege escalation. This means you have to install/run an app that has these exploits built in.

So if your usage profile doesn't include downloading apps from untrusted sources, you don't need to worry.


In other words, if you ever need to install anything on your device, you do need to worry. What even could be trusted, a random app from Play Store?

> In other words, if you ever need to install anything on your device, you do need to worry.

No, it's "if you ever need to install some random app from the Play Store, you do need to worry."

I installed the Teams app and Torque Pro today. I am not worried. I've also got the Sherlock games (purchased way back when) that I have yet to install on my new phone.

Installing that app also will not worry me. These apps are trusted because of the authors, not because of the Play store.

Worry is not binary, it's a probability, and you are at high risk if you're installing every rando's app on your phone and low risk if you are not.


What if an existing app gets an update that exploits the vulnerability?

For sure that's not going to happen with an app released by a major company, but there are lots of lesser-known apps created by many different developers.


Turn off app updates. If it's working now, why do you need to update it? Does the update add something specific you want?

In other words, continue as normal: Don't install random crap you don't trust. That this is even newsworthy is kind of strange.

It's not the training/tuning; it's pretty much the nature of LLMs. The whole idea is to give a best guess of the next token. The more complex dynamics behind the meaning of the words and how those words relate to real-world concepts aren't learned.

You're not making any sense. The best guess will often be a refusal if they see enough of those in the training data, so of course it is down to training.

And I literally saw the effect of this first hand, in seeing how the project I worked on was actively part of training this behaviour into a major model.

As for your assertion that they don't learn the more complex dynamics, that was trite and untrue already several years ago.

