This is true. But, it's also true of assigning tasks to junior developers. You'll get back something which is a bit like what you asked for, but not done exactly how you would have done it.
Both situations need an iterative process to fix and polish before the task is done.
The notable thing for me was, we crossed a line about six months ago where I'd need to spend less time polishing the LLM output than I used to have to spend working with junior developers. (Disclaimer: at my current place-of-work we don't have any junior developers, so I'm not comparing like-with-like on the same task, so may have some false memories there too.)
But I think this is why some developers have good experiences with LLM-based tools. They're not asking "can this replace me?" they're asking "can this replace those other people?"
> They're not asking "can this replace me?" they're asking "can this replace those other people?"
People in general underestimate other people, so this is the wrong way to think about it. If it can't replace you, then typically it can't replace other people either.
Yeah, I see quite a lot of misanthropy in the rhetoric people sometimes use to advance AI. I'll say something like "most people are able to learn from their mistakes, whereas an LLM won't" and then some smartass will reply "you think too highly of most people" -- as if this simple capability is just beyond a mere mortal's abilities.
Aren't these two points contradictory? Forgive me if I'm misunderstanding.
> Rust’s borrow checker is a pretty powerful tool that helps ensure memory safety during compile time. It enforces a set of rules that govern how references to data can be used, preventing common memory safety errors such as null pointer dereferencing, dangling pointers and so on. However, you may have noticed the words compile time in the previous sentence. Now, if you have any experience at systems programming, you will know that compile time and runtime are two very different things. Basically, compile time is when your code is being translated into machine code that the computer can understand, while runtime is when the program is actually running and executing its instructions. The borrow checker operates during compile time, which means that it can only catch memory safety issues that can be determined statically, before the program is actually run.
>
> This means that basically the borrow checker can only catch issues at compile time, but it will not fix the underlying issue, which is developers misunderstanding memory lifetimes or overcomplicating ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
This appears to be claiming that Rust's borrow checker is only useful for preventing a subset of memory safety errors, those which can be statically analysed. Implying the existence of a non-trivial quantity of memory safety errors that slip through the net.
> The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
Whereas this is only A Thing because Rust enforces rules so that memory safety errors can be statically analysed and therefore the first problem isn't really a problem. (Of course you can still have memory safety problems if you try hard enough, especially if you start using `unsafe`, but it does go out of its way to "save you from bad design choices" within that context.)
If you don't want that feature, then it's not a benefit. But if you do, it is. The downside is that there will be a proportion of all possible solutions that are almost certainly safe, but will be rejected by the compiler because it can't be 100% sure that it is safe.
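To make that trade-off concrete, here's a minimal Rust sketch (the names and "notes" structure are invented for illustration, not taken from the quoted article) of the "add a new note while holding references to existing ones" situation, showing both the rejected pattern and a restructured version the compiler accepts:

```rust
// Hypothetical "notes" example, assuming a Vec-backed note list.
struct Note {
    text: String,
}

fn main() {
    let mut notes = vec![Note { text: "first".into() }];

    // Rejected pattern: keep a shared borrow alive across a mutation.
    //
    //     let first = &notes[0];
    //     notes.push(Note { text: "second".into() });
    //     println!("{}", first.text);
    //     // error[E0502]: cannot borrow `notes` as mutable because it is
    //     // also borrowed as immutable
    //
    // The rejection is sound here: the push may reallocate the Vec's storage
    // and leave `first` dangling -- exactly the class of bug being prevented.

    // Accepted version: finish with the borrow before mutating.
    let first_text = notes[0].text.clone();
    notes.push(Note { text: "second".into() });
    println!("{} / {}", first_text, notes[1].text);
}
```

For the "almost certainly safe but still rejected" cases (e.g. taking `&mut` to two different indices of the same Vec), the usual answer is restructuring via things like `split_at_mut`, `swap`, or plain cloning, rather than reaching for `unsafe`.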
> It daunts me too, and reminds me of the early days of commercial flight. I hope we'll have a similar "we cannot continue like this" moment.
The first citation of the words "Software Crisis", meaning the inherent difficulty of writing high-quality software in a predictable way, was from a NATO conference fifty years ago: https://en.wikipedia.org/wiki/Software_crisis
It is taking a long time for good practices to be discovered and win out, and even when obvious improvements have been made, they're not necessarily used effectively.
I suspect a large part of the reason why the software industry isn't maturing at the same speed that other industries have had to, is that in software, failure is much easier to hide.
> It daunts me how much software is getting unreliable, but trying to shame people to hold them accountable is naive.
> The root of the problem is the uncontrolled complexity of modern software products.
I think there's a feedback loop between those two things, especially when it comes to government or giant corporation projects. Lack of accountability causes accidental complexity, which in turn causes a lack of accountability.
It starts with an organisation that lacks tech leadership hiring a consultancy, which then treats the project as a "flagship engagement", meaning everything has to be perfect, where "perfect" is measured by the number of future sales pitches that will cite this one project.
As a result, there's a gap between what the organisation needs and what it gets, which adds to the work required and the complexity to navigate whenever changes are needed, and the overall complexity snowballs from there.
Most of the above is business-as-usual for most very expensive projects. The real danger zone is when you get to the third iteration, six or seven years down the line, and you're forced to re-hire the first consultancy again because they're the only one with the resources to take it on; but the tech world has moved on, so they now see you as a "modernisation engagement". They can't criticise their own bad decisions from several years prior, but at the same time they want the wider industry to see their "transformative" power, so they can't merely iterate on what's already there either.
That's how you end up with iOS apps, talking to Ruby-on-Rails APIs (which used to be the primary web-app, before that was replaced with a React frontend), reading and writing from an Oracle database which is also updated with a series of batch jobs dating back to early 2000s Java EE.
The "coal face" developers in all these situations have done the best work to their ability, and quite often achieved minor miracles in stability given the underlying complexity. The problem is always a management (or lack-of management) problem.
That isn't what they were saying at the time. Even level-headed forums like Hacker News had a lot of comments (gathering positive votes, although I suspect not all were organic) using a wide array of pseudo-science to explain why Bitcoin was the king of a new world of inherently superior currencies.
Of course it was nonsense, as anyone who'd read about previous bubbles could tell you. But it was a mad time last December.
Yes, I am one of them. This is what, the fourth Bitcoin bubble that has burst? The December bubble bursting is no surprise for most level-headed folks who have been following this for a while -- just for the newbies making extremely risky bets at the top of the latest bubble.
Also, most of us are not deluded enough to think it going up to $100k+/BTC is a certainty; there are a number of reasons the larger BTC experiment/bubble could deflate for good one day. But there are also rational arguments for betting otherwise.
> If the government is saying the cost will be £14.5B then they must think only 1.5M people will sign up
It's not a government proposal. It's not even a proposal of the opposition, or anyone who would potentially be in a position to actually enact it.
The headline isn't wrong, but the prominence the BBC has given it could trick people into thinking it's an official (or at least serious unofficial) proposal.
Pretty much everything successful now was ridiculed or dismissed in the past, but that doesn't mean everything that was ridiculed or dismissed became successful.
And even those things that became successful did so in a form that the contemporaneous positive predictions did not expect. The 1990s' predictions for the Internet were of decentralised power and freedom of information; what actually happened? There was a land-grab, and a new generation of billion-dollar corporations became established. Where once there was Blockbuster, now there is Netflix...
My prediction is the same will happen with cryptocurrency and blockchain applications. The technology will live on, mostly in non-currency applications (e.g. behind-the-scenes settlement rather than day-to-day transactions); most of the "land grab" phase will die off though. Then, one day, ten or twenty years from now, we'll realise that "fiat" currencies have been blockchain-based since some un-trumpeted central bank upgrade eighteen months prior. But no-one will have noticed, as they still spend "dollars" on a credit card... same as they always did.
> Then, one day, ten or twenty years from now, we'll realise that "fiat" currencies have been blockchain-based since some un-trumpeted central bank upgrade eighteen months prior.
Tom, Dick, and Jane may not notice, but I would hope that those investing in that space, and those building out the technology, would :) The general public may not have an appreciation for the complexity behind it, but I assume many of Hacker News's readers, now and in the future, will be working on the forefront of developing this new technology.
I don't think Apple ever applied throttling to devices prior to the iPhone 6. As you say, old 5Ses would shut down at 20% (or lower), but it was with the iPhone 6 and later that devices would occasionally stop at 50%, so it became a much bigger issue. Presumably this was due to the A8 CPU having a wider range of power consumption (this is a guess; if anyone actually knows why, I would be interested to know).
For point 2, there is Coconut Battery on macOS. This is telling me my old iPhone 6 has 90% capacity after 400-odd cycles, which is probably par for the course, but I have no idea if that's bad enough for the performance throttling to kick in or not. Hopefully the new screen is going to be detailed enough to say how much throttling has been applied.
The tone taken by the financial press towards cryptocurrencies is interesting. They give plenty of press to the sceptics, the warnings from central bankers, etc. But they also give plenty of positive press, arguably undeserved when compared to the size of similar ventures funded by traditional means.
It's almost as though they're writing about it solely because they feel that they should, to avoid the risk of being seen to be taking a stand. As a result, ICOs get softer coverage than they otherwise would.