Yes, I'm also watching with disbelief. Even more so since media attention about it in the EU seems higher than in the US. Although I found the recent trove especially disturbing.
I recently watched a documentary where elites from the beginning of the 20th century were also portrayed. They portrayed themselves as philanthropists. Moral bankruptcy became obvious, although in other manifestations, such as shooting members of worker unions. And the US government did something, in the form of the New Deal, splitting up monopolies and other policies.
In an optimistic scenario I'd expect something similar: new ways to hold elites accountable and to keep extreme differences in wealth in check.
This is the first time I've heard that anyone hates D-Bus. I always saw it as a global API bus that apps can register with and which enables some sort of interoperability and automation. After all, it can even be used from Bash. What is bad about this?
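For anyone curious what that shell-level usage looks like, here's a minimal sketch. It assumes `dbus-send` (ships with dbus itself) and `gdbus` (part of GLib) are installed, which is typical on a desktop but not guaranteed:

    # list every name currently registered on the session bus
    dbus-send --session --print-reply --dest=org.freedesktop.DBus \
      /org/freedesktop/DBus org.freedesktop.DBus.ListNames

    # call a method on another app's service: ask the notification daemon for a popup
    gdbus call --session \
      --dest org.freedesktop.Notifications \
      --object-path /org/freedesktop/Notifications \
      --method org.freedesktop.Notifications.Notify \
      "demo" 0 "" "Hello" "Sent over D-Bus from a shell" "[]" "{}" 5000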
The security aspect seems also a bit funny to me. After all the average Desktop has most data in the home directory, so every application can read everything. That's not the fault of D-Bus.
Also I'm puzzled that Polkit hasn't been mentioned even once.
> The security aspect seems also a bit funny to me. After all the average Desktop has most data in the home directory, so every application can read everything.
The world is moving towards sandboxed applications (through flatpak and friends) more and more. As per the OP, this is one of the things holding sandboxing back.
That's only somewhat true if we are talking about the same sandbox nested (which would be quite dumb to do).
Escaping two different sandboxes is many times as hard, and a sane sandbox is not trivially escaped; see web browsers, and the fact that the world is not one giant botnet.
The reason you don't hear much about it is that it's not an often-discussed topic. Nonetheless the hate is there.
D-Bus is a godawful mess. Imagine the Windows registry, except it can only be inspected at runtime, contains executable binaries, and is exceptionally fragile.
> The security aspect seems also a bit funny to me. After all the average Desktop has most data in the home directory, so every application can read everything. That's not the fault of D-Bus.
Those secret stores (gnome-keyring/kwallet) store the secrets encrypted on disk, so every application can read the encrypted secrets, but only the secret store has the encryption key to decrypt them. That key is held in memory, not on disk.
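For illustration, this is roughly how an application asks the keyring for a secret through the Secret Service API. A sketch using libsecret's `secret-tool` CLI (the attribute names are invented for the example, and the tool being installed is an assumption):

    # store a secret in the keyring (prompts for the value)
    secret-tool store --label="Example credentials" service example-api user alice

    # later, ask the keyring daemon to decrypt and return it
    secret-tool lookup service example-api user alice

Which is also why sandboxing matters here: the decryption happens inside the keyring daemon, but by default any process in the session can make that lookup.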
Despite the flashy title, that's the first "sober" analysis from a CEO I've read about the technology. While not even really news, it's also worth mentioning that the energy requirements are impossible to fulfill.
Also, I've been using ChatGPT intensively for months now for all kinds of tasks and have tried Claude etc. None of this is on par with a human. The code snippets are straight out of Stackoverflow...
Take this "sober" analysis with a big pinch of salt.
IBM have totally missed the AI boat, and a large chunk of their revenue comes from selling expensive consultants to clients who do not have the expertise to do IT work themselves - this business model is at a high risk of being disrupted by those clients just using AI agents instead of paying $2-5000/day for a team of 20 barely-qualified new-grads in some far-off country.
IBM have an incentive to try and pour water on the AI fire to try and sustain their business.
Asking because the biggest IT consulting branch of IBM, Global Technology Services (GTS), was spun off into Kyndryl back in 2021[0]. Same goes for some premier software products (including one I consulted for) back in 2019[1]. Anecdotal evidence suggests the consulting part of IBM was already significantly smaller than in the past.
It's worth noting that IBM may view these AI companies as competitors to its Watson AI tech[2]. It already existed before the GPU crunch and hyperscaler boom, and runs on proprietary IBM hardware.
I know people who still work there and are doing consultancy work for clients.
I am a former IBMer myself, but my memory is hazy. IIRC there were two arms of consultants: one did the boring day-to-day stuff, and the other was "innovation services" or something. Maybe they spun out the drudgery (GTS) and kept the "innovation" services? No idea.
My go-to analysis for these sorts of places is net income per employee. Back in the day, IBM was hovering around $5,000. Today, Kyndryl is still around $5,000 (2025). But the parent company seems to be now at $22,000 (2024). For comparison: Meta is at $800,000, Apple is at $675,000, and Alphabet is at $525,000. And Wal-Mart, the nation's largest private employer, is around $9,250.
Now, probably part of that is just that those other companies hire contractors so their employment figure is lower than reality. But even if you cut the numbers in half, neither side of that spin off is looking amazing.
The part that was spun off was "Infrastructure Services" (from the Wiki article): outsourcing and operations, not the business consulting organization that provides everything from high-level strategy to coding services.
Missed the boat? Have you been living under a rock? Watson AI advertising has been everywhere for years.
It’s not that they aren't in the AI space, it’s that the CEO has a shockingly sober take on it. Probably because they’ve been doing AI for 30+ years combined with the fact they don’t have endless money with nowhere to invest it like Google.
Advertising for it has been everywhere, but it's never seemed like it's at the forefront of anything. It certainly wasn't competitive with ChatGPT and they haven't managed to catch back up in the way Google have.
It was competitive before ChatGPT existed, and IMHO that gives them a special insight that people fail to consider in this context.
They know what revenue streams existed and how damn hard it was to sell, considering IBM Watson probably had the option of 100% on-prem services for healthcare. If they failed to sell that, will a privacy-violating system like ChatGPT etc. have a chance to penetrate the field?
Because however good ChatGPT, Claude, etc. are, the _insane_ amounts of money they're given to play with imply that they must emerge as winners in a future with revenue streams to match the spending that has been happening.
> Missed the boat? […] Watson AI advertising has been everywhere for years.
They were ahead of the game with their original Watson tech, but pretty slow to join in and get up to speed with the current GenAI family of tech.
The meaning of “AI” has shifted to mean “generative AI like what ChatGPT does” in the eyes of most so you need to account for this. When people talk about AI, even though it is a fairly wide field, they are generally referring to a limited subset of it.
The death of IBM’s vision to own AI with Watson was never due to an inability to transition to the right tech. In fact, it was never about tech at all. As an entirely B2B company with a large revenue stream to defend, IBM was never going to go and scrape the entirety of the Internet. Especially not after the huge backlash they ignited with their customers over data rights and data ownership in trying to pitch the Watson they had.
> IBM have an incentive to try and pour water on the AI fire to try and sustain their business.
IBM has faced multiple lawsuits over the years, from age discrimination cases to various tactics allegedly used to push employees out, such as requiring them to relocate to states with more employer-friendly laws only to terminate them afterward.
IBM is one of the clearest examples of a company that, if given the opportunity to replace human workers with AI, would not hesitate to do so. Assume therefore, the AI does not work for such a purpose...
If they could use THEIR AI to replace human workers, they would. If they learned that Claude or ChatGPT was better than an IBM consultant, they'd probably keep that to themselves.
Are you suggesting IBM made up the numbers? Or that CAPEX is a pre-GAI measure and is useless in guiding decision making?
IBM may have a vested interest in calming (or even extinguishing) the AI fire, but they're not the first to point out the numbers look a little wobbly.
And why should I believe OpenAI or Alphabet/Gemini when they say AI will be the royal road to future value? Don't they have a vested interest in making AI investments look attractive?
> a high risk of being disrupted by those clients just using AI agents instead of paying $2-5000/day for a team of 20 barely-qualified new-grads in some far-off country
Is there any concrete evidence of that risk being high? That doesn't come from people whose job is to sell AI?
They have an incentive, but what's the sustainable, actually-pays-for-itself-and-generates-profit cost of AI? We have no idea. Everything is so heavily subsidized by burning investor capital for heat, with the hope that they'll pull an Amazon and make it impossible to do business on the internet without paying an AI firm. Maybe the 20 juniors will turn out to be cheaper. Maybe they'll turn out to be slightly better. Maybe they'll be loosely equivalent, and the ability to automate mediocrity will drive down the cost of human mediocrity. We don't know, and everyone seems to be betting heavily on the most optimistic case, so it makes an awful lot of sense to take the other side of that bet.
Some percentage of those 20 juniors become seniors, and some percentage of those become principals. Even if it lives up to the claims, you're still destroying the pipeline for creating experienced people. It is incredibly short-sighted.
Do you expect Sam Altman to come on stage and tell you the whole thing is a giant house of cards when the entire western economy seems to be propped up by AI? I wonder whose "sober" analysis you would accept, because surely the people that are making money hand over fist will never admit it.
Seems to me like any criticism of AI is always handwaved away with the same arguments. Either it's companies who missed the AI wave, or the models are improving incredibly quickly so if it's shit today you just have to wait one more year, or if you're not seeing 100x improvements in productivity you must be using it wrong.
IBM was ahead of the boat! They had Watson on Jeopardy years ago! /s
I think you make a fair point about the potential disruption for their consulting business but didn't they try to de-risk a bit with the Kyndryl spinout?
I am a senior engineer, I use cursor a lot in my day to day. I find I can code longer and typically faster than without. Is it on par with human? It’s getting pretty darn close to be honest, I am sure the “10x” engineers of the world would disagree but it definitely has surpassed a junior engineer. We all have our anecdotes but I am inclined to believe on average there is net value.
I think surpassed is not the right word because it doesn't create/ideate. However it is incredibly resourceful. Maybe like having a jr engineer to do your bidding without thinking or growing.
Surpassed is probably the wrong word but the intent is more that it can comprehend quite complicated algorithms and patterns and apply them to your problem space. So yea it’s not a human but I don’t think saying subpar to a human is the right comparison either. In many ways it’s much better, I can run N parallel revisions and have the best implementation picked for review. This all happens in seconds.
Yes, this. Creating multiple iterations in parallel allows much more meaningful exploration of the solution space. Create a branch for each framework and try them all, comparing them directly in practice, not just in theory. My brother is doing this to great effect as a solopreneur, and having the time of his life.
Largely agree. Anything that is just a multi-file edit, like an interface change, it can do. Maybe not immediately, but you can have it iterate, and it doesn't eat up your attention.
It is without a doubt worth more than the 200 bucks a month I spend on it.
I will go as far as to say it has decent ideas. Vanilla ideas, but it has them. I've actually gotten it to come up with algorithms that I thought were industry secrets. Minor secrets, sure. But things that you don't just come across. I'm in the trading business, so you don't really expect a lot of public information to be in the dataset.
I'm also a senior engineer and I use Codex a lot. It has reduced many of the typical coding tasks to simply writing really good AC. I still have to write good AC, but I'm starting to see the velocity change from using good AI in a smart way.
Senior engineer here as well. I would say Opus 4.5 is easily a mid-level engineer. It's a substantial improvement over Sonnet 4.5, which required a lot more hand-holding and interventions.
I've found Claude's usefulness is highly variable, though somewhat predictable. It can write `jq` filters flawlessly every time, whereas I would normally spend 30 minutes scanning docs because nobody memorizes `jq` syntax. And it can comb through server logs in every pod of my k8s clusters extremely fast. But it often struggles making quality code changes in a large codebase, or writing good documentation that isn't just an English translation of the code it's documenting.
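To give a concrete idea of the kind of filter meant here, a hypothetical example (the resource and field names are the standard kubectl/jq ones, but the query itself is invented) of the sort of thing it gets right on the first try:

    # list pods across all namespaces that are not Running, with their phase
    kubectl get pods -A -o json | jq -r '
      .items[]
      | select(.status.phase != "Running")
      | "\(.metadata.namespace)/\(.metadata.name): \(.status.phase)"'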
The problem I had was that the larger your project gets, the more mistakes Claude makes. I (not the parent commenter) started with a basic CRUD web app and was blown away by how detailed it was: new CSS, good error handling, good selection and use of libraries; it could even write the terminal commands for package management and building. As the project grew larger, Claude started forgetting that some code already existed in the project and started repeating itself, and worse still, when I asked for new features it would pick a copy at random, leaving them out of sync with each other. Moving forward I've been alternating between writing stuff with AI, then rewriting it myself.
> The problem I had was that the larger your project gets, the more mistakes Claude makes
I think the reason for this is because these systems get all their coding and design expertise from training, and while there is lots of training data available for small scale software (individual functions, small projects), there is much less for large projects (mostly commercial and private, aside from a few large open source projects).
Designing large software systems, both to meet initial requirements and to be maintainable and extensible over time, is a different skill than writing small software projects, which is why design of these systems is done by senior developers and systems architects. It's perhaps a bit like the difference between designing a city and designing a single building: there are different considerations and decisions being made. A city is not just a big building, or a collection of buildings, and a large software system is not just a large function or collection of functions.
Here's mine fully deployed, https://hackernewsanalyzer.com/. I use it daily and have some users. ~99.7% LLM code. About 1 hour to first working prototype then another 40 hours to get it polished and complete to current state.
It shows, quite an interesting wrapper over GPT with unauthorized access to prompting it you assembled there ;) Very much liked the part where it makes 1000 requests pulling 1000 comments from the firebase to the client and then shoots them back to GPT via supabase
41 hours total of prompting, looking at code diffs, reverting, reprompting, and occasional direct code commits. I do review the full code changes nearly every step of the way and often iterate numerous times until I'm satisfied with the resulting code approach.
Have you tried to go back to the old way, maybe just as an experiment, to see how much time you are actually saving? You might be a little surprised! Significant "reprompting" time to me indicates maybe a little too much relying on it rather than leading by example. Things are much faster in general if you find the right loop of maybe using Claude for like 15%-20% of stuff instead of 99.7%. You wouldn't give your junior 99.7% ownership of the app unless they were your only person, right? I find spending time thinking through certain things by hand will make you so much more productive, and the code will generally be much better quality.
I get that like 3 years ago we were all just essentially proving points building apps completely with prompts, and they make good blog subjects maybe, but in practice they end up being either fragile novelties or bloated rat's nests that end up taking more time not less.
I've done things in days that in the before times would have taken me months. I don't see how you can make that time difference up.
I have at least one project where I can make that direct comparison - I spent three months writing something in the language I’ve done most of my professional career in, then as a weekend project I got ChatGPT to write it from scratch in a different language I had never used before. That was pre-agentic tools - it could probably be done in an afternoon now.
I'm not a fulltime developer, but manage a large dev team now. So, this project is basically beyond my abilities to code myself by hand. Pre llm, I would expect in neighborhood of 1.5-2 months for a capable dev on my team to produce this and replicate all the features.
If you haunt the pull requests of projects you use I bet you'll find there's a new species of PR:
> I'm not an expert in this language or this project but I used AI to add a feature and I think its pretty good. Do you want to use it?
I find myself writing these and bumping into others doing the same thing. It's exciting, projects that were stagnant are getting new attention.
I understand that a maintainer may not want to take responsibility for new features of this sort, but it's easier than ever to fork the project and merge them yourself.
I noticed this most recently in https://github.com/andyk/ht/pulls which has two open (one draft) PRs of that sort, plus several closed ones.
Issues that have been stale for years are getting traction, and if you look at the commit messages, it's AI tooling doing the work.
People feel more capable to attempt contributions which they'd otherwise have to wait for a specialist for. We do need to be careful not to overwhelm the specialists with such things, as some of them are of low quality, but on the whole it's a really good thing.
If you're not noticing it, I suggest hanging out in places where people actually share code, rather than here, where we often instead brag about unshared code.
> People feel more capable to attempt contributions
That does not mean that they are more capable, and that's the problem.
> We do need to be careful not to overwhelm the specialists with such things, as some of them are of low quality, but on the whole it's a really good thing.
That's not what the specialists who have to deal with this slop say. There have been articles about this discussed here already.
At this point my prior is that all these 300/ns projects are some kind of internal tools, with very narrow scope and many just for a one-off use.
Which is also fine and great and very useful and I am also making those, but it probably does not generalize to projects that require higher quality standards and actual maintenance.
Places that aren't software businesses are usually the inverse. The software is extremely sticky and will be around for ages, and will also bloat to 4x the features it was originally supposed to have.
I worked at an insurance company a decade ago and the majority of their software was ancient. There were a couple desktops in the datacenter lab running Windows NT for something that had never been ported. They'd spent the past decade trying to get off the mainframe and a majority of requests still hit the mainframe at some point. We kept versions of Java and IBM WebSphere on NFS shares because Oracle or IBM (or both) wouldn't even let us download versions that old and insecure.
Software businesses are way more willing to continually rebuild an app every year.
There's a massive incentive not to share them. If I wrote a project using AI I'd be reluctant to publish it at all because of the backlash I've seen people get for it.
People are and always were reluctant to share their own code just the same. There is nothing to be gained, the chances of getting positive reviews from fellow engineers are slim to none. We are a critical and somewhat hypocritical bunch on average.
Claude has taught me so much about how to use jq better. And really, way more efficient ways of using the command line in general. It's great. Ironically, the more I learn the less I want to ask it to do things.
Maybe the most depressing part of all this is if people start thinking they would not have been able to do things without the LLM. Of course they would have, it's not like LLMs can do anything that you cannot. Maybe it would have taken more time at least the first time and you would have learned a few things in the process.
Sure, I can write all of it. But I simply won’t. I have Claude generated Avalonia C# applications and there is no way I would have written the thousands of lines of xaml they needed for the layouts. I would just have done it as a console app with flags.
But reducing friction, eliminating the barrier to entry, is of fundamental importance. It's human psychology; putting running socks next to your bed at night makes it like 95% more likely you'll actually go for a run in the morning.
I understand the point, and to some degree agree. For myself, I really couldn't (not to say it wouldn't have been possible). I tried many many times over so many years and just didn't have the mental stamina for it, it would never "click" like infra/networking/hardware does etc and I would always end up frustrated.
I have learnt so much in this process, nowhere near as much as someone that wrote every line (which is why I think being a good developer will be a hot commodity), but I have had so much fun and enjoyment, alongside actually seeing tangible stuff get created. At the end of the day, that's what it's all about.
I have a finite amount of time to do things, I already want to do more than I can fit into that time, LLMs help me achieve some of them.
This is a "scratch an itch" project I initially started to write manually in the past, but never finished. I then used Claude to do it basically on the side while watching the World Series: http://nixpkgs-pr-explorer.s3-website-us-west-2.amazonaws.co...
It’s not just good for small code bases. In the last six months I’ve built a collaborative word processor with its own editor engine and canvas renderer using Claude, mostly Opus. It’s practically a mini Google Docs, but with better document history and an AI agent built in. I could never have built this in 6 months by myself without Claude Code.
I think if you stick with a project for a while, keep code organized well, and most importantly prioritize having an excellent test suite, you can go very far with these tools. I am still developing this at a high pace every single day using these tools. It’s night and day to me, and I say that as someone who solo founded and was acquired once before, 10 years ago.
Yes, I am using my voice agent, my head tracker, my SQL writer, my ODBC client, my shopping list, my SharePoint file uploader, my Timberborn map generator, my wireguard routing, my Oxygen Not Included launch scripts, my i3wm config, my Rust ATA over Ethernet with content-addressable storage
The former tasks are directly from the training material, directly embedded into the model. For the latter task, it needs a context window and intelligence.
It'll be a common paradigm. Some agents will help the coding agent discover relevant context for a plan; others will help the agent stay on track and ensure no rules break.
They really should have been supplying at least a week's worth of ready-made "projects" to every freelance AI promoter out there to demonstrate the x9000 AI productivity gains for the skeptics.
Because vibing the air about those gains without any evidence looks too shilly.
Pointing out where the burden of proof lies is not an ad hominem. Calling it such is in fact a good example of poisoning the well. All the fangirls have to do is post links to code they have vibe coded; some people have even done that in this thread. It's not an unreasonable standard.
I'm just as much of an avid LLM code generation fan as you may be, but I do wonder about the practicality of spending time making projects anymore.
Why build them if others can just generate them too? Where is the value of making so many projects?
If the value is in who can sell it the best to people who can't generate it, isn't it just a matter of time before someone else will generate one and they may become better than you at selling it?
> Why build them if others can just generate them too? Where is the value of making so many projects?
No offence to anyone but these generated projects are nothing ground-breaking. As soon as you venture outside the usual CRUD apps where novelty and serious engineering is necessary, the value proposition of LLMs drops considerably.
For example, I'm exploring a novel design for a microkernel, and I have no need for machine-generated boilerplate, as most of the hard work is not implementing yet another JSON API boilerplate, but thinking very hard with pen and paper about something few have thought about before, which even fewer LLMs have been trained on, and which they have no intelligence to ponder.
To be fair, even for the most dumb side-projects, like the notes app I wrote for myself, there is still a joy in doing things by hand, because I do not care about shipping early and getting VC money.
Weird, because I've created a webcam app that does segmentation so you can delete the background and put a new background in. I mean, I suppose that's not groundbreaking. But it's not just reading and writing to a database.
I've just added an ATA over Ethernet server in Rust; I thought of doing it in the car on the way home, and an hour later I had a working version.
I type this comment using a voice to text system I built, admittedly it uses Whisper as the transcriber but I've turned it into a personal assistant.
I make stuff every day that I just wouldn't bother to make if I had to do it myself. And on top of that, it does configuration. So I've had it build full wireguard configs that is taking on our pay addresses so that different destinations cause different routing. I don't know how to do that off the top of my head. I'm not going to spend weeks trying to find out how it works. It took me an evening of prompting.
> I make stuff every day that I just wouldn't bother to make if I had to do it myself
> I'm not going to spend weeks trying to find out how it works.
Then what is the point? For some of us, programming is an art form. Creativity is an art form and an ideal to strive towards. Why have a machine create something we wouldn't care about?
The only result is a devaluation to zero of actual effort and passion, whose only beneficiaries are those that only care about creating more "product". Sure, you can pump out products with little effort now, all the while making a few ultrabillionaires richer. Good for you, I guess.
The value is that we need a lot more software and now, because building software has gotten so much less time consuming, you can sell software to people that could/would not have paid for it previously at a different price point.
We don’t need more software, we need the right software implemented better. That’s not something LLMs can possibly give us because they’re fucking pachinko machines.
Here’s a hint: Nobody should ever write a CRUD app, because nobody should ever have to write a CRUD app; that’s something that can be generated fully and deterministically (i.e. by a set of locally-executable heuristics, not a goddamn ocean-boiling LLM) from a sufficiently detailed model of the data involved.
In the 1970s you could wire up an OS-level forms library to your database schema and then serve literally thousands of users from a system less powerful than the CPU in a modern peripheral or storage controller. And in less RAM, too.
People need to take a look at what was done before in order to truly have a proper degree of shame about how things are being done now.
Most CRUD software development is not really about the CRUD part. And for most frameworks, you can find packages that generate the UI and the glue code that ties it to the database.
When you're doing CRUD, you're spending most of the time on the extra constraints designed by product: dealing with the CRUD events, the IAM system, the notification system, ...
> That’s not something LLMs can possibly give us because they’re fucking pachinko machines.
I mostly agree, but I do find them useful for fuzzing out tests and finding issues with implementations. I have moved away from larger architectural sketches using LLMs because over larger time scales I no longer find they actually save time, but I do think they're useful for finding ways to improve correctness and safety in code.
It isn't the exciting and magical thing AI platforms want people to think it is, and it isn't indispensable, but I like having it handy sometimes.
The key is that it still requires an operator who knows something is missing, or that there are still improvements to be made, and how to suss them out. This is far less likely to occur in the hands of people who don't know, in which case I agree that it's essentially a pachinko machine.
I'm with you. Anyone writing in anything higher level than assembly, with anything less than the optimization work done by the demo scene, should feel great shame.
Down with force-multiplying abstractions! Down with intermediate languages and CPU agnostic binaries! Down with libraries!
An issue with the doom forecasts is that most of the hypothetical $8tn hasn't happened yet. Current big tech capex is about $315bn this year and $250bn last, against a pre-AI level of ~$100bn, so ~$400bn has been spent so far on AI-boom data centers. https://sherwood.news/business/amazon-plans-100-billion-spen...
The future spend is optional: AGI takeoff, you spend loads; not happening, not so much.
Say it levels off at $800bn. The world's population is ~8bn, so that's $100 a head, so you'd need to be making $10 or $20 per head per year. Quite possibly doable.
That seems super far fetched given that 37%[1] of the world's population does not have internet access. You could reasonably restrict further to populations that speak languages that are even passably represented in LLMs.
Even disregarding that, if you're making <3000 euros a year, I really don't think you'd be willing or able to spend that much money to let your computer gaslight you.
I agree. re: energy and other resource use: the analogy I like is with driving cars: we use cars for transportation knowing the environmental costs so we don’t usually just go on two hour drives for the fun of it, rather we drive to get to work, go shopping. I use Gemini 3 but only in specific high value use cases. When I use commercial models I think a little about the societal costs.
In the USA we have lost the thread here: we don’t maximize the use of small tuned models throughout society and industry, instead we use the pursuit of advanced AI as a distraction to the reality that our economy and competitiveness are failing.
You could have your morning shower 1°C less hot and save enough energy for about 200 prompts (assuming 50 litres per shower). (Or skip the shower altogether and save thousands of prompts.)
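Rough numbers behind that claim, assuming roughly 0.3 Wh per prompt (a commonly cited ballpark, not a figure from this thread):

    50 kg of water x 1 K x 4.19 kJ/(kg*K) ~ 210 kJ ~ 0.058 kWh = 58 Wh
    58 Wh / 0.3 Wh per prompt ~ 190-200 prompts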
Yesterday I was talking to coworkers about AI and mentioned that a friend of mine used ChatGPT to help him move. So a coworker said he had to test this and asked ChatGPT if he could fit a set of the largest Magnepan speakers (the wide, folding, older room-divider style) in his Infiniti QX80. The results were hilarious. It had some of the dimensions right, but it then decided the QX80 is as wide as a box truck (~8-8.5 feet / 2.5 m) and to align the nearly 7-foot-long speakers sideways between the wheel wells. It also produced hilariously incomprehensible ASCII diagrams.
I'm not sure what you mean with the "code snippets are straight out of Stackoverflow".
That is factually incorrect just by how LLM works. By now so much code has been ingested from all kinds of sources, including Stackoverflow, that an LLM is able to help generate quite good code on many occasions.
My point being: it is extremely useful for super popular languages, and also for many languages where resources are scarcer for developers; because they got the code from who knows where, it can definitely give you many useful ideas.
It's not human, though I'm not sure what that is supposed to actually mean. Humans make mistakes; humans make good code. AI does both as well. What it definitely still needs is a good programmer on top to know what they are getting and how to improve it.
I find AI (LLM) very useful as a very good code completion and light coder where you know exactly what to do because you did it a thousand times but it's wasteful to be typing it again. Especially a lot of boilerplate code or tests.
It's also useful for agentic use cases because some things you just couldn't do before because there was nothing to understand a human voice/text input and translate that to an actual command.
But that is all far from some AGI, and it all costs a lot today for an average company to say that this actually provided a return on the money, but it definitely speeds things up.
> I'm not sure what you mean with the "code snippets are straight out of Stackoverflow". That is factually incorrect just by how LLM works.
I'm not an AI lover, but I did try Gemini for a small, well-contained algorithm for a personal project that I didn't want to spend the time looking up, and it was straight-up a StackOverflow solution. I found out because I said "hm, there has to be a more elegant solution", and quickly found the StackOverflow solution that the AI regurgitated. Another 10 or 20 minutes of hunting uncovered another StackOverflow solution with the requisite elegance.
> While not even really news, it's also worth mentioning that the energy requirements are impossible to fulfill
If you believe this, you must also believe that global warming is unstoppable. OpenAI's energy costs are large compared to the current electricity market, but not so large compared to the current energy market. Environmentalists usually suggest that electrification - converting non-electrical energy to electrical energy - and then making that electrical energy clean - is the solution to global warming. OpenAI's energy needs are something like 10% of the current worldwide electricity market but less than 1% of the current worldwide energy market.
Google recently announced that they will double AI data center capacity every 6 months. While both unfortunately involve exponential growth, we are talking about ~1% CO2 growth per year, which is bad enough, versus effectively 300% per year according to Google.
Constraints breed innovation. Humans will continue to innovate and demand for resources will grow. it is fairly well baked into most of civilization. Will that change in the future? Perhaps but it’s not changing now.
Imagine how big the pile of trash will get as the current generation of graphics cards used for LLM training becomes outdated. It will crash the hardware market (which is good news for gamers).
I'd rather phrase it as "code is straight out of GitHub, but tailored to match your data structures"
That's at least how I use it. If I know there's a library that can solve the issue, I know an LLM can implement the same thing for me. Often much faster than integrating the library. And hey, now it's my code. Ethical? Probably not. Useful? Sometimes.
If I know there isn't a library available, and I'm not doing the most trivial UI or data processing, well, then it can be very tough to get anything usable out of an LLM.
> it's also worth mentioning that the energy requirements are impossible to fulfill
Maybe I'm misunderstanding you, but they're definitely not impossible to fulfill; in fact I'd argue the energy requirements are some of the most straightforward to fulfill. Bringing a natural gas power plant online is not the hardest part of creating AGI.
True, it is reminiscent of a time before me when people were lucky to have mainframe access through university. To be fair, this was a long time in the making, with the also quite aggressive move to cloud computing. While I don't mind having access to free AI tools, they seem to be starting to take possession of the content as well.
Wow... one solution is of course to deactivate these. Which is what I did for my Legion 5 Gen 10. Speakers don't seem to be much in use these days anyway.
Still, I didn't expect this amount of custom configuration for my new laptop, most importantly Bluetooth sound and getting Nvidia driver support. For Bluetooth I ended up writing my own tiny daemon: while driver support exists, there seems to be a race condition somewhere between PipeWire, systemd and the Bluetooth drivers. And for the Nvidia drivers I ended up using the CUDA driver repository, which is curiously only available for Debian 12.
That's a pity: the government fails to capitalize on its own policies because it fails to set up long-term investment. First environmental policy and e-mobility, and now AI.
Sure, there's way too much bureaucracy. But lumped in there I see things like taxes, regulations about the cucumber radius, etc.
The actual regulation said that you had to classify them based on their characteristics. If I wanted a straight cucumber and I ordered one, I would get one. If I was happy with a bendy one then I'd simply order an "any shape" one.
I don't see a problem with mandating truth in advertising.
To me XSLT came with a flood of web complexity that led to having effectively only 2 possible web browsers. It seems a bit funny because the website looks straight out of the 90s, when "everything was better".
It was not rendering that killed other browsers. Rendering isn't the hard part. Getting most of rendering working gets you about 99% of the internet working.
The hard part, the thing that killed alternative browsers, was javascript.
React came out in 2013, and everyone was already knee-deep in earlier-generation JavaScript frameworks by then. Google's V8 engine, released with Chrome back in 2008, is what brought the sluggish web back to some sense of usable. Similarly, Mozilla had to spend that decade engineering their JavaScript engine to claw itself out of the "Firefox is slow" gutter that people insisted on.
Which is funny because if you had adblock, I'm not convinced firefox was ever slow.
A modern web browser doesn't JUST need to deal with rendering complexity, which is manageable and doable.
A modern web browser has to do that AND spin up a JIT compiler engineering team to rival Google or Java's best. There's also no partial credit, as javascript is used for everything.
You can radically screw up rendering a page and it will probably still be somewhat usable to a person. If you get something wrong about javascript, the user cannot interact with most of the internet. If you get it 100% right and it's just kind of slow, it is "unusable".
Third party web browsers were still around when HTML5 was just an idea. They died when React was a necessity.
Conveniently, all three of the major JS engines can be extracted from the browsers they are developed for and used in other projects. Node famously uses V8, Bun uses JavaScriptCore (the WebKit one), and Servo I believe embeds SpiderMonkey.
If you want to start a new browser project, and you're not interested in writing a JS engine from scratch, there are three off-the-shelf options there to choose from.
I have the same mixed feelings. Complexity is antidemocratic in a sense. The more complex a spec gets the fewer implementations you get and the more easily it can be controlled by a small number of players.
It’s the extend part of embrace, extend, extinguish. The extinguish part comes when smaller and independent players can’t keep up with the extend part.
A more direct way of saying it is: adopt, add complexity cost overhead, shake out competition.
It's far from dead, though. XML is deeply ingrained in many industries and stacks, and will remain so for decades to come, probably until something better than JSON comes along.
There was fresh COBOL code written up until early 1990s too, long past its heyday.
Thing is, you couldn't swing a dead cat in the 00s without hitting XML. Nearly every job opening had XML listed in the requirements. But since the mid-2010s you can live your entire career without the need to work on anything XML-related.
But it's still there and needs to be supported by the OS and tooling. Whether you edit it manually isn't relevant (and as a counterpoint, I do it all the time, for both apps and launchd agents).
There's still EPUB and tons of other standards built on XML and XHTML. Ironically, the last EPUB file I downloaded, a comic book from Humble Bundle, had a 16 MB CSS file composed entirely of duplicate definitions of the same two styles, and none of it was needed at all (set each page and image to the size of the image itself, basically).
On the web. I, among other things, make Android apps, and Android and XML are one and the same. There is no such thing as Android development without touching XML files.
I did Android Developer Challenge back in 2008 and honestly don't remember doing that much of XML. Although it is the technology from peak XML days so perhaps you're right.
It has, I think, one nice feature that few markups I use these days have: every node is strongly-typed, which makes things like XSLT much cleaner to implement (you can tell what the intended semantics of a thing is so you aren't left guessing or hacking it in with __metadata fields).
... but the legibility and hand-maintainability was colossally painful. Having to tag-match the closing tags even though the language semantics required that the next closing tag close the current context was an awful, awful amount of (on the keyboard) typing.
Somewhat related: OpenBSD is the foundation of my self-hosted homelab, since it runs DNS, DHCP, a firewall router and a small local web server. Configuration is a dream compared to Linux and probably even compared to FreeBSD. You just need to go through the FAQ, copy & paste the relevant examples and modify them as needed. I don't know why it's so complicated on Linux where you need to appease a handful of daemons and find your way through a labyrinth of config files. I run a separate Linux based KVM host though.
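To illustrate the "copy, paste, adjust" point, a minimal sketch: the daemons for DHCP (dhcpd), DNS (unbound) and the web server (httpd) all ship in the base system and are enabled the same way; the corresponding /etc/dhcpd.conf, unbound.conf and httpd.conf still need the few lines from the FAQ examples:

    # all three are in base; no packages to install
    rcctl enable dhcpd unbound httpd
    rcctl start dhcpd unbound httpd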
OpenBSD is a very well kept secret that very few people are aware of. As close to nirvana as I can manage.
The fact I miss pretty much all the drama around the latest corporate take over attempts on Linux is just icing on the cake. The toxic slug strategy is an amazing one that more open source projects should use.
I can't find the article where I read it, many years ago now, but it was about strategies that small communities can adopt to keep their culture from being subsumed by the mainstream.
One was to pick a set of norms repugnant to the mainstream that everyone currently in the community can tolerate and enforce them rigorously on all new members. This will limit the appeal of the community to people like the ones currently there and will make sure that it never grows too big.
Thus your community is as appetising to activists attempting a hostile takeover as a toxic slug is to a bird.
As an example from six years ago, when the code of conduct madness had just reached its peak:
>I believe OpenBSD's code of conduct can be summed up as "if you are the type of person who needs a code of conduct to teach to you how to human then you are not welcome here".
Trouble is, the people who are most likely to need a code of conduct to tell them how to behave are also the most likely ones to strongly object to one on the basis that they don’t need a CoC to tell them how to behave.
True, but what you have ignored is that jerks exist equally on all sides of any CoC.
It's just as often as not that the producers and promulgators of some CoC are the jerks. In other words, CoCs don't fix anything by merely existing. A few lines in a charter or mission statement already do the same, giving something to point to just for formality's and documentation's sake.
--
[edit to expand or re-state a little...]
It's not that there is no problem and everything is fine already. It's that CoC's are almost always a thoughtless and ineffective, even actively counter-productive response to the problem.
A CoC is an attempt to make an easy solution for something that there probably IS no easy solution for.
The problem takes the form of a continual, fresh source of problems, i.e. a forever stream of new jerks, plus existing jerks who don't just do one thing today but continue to exist tomorrow and the next day.
And so the solution can only be a matching continual case-by-case counter-effort, from intelligent insightful people who have good judgement.
Yeah, that doesn't scale and isn't easy and only some people do even a half-way good job of it.
It's just not a problem that you can bash-script away.
But trying to do so is an example of being just a different color of jerk making life worse for others, but just in a different way and employing different mechanisms.
It's not just that jerks exist. It's that this "we welcome anyone who doesn't need a CoC to behave" is functionally equivalent to "we welcome jerks."
It's true that you can't just throw together a CoC and declare the problem to be solved. But there is value in writing down some ground rules. The purpose is not to "script" enforcement, it's to have something concrete you can point to. Having a CoC that says "no personal attacks" won't stop personal attacks, but it will let you very quickly shut down anyone who comes back with something like, "you just need to have a thicker skin."
> I believe OpenBSD's code of conduct can be summed up as "if you are the type of person who needs a code of conduct to teach to you how to human then you are not welcome here".
I think that the goal of any code of conduct is to prevent any semblance of arbitrary and whimsical punishment, which can kill entire communities.
Linux unfortunately has to deal with toxic contributors and even maintainers, and history has shown that when those maintainers fail to human and the community consequently banishes them, they go on a tirade arguing all kinds of conspiracies. A code of conduct is a form of checks and balances, and code of conduct violation processes serve as processes to collect and present objectively verifiable paper trails of exactly when and how those maintainers failed to human, and how bad at it they were. Those types can't simply argue their way out of a list of messages where they were awful to others, of exactly how they violated the code of conduct, and of how bad it was. Thus any stunt they pull is immediately rendered moot by the deliverables from the project.
To me this seems to be true. From what I’ve seen CoCs are overwhelmingly used as a tool to enforce and reinforce a certain kind of ideological point of view.
As a result of this typically CoCs are used to block contributions or block contributors from projects where the people enforcing the CoC they wrote wield it as a weapon against men whose perceived personal politics they disagree with. And typically rumours are enough to trigger CoC proceedings against them.
> To me this seems to be true. From what I’ve seen CoCs are overwhelmingly used as a tool to enforce and reinforce a certain kind of ideological point of view.
I don't know which codes of conduct you have been exposed to. The ones in Linux cover basic things like not being cool to attack other maintainers with posts like:
> Get your head examined. And get the fuck out of here with this shit.
> Quite ironic then that CoCs overwhelmingly lead to arbitrary and whimsical punishment.
I don't agree. I think it has been working quite well in spite of the conspiratorial bullshit excuses made up by those who failed so hard to human to the point they were slapped with one.
Nevertheless, one of the values of a code of conduct is that people like you and me can check the deliberation and hear what all interested parties had to say. Without a code of conduct, the one with the loudest voice and the most interest in subverting code of conduct deliberations could basically dedicate their life to shit-talking the project.
> A code of conduct is a form of checks and balances, and code of conduct violation processes serve as processes to collect and present objectively verifiable paper trails of exactly when and how those maintainers failed to human, and how bad at it they were.
That's the opposite goal; the CoC is to be as broad as possible while still being as vague as possible.
It's a tool that has been repeatedly weaponised against the out-group by the in-group - there is never any sense of even-handed usage of a CoC against the community.
There are levels to being a dick. I think that chronically online types tend to forget that at the other side of the screen there are real flesh-and-bone people who would find it unacceptable to be addressed in a disrespectful way.
There are a few nice-to-haves that would really help me out with making an OpenBSD transition. I thought of writing them myself because I am getting very fed up with Linux for the above reasons.
- IDE support is an issue still
- Filesystem challenging when using a laptop that runs out of battery
- MATE lacking volume and WiFi controls
- This one is just me being picky but a GUI to help me gain a better understanding of the security settings or alternatively more up to date books.
- I am not exactly sure on how to correctly use virtualization and I need it to support docker workloads at work
Your points are valid but I'd like to present counterpoints:
> IDE support is an issue still
IMO, languages and platforms that require IDEs also lead to complex software that is hard to maintain. The only exception is Smalltalk.
> Filesystem challenging when using a laptop that runs out of battery
Easily resolved by using apmd and its `-z` flag. I think there are a couple of utilities out there that you can script for monitoring battery level.
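A minimal sketch of that (the threshold is just an example): apmd's `-z` takes a battery percentage below which the machine auto-suspends when running on battery:

    # suspend automatically when unplugged and the battery drops below 15%
    rcctl enable apmd
    rcctl set apmd flags -z 15
    rcctl start apmd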
> MATE lacking volume and WiFi controls
One of the strengths of OpenBSD is that the CLI utilities are quite nice, such that I've not installed GUI replacements (I'm using cwm). I don't mind doing a few `doas ifconfig` every once in a while.
> but a GUI to help me gain a better understanding of the security settings
I'm with you on that one. But the man pages are truly extensive. And the OS code is fairly readable.
> how to correctly use virtualization
The current VM solution is very bare. For Docker, you'll need a Linux VM, but the installation process may be troublesome: it only supports serial interaction, and the serial console can be disabled by default in some distros.
> One of the strengths of OpenBSD is that the CLI utilities are quite nice, such that I've not installed GUI replacements (I'm using cwm). I don't mind doing a few `doas ifconfig` every once in a while.
I also don't mind doing things like this for the network, but volume is very much an instant, always-there requirement. If I need to mute/lower/raise the volume in a hurry, I don't want to hunt for the application playing the sound, then find the volume slider on it, etc.
This is literally a deal-breaker for desktop/laptop users.
What I'd like to know, if there are any OpenBSD people reading, is how hard is it to contribute a fix or similar to make the desktop environment's volume control work?
I can obviously fix it for myself with some gui script/keyboard shortcut/etc, but I'd rather have anything be in the default installation whenever I refresh the install.
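For what it's worth, the self-fix is small. A sketch using the in-base `sndioctl`, to be bound to the volume keys in whatever WM/DE you run (the step size and the binding mechanism are up to you, so treat this as an assumption, not the MATE fix):

    #!/bin/sh
    # volume.sh up|down|mute -- adjust the default sndio output
    case "$1" in
        up)   sndioctl output.level=+0.05 ;;
        down) sndioctl output.level=-0.05 ;;
        mute) sndioctl output.mute=! ;;
    esac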
“IMO, languages and platforms that require IDEs also lead to complex software that is hard to maintain.”
The truth is that I (and probably other users) don't always have the luxury of choice, and a large portion of commercial codebases have a very large number of files. Sometimes it is multiple codebases at once, each with a very large number of files.
“Easily resolved by using apmd and its `-z` flag. I think there are a couple of utilities out there that you can script for monitoring battery level.”
Yeah, but I don't want to accidentally lose data if I shut the lid and forget to plug the thing in for a few days.
“One of the strengths of OpenBSD is that the CLI utilities are quite nice”
I don't want to enter and exit a CLI tool in order to increase and decrease the volume. Ideally it's a control in the top right or a keyboard mapping. What if something loud begins playing in a browser tab and I have to change the volume quickly?
Hello! Here are my thoughts on your totally valid concerns of using OpenBSD on a laptop.
> IDE support is an issue still
Yes, I agree. I enjoy using VSCode for most projects and there is no native support today in 2025 as far as I know. It is possible to use the web version (vscode.dev), but naturally, this lacks some features of the desktop application.
Typically I use some lightweight editor like Leafpad which has some basic IDE features. Not a replacement for a real IDE, but just an idea.
> Filesystem challenging when using a laptop that runs out of battery
Yes, OpenBSD uses FFS2 as the default file system. It's a solid filesystem with extensive history and testing, but it's not particularly tolerant of sudden power loss. In my experience most OpenBSD systems will come back online automatically after power loss, but there is a risk it will drop into single user mode if `fsck` wants a human in the loop.
There are some things one can do to help mitigate this, granted it's not very appealing coming from a more fault tolerant journalling FS: automated backups, using the `sync` option on your main data partitions (can affect performance), and of course monitoring power as mentioned.
IMO, this is a bit easier to manage on desktop or server roles where one can put everything behind a UPS.
> MATE lacking volume and WiFi controls
I haven't used MATE on OpenBSD. It's possible it's a combination of hardware + OpenBSD + MATE if it's not working. I know I have had working media controls on OpenBSD laptops in the past but I tend to stick with older laptops, Thinkpads, etc.
There are some in-base utilities to probe media keys and hook into X etc. if you're open to scripting a bit on your own hardware.
But yeah, after using Linux on laptops, it would be annoying for media keys to not Just Work after installation.
> This one is just me being picky but a GUI to help me gain a better understanding of the security settings or alternatively more up to date books.
Fortunately, there aren't too many security settings to change on OpenBSD. The most common one for laptops would be to enable SMT, e.g. enable hyperthreading on CPUs that support it. It is disabled by default as SMT is difficult to secure properly, but it does naturally improve performance. The command is `sysctl hw.smt=1`, or `echo 'hw.smt=1' >> /etc/sysctl.conf` to make it permanent.
> I am not exactly sure on how to correctly use virtualization and I need it to support docker workloads at work
Virtualization is a little unusual on OpenBSD. It's not quite as flexible as qemu, FreeBSD jails, bhyve, KVM, etc. The `vmm` and `vmd` systems were built in-house by the OpenBSD team. It is currently limited to just one core per VM, the last I checked, and only supports serial and not VGA, so there is no way to run Windows under it, for example.
I have had great success running Alpine Linux under OpenBSD and then running Docker on top of that, which opens the door for many tools and apps to run under an OpenBSD hypervisor.
There are also some VPS providers out there that fully dogfood OpenBSD and run their entire VM architecture on OpenBSD, such as OpenBSD Amsterdam, so it is totally viable depending on what one needs to virtualize.
Of course, one can run qemu on OpenBSD and virtualize whatever the heart desires.
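For the Alpine-under-vmd route mentioned above, the basic flow looks roughly like this; disk size, memory, names and the ISO path are placeholders, and vmctl(8)/vm.conf(5) have the authoritative details:

    # create a disk image and boot the Alpine installer with a local (-L) interface
    vmctl create -s 20G alpine.qcow2
    vmctl start -c -m 2G -L -i 1 -r alpine-virt.iso -d alpine.qcow2 alpine

    # after installation, boot from disk and set up Docker inside the guest:
    #   apk add docker && rc-update add docker boot && service docker start
    vmctl start -c -m 2G -L -i 1 -d alpine.qcow2 alpine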
---
That said, while OpenBSD can be a great laptop OS, it can require a bit more setup and understanding compared to a mainstream Linux OS. IMO it's still worth playing around with, even in a VM or on different hardware (desktop, Raspberry Pi, etc.) just to see the OpenBSD way of doing things, because it is truly a wonderful OS to use and learn. Other OSs start to feel a bit clunky to me after using OpenBSD for a while. :)
You only need an IDE if you're dealing with lots of symbols and a complicated module system (Java, .NET). That's when you need a code indexing tool. For a lot of languages, a text editor is enough.
One of the reasons you don't see a lot of books around OpenBSD (aside from the very small userbase) is that the built-in documentation is so good. The manpages are actually worth reading, and for the more complex services, include examples and additional reading.
But still, the rest of your points are very true. OpenBSD is really not for everybody, but I think that's one of its strengths. It works extremely well for the people it works for, because it's not trying to coax new users into the fold.
Also, you know, like you don't have to use OpenBSD for everything. I still have plenty of Linux servers, and Linux computers, because there are some things OpenBSD is not suited to.
My impression is that the BSDs are laser-focused on providing efficient environments for networking backbone software, so special attention is paid to making it easy to orchestrate everything with rc.conf and to keeping anything not required for these goals out of the default installation; Linux (and its distributions), being far more general-purpose, naturally takes more configuration.
Linux packaging tools are bad, and the people who make Linux packages generally don't do a very good job at it, limited as they are by tools and motivation.
So much linux software doesn't come with sane defaults out of the box, doesn't have an easy path to common desired configurations, and doesn't have reasonable documentation. PARTICULARLY for "open" software that has a paid hosted option.
I say this after decades of a career where a very large proportion of the frustration and "stupid work" I've had to do involved getting a piece of software to do something obvious.
Working with the BSDs is just delightful in how wanting to do something turns into something working with ease.
I don't understand how you arrived at that conclusion. That comment probably passed through several BSD boxes on its way to me - BGP servers, DNS servers, absolutely critical things that *BSD shines at. Even this website itself apparently:
OpenBSD used to run my network, but Plan 9, specifically 9front, is even easier. Everything is configured using ndb, which is a flat text file containing entries for each system on the network. On my CPU server I run DHCP, DNS and TFTPd, which are three lines in /cfg/$sysname/cpurc. That's it. No init system and no /etc. Just start the programs, which all look at the same central database for config info. When I set up PXE booting it took literally 5 minutes of adding the tftpd line, adding an extra bootf= tuple in the machine's ndb entry, and a plan9.ini in /cfg/pxe, and I had a machine PXE-booting 9front over the network when turned on.
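To make that concrete, a sketch of the pieces described above (the service lines are the standard ones; the ndb attributes are real, but the host name, MAC, address and boot file path are invented):

    # /cfg/$sysname/cpurc -- the three services
    ip/dhcpd
    ndb/dns -s
    ip/tftpd

    # corresponding /lib/ndb/local entry for a PXE-booted machine:
    # sys=node1 ether=00aabbccddee01 ip=192.168.1.21 bootf=/386/9pxe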
> I don't know why it's so complicated on Linux where you need to appease a handful of daemons and find your way through a labyrinth of config files.
Not to mention that some newer servers you might want to run are containerised and have few, if any, instructions for how to set them up without containers.
Speaking of Linux, OpenBSD's hypervisor (vmm) supports it, so I managed to get Docker and containers running on my server via Alpine Linux. That opens the door to all the latest 'modern server stuff' running happily on an OpenBSD box.
Have you dealt with hardware failure or instability yet? It can be pretty annoying to pin down and isolate, unless you keep an order of magnitude of hoarded hardware around.
Probably a lot of things. Often software is simplified, back then because of limited hardware and probably other software; nowadays it's often a deliberate product decision, but it seemed for OS/2 no such limits existed. E.g. you could right-click on a program, get the properties, run multiple applications. It even had a Windows emulation so stable that it was never matched by WINE. Of course there was only 16-bit Windows support, but still...
Of course it had limitations of its own; I don't think you could run any DOS/4GW games. Linux installation seems simple compared to installing OS/2: I had to go through some sort of pre-installation guide which was printed on a separate paper and not part of the official manual. Also, dual boot was meant literally: you booted into OS/2 and then you could "exit" into Windows. Back in DOS/Windows there was a command to do this the other way around. One time I didn't do this for half a year and was really anxious about whether my setup would make it...