You are correct that OpenCASCADE is less refined than Parasolid, but I would argue that most people just don't need it. Practically, FreeCAD is fit for all purposes, except those for which you require knowledge of what a geometric kernel even is, and then you know who you are and how to serve yourself.
I got to use SolidWorks, CATIA, and Inventor before getting my hands on pre-1.0 versions of FreeCAD. I never really understood the argument that it's too complex. The UI may be what it is (and admittedly full of shortcomings), but I found FreeCAD to be very conventional in the sense that you build out of sketches, define constraints that are identical to those in every other tool, compose those through extrusions, revolutions, etc.
The fact that it crashed on me for everything and nothing all at once seemed a bigger problem than "complexity".
Now we do computing like we play Sim City: sketching fuzzy plans and hoping those little creatures behave the way we thought they might. All the beauty and guarantees offered by a system obeying strict and predictable rules goes down the drain, because life's so boring, apparently.
I think it's Darwinian logic in action. In most areas of software, perfection or near-perfection are not required, and as a result software creators are more likely to make money if they ship something that is 80% perfect now than if they ship something that is 99% perfect 6 months from now.
I think this is also the reason why the methodology typically named (or misnamed) "Agile", which can be described as just-in-time assembly-line software manufacturing, has become so prevalent.
> software creators are more likely to make money if they ship something that is 80% perfect now than if they ship something that is 99% perfect 6 months from now.
Except they are shooting themselves in the foot. It reminds me of the gold rush, where the shovel and trouser sellers (here, the AI companies) would make more money than the miners (developers).
Soon there will be barely any software to build if the general public can just ask an AI to do the things they want. 10 years ago, people would ask a friend who knew Photoshop to help them edit a picture or create something. Nowadays most of them just ask an AI. The same will happen to any kind of productivity or artistic tool. The people allergic to AI slop will just go full Luddite and analog and won't use a computer for anything artistic, so software creators will lose them altogether. Home and professional software might gradually just disappear, and most software creators will have spent thousands of dollars in tokens with nothing left to sell. What might survive are only the tools that AI relies on: operating systems, databases and storage systems, etc.
But boy, you will have been super productive, yet totally cancelled out by the increased competition, for the few years it lasted.
The difference is that it's not a toy. I'd rather compare it to the early days of offshore development, when remote teams were sooo attractive because they cost 20% of an onshore team for a comparable declared capability, but the predictability and mutual understanding proved to be... not as easy.
We will not arrive at the desired state without stumbling around and going completely off the rails, as we do, but clearly the idea here is to do stuff that we failed to do under the previous "beauty and guarantees" paradigm.
>Now we do computing like we play Sim City: sketching fuzzy plans and hoping
I still have a native install of Sim City 2000 — which I've played since purchasing decades ago. My most recent cityscape only used low-density zoning, which is a handicap that leads to bucolic scenery and constant cashflow issues.
It's fuzzier sketching, more aimless fun as I've gotten older.
It’s like coders (and now their agents) are re-creating biology. As a former software engineer who changed careers to biology, it’s kind of cool to see this! There is an inherent fuzziness to biological life, and now AI is also becoming increasingly fuzzy. We are living in a truly amazing time. I don’t know what the future holds, but to be at this point in history and to experience this, it’s quite something.
The issue is that for most things we don't want the fuzzy nature of biology in our systems, yet some people try to shoehorn it into everything. It is OK for chat or natural-language things directed at a human, but most other systems we would like to be 100% reliable, not 99% or failing after a few years. At the very least we want them to behave predictably, so that we can fix any mistakes we made when writing that software.
We spent a ton of time removing subjectivity from this field… only to forcefully shove it in and punish it for giving repeatable objective responses. Wild.
> What if code generation is copy-pasting GPL-licensed code in to your proprietary codebase?
This is obviously a big, unanswered issue. It's pretty clear to me that we are collectively incentivised to pollute the well, and that it will happen for long enough for everything to become "compromised". That's essentially abandoning open source and IP licensing at large, taking us into an uncharted era where intellectual works become the protected property of nobody.
I see chatbots having less of an impact on our societies than the above, and interestingly it has little to do with technology.
> we are collectively incentivised to pollute the well
Honestly, there are two diametrically opposed incentives operating right now. The one you describe may not even be paramount: how hard is it to prove infringement, shepherd a case through court, and win a token amount? Is it worthwhile, just to enrich a few lawyers and open yourself up to more AI-regurgitated slop?
The second incentive is to not publish source code that might be vacuumed up by a completely amoral automaton. We may be seeing the second golden age of proprietary software.
Waiting for the LLM evangelists to tell us that their box of weights of choice did that on purpose to create engagement as a sentient entity understanding the nature of tech marketing, or that OP should try again with quatuor 4.9-extended (that really ships AGI with the $5k monthly subscription addon) because it refactored their pet project last week into a compilable state, after only boiling 3 oceans.
Using an LLM to generate an image of a diagram is not a good idea, but you can get really good results if you ask it to generate a draw.io SVG (or a Miro diagram through their MCP).
I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!
For me too, ejabberd is the admin-friendlier, lower-effort one. Being more "monolithic", it ships a properly configured TURN server out of the box so your calls work straight away, manages certificates via ACME for you, etc. Prosody isn't bad, but it has a reputation for requiring attention to which incompatible modules not to enable together, and overall more protocol knowledge. Both will run effortlessly on a first-gen RPi.
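To give a feel for how little is involved, here is a minimal sketch of the relevant ejabberd.yml pieces, assuming the defaults of a recent ejabberd; the hostname, contact address, and port range are placeholders, so check the docs for your version:

```yaml
hosts:
  - example.org

# Let ejabberd obtain and renew TLS certificates via ACME (Let's Encrypt)
acme:
  auto: true
  contact: "mailto:admin@example.org"

listen:
  # Built-in STUN/TURN for audio/video calls; ejabberd advertises it
  # to clients via External Service Discovery (XEP-0215)
  -
    port: 3478
    transport: udp
    module: ejabberd_stun
    use_turn: true
    turn_min_port: 49152
    turn_max_port: 65535
```

This is the "straight out of the box" point: one config file covers certificates and calls, with no separate TURN daemon to operate.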
Re: Signal, it's even worse: they are openly opposed to federation and to letting alternative clients use their server. They demand control and obedience, which has always been suspicious enough to defeat any goodwill effort on their side. Why would I have/want to trust them when XMPP is a viable federated alternative?
Signal focuses on security and privacy above all else, which they don't think a federated model can do well. Case in point, XMPP in practice is less secure than Signal but has the advantages you mentioned.
The other common anti-federation argument is spam/reputation, which is basically the reason email is becoming more centralized unfortunately, though it still survives.
Matrix has reached a complexity threshold that makes independent client/server implementations near-impossible. Element is terrible, and many contenders are better in one way or another, but all lack some essential feature that would turn them into practical alternatives.