theevilsharpie's comments

I have used Terraform, Puppet, Helm, and Ansible (although that's not strictly declarative), and all of them ran into problems in real-world use cases that needed common imperative language features to solve.

Not only does grafting this functionality onto a language after the fact inevitably result in a usability nightmare, it also gets in the way of enabling developer self-service for these tools.

When a developer used to the features and functionality of a full-featured language sees something ridiculous like Terraform's `count` parameter being overloaded as a conditional (because Terraform's HCL wasn't designed with conditional logic support, even though every tool in this class has always needed it), they go JoePesciWhatTheFuckIsThisPieceOfShit.mp4 at it and just kick it over to Ops (or whoever gets saddled with the grunt work) to deal with.
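For anyone who hasn't run into it, the idiom looks roughly like this (the resource and variable names here are made up for illustration):

    resource "aws_instance" "bastion" {
      # There is no "if" for resources, so "create zero or one copies"
      # stands in for the conditional.
      count = var.create_bastion ? 1 : 0

      ami           = var.bastion_ami
      instance_type = "t3.micro"
    }

And because `count` turns the resource into a list, every other reference to it then has to be written as aws_instance.bastion[0].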

I'm seeing the team I'm working with go down that same road with Helm right now. It's just layers of YAML templating, and in addition to looking completely ugly and having no real support for introspection (to see what a Helm chart actually does, you essentially have to compile it first), it has such a steep learning curve that no one other than the person who came up with this approach wants to even touch it, even though enabling developer self-service was an explicit goal of our Kubernetes efforts. It's absolutely maddening.
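(To be concrete about the "compile it first" part: the only way to see the actual manifests a chart will produce is to render it locally with something like

    helm template my-release ./my-chart -f values.yaml

where the release name, chart path, and values file are placeholders, and then read through the generated YAML by hand.)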


> Sure, but lts often doesn't work for other use cases like gaming. For example the experience on lts with this year's AMD gpus will be extremely poor if it works at all.

I'm using Ubuntu 24.04 LTS with a Radeon RX 9070 XT (currently the most recent and highest-end discrete GPU that AMD makes), and it works fine, both functionally and in terms of performance.

> I run Arch and my 9070 xt experience was poor for several months after release. I can't imagine modern gaming on an lts release.

Maybe instead of imagining it, you should just try it?


> Arch being unstable is a myth.

Arch follows a rolling release model. It's inherently unstable, by design.


You are probably using some annoying pedantic definition of unstable. Most people mean it to mean “does stuff crash or break”. Packages hang out in arch testing repos for a long time. In fact, Fedora often gets the latest GNOME release before Arch does, sometimes by months.


> You are probably using some annoying pedantic definition of unstable. Most people mean it to mean “does stuff crash or break”.

English has a specific word for that: reliable.

Pedantry aside, having a complex system filled with hundreds (thousands?) of software packages whose versions are constantly changing, and whose updates may have breaking changes and/or regressions, is a quick way of ending up with software that crashes or breaks through no fault of the user (save for the decision to use a rolling release distro).


This isn't true in practice. It turns out that incrementally updating with small changes is more stable in the long run than doing a large number of significant upgrades all at once.

Have you ever had to maintain a software project with many dependencies? If you have, then surely you have had the experience where picking up the project after a long period of inactivity makes updating dependencies much harder. Whereas an actively maintained or developed project, where dependencies are updated regularly, is much easier. You know what is changing and what is probably responsible if something breaks, etc. And it's much easier to revert.


> Have you ever had to maintain a software project with many dependencies? If you have, then surely you have had the experience where picking up the project after a long period of inactivity makes updating dependencies much harder. Whereas an actively maintained or developed project, where dependencies are updated regularly, is much easier. You know what is changing and what is probably responsible if something breaks, etc. And it's much easier to revert.

Have you ever had situations where Foo has an urgent security or reliability update that you can't apply, because Bar only works with an earlier version of Foo, and updating or replacing Bar involves a significant amount of work because of breaking changes?

I won't deny that there's value in having the latest versions of software applications, especially for things like GPU drivers or compatibility layers like Proton where updates frequently have major performance or compatibility improvements.

But there's also value in having a stable base of software that you can depend on to be there when you wake up in the morning, and that has a dependable update schedule that you can plan around.


Debian -- probably not, but Ubuntu has numerous variants whose primary purpose is providing a different desktop experience, and a SteamOS-like variant would fit in perfectly with that.


That’d still come with the limits brought by the old kernels Ubuntu ships.

Which, as an aside, I think distros should advertise better. It must be awful to be sold on a distro only to find that it doesn't support your newish hardware. A simple list of supported hardware linked on the features and download pages would suffice, but a little executable tool that tells you whether your box's hardware is supported would be even better.


> the kernel might still be good but the userland is just awful in every way imaginable

The Windows kernel is also falling behind. Linux is considerably faster for a wide variety of workloads, so much so that if you're CPU limited at all, moving from Windows to Linux can net you an improvement similar to moving up a CPU generation.


Dial-up modems can transfer a 4K HDR video file, or any other arbitrary data.

It obviously wouldn't have the bandwidth to do so in a way that would make a real-time stream feasible, but it doesn't involve any leap of logic to conclude that a higher bandwidth link means being able to transfer more data within a given period of time, which would eventually enable use cases that weren't feasible before.
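To put rough numbers on it (assuming a ~50 GB 4K HDR file and a 56 kbps modem):

    50 GB = 4 x 10^11 bits; at 56,000 bits/s, that's about 7.1 million seconds, or roughly 83 days

Hopeless for streaming, but it's purely a bandwidth problem, not a conceptual one.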

In contrast, you could throw an essentially unlimited amount of hardware at LLMs, and that still wouldn't mean that they would be able to achieve AGI, because there's no clear mechanism for how they would do so.


From a modern perspective it's obvious that simply upping the bandwidth allows streaming high-quality video, but it's not strictly about "more bigger cable". Huge leaps in various technologies were needed for you to watch video in 4k:

- 4k consumer-grade cameras

- SSDs

- video codecs

- hardware-accelerated video encoding

- large-scale internet infrastructure

- OLED displays

What I'm trying to say is that I clearly remember reading an old article about sharing mp3s on P2P networks, and the person writing it was confident that video sharing, let alone video streaming, let alone high-quality video streaming, wouldn't happen in the foreseeable future because there were just too many problems with it.

If you went back in time just 10 years and told people about ChatGPT, they simply wouldn't believe you. They imagined that an AI that can do the things current LLMs can do must be insanely complex, but once technology made that step, we realized "it's actually not that complicated". Sure, AGI won't surface from simply adding more GPUs to LLMs, just like LLMs didn't emerge from adding more GPUs to "cat vs dog" AI. But if technology took us from "AI can tell a dog from a cat 80% of the time" to "AI is literally wiping out entire industry sectors like translation and creative work while turning people into dopamine addicts en masse" within ten years, then I assume I'll see AGI within my lifetime.


There's nothing about 4K video that needs an SSD, an OLED display, or any particular video codec, and "large-scale internet infrastructure" is just a different way of saying "lots of high-bandwidth links". Hardware graphics acceleration was also around long before any form of 4K video, and a video decoding accelerator is such an obvious solution that dedicated accelerators were used for early full-motion video before CPUs could reasonably decode it.

Your anecdote regarding P2P file sharing is ridiculous, and you've almost certainly misunderstood what the author was saying (or the author themselves was an idiot). The fact that there wasn't sufficient bandwidth or computing power to stream 4K video at consumer price points during the heyday of mp3 file sharing didn't mean that no one knew how to do it. It would be as ridiculous as me claiming today that 16K stereoscopic streaming video can't happen. Just because it's infeasible today doesn't mean that it's impossible.

Regarding ChatGPT, setting aside the fact that the transformer model ChatGPT is built on was under active research 10 years ago: sure, breakthroughs happen. That doesn't mean you can linearly extrapolate future breakthroughs. That would be like claiming that if we develop faster and more powerful rockets, then we will eventually be able to travel faster than light.


> Besides, the gaming industry keeps shooting themselves in the foot by only supporting Windows (Mac is a thing too). That is slowly changing, but so many game devs are drinking the Microsoft koolaid they don't even consider using another graphics API other than DirectX. Many other decisions like that as well.

The gaming industry is thoroughly multi-platform, and many games that are limited to Windows on general-purpose PCs aren't so because they require DirectX, since they've also been developed for PlayStation, where DirectX isn't a thing.

Support for Mac can be somewhat challenging, partly because the platform (including the hardware) is so different from other general-purpose PCs, and partly because Apple doesn't particularly care about backwards compatibility, and will happily break applications if it suits their interest.

However, a developer that doesn't support Linux does so because they don't want to for whatever reason, not because the technical bar is too high. With the work that has gone into Wine, Proton, and other Windows compatibility libraries these days, there's a good chance that a Windows game will "just work" unless the developer does something to actively inhibit it.


DirectX has such a pull for game devs that Sony made its own so-called PlayStation Shader Language (PSSL), designed to be very similar to the HLSL standard in DirectX 12. Granted, this was an effort to meet devs where they are, given how difficult it was to develop for the PS3. But then again, that's where devs are.

I mentioned DirectX as a clear enough example, but there are other decisions just like it.

Hell, most studios use Unreal nowadays, and it already has its own RHI (Rendering Hardware Interface) between them and the graphics API. It really isn't much effort to start new projects targeting Metal or Vulkan.

Some, like Ubisoft and Capcom, have noticed this section of the market in which they can grow (see their games on Mac and iOS), which is why I said it's slowly changing. And that demonstrates it really isn't difficult.

Funny note: I've heard from a little bird that many of Ubisoft's engines had Vulkan support so their games could run on Google Stadia's servers. As soon as Stadia got killed, so did those engine branches. This just shows it isn't difficult at all. And it isn't a surprise that they've since acknowledged this by using Metal to reach Apple's users.


I have seen no credible explanation on how current or proposed technology can possibly achieve AGI.

If you want to hand-wave that away by stating that any company with technology capable of achieving AGI would guard it as the most valuable trade secret in history, then fine. Even if we assume that AGI-capable technology exists in secret somewhere, I've seen no credible explanation from any organization on how they plan to control an AGI and reliably convince it to produce useful work (rather than the AGI just turning into a real-life SHODAN). An uncontrollable AGI would be, at best, functionally useless.

AGI is -- and for the foreseeable future, will continue to be -- science fiction.


You seem to be making two separate claims: first, that it would be difficult to achieve AGI with current or proposed technology, and second, that it would be difficult to control an AGI, making it too risky to use or deploy.

The second is a significant open problem (the alignment problem), and I'd wager it is a very real risk which companies need to take more seriously. However, whether it would be feasible to control or direct an AGI toward reliably safe, useful outputs has no bearing on whether reaching AGI is possible via current methods. Current scaling gains and the rate of improvement (see METR's time horizons for work an AI model can do reliably on its own) make it fairly plausible, and certainly more plausible than the flat denial that AGI is possible, which I see around here backed by very little evidence.


> I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

Legacy floating-point and SIMD instructions exposed by the ISA (and extensions to it) don't have any bearing on how the hardware works internally.

Additionally, AMD processors haven't supported 3DNow! in over a decade -- K10 was the last processor family to support it.


80-bit x87 has no bearing on SSE implementation.

Right. Not.


Intel has only about half of the server market at this point, and that's with their products priced so low they're nearly selling them at cost.

The margins on their desktop products are also way down, their current desktop product isn't popular due to performance regressions in a number of areas relative to the previous generation (and not being competitive with AMD in general), and their previous generation products continue to suffer reliability problems.

And all this, while they're lighting billions of dollars on fire investing in building a foundry that has yet to attract a single significant customer.

Intel's not in a good spot right now.

