Hacker News | try-working's comments

Elaborate.

The whole post is based on the deluded idea that AI will cause a big hiring boom, because "the more AI-assisted work you do, the more work you create for yourself".

To begin with, you could just use AI to speed up your current level of work with your existing developers, and have them use their new free time to polish it further. No hiring boom.

Second, you could use AI to speed up your current level of work and keep fewer developers, since one developer can now do 2x or 3x the work. The rest of your team stays as is, since they are already there to support your current level of work. No hiring boom.

Third, you could use AI to add more features faster, and also use AI to write the documentation for those features, AI to provide most of the support (chatbots and automated replies and such), AI to speed up QA on those features, and AI to design promotional materials for them. Still no hiring boom.

Even the initial claim that it takes 10x or 100x "in order for you to really share it with other people in your organization and for you to be able to guarantee the quality of the experience, the accuracy of the data and so on" is of course bogus. That part gets faster too, even as you increase your new-feature churn. Besides, half-baked is the new standard. What you'll see is more stuff being shipped barely better than the initial vibecoded output, not more polished experiences or guarantees of "quality and accuracy". We didn't get those for a long time pre-AI, and we sure aren't getting them now that AI has arrived.

Even more important: if everybody is churning out new features and products with AI, who exactly is buying all of them? People have been cutting spending for close to a decade now; subscription and app fatigue is real, as is inflation.

Did the author notice the massive layoffs explicitly attributed to AI (even if many of them are really due to the worse economic times, with AI just used as an excuse to make companies appear forward-looking)? Do those look anything like a hiring boom?



humans are still very much needed to verify AI outputs

I wanted to like Discworld and read a couple of the books, but frankly they're not funny.

He definitely grew significantly as a writer over time, and I would agree that some of his early work isn't particularly strong (The Light Fantastic, for example, is relatively bog-standard comic fantasy without any of the depth his later work showed).

If you start reading at the very beginning of the Discworld, you're slogging through the weaker stuff, and it's easy to get discouraged. A smoother path is to pick one of the defined sub-series (the guards are very popular, but my vote goes to the witches) and start along just that track; you'll get to the strong stuff much faster.


My advice has always been to start with Small Gods. It is a standalone book that references but does not rely on any others, and is far enough into his career that it’s fair to say that if you don’t like it, you won’t like his work in general.

You must have had a very good surgeon then! Congrats!

Looks alright for a "first", but there's no reason for anyone to really use it until they open source it.

it's an agent product that you interact with over email. it has skills and stuff. it has multiplayer. so it's something like openclaw, but different.

you can counter the context rot and requirement drift that many users here experience by using a recursive, self-documenting workflow: https://github.com/doubleuuser/rlm-workflow

I use a self-documenting recursive workflow: https://github.com/doubleuuser/rlm-workflow

you can use lurkkit.com to build your own chronological youtube feed with only your subscriptions
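Independent of lurkkit.com (whose internals I can't see), the core of any such tool is just a merge-and-sort over per-channel upload lists, e.g. as returned by the YouTube Data API. A minimal sketch in Python, with hypothetical sample data standing in for real API responses:

```python
from datetime import datetime

def chronological_feed(uploads_by_channel):
    """Merge per-channel upload lists into one feed, newest first.

    uploads_by_channel: dict mapping channel name to a list of
    (iso_timestamp, video_title) tuples.
    """
    feed = [
        (datetime.fromisoformat(ts), channel, title)
        for channel, uploads in uploads_by_channel.items()
        for ts, title in uploads
    ]
    # Sort by timestamp, descending, so the newest upload comes first.
    return sorted(feed, key=lambda item: item[0], reverse=True)

# Hypothetical sample data in place of real subscription data.
subs = {
    "ChannelA": [("2024-05-01T12:00:00", "Video 1"),
                 ("2024-05-03T09:30:00", "Video 2")],
    "ChannelB": [("2024-05-02T18:15:00", "Video 3")],
}
for ts, channel, title in chronological_feed(subs):
    print(ts.date(), channel, title)
```

In a real tool the `subs` dict would be built from the API's subscriptions and uploads-playlist endpoints; the sorting step is the whole "chronological feed" part.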


> Traditional Chinese relies on context: "Rain heavy, not go" (雨大,不去了).

> Modern Chinese demands explicit logic: "Because the rain is heavy, therefore I will not go." (因为雨下得很大,所以我决定不去了。)

I would say "下雨了,我不去" or something like that. The second example is perhaps what a language learner would say in order to "speak correctly", but nobody actually speaks or writes like that.


Totally. I also feel such a disconnect with HSK material; no one speaks like that or even uses that vocabulary. But I guess that's the case with almost every language course.


What's gone unnoticed with the Gemma 4 release is that it crowned Qwen as the small-model SOTA. So for the first time a Chinese lab holds the frontier in a model category. It's a minor DeepSeek moment, because Western labs now have to catch up with Alibaba.


on my 16 GB GPU Gemma 4 is better and faster than Qwen 3.5, both at 4-bit

so it's not so clear cut


depends on usage: Gemma 4 is better at visuals/HTML/CSS and language understanding (which probably plays a role in prompting), but it's worse at code in general compared to Qwen 3.5 27B.
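As a rough sanity check on why a 27B model at 4-bit is about the ceiling for a 16 GB card: weight memory is roughly params × bits / 8 bytes, before KV cache and runtime overhead. A back-of-the-envelope sketch (overhead deliberately ignored):

```python
def approx_weight_gb(params_billion, bits):
    """Approximate model weight size in GB: params * (bits / 8) bytes.

    1e9 params * (bits/8) bytes/param ~= params_billion * bits / 8 GB.
    Ignores KV cache, activations, and framework overhead.
    """
    return params_billion * bits / 8

# A 27B model at 4-bit needs ~13.5 GB for weights alone,
# leaving little headroom on a 16 GB GPU for KV cache etc.
print(approx_weight_gb(27, 4))
# The same model at fp16 would need ~54 GB and not fit at all.
print(approx_weight_gb(27, 16))
```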

Which one in the series, specifically?

It's unnoticed because it didn't happen. In Google's own benchmarks they are on par, and I've seen third-party benchmarks where Qwen beats Gemma 4 by a wide margin.


The day a Western anything needs to catch up with Alibaba will be a notable day indeed. Also, this will never happen.

