Hacker News | bauerd's comments

They are building infrastructure components that they will soon wire together with an orchestration layer: managed agents, scheduled tasks, workflow/webhook automation.

The infrastructure piece is what they do best. I'd be happy if Anthropic became the AWS of AI. But this product is just a mediocre shot at Figma, when no such shot is strictly necessary for them. These kinds of consumer products are not what they do best.

It would be interesting if they're dogfooding it by building these research projects on those building blocks.

Unifying experiences and tying them together is always harder than net new. It's the GRRM problem - expanding out the universe is easy, wrapping it up on the other hand...

This is a really verbose way to say that using generative AI has a detrimental effect on the user because one deprives themselves of the learning experience.

Agreed with your take on the parent, although I have to say AI has had the opposite effect for me: it has significantly accelerated my learning. In fact, not only is learning more effective/efficient, I have more time for it because I am not spending nearly as much time tracking down stupid issues.

> I have more time for it because I am not spending nearly as much time tracking down stupid issues.

It is a truism that the majority of a software dev's effort and time is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that and, if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, rather than absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutiae that you'll forget the context you were operating in. You'll forget what the point of it all was.

Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.


Just wanted to say you put it really well, that's exactly how I feel.

Sounds like you're talking about research AI and not generative AI. You can't learn artistic/creative techniques when you're not practicing those techniques. You can have a vision, but the AI will execute that vision, and you only get the end result without learning the techniques used to execute it.

Okay, this is a pet peeve of mine, so forgive me if I come off a little curt here, but-- I disagree strongly with how this was phrased.

"Generative AI" isn't just an adjective applied to a noun, it's a specific marketing term that's used as the collective category for language models and image/video model -- things which "generate" content.

What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".

But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"

I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)


That's a really useful distinction to have explicitly articulated. It's also why plan mode feels like a superpower. Research and generative AI are different; I'm going to use this.

I guess I was referring more to just using generative AI when learning new subjects and exploring new ideas. It's a really efficient tutor and/or sidekick that can explain topics in more depth, find better sources, or help me explore new theories. I was thinking beyond just generating code, which is incredibly useful but only mildly interesting.

Well, research is sometimes 10x quicker with an AI assistant, but not always. The building phase is maybe 20-100% quicker for me at least, depending on the complexity of the project. Greenfield work without 15 years of legacy that is never allowed to break is many times faster; it always has been.

Big difference between gaining knowledge and building/maintaining cognitive skills.

I assure you, working with LLMs is intellectually challenging, and becomes more so as the technology matures.

It really really really depends on how you are using it and what you are using it for.

I can get an LLM to write most of the CSS I need by treating it like a slot machine and pulling the handle till it spits out what I need; this doesn't cause me to learn CSS at all.


I find it a lot more useful for diving into bugs involving multiple layers and versions of 3rd-party dependencies. Deep issues where, when I see the answer, I completely understand what it did to find it and what the problem was (so in essence I wouldn't have learned anything diving deep into the issue), but it was able to do so in a much more efficient fashion than me cross-referencing code across multiple commits on GitHub, docs, etc...

This allows me to focus my attention on important learning endeavors: things I actually want to learn and am not forced to study simply because a vendor was sloppy and introduced a bug in v3.4.1.3.

LLMs excel when you can give them a lot of relevant context; they behave like an intelligent search function.


Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. The work is rarely intellectually stimulating, and when it is as a matter of methodology, it is often uninteresting as a matter of acquired knowledge.

The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is, modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.

Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking because it reduces the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.

Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.


I agree with your definition of programming (and I’ve been saying the same thing here), but

> It's annoying when a distracting and unessential detail derails this conversation

there are no such details.

The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into our own), but corrective and transformative models exist (abstraction).

> No one argues that we should throw away type checking,…

That's not a good comparison. Type checking helps with cognitive load when verifying correctness, but it increases it when you're not yet sure of the final shape of the solution. It's a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.

As long as you can pattern-match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.


> there are no such details.

Qua formal system, yes, but this is a pedantic point, as the aim - the what - of a system is more important than the how. This distinction makes the difference between domain-relevant features and implementation details more conspicuous. If I wish to predict the relative positions of the objects of our solar system, then in relation to that end and that domain concern, it matters not whether the underlying model assumes a geocentric or heliocentric stance (that, tacitly, is the deeper value of Copernicus's work; he didn't vindicate heliocentrism, he showed that a heliocentric model is just as explanatory and preserves appearances equally well, and I would say that this mathematical and even philosophical stance toward scientific modeling is the real Copernican revolution, not all the later pamphleteer mythology).

Of course, in relation to other ends and contexts, what were implementation details in one case become the domain in the other. If you are, say, aiming for model simplicity, then you might prefer heliocentrism over geocentrism with all its baroque explanatory or predictive devices.

The underlying implementation is, from a design point-of-view, virtually within the composite. The implementation model is not of equal rank and importance as the domain model, even if the former constrains the latter. (It's also why we talk about rabbit-holing; we can get distracted from our domain-specific aim, but distraction presupposes a distinction between domain-specific aim and something that isn't.) When woodworking, we aren't talking about quantum mechanical phenomena in the wood, because while you cannot separate the wood from the quantum mechanical phenomena as a factual matter - distinction is not separation - the quantum is virtual, not actual with respect to the wood, and it is irrelevant within the domain concerning the woodworker.

So, if there is a bug in a library, that is, in some sense, a distraction from our domain. LLMs can help keep us on task, because our abstractions don't care how they're implemented as long as they work and work the way we want. This can actually encourage clearer thinking. Category mistakes occur in part because of a failure to maintain clear domain distinctions.

> That’s not a good comparison. Type checking [...]

It reduces cognitive load vis-a-vis understanding code. When I want to understand a function in a dynamic language, I often have to drill down into composing functions, or look at callers, e.g., in test cases, to build up a bunch of constraints in my mind about what the domain and codomain are. (This can become increasingly difficult when the dynamic language has some form of generics, because if you care about the concrete type/class in some case, you need even more information.)

This cognitive load distracts us from the domain. The domain is effectively blurred without types. Usually, modeling something using types first actually liberates us, because it encourages clearer thinking upfront about the what instead of jumping right into how. (I don't pretend that types never increase certain kinds of burdens, at least in the short term, but I am talking about a specific affordance. In any case, LLMs play very nicely with statically-typed languages, and so this actually reduces one of the argued benefits of dynamic languages as ostensibly better at prototyping.)
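To make the contrast concrete, here is a minimal, hypothetical TypeScript sketch (the names and shapes are invented purely for illustration): the typed version declares its domain and codomain in the signature, while the dynamic-style version forces you to reconstruct them from callers and tests.

    // Dynamic-style: the shape of `order` and of the return value
    // must be inferred by reading callers and test cases.
    function totalLoose(order: any): any {
      return order.items.reduce((sum: number, i: any) => sum + i.price * i.qty, 0);
    }

    // Typed: the domain and codomain are stated up front.
    interface LineItem { price: number; qty: number; }
    interface Order { items: LineItem[]; }

    function total(order: Order): number {
      return order.items.reduce((sum, i) => sum + i.price * i.qty, 0);
    }

    console.log(total({ items: [{ price: 9.5, qty: 2 }] })); // 19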

> As long as you can pattern match to get a solution [...]

Indeed, and that's the point. LLMs work so well precisely because our abstractions suck. We have a lot of boilerplate and repetitive plumbing that is time-consuming and tedious and pulls us away from the domain. Years of programming research and practice have not resolved this problem, which suggests that such abstractions are either impractical or unattainable. (The problem is related to the philosophical question of whether you can formalize all of reality, which you cannot, and certainly not under one formal system.)

I don't claim that LLMs don't have drawbacks or tradeoffs, or require new methodologies to operate. My stance is a moderate one.


Yes but that’s why you ask it to teach you what it just did. And then you fact-check with external resources on the side. That’s how learning works.

> Yes but that’s why you ask it to teach you what it just did.

Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.


I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.

This isn't necessarily a bad thing. I know a little CSS and have zero desire or motivation to know more; the things I'd like done that need CSS just wouldn't have been done without LLMs.

This exactly. My CSS designs have noticeably gotten better without me, the writer, getting any better at all.

But were you trying to learn CSS in the first place?

I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.

It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.

The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.


> I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.

I don't even bother. Most of my use cases have been when I'm sure I've done the same type of work before (tests, CRUD queries,…). I describe the structure of the code and let it replicate the pattern.

For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of code).


My experience is mostly the opposite. Provided the right context and prompt, CC will generally produce code, even in domains I know, 10-20x faster.

Quality is a different issue, sure.


s/intellectually/emotionally/

It is better than doomscrolling on Instagram for hours like the newer generations do. At least the brain stays active, creating ideas or reading some text nonstop.

Are you sure that is not the illusion of learning? If you don't know the domains, how can you know how much you now know? Especially considering that these models are all Dunning-Kruger-inducing machines.

Agree on that too. And I use these as tools. I don't think I'm missing out on anything if I use this drill press to put a hole through an inch of steel instead of trying to spend a day doing it wobbly with a hand-drill.

No. It says much more than that, because it applies to many other tools that aren't AI.

Well, you know what they say about our current attention spans. If it's not a slogan it's already too long!

"Verbose" is the wrong adjective. Yours is a terse projection into a lower space, valid in itself, but lacking the power and precision of its archetype.

What if you're a musician and use design as part of your marketing? Why should a musician deep-dive into design when they really only care about music?

The argument is not that only designers can design, nor that everyone should design like a designer. It’s to not confuse shopping for or generating generic solutions with the activity of problem solving. Per Alexander, trivial problems, those that can be solved without balancing interactions between conflicting requirements, are not design problems. So, don’t worry and just pick what you need and like!

Presumably you care about the quality of your marketing; otherwise, why do it at all? Worst-case scenario, your marketing turns people off your music who would otherwise have been listeners.

Actually, there are some interesting problems here, because a huge part of music marketing happens in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.

So you should be really interested in how to capture the “vibe” of your music in a visual medium.

But if you don't care at all whether people actually listen to your music, then yeah, you don't have to deep-dive.


"Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound."

The term you are looking for is 'aesthetic'.

And indeed, music is far more than just a sound or whatever simple thing one tries to boil it down to.

I'm convinced many (especially here) really dislike that - they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.


just use clipart & templates and move on then, taking your argument to the extreme and skip the AI tooling.

I think the beauty of the human experience is that all you need to do to learn is practice. You automatically improve at what you're doing. The kinds of skills that atrophy when you use AI are skills that AI can already automate, and nobody is going to pay you to do slowly what a machine can do quickly and cheaply.

When you deploy AI to build something, you wind up doing the work that the AI itself can't do: holding large amounts of context, maintaining a vision, writing APIs and defining interfaces. Alongside, like, project management: how much time is spent on features vs. refactoring vs. testing.


I don’t know.

With the core programming skills atrophying, who’s going to have the skills to audit AI’s work?


> using generative AI has a detrimental effect on the user because one deprives themselves of the learning experience

Or it lets folks focus. My coding skills have gotten damn rough over the years, but I still like the math. Using AI to build visualizations while I work on the model math with paper and pen is the best of both worlds. I can rapidly work out the model I need, algebraically and analytically.

Does that mean my R skills are deteriorating? Absolutely. But I think that’s fine. My total skillset’s power is increasing.


Your point is maybe 25% of the original, so no, it's not just a really verbose way to say the same thing.

In fact, there's a palpable irony in your decision to make that comment, which only reinforces the OP.


I think the larger implied point is that the design will be crappy, because the problem was left unexplored.

Was thinking similarly... Without the friction, you're unable to explore the space; the space doesn't even exist at all. So it's not even clear where you're starting from or where you'll arrive.

Your paragraph is the parent's point in action.

If only all great works could just be an X post!


And, anyone who reads your comment will be deprived of the experience of learning why your comment makes so much sense.

Not really. It's saying that most people in tech have no fucking idea what designers do, but somehow feel qualified to evaluate their output, and think tools that make things that look nice are designing things. What you reference is one effect of what the comment is about. Another effect is developers, combining this with engineer's disease, being incredibly irritating to work with because they constantly make reductive comments that completely miss the point while other developers nod and say "yeah that sounds right." I was a developer for ten years -- I've seen this from both sides.

No, because the technology will be used against you.

The argument isn't that LLMs are bad because they can hallucinate. The author (clearly) argues that LLM use has negative cognitive effects on users and on society as a whole. Plus, the technology would wipe out a large, large number of jobs.

>They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.

Mastery isn't necessary. Why do Waymos lack drivers? Not because self-driving cars have mastered driving, but because self-driving works well enough that the economics don't play out for the cab driver.


The vast majority of people on this planet work repetitive, uncreative jobs.

There is no such job done by humans today that is 100% uncreative, but people will continue to insist there is.

The devaluing may come from AI pressure, but the harm is coming from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.


Oh? And what extensive knowledge and experience makes YOU qualified to determine what "the vast majority of people on this planet" are doing for work and if those tasks are creative or uncreative?

Not sure what you're insinuating. What do you think the statistically average job on this planet is? It's still cultivating a smallholder farm in developing countries, or working in logistics, manufacturing, or the broader service sector in developed countries.

All of these average jobs are structurally repetitive. Yes, humans do constantly inject creativity, but it's a means to an end, to getting the job done.

You apparently mistook my descriptive comment for a value judgment, but it isn't.


>a lack of imagination on what could be better

I'd argue it's likelier that people are more informed about their absolute position globally. Any screen gives you the mental image of the top of the ladder, so happy people end up scoring themselves low, because there's a globalized vision of wealth nowadays.

Besides, there's a difference between life self-evaluation and experienced happiness, so the report really is a misnomer.


>Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)

Absolutely. Tight feedback loops are essential to coding agents and you can’t run pipelines locally.


This is where I think we need better tooling around tiered validation - there's probably quite a bit you could run locally if we had the right separation; splitting the cheap validation from the expensive has compounding benefits for LLMs.
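Purely as an illustrative sketch (TypeScript, with placeholder commands standing in for whatever your project's actual cheap and expensive checks are), the separation could be as simple as running tiers in cost order and failing fast:

    // tiered-validate.ts -- hypothetical sketch: run the cheap tier first,
    // and only pay for the expensive tier once the cheap one passes.
    import { execSync } from "node:child_process";

    const tiers: { name: string; cmds: string[] }[] = [
      { name: "cheap (seconds)", cmds: ["npx tsc --noEmit", "npx eslint ."] },
      { name: "expensive (minutes)", cmds: ["npm test"] },
    ];

    for (const tier of tiers) {
      console.log(`tier: ${tier.name}`);
      for (const cmd of tier.cmds) {
        try {
          execSync(cmd, { stdio: "inherit" });
        } catch {
          console.error(`failed fast at: ${cmd}`);
          // the agent gets its feedback without waiting on the slow tier
          process.exit(1);
        }
      }
    }

A coding agent pointed at a script like this gets most of its feedback in seconds, which is exactly the tight loop the parent comment describes.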


brew install emacs


I heard there’s an emacs macro for that.


Generally wrong. It may cost less because its externalities aren't priced in.

