In many ways, it is a zero-sum game. Art is a form of communication, and people have a finite amount of time and attention. Some people enjoy seeing the craftsmanship of artists, and some artists enjoy displaying their mastery of the craft. Beyond that, people use craftsmanship as a proxy for the care and thought put into a work. If you can successfully mimic the appearance of craftsmanship without the effort, a major incentive for artists to create, polish, and publish their work is now gone. If you're someone who enjoys viewing craftsmanship, or who tries to find high-quality work based on the craftsmanship put into it, how long will you be willing to look through a sea of increasingly convincing noise to find some kind of signal?
I don't think this type of argument is sound at all. There are plenty of programmers whose work doesn't contribute to automating away others' jobs, or who don't see their work that way. You are free to disagree with the opinions expressed by the poster above, but making such a sweeping generalization about how we shouldn't hold a supposedly hypocritical opinion, based on some kind of imagined consensus, seems like an excuse to promote your views over others' as the 'correct' ones.
I’m not saying individual programmers consciously set out to eliminate jobs, or that every programmer's work directly replaces someone. But the historical and structural reality of the profession is that software development, as a field, has consistently produced automation that reduces the amount of human labor required.
That pattern is bigger than any one of us and it's not a moral judgment. It's simply part of what technology does and has always done. AI is a continuation of that same trend we've all participated in, whether directly or indirectly. My point is that to stop now and say "look at all these jobs being eliminated by computers" is several decades too late.
The LLM erotic roleplaying community's usage of "slop" aligns with the definition in this paper, so it's not without precedent. Several novel sampling methods have originated from that community trying to address this specific issue.
Yup. You see this with new samplers: the very first projects to get min_p were oobabooga text gen webui and sillytavern, circa early 2023. Same with diffusion models: the first projects to get new denoising algorithms are ComfyUI, Automatic1111, etc.
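For readers unfamiliar with min_p: it filters the next-token distribution by keeping only tokens whose probability is at least some fraction of the top token's probability, then renormalizes the survivors and samples from them. A minimal sketch of the idea, assuming a plain NumPy logits vector; the function and parameter names here are illustrative, not any particular project's API:

    import numpy as np

    def min_p_sample(logits, min_p=0.05, rng=None):
        # Sketch of min_p sampling: keep only tokens whose probability is at
        # least `min_p` times the probability of the most likely token, then
        # renormalize the survivors and sample from them.
        rng = rng or np.random.default_rng()
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        threshold = min_p * probs.max()   # dynamic cutoff, scales with model confidence
        filtered = np.where(probs >= threshold, probs, 0.0)
        filtered /= filtered.sum()
        return rng.choice(len(probs), p=filtered)

    # Toy 5-token vocabulary: the two strongest tokens survive, the long tail is cut off.
    logits = np.array([2.0, 1.5, 0.2, -1.0, -3.0])
    print(min_p_sample(logits, min_p=0.1))

Because the cutoff scales with the model's own confidence, it prunes the incoherent tail aggressively when the model is sure and loosely when it isn't, which is why that community reached for it against repetitive "slop" output.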
I love programming, but it turns out I love building useful stuff even more than I love programming. Agentic coding helped me fall in love with development all over again. If a team of junior engineers suddenly showed up at my door and offered to perform any tasks I was willing to assign to them for the rest of my life, I'd love that too.
Agentic coding is just doing for development what cloud computing did for systems administration. Sure, I could spend all day building and configuring Linux boxes to deploy backend infrastructure on if the time and budget existed for me to do that, and I'd have fun doing it, but what's more fun for me is actually launching a product.
Sadly, the days when people joined software engineering out of passion are long behind us. Nowadays people join just for the money or because there are a lot of jobs available.
It is very easy to notice at work who actually likes building software and wants to make the best product, and who is there for the money, wants to move on, and will hard-code something to get away with the minimal amount of work, usually because they don't care much. People like that love vibe coding.
I don't like the kind of programming that an LLM can easily accomplish.
For instance, I recently had to replace a hard-coded parameter with something specifiable on the command line, in an unfamiliar behemoth of a Java project. The hard-coded value was literally 20 function calls deep in a heavily dependency-injected stack, and the argument parser was of course bespoke.
Claude Code oneshotted this in about 30 seconds. It took me all of 5 minutes to read through its implementation and verify that it correctly called the custom argument parser and percolated its value down all 20 layers of the stack. The hour I got back by not having to trace through all those layers myself went toward the sort of programming I love, the kind that LLMs are bad at: things like novel algorithm development, low-level optimizations, designing elegant and maintainable code architecture, etc.
Wait, you were unfamiliar with a behemoth Java project to the point of dreading making the change yourself, and yet only spent 5 minutes reviewing "someone else's" PR?
Yup. Replacing a single hard-coded parameter with a command line argument is hardly an earth shattering change. It's trivial to verify that the argument properly gets passed down the stack (and that passing it has no side-effects), but figuring out that stack in the first place would have taken a much longer time. Think of it like an NP-complete problem: hard to solve, but easy to check that a solution is correct.
For more complex modifications, I would have taken the time to better internalize the code architecture myself. But for a no-brainer case like this, an LLM oneshot is perfect.
Sorry if I haven't been clear: it's one variable, used exactly once at the very bottom of the call stack. The change only required adding a corresponding extra argument or class member to all of the functions/classes upstream. In fact, there were other variables in the caller of the bottom function that get passed down from the command line, a pattern that the LLM likely picked up on (and exactly what clued me in to the fact that the LLM would likely make this change very easily, a hunch that proved correct).
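To make the shape of the change concrete, here's a hypothetical sketch of the pattern in Python (the real project was Java, the stack was ~20 layers deep, and argparse stands in for its bespoke parser; the flag and parameter names below are made up): a value parsed once at the top gets threaded through every intermediate layer down to the single place it's used.

    import argparse

    def bottom_layer(timeout_ms: int) -> None:
        # Previously a hard-coded value lived right here, e.g. timeout_ms = 5000.
        print(f"running with timeout_ms={timeout_ms}")

    def middle_layer(timeout_ms: int) -> None:
        bottom_layer(timeout_ms)  # each intermediate layer just passes the value along

    def top_layer(timeout_ms: int) -> None:
        middle_layer(timeout_ms)

    def main() -> None:
        parser = argparse.ArgumentParser()
        parser.add_argument("--timeout-ms", type=int, default=5000)
        args = parser.parse_args()
        top_layer(args.timeout_ms)

    if __name__ == "__main__":
        main()

Verifying a change like this mostly means checking that the new parameter appears in each signature along the path and that nothing else changed, which is why the review takes minutes even when mapping the stack from scratch would not.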
You raise a good point: an important skill in using LLMs effectively for coding is being able both to recognize ahead of time that cases like this are indeed simple, and to recognize after the fact that the code is more complex than you initially realized and you can't easily internalize the (side) effects of what the LLM wrote, warranting a closer look.
The tools and knowledge for making music are already unbelievably accessible. Anyone with an internet connection and a decent computer can read about music theory, learn to use a DAW, and get some basic virtual instruments. The same goes for producing art, which doesn't even require anything digital.
This does not augment the music-making process in any way; it simply replaces it with what might as well be a gacha game. There's no low-level experimentation, no knowledge acquisition, no growth, and you can't even truly say you made whatever comes out.
It's not a tool for music creators, it's a tool for people who want slop that's "good enough".
Sure, with several hundred hours to spare one can make some songs in a DAW. Now one can make something as good/bad in maybe a tenth of the time. Or, given the same time investment, one can possibly make something better!
The goal of AI automating labor should be to give us more leisure time to pursue hobbies, not to fill our limited leisure time with low quality substitutes for those hobbies.
Making an activity in which the primary limiting factors for most people are the time, knowledge, and effort required (as opposed to expensive tools) into an effortless slot-machine pull is enfeebling to human creativity and agency. Who will spend the hours making bad music that it takes to become good, if they can just rely on something else to generate music that's "good enough"?
There's something to be said about all this, related to AI-generated images, that I rarely see brought up: people with specific skills play roles within their social groups. When AI makes a hobby they've dedicated so much time to easily accessible, they lose social value, which might make them quit altogether.
The common response that "people should make art because they love it, not for attention" is a prescriptive statement that supposes there are more or less "pure" forms of performing an activity and also ignores that art is a form of communication.
"low quality substitute" and "effortless" are value judgements on your behalf. Many made similar judgements about DAWs and VSTs. And that is your right. But not everyone sees it in the same way - for some generative models are opening up a new world of possibilities.
I agree that the slot machine pull of current models is tedious and boring. I look forward to models/systems which better facilitate more creative control, directed exploration and iterative refinement.
Yes, there are a TON of free tools and endless instruction on using them. If you move your budget up to one-time payments for tools that cost less than a single month of a subscription service, you get an astonishing breadth of new options. Beyond that, many of the more expensive music-making tools are one-time purchases rather than subscriptions. Buy Ableton once? You own it. You can get the latest version at a discount, but there's absolutely nothing stopping you from using the version you bought, in perpetuity.