h4ny's comments | Hacker News

> I'm being a bit facetious here...

Maybe just don't do that? It's never helpful in good-faith discussions and just indicates a lack of empathy and maybe a lack of understanding of the actual issue being discussed.

> So, you haven't identified any actual problems with them being on social media though.

The problems GP raised seem pretty clear to me. Could you give us some examples of what you would consider to be "actual problems" in this context?

> Just that kids are doing something new and sometimes scary...

No sane parent would send their kids to learn to ride a bicycle on the open road without any supervision. You'd find a park or an empty lot somewhere, let them test it out, assess their ability to deal with potential dangers and avoid harming others, and let them be on their own once they give you enough confidence that they can handle themselves most of the time without your help.

The problem with today's social media for children is that there is no direct supervision or moderation of any kind. As many have pointed out, social media extends to things like online games as well, and the chance that you will see content that is implicitly or explicitly unsuitable for children is extremely high. Just try joining the Discord channels of guilds in any online game to see for yourself.

Not all things new and scary come with a moderate to high risk of irreparable harm.


I encourage everyone to read the definition on the home page:

> Definition: A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

And also the detailed descriptions of each of the dark patterns, for example:

https://www.darkpattern.games/pattern/12/grinding.html

Quoting just the short descriptions of the dark patterns without considering the definition above effectively mischaracterizes the intent of the website and isn't using the tool as intended; taken in isolation, all the patterns can seem like merely enjoyable mechanics to many.

Some of the users reviewing games on the website seem to also miss the point (inaccurate reviews), which leads to comments like https://news.ycombinator.com/item?id=45947761#45948330.

It is increasingly the case in predatory games that a subtle combination of the listed mechanics only becomes a dark pattern collectively, so it's also important to consider the patterns in groups.


Some people criticize the definition of dark patterns because they can't face their addiction.


This feels like a step backwards: people who never bothered to write proper, appropriate commit messages for others in the first place can now care even less.

I personally don't see what the use case for this is -- you shouldn't have been hired in the first place if you can't describe the changes you made properly.


GGP's sentiment resonates with me. I invest a fair bit of time into LLMs to keep up with how things are evolving, and I throw both small and large tasks at them. I'm seeing great results with some small tasks, but with anything that is remotely close to actual engineering I just can't get satisfactory results.

My largest project is a year old, it's full-stack JavaScript, and I have consciously used patterns and structures and diligently added documentation right from the beginning so that the code base is as LLM-friendly as possible.

I see great results on refactoring with limited scope, scaffolding test cases (I still choose to write my own tests, but LLMs can also generate very good tests if I explicitly point to existing tests of highly related code, such as some repository methods), documenting functions, etc., but I'm just not seeing the kind of quality on complex tasks that people claim LLMs deliver for them.

I want to believe that LLMs are actually capable of doing what at least a good junior engineer can do, but I'm not seeing that in my own experience. Whenever we point out the issues we are encountering, we basically get the "git gud" response with no practical details on what we can actually do to get the results that people claim to be getting. When we complain that the "git gud" response is too vague, people start blaming our lack of structure, our patterns, problems with our prompts, the language, our stack, etc. Nobody claiming to see great results seems to want to do a comprehensive write-up or, better still, stream their entire workflow to teach others how to do actual, good engineering with LLMs on real-world problems -- they all just want to give high-level details and assert success.

On top of that, none of the people I know in engineering, at both large organizations and respectable startups that are pushing AI, are seeing that kind of result, which naturally makes me even more skeptical of claims of success. What I often hear from them is that mediocre engineers think they are being productive while actually just offloading the work onto their colleagues through review, and nobody seems to be seeing tangible returns from using AI in their workflow, yet people in the C-suite are pushing AI anyway.

If just about anything can be "your fault", how can anyone claiming that LLMs are great for real engineering, without showing evidence, be so confident that what they're claiming but not showing is actually the case?

I feel like every time I comment on anything related to your blog posts I come across as belligerent and get downvoted, but I really don't intend to.


Which model and tools are you using in that repo?


Could you elaborate on what you mean by "moral basis" in your comment?


It is in their selfish interest to push for open weights.

That's not to say they are being selfish, or to judge the morality of their actions in any way. But because of that incentive, you can't logically infer moral agency from their decision to release open weights, IP-free CPUs, etc.


By selfish interests you mean the public good?


Leaving China aside, it's arguably immoral that our leading AI models are closed and concentrated in the hands of billionaires with questionable ethical histories (at best).


I mean China's push for open weights/source/architecture probably has more to do with them wanting legal access to markets than it does with those things being morally superior.


Of course, but that translates into a benefit for most people, even for Americans. In my case (European), I can't help but support the Chinese companies in this respect, as we would be especially in trouble if closed models were the norm.


If by being selfish they end up doing the morally superior thing, then I much prefer to go with the Chinese.

Even more so now that Trump is in command.


Not speaking for everyone but to me the problem is the normalization of bad behavior.

Some people in this thread are already interpreting policies that allow contributions of AI-generated code to mean it's OK not to understand the code they write and to offload that work onto reviewers.

If you have ever had to review code that its author doesn't understand, or written code that you don't understand for others to review, you know how bad it is even without an LLM.

> Why do you care? Their sandbox their rules...

* What if it's a piece of software or dependency that I use and support? That affects me.

* What if I have to work with these people in the same community? That affects me.

* What if I happen to have to mentor new software engineers who were conditioned to think that bad practices are OK? That affects me.

Things are usually less sandboxed than you think.


You just stop accepting contributions from them?

There is nothing inherently different about these policies that makes them more or less difficult to enforce than other kinds of policies.


> I didn't make a decision on the tradeoff, the LLVM community did. I also disclosed it in the PR.

That's not what the GP meant. Just because a community doesn't disallow something doesn't mean it's the right thing to do.

> I also try to mitigate the code review burden by doing as much review as possible on my end

That's great but...

> & flagging what I don't understand.

It's absurd to me that people should commit code they don't understand. That is the problem. Just because you are allowed to commit AI-generated/assisted code does not mean that you should commit code that you don't understand.

The overhead to others of committing code that you don't understand and then asking someone to review it is a lot higher than asking someone for direction first, so that you understand the problem and the code you write.

> If your project has a policy against AI usage I won't submit AI-generated code because I respect your decision.

That's just not the point.


> It's absurd to me that people should commit code they don't understand

The industry-wide tsunami of tech debt arising from AI detritus[1] will be interesting to watch. Tech leadership is currently drunk on improved productivity metrics (lines of code or number of PRs), but I bet velocity will slow down and products will become more brittle due to extraneous AI-generated code, with a lag, so it won't be immediately apparent. Only teams with rigorous reviews will fare well in the long term, but they may be punished in the short term for "not being as productive" as others.

1. From personal observation: when I'm in a hurry, I accept code that does more than is necessary to meet the requirements, or that is merely not succinct. Whereas pre-AI, less code would have been merged, with a "TBD" tacked on.


I agree with more review. The reason I wrote the PR is that AI keeps using `int` in my codebase when modern coding guidelines suggest `size_t`, `uint32_t`, or something else modern.
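
To illustrate, here's a minimal, hypothetical sketch of the kind of change involved (not code from the actual PR):

    #include <cstddef>
    #include <vector>

    double sum(const std::vector<double>& values) {
        double total = 0.0;
        // AI-generated code often writes the loop index as `int`:
        //   for (int i = 0; i < values.size(); ++i) { ... }
        // which triggers signed/unsigned comparison warnings and can
        // overflow on very large containers.
        //
        // Modern guidelines prefer a matching unsigned type instead:
        for (std::size_t i = 0; i < values.size(); ++i) {
            total += values[i];
        }
        return total;
    }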


Interesting idea! Would be nice to see:

* How the colors were picked and assigned to each category (e.g. at what point red becomes pink and is no longer red)

* An indication of the distribution in the charts, since they have different scales on the y-axis.

* The author likely sampled posters that are mostly one color above a given threshold for each category; would that (together with the lack of methodology and error bars) heavily skew the reader's interpretation of the data analysis?


Take this as an additional point of reference: I have no formal education in art and am not an artist, but I find your work interesting enough that I would stop at a store to look at it, and I would probably buy something (prints or fabric) if I could afford to (especially the cover art on the home page).

Reading your comment, it sounds like you are actively sabotaging yourself by convincing yourself that you shouldn't even try (perhaps due to a subconscious fear of rejection). How do you get an audience if you don't actively promote your work and/or try to sell it?

There is no guarantee that you will "succeed" if you try your hardest (whatever that looks like to you: success could mean having a lot of people appreciate your work and/or selling your art for lots of money), but if you don't try you will never succeed at all. I'll break down the second-to-last paragraph as an example below.

> I'd love to sell it online, but without an audience, no one will visit.

An audience doesn't just suddenly appear because you have created something. You need to put in the effort to build one to begin with.

> I could sell it at https://www.saatchiart.com, but they don't really market most of what they have. You have to drag people there.

You need an incredible amount of luck for people to just "discover" your work and suddenly like it (especially with abstract art?), so having to "drag people there" is exactly what you should do if you want exposure for your work, whether or not you host it on saatchiart.com.

Don't fall into the trap of "if you build it, they will come".

Focus on creating a compelling narrative behind your art and keep iterating to attract a small, loyal audience first (1000 people is already a lot).

> Plus they take 30% or 40% (50% is normal for galleries).

This is irrelevant if nobody knows your work or would buy it to begin with. It's just another excuse not to try. By the time this is a problem, you can migrate to something more personal. Many people who support independent artists want the artists they like to get more of their money.

> Locally, in the right location, people see your art, and stop by. It's just the pain of setting it up, and then sitting there while you wait!

I enjoy engaging with artists at markets because the personal connection with them is actually the most valuable thing for me and the most compelling reason for me to make purchases. I also appreciate the artists who show up consistently at related events, particularly those who remember me well, which also becomes a reason for me to introduce their work to my friends.

Good luck with your work and I hope you will find success with it! ^^

