Hacker Newsnew | past | comments | ask | show | jobs | submit | more shinycode's commentslogin

What a nice side effect, unfortunately they’ll lock chatbots with more barriers in the future but that’s ironic.


...And under pressure, those barriers will fail, too.

It is not possible, at least with any of the current generations of LLMs, to construct a chatbot that will always follow your corporate policies.


That's what people aren't understanding, it seems.

You are providing people with an endlessly patient, endlessly novel, endlessly naive employee to attempt your social engineering attacks on. Over and over and over. Hell, it will even provide you with reasons for its inability to answer your question, allowing you to fine-tune your attacks faster and easier than with a person.

Until true AI exists, there are no actual hard-stops, just guardrails that you can step over if you try hard enough.

We recently cancelled a contract with a company because they implemented student facing AI features that could call data from our student information and learning management systems. I was able to get it to give me answers to a test for a class I wasn't enrolled in and PII for other students, even though the company assured us that, due to their built-in guardrails, it could only provide general information for courses that the students are actively enrolled in (due dates, time limits, those sorts of things). Had we allowed that to go live (as many institutions have), it was just a matter of time before a savvy student figured that out.

We killed the connection with that company the week before finals, because the shit-show of fixing broken features was less of a headache than unleashing hell on our campus in the form of a very friendly chatbot.


With chat ai + guardrail AI it probably will get to the point of it being sure enough that the amount of mistakes won't hit the bottom line.

...and we will find a way to turn it into malicious compliance where rules are not broken but stuff corporation wanted to happen doesn't.


I second that, trust is broken if there is ads. The line of great ads to weird ads to pushy-borderline-scam ads into personal context is thin. Hopefully the price of local will go down and maybe apple will be able to push most of it on-device. The day chatGPT push ads in a conversation I stop using it.

  The thing is with llm, it went so fast to get that many users, it means people are used to adopt new stuff as well. With proper marketing and specific feature I won’t be surprised to see people switch service as easily they start  using it in the first place because the barrier is so low.


The worse thing is the companies behind those tools know it, they know it’s not reliable and it looks like they want to build a world where a majority of people won’t check sources anymore and blindly trust the LLM as authority. I fear that, by a tour de force of some kind, they’ll shield themselves from giving any source is the future in the name of « intelligence ». What will happen in the next generations where young that have grown up only with LLM giving them answers they saw as truth and older generations died? What a different world it will be.


Not to be sarcastic but the sample of your study is quite light. (I too I found it useful for my particular use case but that doesn’t say much either)


To be honest, the sample of the Guardian's study is also quite light (a dozen people) and with much more selection bias (people who work with AI).


As is the sample in this article. You will surely find as many physicists saying earth is flat, mathematicians who hold that Cantor was wrong, and medical doctors that tell you vaccination against measles is overall worse than not.


So? People sharing their experiences is good. Not everything is a scientific study.


True and opinions are often discarded as irrelevant in internet discussions in favor of large-scale studies.


Unless there is « proof like » messages, exchange for a specific case years later or something you bought and need proof of that for insurance.


Massive amounts of people with unsustainable lifestyle will stop consuming stuff. How will the whole capitalist economy continue to function with massive amounts of medium-high income people not having the same income anymore ?


It's complete market failure.

By the time the AI bubble bursts, those that have investments in those companies would have already exited.

Then in 10 years time, they will look back at how they scammed the back off of the layoffs with AI driving the entire problem.


I fail to see features that default iOS calendar app already has. The UI seems really simple and there is dozens of amazing calendar apps that have been on the market for 10+ years of features in this price range.


> I fail to see features that default iOS calendar app already has.

presumably local-first


How is iOS calendar not local-first


What does that mean?


Our problem is not coding. Our problem is knowledge. If no one reads it and no one knows how it works and that’s what the company wants because we need to ship fast then the company doesn’t understand what software is all about. Code is a language, we write stories that makes a lot of sense and has consequences. If the companies does not care that humans need to know and decide in details the story and how it’s written then let it accept the consequence of a sttastistically generated story with no human supervision. Let it trust the statistics when there will be a bug and no one knows how it works because no one read it and no one is there anymore to debug. We’ll see in the end if it’s cheaper to let the code be written and only understood by statistical algorithms. Otherwise, just work differently instead of generating thousand of loc, it’s your responsibility to review and understand no matter how long it takes.


Don’t read it, approve it.


Maybe it’s time to ask deeper questions, ask how to reduce complexity while preserving meaning. Doing real pair programming with shared remote code and simulate as much as possible a real day-to-day environment. Not all companies search for the same kind of developers. Some don’t really care about the person as long as the tech skills are there. Some don’t look for the brightest in favor of a better cultural match with the team. Genuine remote interviews aren’t easy but it also depends on the interviewer’s skills. We’ve been touted for year that AI will replace developers, would Elon replace the engineers working on the software of it’s rockets with AI ? It depends what’s at stake. I bet their interviews are quite specific and researched thoroughly. We can find better ways to create a real connexion in the interviews and still make sure the tech skills are sound without leet code. We also need developers who master the use of AI and have real skills of thinking before and designing and deep review code skills


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: