6031769's comments | Hacker News

> With the rise of AI, all audio, images, and video are now also suspect.

True, but it's a lot harder to sneak those things past people than text. I've seen convincing Yanis Varoufakis and Neil deGrasse Tyson fakes, but even those don't survive any scrutiny. I'm sure that will change, and people will find new ways to signal authenticity in videos (leaving in fuckups is already in style).


I have firefox installed on Linux. There is no /etc/firefox/policies/ dir, nor indeed even an /etc/firefox/ dir. Therefore, no need for sudo.


> The /etc directory and everything under it is protected.


Of course it is. But there's no point putting policies into a directory tree which doesn't exist and which, by extension, Firefox won't be reading.

In Linux (and in any sane system) there is no need for elevated privileges just to alter your browser settings.


Firefox will read it if it exists[1]. You could use the /usr/lib/firefox/distribution directory (or whatever the installation directory may be), but that may be overwritten by an update.

There doesn't seem to be any way to set per-user group policies, so unless you're installing Firefox in a user-controlled directory, it will require elevated privileges.

[1]: https://support.mozilla.org/en-US/kb/customizing-firefox-usi...
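
For illustration, here's a minimal sketch of creating that system-wide policy file; DisableTelemetry is just one example of a documented policy key, and the path is the /etc/firefox one discussed above:

    # create the system-wide policy directory (requires root)
    sudo mkdir -p /etc/firefox/policies
    # write a policies.json; DisableTelemetry is a documented policy key
    sudo tee /etc/firefox/policies/policies.json <<'EOF'
    {
      "policies": {
        "DisableTelemetry": true
      }
    }
    EOF

Note the sudo on both commands, which is exactly the elevated-privilege requirement in question.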


Pale Moon is actively maintained. https://www.palemoon.org/


So an LLM says that a technique used to foil LLM scrapers is ineffective against LLM scrapers.

It's almost as if it might have an ulterior motive in saying so.


Nor in Pale Moon: "CompileError: wasm validation error: at offset 35: too many returns in signature"

Firefox seems happy enough, though.


And if that puts a nail in the coffin of LLMs then I'm all for it.


At scale? It'll just put a nail in the coffin of mass literacy, cognitive ability, emotional well-being, sheer sanity. The nonsensical will become even more normalized, while the reaction to someone doing maths, for example, will come to be: "ew, die".


Okay, so if you open up your site for free, there should be a way for an AI chat to link to the paid version of the free page when referencing it in the chat window. That link could be obfuscated in some way too, perhaps. Also, it'd be very cool to split that payment back to the AI.

- You help the LLM by putting something up for free and pinging it upon publishing

- The LLM helps back by linking to you (hopefully)

- The user helps by paying you when they visit the paid version

- You help the LLM by splitting that payment back to it

Optional: your free page can have reference links, so other pages that helped you reach your final version can get a split of the payment as well. Perhaps the LLM can handle that part in "upstream distributions."

In fact, your reference links can lead to even more reference links further upstream as you step back through the totality of references: the pyramid slice. Perhaps it should be capped at, say, 3 steps back; that can be decided somewhere, or the payments can be diluted the further back they go (a sketch follows below).
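
Here's a minimal sketch of that dilution in Python; the function name, decay factor, and cap are hypothetical placeholders rather than a worked-out scheme:

    def split_payment(amount, author, upstream, decay=0.5, max_steps=3):
        """Split a reader's payment between the author and the pages
        they referenced, diluting each step upstream by `decay` and
        capping the chain at `max_steps` back."""
        chain = upstream[:max_steps]          # anything further back is dropped
        weights = {author: 1.0}               # the author gets full weight
        for step, page in enumerate(chain, start=1):
            weights[page] = decay ** step     # geometric dilution per step back
        total = sum(weights.values())
        return {page: amount * w / total for page, w in weights.items()}

    # e.g. a 10.00 payment with four upstream references:
    # split_payment(10.0, "you", ["ref1", "ref2", "ref3", "ref4"])
    # -> roughly {"you": 5.33, "ref1": 2.67, "ref2": 1.33, "ref3": 0.67}
    #    ("ref4" is past the 3-step cap and gets nothing)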

So here's the crux: there should be a way for the user to decide how much they want (or can) pay you. A tipping culture could work. If you're broke, just don't tip anything or put it on hold. Big Business can be transparent about their payments and build up social capital by disclosing their "giving." The level of transparency and privacy can be tweaked for each situation.


Oh, it absolutely can work! What you describe makes sense; I'm just not particularly optimistic that it will be allowed to happen at all. If it's a technological inevitability, what I expect to happen is for a self-defeating version of your model to be rolled out at a snail's pace, so that the people who depend on the old ways have the time to live their lives, die, and be replaced by ones who'll be like "hey, this was tried last generation and didn't work" (because it wasn't actually tried). Meanwhile, the people excited about it will have time to go through enough stages of grief in recursive iterations to forget what they were even excited about.


> I'm just not particularly optimistic that it will be allowed to happen at all

Maybe, but in a recent comment of mine I alluded to a "long-tail" of AIs popping up. So there's a possibility in one of those. But if no one has any money to invent or create, or they feel there is a risk in sharing, it won't really work too well.

I bet to get to AGI, humans will have to actively help: it can't be a parasitic relationship. People are pessimistic about AI, but why can't it lead to free energy, patent obsolescence (https://www.youtube.com/watch?v=sNR_6aBQyDk), supernatural abilities and utopia instead? Wait, those things will come; they'll just remain with the special people in the "breakaway civilization," perhaps.


If we've learned anything from the history of CSS, JS and the semantic web, it is that 99% of the time a feature will be used in ways that were not intended. There is no reason to suppose that this will be any different.


You have to consider why, and the answer was often "there's no other way".

The paths of least resistance on the web are now very different. These features were not delayed by implementation details but by a deliberate shepherding of the standards. The most powerful features were saved for later, and even still this is scoped only to media queries and the like.


Let your hosting provider take care of the DDoS and stop using the flaky behemoth.

You haven't actually watched Mad Max, have you? I do recommend it.


They already have it and it redirects to the real Oxford site.

