
Pair this with Unicode plots[0] and you're set!

[0]: https://github.com/JuliaPlots/UnicodePlots.jl


It's waves all the way down!

Not saying this would be the right way to go about preventing undesirable uses, but shouldn't building 'risky' technologies signal some risk to the ones developing them? Safe harbor clauses have long allowed the risks to be externalised onto the user, fostering non-responsibility on the developer's part.


Foisting the responsibility of the extremely risky transport industry onto the road developers would certainly prevent all undesirable uses of those carriageways. Once they are at last responsible for the risky uses of their technology, like bank robberies and car crashes, the incentive to build these dangerous freeways evaporates.


I think this is meant to show that moving the responsibility this way would be absurd because we don't do it for cars but... yeah, we probably should've done that for cars? Maybe then we'd have safe roads that don't encourage reckless driving.


But I think you're missing their "like bank robberies" point. Punishing the avenue of transport for illegal activity that's unrelated to the transport itself is problematic. I.e. people that are driving safely, but using the roads to carry out bad non-driving-related activities.

It's a stretched metaphor at this point, but I hope that makes sense (:


It is definitely getting stretchy at this point, but there is the point to be made that a lot of roads are built in a way which not only enables but encourages driving much faster than may be desired in the area where they're located. This, among other things, makes these roads more interesting as getaway routes for bank robbers.

If these roads had been designed differently, to naturally enforce the desired speeds, it would be a safer road in general and as a side effect be a less desirable getaway route.

Again I agree we're really stretching here, but there is a real common problem where badly designed roads don't just enable but encourage illegal and potentially unsafe driving. Wide, straight, flat roads are fast roads, no matter what the posted speed limit is. If you want low traffic speeds you need roads to be designed to be hostile to high speeds.


I think you are imagining a high-speed chase, and I agree with you in that case.

But what I was trying to describe is a "mild mannered" getaway driver. Not fleeing from cops, not speeding. Just calmly driving to and from crimes. Should we punish the road makers for enabling such nefarious activity?

(it's a rhetorical question; I'm just trying to clarify the point)


We wouldn't have roads at all is my point, because no contractor in their right mind would take on unbounded risk for limited gain.


Which, in the case of digital replicas that can feign real people, may be worth considering. Not blanket legislation as proposed here, but something that signals the downstream risks to the developer to prevent undesired uses.


Then only foreign developers will be able to work with these kinds of technologies... the tools will still be made, they'll just be made by those outside jurisdiction.


Unless they released a model named "Tom Cruise-inator 3000," I don't see any way to legislate that intent that would provide any assurances to a developer that their misused model couldn't result in them facing significant legal peril. So anything in this ballpark has a huge chilling effect in my view. I think it's far too early in the AI game to even be putting pen to paper on new laws (the first AI bubble hasn't even popped, after all) but I understand that view is not universal.


I would say a text-based model carries a different risk profile compared to video-based ones. At some point (now?) we'd probably need to have the difficult conversation of what level of media-impersonation we are comfortable with.


It's messy because media impersonation has been a problem since the advent of communication. In the extreme, we're sort of asking "should we make lying illegal?"

The model (pardon) in my mind is like this:

* The forger of the banknote is punished, not the maker of the quill

* The author of the libelous pamphlet is punished, not the maker of the press

* The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop

In this world view, how do we handle users of the magic bag of math? We've scarcely thought before that a tool should police its own use. Maybe, we can say, because it's too easy to do bad things with, it's crossed some nebulous line. But it's hard to argue for that on principle, as it doesn't sit consistently with the more tangible and well-trodden examples.

With respect to the above, all the harms are clearly articulated in the law as specific crimes (forgery, libel, defamation). The square I can't circle with proposals like the one under discussion is that they open the door for authors of tools to be responsible for whatever arbitrary and undiscovered harms await from some unknown future use of their work. That seems like a regressive way of crafting law.


> The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop

In this case the guy making the images isn't doing anything wrong either.

Why would we punish him for pasting heads onto images, but not punish the artist who supplied the mannequin of Taylor Swift for the music video to Famous?†

https://www.youtube.com/watch?v=p7FCgw_GlWc

Why would we punish someone for drawing us a picture of Jerry Falwell having sex with his mother when it's fine to describe him doing it?

(Note that this video, like the recent SNL "Home Alone" sketch, has been censored by YouTube and cannot be viewed anonymously. Do we know why YouTube has recently kicked censorship up to these levels?)


Selling anything takes on unbounded risk for limited gain. That’s why the limited liability company exists.

Risk becomes bounded by the total value of the company and you can start acting rationally.


Historically it's the other way around - limited liability for corporations let juries feel free to award absurdly high judgments against them.


And I am talking about user-facing app development specifically, which has a different risk profile compared to automotive or civil engineering.


> then we'd have safe roads that don't encourage reckless driving.

You mean like speed limits, drivers licenses, seat belts, vehicle fitness and specific police for the roads?

I still can't see a legitimate use for anyone cloning anyone else's voice. Yes, satire and fun, but also a bunch of malicious uses as well. The same goes for non-fingerprinted video gen. It's already having a corrosive effect on public trust. Great memes, don't get me wrong, but I'm not sure that's worth it.


Creative work has obvious applications, e.g. AISIS - The Lost Tapes[0] was a sort of Oasis AI tribute album (the songs are all human-written and performed, and then the band used a model of Liam Gallagher's mid-90s voice; Liam approved of the album after hearing it, saying he sounded "mega"). Some people have really unique voices and energy, and even the same artist might lose it over time (e.g. 90s vs 00s Oasis), so you could imagine voice cloning becoming just a standard part of media production.

[0] https://www.youtube.com/watch?v=whB21dr2Hlc


So can image gen systems.

As a former VFX person, I know that a couple of shows are testing out how/where it can be used. (Currently it's still more expensive than trad VFX, unless you are using it to make base models.)

Productivity gains in the VFX industry over the last 20 years have been immense. (I.e. a mid-budget TV show today has more, and more complex, VFX work than most movies from 10 years ago, and it looks better.)

But does that mean we should allow any bad actor to flood the floor with fake clips of whatever agenda they want to push? No. If I, as a VFX enthusiast, get fooled by GenAI videos (pictures are a done deal; it's super hard to stop reliably) then we are super fucked.


You said you can't see a legitimate use, but clearly there are legitimate uses (the "no legitimate use" idea is used to justify bad drug policy for example, so we should be skeptical of it). As to whether we should allow it, I don't see how we have a choice. The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones, and eventually today's training supercomputers will be tomorrow's commodity. The whole idea of AI "fingerprinting" is bad anyway; you don't fingerprint that something is inauthentic. You sign that it is authentic.


> The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones,

Yes, let's just give up as bad actors undermine society, scam everyone and generally profit from us.

> You sign that it is authentic.

Signing means you denote ownership. A signed message means you can prove where it comes from. A service should own the shit it generates.

Which is the point, because if I cannot reliably see what is generated, how is a normal person able to tell? Being able to provide a mechanism for the normal person to verify is a reasonable ask.


You put the bad actors in prison, or if they're outside your jurisdiction, and they're harming your citizens, and you're America, you go murder them. This has to be the solution anyway because the technology is already widely available. You can't make everyone in the world delete the models.

Yes, signing is the way you show something is authentic. Like when the Hunter Biden email thing happened, I didn't understand (well, I did) why the news was pretending we have no way to check whether they're real or whether the laptop was tampered with. It was a gmail account; they're signed by Google. Check the signatures! If that's his email address (presumably easy enough to corroborate), done. Missed opportunity to educate the public about the fact that there's all sorts of infrastructure to prove you made/sent something on a computer.
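Purely as a sketch of that "check the signatures" step, assuming you have the original email as a raw .eml file and use the dkimpy library (the filename is just a placeholder):

  import dkim  # pip install dkimpy

  # Sketch: verify the provider's DKIM signature on a raw RFC 822 message.
  # "message.eml" is a hypothetical file holding the original email, headers included.
  with open("message.eml", "rb") as f:
      raw = f.read()

  # True if the DKIM-Signature header (e.g. d=gmail.com) still validates against the
  # public key the signing domain publishes in DNS; False if the message was altered.
  print(dkim.verify(raw))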


> You put the bad actors in prison,

how do you detect it?


People who get scammed make police reports, same as without voice models.


Well it would also apply to bike lanes.


How can you know how people are going to use the stuff you make? This is how we end up in a world where a precondition to writing code is having lawyers on staff.


No.

The reason safe harbor clauses externalize risks onto the user is because the user gets the most use (heh) out of the software.

No developer is going to accept unbounded risk based on user behavior for a limited reward, especially not if they're working for free.


The reason safe harbor clauses exist is because you don't blame the car manufacturer for making the bank robbery get away car.


Just last weekend I developed a faster Reed-Solomon encoder. I'm looking forward to my jail time when somebody uses it to cheaply and reliably persist bootlegged Disney assets, just because I had the gall to optimize some GF(256) math.


That is not what I said. It is about signalling risks to developers, not criminalising them. And in terms of encoders, I would say it relates more to digital 'form' than 'content' anyway--the container of a creative work vs. the 'creative' (created) work itself.

While both can be misused, to me the latter category seems to afford a far larger set of tertiary/unintended uses.


No


In case bricks are thrown, the response from the receiving party will likely skew to the argument presented here--circumvention of technical locks.

You'd catch the brick, sand it and repurpose it so it'll fit your home.


Instead of having chat-interfaces target single developers, moving towards multiplayer interfaces may bring back some of what has been lost--looping in experts or third-party knowledge when a problem is too tough to tackle via agentic means.

Now all our interactions are neatly kept in personalised ledgers, bounded and isolated from one another. Whether by design or by technical infeasibility, the issue remains that knowledge becomes increasingly bounded too, instead of collaborative.


Network effect gaming or true interest? Which blogs have been overshadowed by the lucky few?


The way HN works, you basically need a couple of habitual submitters to subscribe to your RSS feed. Blogs that have that appear here frequently, so there's definitely a positive feedback loop.

I had a blog that used to fare well on HN and it was carried 100% by a single HN regular. When that person went on a hiatus, my stuff stopped appearing on the front page. That's really all it takes.


> ensuring that only those with the right expertise remain

How will this ensure waning/gaining expertise is accurately represented/fostered? Wouldn't you rather attract a steady stream of experts indefinitely?


In practice, anyone with sufficient funds can become a broker. The pseudo-random selection process means that the probability of Broker A being chosen to audit or inspect an item is positive. If Broker A accepts and validates an item they are unfamiliar with, regardless of its actual validity, the likelihood of a dispute arising increases. Since Broker A lacks knowledge about the item, proving their case becomes challenging, potentially resulting in financial losses. Over time, this situation should lead to a pool of brokers with expertise. Consequently, the system is likely to attract a continuous stream of experts, as expertise will prove itself financially advantageous.


If A could gain sufficient funds through expertise in one area (by way of validating contracts), could it feign/game expertise in area B by having enough funds to recoup the losses from disputes? Or would such a situation be prevented?


If they store both the generated content and the eventual indexed location, they could now filter search results more comprehensively based on content hashes.
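Roughly, and purely as an illustration (the names and data below are made up, not anything Google actually exposes):

  import hashlib

  # Illustration: hash content at generation time, then drop indexed pages whose
  # fetched body matches a known generated-content hash.
  def content_hash(text: str) -> str:
      # light normalisation so trivial whitespace changes don't defeat the filter
      return hashlib.sha256(" ".join(text.split()).encode("utf-8")).hexdigest()

  generated = ["a model-generated article ..."]                  # logged at generation time
  indexed = [("https://example.com/a", "a model-generated article ..."),
             ("https://example.com/b", "a human-written page")]  # hypothetical index entries

  known = {content_hash(doc) for doc in generated}
  filtered = [(url, body) for url, body in indexed if content_hash(body) not in known]
  print(filtered)  # only the human-written page survives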


Great write-up! It would be useful if various PKMs would settle on a similar format for recording (nested) tasks, dates and metadata, as it seems to have become the standard way to store kanban boards and similar 'enhanced' views. Currently there exist various strategies ranging from embedding JSON as comments to esoteric (non-markdown) formats, often trailing at the end of each task. This makes the source look cluttered and difficult to edit/navigate.

IMO, metadata (such as date ranges) could instead be stored as empty links leading each task (or by showing a custom placeholder symbol such as '@'), paving the way for a 'linked' data format while resulting in a same-width list for easy lookups and editing:

  - [x] [@](/2025/12/30..31.md (15:30:21)) task 1
  - [ ] [@](/2025/12/29..30.md (16:20:31)) task 2
  - [ ] [@](/2025/12/28..28.md (14:20:31)) same day task
    - [ ] undated nested task
For instance, the above tasks would link to the virtual '30..31.md' and '29..30.md' files which collect all backlinked tasks for the provided date range (akin to Obsidian/Logseq/etc).

In an ideal world, the task marker could hold a custom symbol and linked metadata itself, but that would result in non-standard (GFM) markdown:

  - [@](/2025/12/30..31.md (15:30:21)) task 1
  - [ ](/2025/12/29..30.md (16:20:31)) task 2
    - [ ] undated nested task
It would be up to the editor to render this metadata accordingly.
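To make that a bit more concrete, here's a minimal parser sketch covering both variants above (the format is only a proposal, so this is just one possible reading, not an existing PKM feature):

  import re

  # Matches lines like: - [x] [@](/2025/12/30..31.md (15:30:21)) task 1
  # The metadata link is optional, so plain GFM tasks and the 'ideal world'
  # variant (metadata folded directly into the task marker) both parse.
  TASK_RE = re.compile(
      r"^(?P<indent>\s*)- \[(?P<done>[ x@])\]"
      r"(?:(?: \[@\])?\((?P<range>\S+) \((?P<time>[\d:]+)\)\))?"
      r" (?P<title>.+)$"
  )

  def parse_tasks(markdown: str) -> list[dict]:
      tasks = []
      for line in markdown.splitlines():
          m = TASK_RE.match(line)
          if not m:
              continue
          tasks.append({
              "done": m.group("done") == "x",
              "depth": len(m.group("indent")),  # nesting level from indentation
              "range": m.group("range"),        # e.g. /2025/12/30..31.md, None if undated
              "time": m.group("time"),
              "title": m.group("title"),
          })
      return tasks

An editor could then group tasks by their 'range' path to populate the virtual '30..31.md'-style pages mentioned above.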


Good use case for @container, @scope and :has(), where you forgo class definitions and use --custom-properties on the parent scope/container which are inherited downwards based on the existence of a scoped DOM pattern/container query, or 'upwards' by using a :has(child-selector) on the parent.

Although be sure to avoid too many :has(:nth-child(n of complex selector)) in case of frequent DOM updates.
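Sketched out, with made-up class names and markup purely for illustration:

  /* Parent owns the design tokens; no per-child class definitions needed */
  .card-list {
    container-type: inline-size;
    --accent: steelblue;
  }

  /* Downwards: children pick up the inherited --accent once the container query matches */
  @container (min-width: 40ch) {
    .card { border-inline-start: 4px solid var(--accent); }
  }

  /* 'Upwards': restyle the parent when a matching child exists */
  .card-list:has(> .card[aria-selected="true"]) {
    --accent: tomato;
  }

  /* Scope rules to the subtree without extra classes */
  @scope (.card-list) to (.card footer) {
    h2 { color: var(--accent); }
  }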

