
Probably one of the tools you post it to _or posted it in_ runs a web scraper.

And there might be a person-specific ID in the link you get, which other people copying the image don't get.

For example, Edge for a while scraped any link you visited with it, even if you had opted out of exactly this feature, without running any heuristics about whether a URL might be a magic link (and skipping it if so), and AFAIK also without stripping query parameters (though it did respect robots.txt, I think).
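
To make the query-parameter point concrete: a share link often carries a per-user token, and a client that forwards URLs anywhere should drop those parameters first. A minimal sketch in Python; the set of tracking keys is an assumption (igshid/fbclid/utm_* are common real-world examples, but the exact set any given service uses varies):

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    # Assumed tracking keys -- igshid/fbclid/utm_* are common examples,
    # but which parameters a given service uses is not fixed.
    TRACKING_PARAMS = {"igshid", "fbclid", "utm_source", "utm_medium", "utm_campaign"}

    def strip_tracking(url: str) -> str:
        """Return the URL with per-user/tracking query parameters removed."""
        parts = urlsplit(url)
        kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
                if k not in TRACKING_PARAMS]
        return urlunsplit(parts._replace(query=urlencode(kept)))

    print(strip_tracking("https://example.com/p/abc?igshid=XYZ123&utm_source=share"))
    # -> https://example.com/p/abc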

Similarly, a lot of apps do the same, sometimes not even respecting robots.txt.

Though in any case it's still Meta being ridiculous, because it's a known issue and you can't fault your users for doing perfectly normal things in a world where even one of the major browsers behaves like malicious spyware w.r.t. URLs/web scraping, without explicit user consent.



> And there might be a person-specific ID in the link you get

This is how they know I'm doing it: each URL seems to be custom-generated for the specific view of the post from an account, because they actually do expire eventually. But you know, "scraping" here is, what, sharing 15 image links off the walled garden a month?
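
For context on how these expiring, per-view links generally work, here is a generic signed-URL pattern; Meta's actual scheme isn't public, the oe/oh parameter names are only loosely modeled on fields visible in their CDN links, and the vid parameter is hypothetical. The server signs the path, an expiry timestamp, and the viewing account's ID with a secret key, so a copied URL both expires and stays traceable to the account that generated it:

    import hmac, hashlib, time
    from urllib.parse import urlencode

    SECRET = b"cdn-signing-key"  # hypothetical server-side secret

    def sign_url(path: str, viewer_id: str, ttl: int = 3600) -> str:
        """Issue a link that expires after `ttl` seconds and is bound to one viewer.

        Folding viewer_id into the signed payload is what would let the service
        trace a shared link back to the account that generated it.
        """
        expires = int(time.time()) + ttl
        msg = f"{path}:{expires}:{viewer_id}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{path}?{urlencode({'oe': expires, 'vid': viewer_id, 'oh': sig})}"

    def validate(path: str, expires: int, viewer_id: str, sig: str) -> bool:
        """Reject links past their expiry or with a non-matching signature."""
        if time.time() > expires:
            return False
        msg = f"{path}:{expires}:{viewer_id}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)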

I just think it's an insane amount of effort for them to go to, trying to make the internet stop working like the internet.

It upsets me that this is where we've ended up with image sharing, vs. the open remix culture of pre-Yahoo Tumblr.


The solution is to simply stop using their services. That's how you reclaim the internet: by refusing to participate in walled gardens.



