I get the reasoning, but I don't see this applied to some platforms. Reddit and Discord allow you to both delete and edit older comments, and there are no limits on how far back you can go (so you can, if you wanted, edit or delete your entire history).
Under the GDPR a subject is allowed full erasure rights. If I say I want you to delete my content from x date to y date, or a particular post, or everything entirely then that shouldn't be an issue. A request may be bothersome, but that's what happens when you don't offer that functionality natively.
I noticed a few days back you didn't like it when a user made a new account, except with the internet these days and how everything is archived for all time, throwaways are the only option. Building a comment history is extremely dangerous, especially when you might forget what details you may have posted or how metadata can leak through (such as what subs you post in, any details you posted that could identify you, etc.).
You can't have it both ways: no to multiple accounts and also no to control over your data. I might have 50 accounts; dislike it? Give me proper control over my comments. (To be honest, it may just be worth making a new account for every comment for maximum privacy. It's extreme, but it's a viable option.)
If I want to delete them, that's my choice to freely make. Your thoughts or concerns are not relevant to me; thankfully, the GDPR agrees.
> I noticed a few days back you didn't like it when a user made a new account
I think you must have misunderstood whatever the moderation comment was; there's no prohibition on throwaway or multiple accounts, just against using them to violate the site guidelines, which is a different thing.
> As the businesses still using it either mature, evolve, or fail, the need for PHP will begin to dry up.
People have been calling for PHP's death, or saying PHP is a dying language, for as long as the internet has been around.
It's always the same arguments, that $newHipLanguage will replace it.
Then you actually do some research and understand just how much of the internet is still powered by PHP, and will continue to be for the foreseeable future.
So, I'm still waiting for PHP's death. Or for this same predictable comment in 2030.
> It's always the same arguments, that $newHipLanguage will replace it.
It's not that a new language will replace it. It's that the people that hire PHP to solve their problem will now use a platform with no code to manage.
Engineers writing platforms will choose languages other than PHP to write them. You can't spin up multiple request threads to make concurrent non-blocking queries in PHP.
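As a sketch of what's meant here by concurrent non-blocking queries (the query functions and delays are invented placeholders, not a real database API), a single request handler in a language with first-class async support can fan several queries out at once:

```python
import asyncio
import time

async def fake_query(name: str, delay: float) -> str:
    # Stand-in for a non-blocking database or HTTP call.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def handle_request() -> list:
    # Three queries run concurrently; total time is roughly the
    # slowest single query, not the sum of all three.
    return await asyncio.gather(
        fake_query("users", 0.1),
        fake_query("orders", 0.1),
        fake_query("inventory", 0.1),
    )

start = time.monotonic()
results = asyncio.run(handle_request())
elapsed = time.monotonic() - start
print(results)   # ['users: done', 'orders: done', 'inventory: done']
print(elapsed < 0.25)  # True: concurrent, not sequential (~0.3s)
```

Traditional PHP-FPM handles each request in its own blocking process, which is the model being contrasted here (though extensions and libraries exist that attempt to add this to PHP, as discussed below).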
It's not built into the language? How can we be sure this project with 2k GitHub stars won't segfault, memleak, deadlock, or have some other horrible bug [1]? I haven't done full due diligence, but I'm already suspicious. The open tickets don't look good [2].
> You mean that you don't know how to
I'm not convinced you found an appropriate answer either.
In any case, this should be a language feature.
[1] It looks like it relies on bindings to an unofficial native code extension.
> nobody came to any harm or suffered any detrimental effects as a result of this breach
Who gets to decide what "harm" is or whether anyone suffered "detrimental effects"? Surveillance is so common and normalized they don't consider the act of collecting so much information itself as a "detrimental harm".
What if that harm only presented itself years down the line? Maybe a creepy stalker who can synthesize multiple data sets to reconstruct a person's movements, or possibly use it against them in some way. (Scammers and fraudsters are increasingly using all these leaked datasets to create a more accurate profile of an individual for more sophisticated attacks/targeting. Your name/address/mobile number must not and cannot be considered PII, since it's probably already been leaked tens of times by now.)
That's an incredibly shortsighted comment to try and justify developing a system with not even the most basic of security considerations.
I honestly just wish those same people were jailed for 50 years as a result, we'd see a LOT more consideration in the future if they were held personally liable.
You can, once the inevitable leak of driver information details comes from the DVLA/insurance companies.
It might take a few years, but you can use this dataset in the future to understand who owned the vehicle at this time and reconstruct their movements.
Using the collected information, it's possible for a computer to remember every journey you've ever taken: this car with this reg plate was here, at this time, at this place, and did not have valid tax/insurance at the time. It could also be useful during investigations.
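To make the concern concrete, here's a minimal hypothetical sketch (all plates, names, and locations invented) of how a plate-sighting dataset could be joined with a leaked ownership table to reconstruct one person's movements:

```python
# Hypothetical ANPR-style sightings: (plate, timestamp, location).
sightings = [
    ("AB12 CDE", "2021-03-01 08:05", "M4 J19"),
    ("AB12 CDE", "2021-03-01 17:40", "Bristol city centre"),
    ("XY34 ZZZ", "2021-03-01 09:00", "M25 J10"),
]

# Hypothetical future breach mapping plates to owners.
leaked_owners = {"AB12 CDE": "J. Smith"}

# Join the two datasets: every recorded movement of one individual.
movements = [
    (time, place)
    for plate, time, place in sightings
    if leaked_owners.get(plate) == "J. Smith"
]
print(movements)
# [('2021-03-01 08:05', 'M4 J19'), ('2021-03-01 17:40', 'Bristol city centre')]
```

The join itself is trivial; the point is that the sightings alone look anonymous until a second, later dataset supplies the link.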
What does opt-out mean anyway? I've had examples where Google Fit has re-enabled itself without my consent in my phone's settings (and I have screenshots to prove it).
We've seen opt-out abused before, so it comes down to whether you feel you can use a platform known for not respecting your choices.
Oftentimes it's a simple boolean value; do you want to trust that bool will stay the same, always?
Is Mozilla's new browser on Android not included on that list?
It contains 3 trackers [1]:
Adjust
Google Firebase Analytics
LeanPlum
It also has telemetry enabled by default and is NOT opt-in. So yeah, whether it's hardware or software, you're being spied on any time you use an internet-connected device.
Just drop a line on twitter saying you've discovered a vulnerability in $popularSoftware and mention $company. Say you'll be disclosing in 90 days if $company doesn't issue a reply publicly.
Make sure to deal with an actual human and that everything is done according to best practice. You may even get publicity this way, and even if it's unethical, it can be sold or used to your advantage.
If they care, trust me when I say they will make an effort. Most places (like Google) have effective systems in place for dealing with such queries.
it wouldn't even be unethical. responsible disclosure starts with engaging with the company at eye level. all that these bug bounty platforms do is take away exactly this power and allow the company to consolidate the contract to a single entity (e.g. a preferred supplier). they deserve even less respect than any shady recruiter or typical outsourcing sweatshop.
giving these people power is like talking to a cop without a lawyer - regardless of what they say, they don't have your interest in mind and you have lost before the game has even started.
The idea that you would get a no-knock forcible entry for disclosing a bug is appalling and potentially an indictment of our entire criminal justice system.
I'm assuming vntok's legal conclusion and claim about the type of law enforcement response are true (please do not make things up on Hacker News).
In which case my former support for the police and law and order is SERIOUSLY diminished.
You have a non-violent offense, which is not an actual offense, and they are doing SWAT door breaches on you. Wow! The priorities of these companies and law enforcement are backwards then.
I guess folks are being told to just sell it to a zero day vendor (which also happens to work for the same govt agency that will bust down your door if you disclose publicly). Pretty appalling behavior here!
"Hey, @WhiteHouse, while interacting with your systems with the intent to find security flaws and obtain unauthorized access (I wrote scanners and tools and payloads so you know I really wanted to succeed here), I've found a security flaw that allows me to launch nuclear warheads from my garage in Missouri. I will publish this info online if you don't meet my demands. You have less than a month to comply."
Yeah, that kind of bullshit won't fly in any sane criminal justice system.
Now replace "launch nukes" with "download every movie you're working on" or "flash-crash the stock market at any time", and you'll see that the argument doesn't change: it doesn't fly anywhere.
No, the disclosure is disconnected from payment, so it's not blackmail. Notifying companies is a courtesy, and considered good form. Companies offering rewards is to incentivize this behavior. Researchers releasing vulnerabilities after a time period no matter what is to incentivize companies to actually fix the problems (not just pay to shut up the researcher). Both are useful for a well-functioning system of independent researchers finding vulnerabilities in companies that then get fixed.
Releasing the vulnerability because you weren't paid, regardless of whatever timelines you would have followed? That's blackmail. I imagine having a very clear and consistent policy as a researcher that is not based on money (but can be based on company participation and whether they seem like they are actually trying to fix the problem) will go a long way towards clearing you of any suspicion of blackmail.
That is false. In many jurisdictions, blackmail does not require a financial transaction, merely obtaining something deemed valuable by the blackmailer in exchange for keeping the blackmailee's information private. See [1] for the US:
> Whoever, under a threat of informing, or as a consideration for not informing, against any violation of any law of the United States, demands or receives any money or other valuable thing, shall be fined under this title or imprisoned not more than one year, or both.
In cases like this one, "bragging rights" are easy to prove as deemed valuable by the blackmailer: they can bring anything from job prospects to donations from activists to free beers at Blackhat.
> merely obtaining something deemed valuable by the blackmailer in exchange for keeping the blackmailee's information private.
Exactly. That's why if you have a policy about when the information goes public which is entirely independent of any benefits provided by the company in question, it's not blackmail.
You're not saying "unless you give me X benefit I do Y", you're saying "I'm doing Y at Z date, but I may extend that if you show you're working on the problem." which isn't a benefit to you specifically, but to those affected. As long as you make sure any benefit to yourself is removed from that decision, I imagine blackmail would be very hard to prove.
Bragging rights aren't really the company's to give, since you have the information and will be making it public, unless someone else beat you to it. In that case, going live early unless the company says you found it does impart a real benefit to you that you extracted from the company. That's not what I viewed this thread as about though. Saying you'll release the vulnerability you were already going to release (if you had a clear policy applied consistently) is not so much a threat as giving the company an appropriate chance to respond.
I agree in the case of private companies with little or no public component it does get less clear cut. I'm not sure what those would be though.
In the US, Blackmail requires a benefit in exchange for not disclosing information.
A public reply isn't much of a benefit, and my understanding is that the vulnerabilities will be disclosed eventually within a reasonably limited timeframe.
A lovely example of why one shouldn't take legal advice from message boards.
But I would say that if you're doing this sort of thing for the first time, I would strongly advise you to talk to a lawyer who knows this corner of the law, and to someone who has done this before.
Smarts do not substitute for experience and domain-specific knowledge.
That's similar to how Project Zero (by Google) works. Vulnerabilities get disclosed after 90 days unless the developers can provide a plausible justification for why that deadline can't be met.
Not trying to be cynical, but to me this seems like a way to get the mass public "okay" with contact tracing. Then somehow they'll "mysteriously" manage to get more accurate information from other sources (location, wifi beacons, data sharing, etc.).
But they'll just say "the information is only from this source, we pinky promise!".