
A Kia authorised dealer being able to look up any Kia has some very useful benefits (for the dealer, and thus Kia).

If a customer has moved into the area and you’re now their local dealer, they’re more likely to come to you for any problems, including ones involving remote connectivity. Being able to see the state of the car on Kia’s systems is important for that.

Is this a tradeoff? Absolutely. Can you make the argument the tradeoff isn’t worth it? Absolutely. But I don’t think it’s an unfathomably unreasonable decision to have their dealers able to help customers, even if that customer didn’t purchase the car from that dealer.


In my opinion, the better way to design such a thing would be a private key held in a secure environment inside the car, used to sign credentials that grant entitlements to some set of features.

So for example, when provisioning the car initially, the dealer would plug into the OBD-II port, authenticate to the car itself, and then request that the car sign a JWT (or similar) containing the new owner's email address or Kia account ID as well as the list of commands that the user is able to trigger.

In your scenario, they would plug into the OBD-II port, authenticate to the car, and have it sign a JWT with a short expiration time that allows them to query whatever they need to know about the car from the Kia servers.
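
A minimal sketch of what that car-side signing could look like, assuming a Python environment with PyJWT and the cryptography library; every name, claim, and command here is hypothetical, not Kia's actual API:

    # Hypothetical sketch: the car's secure element holds the private key
    # and signs short-lived entitlement tokens.
    import time
    import jwt  # PyJWT
    from cryptography.hazmat.primitives.asymmetric import ec

    # In a real car this key would live in a secure element and never leave it.
    car_private_key = ec.generate_private_key(ec.SECP256R1())

    def sign_entitlement(vin, subject, commands, ttl_seconds):
        """Sign a token granting `subject` the listed commands on this car."""
        now = int(time.time())
        claims = {
            "iss": vin,                # the car itself is the issuer
            "sub": subject,            # owner's account ID, or a dealer ID
            "commands": commands,      # e.g. ["unlock", "remote_start"]
            "iat": now,
            "exp": now + ttl_seconds,  # short expiry for dealer tokens
        }
        return jwt.encode(claims, car_private_key, algorithm="ES256")

    # Provisioning a new owner: long-lived, full command set.
    owner_token = sign_entitlement("VIN123", "owner@example.com",
                                   ["unlock", "remote_start", "locate"],
                                   ttl_seconds=10 * 365 * 24 * 3600)

    # Dealer diagnostics: short-lived, read-only.
    dealer_token = sign_entitlement("VIN123", "dealer-4711",
                                    ["read_diagnostics"], ttl_seconds=3600)

Kia's servers would then verify any presented token against the car's public key (registered at manufacture) before acting on a command.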

The biggest thing you would lose in this case is the ability for _any_ dealer to geolocate any car that they don't have physical access to, which could have beneficial use cases like tracking a stolen car. On the other hand, you trade that for actual security against any dealership tracking any car without physical access for a huge range of nefarious reasons.

Of course, those use cases like repossessing the car or tracking a stolen vehicle would still be possible. In the former, the bank or dealership could store a token that allows tracking location, with an expiration date a few months after the end of the lease or loan period. In the latter, the customer could track the car directly from their account, assuming they had already signed up at the time the car was stolen.

You could still keep a very limited unauthenticated endpoint available to every dealer that would only answer the question "what is the connection status for this vehicle?" That is a bit of an information leak, but nowhere near as bad as being able to real-time geolocate any vehicle or find any owner's email address just given a VIN.


Those aren’t the only options. It would be a trivial change to allow any dealer to request access to any vehicle, tied to the requesting employee’s SSO or something similar, which at least leaves an audit trail and prevents such random access. Allowing anyone to be a dealer is the real oversight. They could also put checks in place to prevent the stalker situation GP mentioned. Abuse will always be possible, but the risk drops a lot if the employee has to ask someone else to approve their access request, even if it’s just a rubber-stamp process confirming the vehicle actually needs some service.
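
Purely as an illustration (all names hypothetical), the core of such a flow is small:

    # Hypothetical sketch: any dealer employee can request access to any
    # vehicle, but the request is tied to their SSO identity, needs a second
    # person's approval, and always leaves an audit record.
    import datetime

    AUDIT_LOG = []  # in practice: an append-only store

    def request_vehicle_access(employee_sso_id, vin, reason):
        req = {
            "employee": employee_sso_id,  # taken from the SSO session
            "vin": vin,
            "reason": reason,             # e.g. "customer booked a service"
            "requested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "approved_by": None,
        }
        AUDIT_LOG.append(req)
        return req

    def approve(req, approver_sso_id):
        if approver_sso_id == req["employee"]:
            raise PermissionError("requester cannot approve their own request")
        req["approved_by"] = approver_sso_id  # even a rubber stamp is on record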


This is quite common in Europe. There is normally no special relationship with the original dealer and the service history is centralised for most manufacturers.


Any stealership shouldn’t be able to look up information about any active/sold car. These interactions need consent (authorization) from the car owner. These authorizations should be short-lived and revocable at any time.

Any of this sound familiar? Yeah, that’s because it’s the flow (OAuth) used by many companies to control access to assets.
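
As a rough sketch of that consent model (hypothetical names, not any real manufacturer's API): the owner grants a dealer a short-lived authorization scoped to specific actions, and can revoke it at will.

    # Hypothetical sketch of owner-granted, revocable, expiring access.
    import secrets
    import time

    GRANTS = {}  # token -> grant record; in practice a database

    def owner_grants_access(owner_id, dealer_id, scopes, ttl_seconds=3600):
        token = secrets.token_urlsafe(32)
        GRANTS[token] = {
            "owner": owner_id,
            "dealer": dealer_id,
            "scopes": scopes,  # e.g. ["read_diagnostics"]
            "expires_at": time.time() + ttl_seconds,
        }
        return token

    def owner_revokes(token):
        GRANTS.pop(token, None)  # revocable at any time

    def dealer_may(token, scope):
        grant = GRANTS.get(token)
        return (grant is not None
                and time.time() < grant["expires_at"]
                and scope in grant["scopes"])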

Car companies are just not meant to do tech. So common shit like this is ignored.

If these car manufacturers can barely shit out a usable “infotainment” system, why the fuck are they diving into remote access technology?


That's not a benefit to me if I can't control how someone gets access to my vehicle, dealership or not. If I want a dealership to be able to assist me, I should have to authorize that dealership to have access, and have the power to revoke it at any time. Same for the car manufacturer. It ideally should include some combination of factors, including a cryptographic secret in the car and some secret I control. Transfer of ownership should involve using both my car's secret and my own secret to transfer access to those features.

If you feel like this sounds like an asinine level of requirements for me to feel okay with this featureset, I'd require the same level of controls for any incredibly expensive and potentially dangerous liability in my control that has some sort of remote backdoor access via a cloud. All of this "value add" ends up being an expense and a liability to me at the end of the day.


This is absurd. If there was a screen on the infotainment system where you could allow (temporarily!) the local service center of your choice to access your car remotely, fine. Otherwise, no thanks.


The site explicitly says they tried to get Oracle to release the trademark before, and that this is their final attempt at doing so before filing with the USPTO.


How does this hold up against the First Sale Doctrine in the US?


It doesn't, but you would need to sue to make it happen. Similar to the cases against Costco, and against John Wiley & Sons over textbooks.

a. https://caselaw.findlaw.com/court/us-9th-circuit/1689936.htm...

b. https://en.wikipedia.org/wiki/Kirtsaeng_v._John_Wiley_%26_So....

Tesla would have to license the vehicle to you, or you would have to sign a contract to that effect, in order for it to pass muster. Even then, it's possible Tesla would lose a lawsuit if they tried to enforce it. AFAIK Ferrari, another bad actor in this space, has not been sued, but they do refuse to sell to certain people.

IANAL, but the First Sale Doctrine is pretty clear. Unfortunately you would have to sue to prove it.

c. https://en.wikipedia.org/wiki/First-sale_doctrine


Ferrari won't sell to everyone, but they don't stop buyers reselling their cars.


I can't fathom why Safari would be supported on messenger.com but not facebook.com/messages.

I also have no idea why Private Mode would make a difference in Firefox.

Perhaps someone here might be able to shed some light?


The article says that they were expired. Using expired vaccines sounds very very risky to me.

The article also says that previously a lot of unexpired doses were given to other countries.


And that exact anecdote is contained in the article:

> In fact, the smartphone comparison is not quite right. “The Voyager computers have less memory than the key fob that opens your car door,” Spilker says.


Given it existed for 5 days and you’re only now finding out about it, it sounds to me like it was perhaps a bug that was fixed without realising the full impact of it, or perhaps without realising it made it to production; and only an audit that happened later caught it.

Not ideal by any means. I’d be curious to know if my theory is correct or not.


Their statements indicate they were aware and investigating. My frustration is that they didn't give users the opportunity to do their own timely investigation.

> GitHub learned via a customer support ticket that GitHub Apps were able to generate scoped installation tokens with elevated permissions. Each of these tokens are valid for up to 1 hour.

> GitHub quickly fixed the issue and established that this bug was recently introduced, existing for approximately 5 days between 2022-02-25 18:28 UTC and 2022-03-02 20:47 UTC.

> GitHub immediately began working to fix the bug and started an investigation into the potential impact. However due to the scale and complexity of GitHub Apps and their short-lived tokens, we were unable to determine whether this bug was ever exploited.


The post is from 2011, so I would expect them to have entered much wider circulation now.


These illustrations were used in the first edition in 1937. My edition is from 1992. I don't think frequency of use has changed at all.


In that case I guess that the images in the article aren't the 'never before seen' (in 2011) ones that are meant to have been included in the publicised book.


As the paper says in respect of politically motivated definitions of misinformation:

> Helping to address concerns about potential liberal bias among fact-checkers, we also examined untrustworthiness ratings from politically-balanced crowds of demographically representative (quota-sampled) American laypeople recruited via Lucid (15), rather than professional fact-checkers.


From memory I believe FAANG etc all _claim_ that appeals you lodge are reviewed by a human.

Now if you don’t believe them then you’d need to take them to court and show why you think that’s not the case.

Which I guess means my question is: why don’t you believe them, and how likely is it that they are lying when they claim appeals are reviewed by a human?


There was a recent example with Google Drive where it explicitly disabled any way to appeal. I was able to reproduce the issue where it was flagging files that consisted of a single byte, sometimes followed by \r\n or \n.
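
Reproducing the offending files takes a couple of lines (a sketch based on the description above; the file names are arbitrary):

    # Create files matching what was flagged: a single byte, optionally
    # followed by \r\n or \n.
    for name, content in [("one", b"1"),
                          ("one_lf", b"1\n"),
                          ("one_crlf", b"1\r\n")]:
        with open(name + ".txt", "wb") as f:
            f.write(content)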

Here's the HN story: https://news.ycombinator.com/item?id=30060405

Screenshots of trying to "appeal" (Request a review) from when I recreated the issue show pretty clearly there is no human involved: https://imgur.com/a/5YHQtLi

This wasn't an account ban, so I don't know how well it fits the GDPR language. Though I'd be surprised if this was somehow the only "fully automated account action" FAANG type companies are doing.


I don’t see how you get from Google’s statement “Was taken down for legal reasons and cannot be appealed“ to “no human was involved”.


I think the point is that, based on the file content, no human could have been involved in the decision. If there had been a human involved, the files never would have been flagged.


The discussion isn’t about the initial flagging, it’s about the review.

That might be as simple as checking for the existence of legal documents claiming copyright infringement, or as simple as reading a web page stating “we already removed X other copies of this file”.

Neither is a fail-safe way of doing such a review, but doing a thorough review might be expensive even for Google. Does anybody know how many such reviews they do each day?

It might also be a bug in their tooling to assist human reviewers.


Right, but I believe for those cases, the (automated) email also said that the ability to appeal was not available. So there is no human review, because there's no review at all.

Regardless -- and I know this is a "how the world should be, not how it is" type thing -- I really think the initial decision should not be allowed to be made by an algorithm. At the very most, an algorithm should be allowed to flag something for human review, but no action is taken until the human has a chance to review it and decide if the flag is warranted or not.


You're assuming the human both has agency, and gives a damn. It's more likely the human just rubber-stamps all bans, to get their KPI of number of appeals processed per day up!


If the human doesn't have agency, then it's not really a "human review", is it?


It still is.


Spirit of the law is a concept I would encourage anyone to think about when arguing about these things. I believe most people, and courts in particular, would not agree that a human rubber-stamping an automated decision is in line with the spirit of the law. Clinging to a technicality isn't going to go well.

I'd also like to point out that these laws don't just come out of nowhere, to be interpreted without any further context. In the EU we have recitals and guidelines to give context and support the interpretation of regulations.

If you're interested, do read Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01).

https://ec.europa.eu/newsroom/article29/items/612053/en

Here's what it says about human intervention: "Any review must be carried out by someone who has the appropriate authority and capability to change the decision. The reviewer should undertake a thorough assessment of all the relevant data, including any additional information provided by the data subject."


But the entire premise here is a "letter of the law" thing. Online account bans are pretty clearly not within the spirit of the GDPR restrictions on automated decisionmaking; note how the guidelines you linked, despite providing quite a bit of detail about different kinds of automated decisionmaking and rules around them, don't mention account bans at all.


There's only a handful of examples, and to me it is far from clear whether account bans would be in scope of the law. It's not meant to be an exhaustive list of all the things that are covered.

However, I could make the case that losing an account which holds years of your private correspondence, is your point of contact for private exchanges, for services you rely on (including where bills, account recovery emails, policy changes, warnings & alerts, 2FA codes, and other very important messages are sent), and for potential employers or clients, and which doubles as a login for other services (see OpenID), can have a significant effect on your life and could potentially fall under "decisions that deny someone an employment opportunity or put them at a serious disadvantage" or (admittedly vague) "lead to the exclusion or discrimination of individuals."

Some of the other examples in the guidelines seem mild by comparison (e.g. getting a reduced limit on credit card).

My perspective is colored by both having lost access to an email account and also being denied a credit card application; the former was a much bigger problem.


And you've just demonstrated why programmatic enforcement of contracts doesn't work. Courts are actually able to see through that semantic nonsense, because humans are able to discern intent.


Here's a human review process for you:

1) Check whether the machine learning model outputs Yes or No.

2) If yes, ban.

3) If no, no ban.

This is not a human review process. The review process is algorithmic; the human is only involved to relay the result.


I think the chances that a judge would accept that argument are potentially a lot lower than you might expect.


It may still be a human, but if it's just rubber-stamping, is it really a review?


They are required to have agency and give a damn. Of course, it is hard to (dis)prove that they actually do.


That's an automated process then.


Because once a human did get involved, via noise from here and Twitter, Google admitted that there was a problem.

Also, just the absurdity that a human would review a file containing only "1" and decide the decision to flag it was correct.

https://twitter.com/googledrive/status/1486038872928792576


What's interesting is that there's another user there who followed up 2 days after the tweet, noting that other numbers weren't fixed. But this was ignored.

Problems shouldn't get fixed just because they got enough likes and reshares on Twitter.


Are you suggesting that Europe has established a fundamental human right to have Google provide free static hosting services?


No, they're suggesting that given that Google has chosen to provide free static hosting, Europe has decided they can't moderate it with purely automated systems with no appeals process.

This is like running a restaurant in the US, and not being able to discriminate by race. You're not required to run a restaurant, and certainly aren't required to run one that gives away free food, but if you are certain obligations come attached.

I'd also argue that your use of the phrase "fundamental human right" is misleading. Europe can and does require you to do things for reasons other than respecting fundamental human rights. So does pretty much every other law-making authority.


The EU has established that "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." (https://gdpr-info.eu/art-22-gdpr/)


Arguably the opposite: the EU may have in effect outlawed many free services by requiring human review of many activities.


I think reading the part where they say "I don't know how well it fits the GDPR language" would answer your question.


I think the better question is what a "human review" entails. I assume they have some kind of "human review" in there, but no meaningful human review.


> Which I guess means my question is: why don’t you believe them, and how likely is it that they are lying when they claim appeals are reviewed by a human?

Why would we believe them? It's Google's responsibility to prove their assertion, versus regulators taking them at their (not so good) word. The default should be the assumption that the corporation is being dishonest.


If you’re taking them (or anyone else) to court, isn’t the burden of proof on you?


Highly dependent on the law or regulation in question.


> Reviewed by a human

Can just mean some low-paid Amazon Mechanical Turk worker clicked on "Yes".

