Solvinity (now acquired by Kyndryl) owns and runs a lot of the underlying infrastructure of DigiD, but the application itself and the day-to-day operations are handled by an autonomous body of the government (Logius). DigiD is mainly about translating authentication factors into a social security number (BSN) for authentication to other public institutions.
That allows Logius to pretend it's not much of a problem, and Solvinity maintains (in an unusually sharp and on-point interview) that all data is "encrypted" [1], without mentioning who possesses the keys or whether encryption is relevant at all. They go on to say that they consider the scenario of the US shutting down DigiD "very hypothetical", that they will follow Dutch law and that they have a strong supervisory board (as if that would matter).
Logius also operates MijnOverheid, which collates very sensitive information about all citizens from most government agencies and also relies on Solvinity infrastructure.
The infrastructure that Solvinity maintains goes far beyond servers, as they've concocted themselves an unholy procurement mess with their PICARD / LPC solution (Logius Private Cloud). They were advised multiple times over multiple years by the main advisory body on IT of The Netherlands (AcICT) not to do it in this way and KISS, but then did it anyway.
The intent of structuring it in this way was that it would be easier to switch infrastructure providers, but the outcome is the exact opposite: there is now a non-standard "integration layer" that would need to be rebuilt. Which is exactly what AcICT warned about from the beginning.
You can find a diagram of the responsibilities on both the Solvinity and Logius side on the last page of [2] (in Dutch).
The wild thing is that Logius also owns and maintains "Standaard Platform" [3], which is a very neat and standard Kubernetes environment, but they declined to use this for DigiD and MijnOverheid because they didn't deem it secure enough, and instead of securing their Kubernetes deployment, they went on with PICARD / LPC.
Logius is an autonomous body of the Ministry of the Interior (BZK), but they appear to have completely lost control over setting any policy and now mainly walk from crisis to crisis because any opening on their "SAFe train" is years away.
> The infrastructure that Solvinity maintains goes far beyond servers, as they've concocted themselves an unholy procurement mess with their PICARD / LPC solution (Logius Private Cloud)
> The insane question here is, why would the EU mandate hardware attestation controlled by two private American companies in order to access services?
Please (kindly) ask Paolo De Rosa [1], Policy Officer at the European Commission and driver of many of the decisions behind the wallet and the ARF. His position is one of fatalism: that it's "too late"; the duopoly of Goople is entrenched, and it's therefore not a problem if the wallet project entrenches it even further. Regrettably quite a lot of member states agree, although representatives of France and Germany specifically are frequently standing up to the fatalism.
This is misleading. They are merely exploring options that may allow for issuer unlinkability, but they are actually implementing a linkable solution based on standard cryptography that allows issuers (member state governments) to collude with any verifier (a website requiring age verification) to de-anonymize users. The solution is linkable because both the issuer and the verifier see the same identifiers (the SD-JWT and its signature).
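The collusion risk can be made concrete with a toy sketch (all names and data here are hypothetical, and the hashing stands in for the real SD-JWT handling): because the issuer and the verifier both see the exact same credential bytes, joining their logs on a fingerprint of those bytes is enough to de-anonymize the user.

```python
import hashlib

# Toy sketch of the collusion risk described above (illustrative only):
# with a plain signature scheme, issuer and verifier both observe the
# same credential bytes, so a join on a hash of those bytes links the
# verifier's observations back to a real identity.

def fingerprint(credential: bytes) -> str:
    return hashlib.sha256(credential).hexdigest()

issuer_log = {}    # issuer: which citizen received which credential
verifier_log = []  # verifier: which credential was shown on which site

def issue(citizen: str, credential: bytes) -> None:
    issuer_log[fingerprint(credential)] = citizen

def verify(site: str, credential: bytes) -> None:
    verifier_log.append((site, fingerprint(credential)))

def collude() -> list:
    # One dictionary lookup links browsing behavior to identity.
    return [(site, issuer_log.get(fp)) for site, fp in verifier_log]

cred = b"stand-in for an SD-JWT and its signature"
issue("alice", cred)
verify("adult-site.example", cred)
print(collude())  # → [('adult-site.example', 'alice')]
```

An unlinkable scheme avoids exactly this: the value the verifier sees is never seen by (or derivable for) the issuer, so the join key does not exist.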
The project is supposed to prove that age verification is viable so that the Commission can use it as a success story, while it completely disregards privacy by design principles in its implementation. That the project intends to perhaps at some point implement privacy enhancing technologies doesn't make it any better. Nothing is more permanent than a temporary solution.
It will also be trivial to circumvent [1], potentially leading to a cycle of obfuscation and weakening of privacy features that are present in the current issuer linkable design.
> This is misleading. They are merely exploring options that may allow for issuer unlinkability, but they are actually implementing a linkable solution based on standard ECDSA.
The repository we're commenting on has the following in the spec[0]: "A next version of the Technical Specifications for Age Verification Solutions will include as an experimental feature the Zero-Knowledge Proof (ZKP)". So given that the current spec is not in use, this seems incorrect.
> It will also be trivial to circumvent
If you have a key with the attribute, of course you can 'bypass' it; I don't think that's a bug. The statement required should be scaled to the application it's used for; this 'over-asking' is considered in the law [1].
> The project is supposed to prove that age verification is viable, while it completely disregards privacy by design principles in its implementation. That the project intends to perhaps at some point implement privacy enhancing technologies doesn't make it any better.
I agree that in its current state it is effectively unusable due to the ZKPs being omitted.
> So given that the current spec is not in use, this seems incorrect.
No, that's not what they mean. They just mean that the spec (and for now only the spec, not the implementation) will be amended with an experimental feature, while the implementation will not (yet).
I understand (?) that you are interpreting this as: "we'll later document something that we've already implemented", but this is not the case. That isn't how this project operates, and I'm intimately familiar with the codebase so I'm completely certain they haven't implemented this at all. There is no beginning or even a stub for this feature to land, which is problematic, as an unlinkable signature scheme isn't just a drop-in replacement, but requires careful design. Hence privacy by design.
> If you have a key with the attribute, of course you can 'bypass' it; I don't think that's a bug.
Anyone of age can make an anonymous age attribute faucet [1] for anyone to use. That it's not technically a bug doesn't make it any less trivial to circumvent. I wouldn't expect the public or even the Commission to make such a distinction. They'll clamor that the solution is broken and that it must be fixed, and at that point I expect the obfuscation and weakening of privacy features to start.
So as we already know that the solution will be trivial to circumvent, it shouldn't be released without at least very clearly and publicly announcing its limitations. Only if such expectations are correctly set do we have a chance not to end up in a cycle where the open source and privacy story is abandoned in the name of security.
[1] Because of the linkable signature scheme, misuse can in principle be detected by issuers, but this would directly contradict their privacy claims (namely that the issuer pinky promises not to record any issued credentials or signatures).
> Anyone of age can make an anonymous age attribute faucet [1] for anyone to use. That it's not technically a bug doesn't make it any less trivial to circumvent. I wouldn't expect the public or even the Commission to make such a distinction. They'll clamor that the solution is broken and that it must be fixed, and at that point I expect the obfuscation and weakening of privacy features to start.
I can see this argument, but it has a few caveats:
- The 'faucet', providing infinite key material through an open proxy, is itself very vulnerable
- If the only attribute is age verification then uniqueness is not required; i.e. you can borrow the key of someone you trust and that should be fine.
- The unlinkability is a requirement from the law itself, i.e. the current implementation cannot be deployed as-is, assuming the rule of law holds
A month ago a potential customer automatically included their Otter.ai meeting agent in a Teams call. The customer never turned up (he canceled the meeting somewhat later), but a colleague and I chatted a bit in the meeting. Then the Otter.ai meeting agent posted a link in the chat, from which it was clear that everything had been recorded, including a complete video of the meeting with full facial imagery.
As I'm a European citizen, I filed a GDPR removal request with them to remove all images of me from their servers. The email address that they list in their privacy policy [1] for GDPR requests immediately bounces and tells you to reply from an Otter.ai account (which I don't have). I was able to fill in a contact form on their website and I did receive replies via email after that.
After a few emails back and forth, their position is that
> You will need to reach out to the conversation owner directly to request to have your information deleted/removed. Audio and screenshots created by the user are under the control of the user, not Otter.
> We are required by law to deny any request to delete personal information that may be contained within a recording or screenshot created by another user under the CCPA, Cal. Civil Code § 1798.145(k), which states in relevant part
> “The rights afforded to consumers and the obligations imposed on the business in this title shall not adversely affect the rights and freedoms of other natural persons. A verifiable consumer request…to delete a consumer’s personal information pursuant to Section 1798.105…shall not extend to personal information about the consumer that belongs to, or the business maintains on behalf of, another natural person…[A] business is under no legal obligation under this title or any other provision of law to take any action under this title in the event of a dispute between or among persons claiming rights to personal information in the business’ possession.”
Which is a ridiculous answer to give a European user, as the CCPA doesn't apply to me at all. Furthermore, I don't think the CCPA prohibits them from deleting my face from their servers at all, as the CCPA merely stipulates that I can't compel them under the CCPA. Otter.ai can perfectly well decide this for themselves or be compelled under the GDPR to delete data, and their Terms and Conditions make it clear they may delete any user or data if they wish to do so.
After these emails, and me threatening to file a lawsuit, "Andrew" from "Otter.ai Support Team" promised to escalate the matter to his manager, but I got ghosted after that: they simply stopped replying.
So I'm going to file that lawsuit (a "verzoekschriftprocedure" under Dutch law) this week. It's going to be a very short complaint.
And out of nowhere, after posting this comment, Otter.ai has now responded after ghosting me for 3.5 weeks. They are no longer quoting the CCPA, but are now misinterpreting the GDPR, claiming that every user is their own little GDPR data controller island and that they're merely a "hosting platform". It's all very convenient and creative.
Their response:
Thank you for reaching out to Otter.ai. Under Articles 12 and 17 of the GDPR, Otter.ai is able to delete personal data that is stored in and controlled by your own account. However, Otter.ai cannot delete personal data that is stored in another user’s account. In those cases, Otter.ai acts as the processor or hosting platform, and the other user is the controller for that content. As such, only that account holder has the authority to remove the content.
If you wish to have such data deleted, we recommend that you contact the relevant user directly and exercise your rights under the GDPR with them.
Thank you,
Otter.ai Privacy Team
To which I responded:
To whom am I speaking? Is this the Privacy Officer? Why have you been ignoring emails for 3.5 weeks since the 23rd of July, while a GDPR request was filed on the 8th of July?
You know very well that a meeting agent of Otter.ai, the emails by Otter.ai and the website of Otter.ai fall under the direct responsibility of Otter.ai as data controller. Your privacy statement in no way supports a narrative that Otter.ai would act as a so-called "hosting platform". It's preposterous to suggest that every one of your users – not being a company but a private person – would be its own little GDPR data controller island and you're merely an accidental processor of data. Jurisprudence is very clear on this and this notion will be rejected outright.
The deadline has long passed, I'm initiating a court procedure this week.
Yours sincerely, [in Dutch: "Hoogachtend,"]
What curious timing! Glad you're using your rights to punish this company. A coworker at a prior company used Otter.ai once or twice, and from then on we all called it the Otter Infection until IT was able to purge it from our systems somehow. It kept getting into meetings it had no business getting into.
It's a very interesting solution that allows for multi-show unlinkability to be married to hardware binding using existing ECDSA hardware keys. It's not limited to age verification; it can be applied to arbitrary attributes.
It's also an unfathomably complex solution [1] which only a few people in the world will grok, and far more complex than existing solutions such as Idemix or BBS+, which lack such a hardware binding on existing hardware.
Age verification in a privacy preserving way is a really hot topic at the moment, but it will always be possible to bypass it – as will any commonly held anonymous boolean – in quite trivial ways. For example by setting up an open proxy to disclose genuine attributes. There are some privacy preserving mitigations, for example cryptography that'll make you linkable when disclosing more than k times per time period, or detecting slower-than-near-light-speed disclosure in a face-to-face disclosure scenario.
However, these mitigations will never be completely secure. That might not be a problem if it's admitted beforehand so expectations are correctly set: it's a barrier to protect the naïve, not an impenetrable fortress. However, if the expectations are that only age verification that cannot be bypassed is "adequate", we only have to wait for the first incidents in production apps after which the open source and privacy story will be abandoned in the name of security.
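The "linkable when disclosing more than k times per time period" mitigation can be illustrated with a toy accounting sketch. This is emphatically not the real cryptography (actual k-times anonymous authentication schemes prove the counter bound in zero knowledge); it only shows the bookkeeping that makes the (k+1)-th show linkable:

```python
import hashlib

# Toy sketch (not the real cryptography) of k-times linkability: the
# wallet derives one one-time pseudonym per (epoch, counter) pair. An
# honest wallet uses each counter at most once per epoch; exceeding k
# forces a counter reuse, which the verifier detects as a repeated tag.

K = 3  # allowed anonymous shows per epoch

def show_tag(user_secret: bytes, epoch: str, i: int) -> str:
    # One-time pseudonym for the i-th show in this epoch.
    if not 0 <= i < K:
        raise ValueError("counter out of range; an honest wallet stops here")
    data = user_secret + epoch.encode() + i.to_bytes(4, "big")
    return hashlib.sha256(data).hexdigest()

seen: set[str] = set()

def verifier_accept(tag: str) -> bool:
    # A repeated tag means a counter was reused: two shows are now linked.
    if tag in seen:
        return False
    seen.add(tag)
    return True

secret = b"wallet-secret"
tags = [show_tag(secret, "2025-09", i) for i in range(K)]
print(all(verifier_accept(t) for t in tags))            # three shows pass
print(verifier_accept(show_tag(secret, "2025-09", 1)))  # reuse is detected
```

In the real schemes the verifier learns nothing across the first k shows (the tags are unlinkable to identity and to each other); only reuse becomes visible, which is exactly the property that makes an open "faucet" detectable without tracking honest users.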
Jokes aside, I really believe that once all is said and done our system is way simpler than BBS.
How are you going to check the document expiration date in BBS? Yes, I know about range proofs; I know about the quaternion norms and the four-square theorem and all that jazz. But nobody is talking about it.
How are you going to bind to a hardware secure element that only uses NIST primes? Yes, there is a very clever variant called BBS# which I believe works, but that's not simple either.
How are you going to deal with existing standard formats? 80% of our complexity is in this step. BBS most likely cannot do it at all. If we can change the format then a lot of my complexity disappears too.
How are you going to deal with the fact that BBS signs an array and not a set, and thus you are leaking the fact that "family_name" is the attribute at array index 42? Are you going to leak the schema (which re-introduces tracking), or are you going to agree in advance, now and forever, on a schema? (Our system hides the schema and works on an arbitrary key/value dictionary, up to a maximum size.)
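The array-vs-dictionary point can be sketched as follows (an illustrative encoding, not the actual scheme described above): instead of signing values at fixed array positions, sign an unordered set of salted commitments over key and value together, padded to a maximum size, so disclosure needs no index and no shared schema.

```python
import hashlib
import os

# Illustrative sketch (not the actual scheme): encoding attributes as an
# unordered set of salted key/value commitments, so revealing one
# attribute needs neither an array index nor a globally agreed schema.

def commit(key: str, value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + key.encode() + b"=" + value.encode()).hexdigest()

attributes = {"family_name": "Doe", "birth_date": "1990-01-01"}
salts = {k: os.urandom(16) for k in attributes}

# What the issuer would sign (signing itself elided here): a set of
# commitments, padded to a maximum size so the attribute count is hidden.
MAX_ATTRS = 8
signed_set = {commit(k, v, salts[k]) for k, v in attributes.items()}
signed_set |= {commit("_pad", str(i), os.urandom(16))
               for i in range(MAX_ATTRS - len(attributes))}

# Selective disclosure: reveal one (key, value, salt) triple; the
# verifier recomputes the commitment and checks set membership.
k, v = "family_name", "Doe"
print(commit(k, v, salts[k]) in signed_set)  # → True
```

A signature over an ordered array, by contrast, can only say "position 42 has this value", which forces both sides to share the position-to-meaning mapping, and that mapping is itself trackable.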
It's easy to say "simple" when one has not built the real thing.
Well, we can split up the credential into multiple ones sharing a serial number to fix the array signing. To bind to NIST curves there are some solutions based on ZKAttest (which got fixed; I made a few mistakes in it) to show a signature under ECDSA while hiding it.
I disagree that no one is talking about it: the solutions are there; it is a question of getting the resources to put them together. Circuit based solutions have some nice properties, but the actual security assumptions are a bit odd, and it is hard to give people reasons to trust a complex circuit and verification protocol.
I don't however think this is really the big debate. Rather it's about ensuring SD-JWT and related non-private solutions do not get used. To the extent that this work helps show it's possible, and the tradeoffs are desirable, it's good.
> I don't however think this is really the big debate. Rather it's about ensuring SD-JWT and related non-private solutions do not get used. To the extent that this work helps show it's possible, and the tradeoffs are desirable, it's good
I'm not sure sumcheck and MPC-in-the-head are that easy for undergraduates. By contrast, cup products are pretty standard in topology, and that's where the pairing comes from.
It's even more important if this finding [1] (press release [2]) turns out to be true, namely that the amount of carbon dioxide in the air is not just a proxy for air quality, but that having less of it actually actively destabilizes virus particles.
Also, portable air cleaners seem to work pretty well [3].
It's not illegal, but it is unlawful for data controllers to process such personal data without freely given permission. But in this case there's likely an exception under GDPR article 85, for "processing carried out for journalistic purposes or the purpose of academic, artistic or literary expression" [1].
That does not give that right. It just asks states to carve out protections for journalism. Biometric data is very strictly regulated by the GDPR. The exceptions are listed in article 9.
> Privacy Pass in fact doesn't make sense outside of an anonymizing transport
This kind of thinking is pervasive in discussions of privacy enhancing technologies. A mechanism might not make sense against the most sophisticated attacker, but it lays the groundwork for a more complete system that will hold up against one.
Allowing more users will provide herd privacy at the token generation phase. Searches being decoupled from the user account's primary key offers privacy in all kinds of scenarios, comparable with a browser private tab.
> > Privacy Pass in fact doesn't make sense outside of an anonymizing transport
> This kind of thinking is pervasive in the discussion of privacy enhancing technologies
It is in the RFC:
> Origin-Client, Issuer-Client, and Attester-Origin unlinkability requires that issuance and redemption events be separated over time, such as through the use of tokens that correspond to token challenges with an empty redemption context (see Section 3.4), or that they be separated over space, such as through the use of an anonymizing service when connecting to the Origin.
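The reason separation over time or space is the remaining requirement is that issuance itself is already unlinkable by construction. A toy blind-RSA sketch of that property (textbook RSA with tiny primes, illustration only; deployed Privacy Pass uses blind RSA or a VOPRF as specified in the RFCs):

```python
import hashlib
from math import gcd

# Toy blind-RSA sketch of Issuer-Client unlinkability: the issuer signs a
# *blinded* value, so the (token, signature) pair seen at redemption never
# appeared at issuance. Textbook RSA with tiny primes; not secure, purely
# illustrative.

p, q = 61, 53
n, e, d = p * q, 17, 413        # issuer keypair; 17*413 ≡ 1 mod lcm(60, 52)

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Client: pick a token and a blinding factor r coprime to n.
token, r = b"one-redemption-token", 42
assert gcd(r, n) == 1
blinded = (H(token) * pow(r, e, n)) % n

# Issuer: signs the blinded value; it learns nothing about H(token).
blind_sig = pow(blinded, d, n)

# Client: unblind. sig = H(token)^d mod n, a valid signature the issuer
# has never seen in this form.
sig = (blind_sig * pow(r, -1, n)) % n

# Origin at redemption: verify the signature on the bare token.
print(pow(sig, e, n) == H(token))  # → True
```

Even so, network-level metadata (IP address, timing) can re-link what the cryptography separates, which is exactly why the RFC still demands separation over time or an anonymizing transport.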
I've similarly been grasping at straws to find some way that makes the fatigue go away, at least a little (although not for myself). As doctors won't prescribe any medication off-label, I've mainly looked at other methods and nutritional supplements. Here's a list (which I've been meaning to braindump for a while now anyway):
- Like you already described, monitoring heart rate variability and associated 'body battery' with a smartwatch.
- Make a log of good and bad days, note the specific symptoms. See if there are any patterns to be found that you perhaps wouldn't notice otherwise.
- Read up on POTS / orthostatic intolerance and dysautonomia in general. The book / guide "The Dysautonomia Project" is a great read. Do a simple standing test [1] a few times (when you're feeling good and when you're feeling bad) to see if your heart rate increases and stays elevated, even if it does not meet the criteria for POTS.
- Do breathing exercises, humming / vocalization, ear massages and meditation to activate your vagus nerve. That might help with dysautonomia. There are lots of videos / guides to be found.
- Increase salt intake if that is not a risk factor for you. That might help with orthostatic intolerance. Advice on a good balance of different salts can be found in the CFS community.
- Keep up light walking and (if you can) light strength training as much as you can without triggering too much PEM.
- Be outdoors in the sunlight. There are many small studies showing a correlation between getting better from CFS and sunlight. An infrared lamp might help a bit as well, but don't buy into the fancy fads. I personally prefer a simple infrared bulb because of the warmth it gives which is great in winter on its own. It's also more similar to the sun with a continuous spectrum (although relatively low intensity). Many studies emphasize illuminating your brain.
- Be aware that food supplements will probably not help too much and cost quite a bit. But you might get lucky and find something that helps. It's hard to separate correlation from causation though. It might also feel rewarding that you're busy trying something. The placebo effect might help similarly.
- Get your ferritin levels checked, and some other basic tests around CFS as well. See if your ferritin level has been recorded in the past as a baseline, because the one-level-fits-all approach is flawed (especially for women). 15 (women) or 30 (men) µg / L is probably too low, even if that's considered 'normal'. Lactoferrin might help your body regulate iron levels and keep pathogens from using iron, but the scientific evidence is pretty weak.
- In case of deficiencies, supplement with amino-acid-bound or organically bound minerals, for example iron bisglycinate. Something like Thorne Basic Prenatal at 1/3 of the recommended dosage (1 pill per day) is quite cheap, with many nutrients that might help a bit in a form that absorbs well.
- Creatine is widely used for sporting performance enhancement, facilitating ATP recycling and acting as a buffer. Because it's so widely used it seems very low-risk and applicable to CFS. There are some very small scale studies reporting positive effects.
- Other things that are doubtful to help, but perhaps worth trying: wide spectrum probiotics, NAD+, D-ribose, nattokinase / lumbrokinase (but be careful and use a small dose), NAC, ALC.
Same, including weird audio clicks during supposed 'sleep' (they already happened under heavy load while awake). Then one day this summer it just died without any recourse.
[1] https://www.nrc.nl/nieuws/2025/12/03/baas-van-solvinity-prob...
[2] https://www.adviescollegeicttoetsing.nl/site/binaries/site-c...
[3] https://www.logius.nl/onze-dienstverlening/infrastructuur/st...