I think API specs are the wrong problem to solve. It’s usually pretty easy to reverse engineer an API’s requests and responses from a frontend or network log. What’s hard, and what an OpenAPI spec (or any API spec, though machine-readable ones tend to suffer most) is typically missing, is the documentation of all the concepts and flows needed to use the API in a meaningful manner.
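For illustration, here’s a rough sketch of what I mean, in Python: recovering an API surface from a browser’s HAR export (the filename is hypothetical; the field names are part of the standard HAR format).

```python
import json
from collections import Counter

# Load a HAR file exported from the browser's network tab
# ("api.har" is a hypothetical filename).
with open("api.har") as f:
    har = json.load(f)

# Each HAR entry records one request/response pair; these standard
# fields are enough to recover the API's endpoints and status codes.
endpoints = Counter()
for entry in har["log"]["entries"]:
    req = entry["request"]
    status = entry["response"]["status"]
    endpoints[(req["method"], req["url"].split("?")[0], status)] += 1

for (method, url, status), count in endpoints.most_common():
    print(f"{count:4d}  {method:6s} {status}  {url}")
```

This recovers the mechanical surface in minutes; what it can never recover is in which order, and why, those endpoints are meant to be called.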
They maintained censuses, but for government functions (like accounting and taxes), and everyday identity communication almost never involved the government.
The use of passports for anything except international travel is a very modern thing as well.
For most of history the source of identity was the individual themselves (as it should be); that is, one told their name and origin and others accepted that, unless someone knew otherwise.
We've seen ~20 years of people trying to solve identity without the government. We've seen plenty of solutions that can provide stable identities over time, but we haven't really seen anything that provides meaningful sybil resistance. As computer systems become more and more "autonomous", sybil resistance is increasingly the most important feature of any identity system. Any identity system that doesn't solve that problem pushes it to the application layer, where the UX impact usually involves serious tradeoffs with adoption.
I understand this. I also understand that if history teaches us anything, it’s that any centralized governance (of any nature, not just traditional national and regional governments, but any centrally organized community, like a corporation) is to be constantly distrusted and kept in check, and even then it’s dangerous to let it take over social functions. That’s why I wrote “only as a last resort”, that is, unless and until someone thinks of something better. (And then switching over is another issue… one that may need some pre-planning even before a better solution exists.)
Or maybe someday we’ll have some interesting revelations about personal identity and sybil resistance won’t be necessary. But that’ll probably be only some centuries later.
To be clear, all we need from the government is to establish that a person really exists and to verify basic properties. We don't need more than that, so we can and should use all the cryptography at our disposal (and invent more) to prevent any further information disclosure to both services and government.
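A minimal sketch of the shape this could take (all names are hypothetical, and the HMAC below is only a stand-in for a real signature scheme such as blind or BBS+ signatures): the government checks an attribute once at issuance and attests to a salted commitment, so later verifiers learn only the single attribute the user chooses to reveal.

```python
import hashlib, hmac, os

GOV_KEY = os.urandom(32)  # stand-in for the government's signing key

def commit(attribute: bytes) -> tuple[bytes, bytes]:
    """Salted hash commitment to a single attribute."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + attribute).digest()

def gov_attest(commitment: bytes) -> bytes:
    # Stand-in for a real digital signature over the commitment.
    # The government verifies the attribute once at issuance, then
    # signs only the commitment, not the attribute itself.
    return hmac.new(GOV_KEY, commitment, hashlib.sha256).digest()

# Issuance: the user commits to "over-18", the government attests it.
salt, c = commit(b"over-18")
sig = gov_attest(c)

# Disclosure: the user reveals only this one attribute plus the salt;
# the verifier recomputes the commitment and checks the attestation,
# learning nothing else about the person.
recomputed = hashlib.sha256(salt + b"over-18").digest()
assert recomputed == c and hmac.compare_digest(gov_attest(recomputed), sig)
```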
I get that identity is a sort of last holdout for the tech libertarians of old. But after years working in KYC, what I saw was the accumulation of vast amounts of sensitive information held by private actors in a way that was completely democratically unaccountable and couldn't be corrected by the average citizen. It's time to bring identity out of the shadows and make it ours to control.
For establishing facts about a person, the problem is that hostile governments have been known to revoke passports and cause all sorts of trouble. And a government being benign now doesn’t mean it never turns hostile. We really don’t want to allow governments to disappear people, neither physically nor digitally.
I’m not a libertarian (I was, until I realized why it doesn’t work in the reality we have), but I still believe that no entity should ever be able to deny one’s identity; they can only refuse to attest to it.
And the more serious problem is that nowadays we’re collectively so deep into that flawed paradigm of “identity providers”[1] that I’m afraid if a government-run system happens, it will still be built in the same paradigm and engrave it into collective consciousness even further.
Private corporate-run identities are IMHO better for the foreseeable interim, until we know for sure how to do things right. Because I suspect that whatever we pick as the fundamental ideas is going to stick and bless or curse us for a long while. Nation states have longer lifespans than Internet companies’ popularity, so as weird as that may sound, I’d prefer Gmail to, say, that Estonian X.509 scheme (no offense meant; and I’m only considering use outside of government services), despite the latter being better in the short term.
And, yes, I 100% agree that it’s past time we used proper cryptography for attestation of all sorts, rather than sending passport photos and live selfies to more and more private companies. But that shouldn’t be general identity verification; it should be only for compliance, only when a law forces a service to obtain some information from government-issued credentials. This part desperately needs moderation. But for the love of what’s still sane: unless we find ourselves with an unavoidable need and no other choice, let’s not use that for any other purposes, for now, please?
___
[1]: My view and understanding is that identity cannot be “provided”; those words simply don’t make sense together. Unless we’re talking about impersonation and skipping the “credentials” for brevity, and then it’s not our identity but someone else’s (even if created specially for us). Of course, I could be wrong.
The neat thing is that if government provides identity, you don't have to use it for any system you build. But I'm curious how you would deal with spam and Sybils?
That’s not generally true, even if it may sound true in some specific location and time. Governments trying to mandate national authentication services is a very real thing.
As for your question: sadly, I don’t have a solution for either. I wish I did. I think ML-based approaches show good promise for spam detection, though? I haven’t looked under the hood recently, but purely anecdotally, almost every time I upgrade my mail system and the antispam gains something new and ML-based, I get a lot less junk. As for the sybils… I don’t think they’re an issue per se; the ability to have alter egos is not a clear negative. And it must depend on the exact context: government elections are one thing, online content popularity measurement is entirely different. I’m not sure it’s meaningful to envision any universal solutions; they tend to have too many side effects, usually of an undesirable nature.
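For flavor, here’s a toy version of the kind of ML-based filter I mean (scikit-learn’s naive Bayes text classifier; the messages and labels are made up, and real mail filters are far more involved):

```python
# A toy spam classifier: bag-of-words features + multinomial naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data for illustration only.
train_messages = [
    "WIN a FREE prize, click now",    # spam
    "cheap pills, limited offer",     # spam
    "meeting moved to 3pm tomorrow",  # ham
    "here are the migration notes",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_messages, labels)

print(model.predict(["free prize offer, click here"]))  # -> ['spam']
```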
Good sir, cut that fellow some slack - they’re clearly venting some steam, and in doing so they’re not saying anything particularly harmful or wrong.
The part about disabling conscience feels like a huge stretch (I don’t see it there, certainly not explicitly), given that the article is just some personal rant about task and goal management.
> I want freedom, money, affection, play, power, validation, fulfillment, etc.
> Of course I already have these things, but enough never seems enough.
> My brain came pre-installed with Human OS; loss aversion will squander CPU until I install security patches (e.g. Taoism, Zen, stoicism).
> But I think I'm allergic to enlightenment. Meditation is difficult, quiet is boring, courage is scary, desire is addicting, etc.
This is just sociopathic. More more more. Turn off my loss aversion with stoicism, etc.
Sociopathic how? I re-read the article a few times, as initially I couldn’t get much sense out of it. Yet all I see for sure is a personal rant about how a person is (was?) unhappy about their self-image and is reframing it differently to be at peace with themselves.
It’s one thing if one is being a shitty person to others and does some mental gymnastics to not feel bad about it. Plenty of examples out there, but the author doesn’t strike me as such. I don’t see any of this here, unless maybe the author’s game or Mandarin skills are beyond atrocious, lol. Just kidding, of course.
It’s a whole other thing to be at peace with yourself about your own stuff. Not doing that is a potential way to become a sociopath, because if one constantly feels shitty about themselves, chances are they’ll start to voluntarily exclude themselves from society (to avoid feeling bad) and get out of touch with it.
And wanting good things is… normal, isn’t it? I would be rather concerned if someone didn’t want anything; anhedonia is not a good state to be in.
The only social thing I’ve seen there is the author’s admission that they want to “impose imagination” (whatever that means), but in my perception that’s just some random thought that wasn’t followed up on.
I have an impression that’s the only thing it actually does, right there in the last paragraph (but sure, it’s quite vaguely defined just by this single example).
It doesn’t really say much else, though; just a bunch of commonplace realizations that most ideas never get done, and then some jump to “metaprojects”, possibly to reframe the frustrations so they feel less stressful, but I don’t get that part.
Nothing has changed since ’87. Machines still can’t be held accountable and still shouldn’t make managerial decisions. Acceptance control is one of those decisions, and all the technical knowledge still matters to form a well-informed one. It may change, of course, but I have the impression that those who try otherwise seem not to fare well after the initial vibecoding honeymoon period. Of course, it varies from case to case; sometimes machines get things right, but long-term, the luck seems to eventually run out.
I think that “one by one” part allows different interpretations of what guessmyname possibly meant.
But I fail to make sense of it either way. Either the nuance of the lack of consent is missing, or Google is being blamed for not having done from the very first version what they just did.
It’s easy to solve the concentration of power: just distribute it more. Nowadays we can have quite large distributed systems.
It’s nigh impossible to invent a system that truly formalizes collective will with the goal of optimizing for everyone’s best long-term interests, minimizing unhappiness.
100% agree, and I think that's sort of what was intended with a lot of democratic government setups. What we fail to realize, though (or maybe just remember), is that these systems will ALWAYS be under attack by those who want more power and are always looking for attack surfaces. (We seem to be under attack by almost all, if not all, current billionaires!)
For example, in the US, the executive order is a massive problem. Citizens United as well. And for all democracies, the natural appeal of strongman politics is a huge problem.
Every attempt at government overreach really needs to be questioned. I don't say rejected, just questioned. How will it be used by future powers? Is the tradeoff worth it? Can it be temporary? Do we even have a way to claw it back if it turns out to be detrimental? Is it so subtle and nuanced that the majority will miss it? Etc.
> these systems will ALWAYS be under attack by those who want more power
I think this is an inherent human problem that keeps us from ever overcoming it... history has shown that the more equal everyone is, and the less individual ownership they have, the lazier and more bored they get.
Look at the previous attempts at socialism... people stop caring when there's no goal to work towards. They can't all be doing the same thing and just be happy, because humans are naturally competitive. We desire things other people don't have, like possessions, money, or power.
People don't become "lazy". They're lazy from the beginning. Laziness is something they overcome for personal gain. And if the system promises fewer personal gains for overcoming laziness, then why bother?
But of course success is relative to some cultural values. We could just as well wonder about success and failure in the implementation of any political system.
The most remarkable trait of humans is cognitive plasticity, so determining any natural tendency that would be more innate than acquired is just a game of pretending there are hypothetical humans living outside of any cultural influence who would still exhibit predominant behavioral traits.
Competition is a social construct. There are people out there whose biggest concern is keeping their focus on enjoying what they are, freeing their attention from the illusion of possession, avoiding any financial/material bonds they can, and staying away from contingent hierarchical servitudes.
There are also many people who hold desires from both of these perspectives, or any interpolation/extrapolation of them that you can suggest.
We aren’t inherently competitive; we just want nice things. It takes a very special mindset to want others to have less, and society should actively discourage such lines of thought by countering them with examples of how that never ends well.
This said, I wasn’t suggesting socialism or equality or anything like that, only minimizing long-term unhappiness. That’s the only goal I could not think of an argument against; why would anyone rational ever want others to be long-term unhappy?
Is there a way to accept but also limit greed that is reliable and durable?
Like a pragmatic meritocracy. We accept that there will be cheaters, and we won't catch or stop them all, but we have some hard limits. Do we care if you stop working so hard once you hit $1b? Maybe we'd even prefer that you did stop working (against society's interest!)?
This wouldn't even remotely resemble the communism bugaboo. It's basically saying, yes greed can be good, but at some point it gets ridiculous.
Except it's very easy to "sell" government overreach. Whenever a plane flies into a tower, or flu season is extra scary, people will clamor for strict government authority. With every such event, the government gains capabilities and tendencies that always end up with a few people having outsized power over the masses.
Yes, but I don't think it's so straightforward. I think there are bad actors marketing this overreach, like the surveillance industry around the Patriot Act (tech, defense, telecom, maybe compliance vendors?). I don't think their goal is to create a dystopia, but we should always be looking at the incentives behind large government programs.
Looks like this is only useful for empty databases. Which severely limits possible use cases.
Schema management is only a small part of the problem, and I don’t think this tool handles data migrations. E.g. if I reshape a JSONB column into something more structured, I don’t think it would be able to handle that. Or if I drop a column, the backwards migration it generates is ADD COLUMN … NOT NULL, which is obviously unusable if the table already has any data in it.
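To make that concrete, a hand-written data migration for the JSONB case might look something like this Alembic-style sketch (the table and column names are hypothetical); the backfill in the middle is exactly the step a pure schema differ cannot derive:

```python
# Hand-written data migration (Alembic-style sketch); "orders", "payload",
# and "amount_cents" are hypothetical names for illustration.
import sqlalchemy as sa
from alembic import op

def upgrade():
    # 1. Add the structured column as nullable first.
    op.add_column("orders", sa.Column("amount_cents", sa.Integer(), nullable=True))
    # 2. Backfill it from the JSONB blob -- the step no schema diff can infer.
    op.execute(
        "UPDATE orders SET amount_cents = ((payload->>'amount')::numeric * 100)::int"
    )
    # 3. Only now is it safe to enforce NOT NULL on a populated table.
    op.alter_column("orders", "amount_cents", nullable=False)

def downgrade():
    op.drop_column("orders", "amount_cents")
```

The add-nullable / backfill / SET NOT NULL ordering is also exactly why a generated ADD COLUMN … NOT NULL downgrade fails on any table that already has rows.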