
It paved the way for a lot more than that. At a time when open source in general, and Linux in particular, did not have much corporate buy-in, IBM signaled "we back this" and "we're investing in this" in substantial ways that corporate IT executives could hear and act upon. That was a pre-cloud, pre-hyperscaler era when "enterprise IT" was the generally understood "high end" of the market, and IBM ruled that arena. IBM backing Linux and open source paved the way for a large swath of the industry—customers, software vendors, channel/distribution partners, yadda yadda—to do likewise.

agree - and the big industry consortium building `gcc` was already proving itself

Cosigned!

Em dash forever! Along with en dash for numerical ranges, true ellipsis not that three-period crap, true typographic quotes, and all the trimmings! Good typography whenever and wherever possible!


I agree we all ought to use available punctuation marks correctly. That said, I am compelled to lodge a formal complaint against quoted text arbitrarily assimilating punctuation from its surrounding context.

Quoted text is a sacred verbatim reproduction of its original source. Good authors are very careful to insert [brackets] around words inserted to clarify or add context, and they never miss an oppurtunity (sic) to preserve the source's spelling or grammatical mistakes. And yet quoted text can just suck in a period, comma, or question mark from its quoted context, simply handing the quoting author the key to completely overturn the meaning of a sentence?! Nonsense! Whatever is between the quotes had better be an exact reproduction, save aforementioned exceptions and their explicit annotations. And dash that pathetic “bUt mUH aEstHeTIcS!” argument on the rocks!

“But it's ugly!”, says you.

“Your shallow subjective opinion of the visual appearance of so-called ugly punctuation sequences is irrelevant in the face of the immense opportunity for misbehavior this piffling preference provides perfidious publications.”, says I.


I completely agree; this is perhaps the least sensible part of common English syntax.

   "Hello," he said.  
   "Hello", he said.
Only one of these makes actual sense as a hierarchical grammar, and it's not the commonly accepted one! If enough of us do it correctly perhaps we can change it.

I’ve always wondered about this. I guess typographically they should just occupy the same horizontal space, or at least be kerned closer in such a way as to prevent the ugly holes without cramming.

It’s true, though, that the hierarchically wrong option looks better, IMHO. The whitespace before the comma is intolerable.

This is an interesting case where I am of two autistic hearts, the logical one slowly losing vehemence as I get older and become more accepting of traditions.


It's especially obvious as a programmer.

I am all for using proper typographic symbols, but it is unclear what place the precomposed ellipsis U+2026—what I assume you mean by “true ellipsis”—has in that canon, especially with the compressed form it takes in most fonts.

En dash for ranges is too easily confused for a minus sign. I would rather use a different symbol altogether.

And two spaces after a period! Who's with me?

Not Matthew Butterick (nor all major English-language style guides): https://practicaltypography.com/one-space-between-sentences....

I only discovered two spaces after a full stop/period was a thing after moving to the U.S., and apparently only among people over 40.


I learned of it only through Emacs! There are movement keys to move to the next/previous sentence, and I couldn't understand why they never worked for me.

It's how Millennials and our predecessors were taught to type in school, and it's muscle memory. Very hard to unlearn.

It's not that I have any trouble doing one or two spaces. I just think it's a bit arrogant of any group to decide something is "wrong".

Also, Pluto is still a planet because the new planet definition is absolutely stupid, and it wasn't really their word to work with anyway.


And text figures! And proper small caps!!

Agreed. Good typography is good writing.

Debugging is a completely different and better animal when collections have a predictable ordering. Otherwise, every dict needs ordering before printing, studying, or comparing. Needlessly onerous, even if philosophically justifiable.
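For instance, a rough sketch of the normalization step you otherwise end up sprinkling everywhere (the dict contents here are made up):

    import json
    from pprint import pprint

    d = {"name": "example", "id": 42, "tags": ["a", "b"]}

    # With guaranteed insertion order, printing/diffing two dicts built the
    # same way is already deterministic:
    print(d)

    # Without it, you normalize first, e.g. sort the keys before comparing
    # or writing snapshots:
    print(json.dumps(d, sort_keys=True))
    pprint(d)  # pprint sorts dict keys by default, another common workaround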


That's a great link and recommended reading.

It explains a lot about the design of Python container classes, and the boundaries of polymorphism / duck typing with them, and mutation between them.

I don't always agree with the choices made in Python's container APIs...but I always want to understand them as well as possible.

Also worth noting that understanding changes over time. Remember when GvR and the rest of the core developers argued adamantly against ordered dictionaries? Haha! Good times! Thank goodness their first wave of understanding wasn't their last. Concurrency and parallelism in Python was a TINY issue in 2006, but at the forefront of Python evolution these days. And immutability has come a long way as a design theme, even for languages that fully embrace stateful change.


> Also worth noting that understanding changes over time. Remember when GvR and the rest of the core developers argued adamantly against ordered dictionaries? Haha! Good times!

The new implementation has saved space, but there are opportunities to save more space (specifically after deleting keys) that they've now denied themselves by offering the ordering guarantee.


Ordering, like stability in sorting, is an incredibly useful property. If it costs a little, then so be it.

This is optimizing for the common case, where memory is generally plentiful and dicts grow more than they shrink. Python has so many memory inefficiencies that occasional tombstones in the dict's internal structure are unlikely to be a major effect. If you're really concerned, do `d = dict(d)` after aggressive deletion.
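A minimal sketch of that rebuild, with made-up sizes, just to show the effect:

    import sys

    d = {i: str(i) for i in range(100_000)}
    for i in range(99_000):
        del d[i]                 # mass deletion leaves the table oversized

    print(sys.getsizeof(d))      # still sized for the old population
    d = dict(d)                  # rebuild: copies only the live entries
    print(sys.getsizeof(d))      # much smaller afterwards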


> Ordering, like stability in sorting, is an incredibly useful property.

I can't say I've noticed any good reasons to rely on it. Didn't reach for `OrderedDict` often back in the day either. I've had more use for actual sorting than for preserving the insertion order.


Ordering is very useful for testing.

This morning, for example, I tested an object serialized through a JSON API. My test data never seemed to match from one run to the next.

After a while, I realized one of the objects was using a set of objects, which in the API was turned into a JSON array, but the order of said array would change depending on the initial Python VM state.

3 days ago, I used itertools.groupby to group a bunch of things. But itertools.groupby only works on iterables that are sorted by the grouping key.

Now granted, none of those recent examples are related to dicts, but dict is not a special case. And it's iterated over regularly.
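For what it's worth, both fixes are small; a rough sketch (field names made up):

    import json
    from itertools import groupby

    # Deterministic JSON for a set-valued field: sort before serializing.
    record = {"id": 7, "tags": {"beta", "alpha", "gamma"}}
    payload = json.dumps({**record, "tags": sorted(record["tags"])})

    # groupby only groups adjacent items, so sort by the grouping key first.
    things = [("fruit", "apple"), ("veg", "kale"), ("fruit", "pear")]
    things.sort(key=lambda t: t[0])
    groups = {k: [v for _, v in g] for k, g in groupby(things, key=lambda t: t[0])}
    # {'fruit': ['apple', 'pear'], 'veg': ['kale']}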


Personally, I find lots of reasons to prefer an ordered dict to an unordered one. Even small effects like "the debugging output will appear in a consistent order, making it easier to compare" can be motivation enough in many use cases.


It's sometimes nice to be deterministic.

I don't often care about a specific order, only that I get the same order every time.


Thinking about this off the top of my head, I am actually wondering why this is useful outside of equality comparisons.

Granted, I live and work in TypeScript, where I can't `===` two objects but I could see this deterministic behavior making it easier for a language to compare two objects, especially if equality comparison is dependent on a generated hash.

The other is guaranteed iteration order, if you are reliant on the index-contents relationship of an iterable. We're talking about dicts, which are keyed, but extending this idea to lists, I can see the usefulness in some scenarios.

Beyond that, I'm not sure it matters, but I also realize I could simply not have enough imagination at the moment to think of other benefits


I work on a build system (Bazel), so perhaps I care more than most.

But maybe it does all just come down to equality comparisons. Just not always within your own code.


Being able to parse something into a dict and then serialise it back to the same thing is a bit easier. Not a huge advantage, though.


Same. Recently I saw interview feedback where someone complained that the candidate used OrderedDict instead of the built-in dict that is now ordered, but they'll let it slide... As if writing code that will silently do different things depending on minor Python version is a good idea.


Well it's been guaranteed since 3.7 which came out in 2018, and 3.6 reached end-of-life in 2021, so it's been a while. I could see the advantage if you're writing code for the public (libraries, applications), but for example I know at my job my code is never going to be run with Python 3.6 or older.


Yeah, if you have that guarantee then I wouldn't fault anyone for using dict, but also wouldn't complain about OrderedDict.


Honestly, if I was writing some code that depended on dicts being ordered, I think I'd still use OrderedDict in modern Python. It gives the reader more information that I'm doing something slightly unusual.


Same. Usually if a language has an ordered map, it's in the name.


Indeed! I don't understand why it isn't more common for stdlibs to include key-ordered maps and sets. Way more useful than insertion ordering.


Presumably because it involves different performance characteristics.


It seems like opinions really differ on this item then. I love insertion-order preservation in mappings, and Python adopting it was a big revelation. The main reason is that keys need some order, and insertion order -> iteration order is a lot better than pseudorandom order (hash-based order).

For me, it creates more reproducible programs and scripts, even simple ones.


Ordering is specifically a property (useful or not) that a set doesn't have. You need a poset for it to be ordered.

I would expect to use a different data structure if I needed an ordered set.


Does your code actually rely on that? I've never once needed it.


Perl's decline was cultural in the same way VMS's decline was. A fantastic approach and ecosystem—just one overtaken by a world that was moving on to a different set of desires and values.

PHP emerged as a separate language and community and "ate Perl's lunch" when it came to the dominant growing app style of the Aughties... web pages and apps. Had PHP instead been a Rails-like extension of Perl for the web, sigils would have reigned for many years more. But there was never a WordPress, Drupal, or similar in Perl, and no reason for anyone who wasn't already highly connected to the Unix / sysadmin community to really give Perl another look.

By 2005 if you weren't already deep into Perl, you were likely being pulled away to other communities. If you were into Perl, you were constantly lamenting why newbies and web devs weren't using your language, just like the DECUS and VMS crowd did a decade earlier as Unix and Windows consumed all the oxygen and growth opportunities in the room.


VMS's decline was bound to a failing hardware business and company. That's a very different thing. Unix was around in the 80s and VMS did fine; it's when DEC's hardware business went down the tubes that VMS lost out big time.


VAX hardware did eventually run out of steam by the early 1990s. But VMS had already failed to capture the next generation of growth. Even while DEC was still the number 2 computer company, 5–10x the size of its competitors, and growing $1B/yr in revenues.

Unix (workstations first, then servers), PCs (DOS and NetWare, later Windows), packaged enterprise apps (e.g. IBM AS/400), "data warehousing," fault tolerant apps—a lot of those were not things happening on VAX or VMS or any other DEC product. The fight was already structurally lost and VMS aficionados were already bemoaning DEC's failure to pick up new workloads and user communities even when VAX was still going strong.

VMS declined like almost all proprietary minicomputer environments. AOS, ClearPath, GCOS, MPE, NonStop, PRIMOS, VOS, and VMS...all fine products just ones no longer serving the high-growth opportunities.

Declining investment streams hamstrung proprietary CPU/system development efforts compared to higher-volume alternatives (PC or Unix), so they got successively slower and relatively more expensive each generation. Proprietary environments weren't designed to ever escape their home fields, nor were their companies set up to profit from opening up / spreading out. A few tried, very belatedly, but... by that point, dead man walking.

So from this vantage, same pattern, not a different thing. Perl and VMS were awesome in their original home fields, but were not quick to, or even able to, capitalize on the new directions and workloads customers grew to want.


> lot of those were not things happening on VAX or VMS or any other DEC product

I would argue that VMSCluster was best in class for clusters at the time and they quickly rolled it out over commodity ethernet. It had many features many others didn't have.

RDB was one of the better database systems on the market, and Oracle says it was a great acquisition for them.

DEC storage teams were very innovative and were still growing strongly in the early 90s. And still a profit maker for HP years later. They sadly stuck to VAX only for far too long.

Even their Tape division was innovative and made 100s of millions in profit under Quantum.

And they arguably had the best chip design teams in the world. Those teams just worked on VAX, and that was of course much more difficult than if they had worked on RISC. They were competitive but had to build larger, more expensive chips. And when those teams with their internal tools finally did RISC, they blew the doors off everybody with Alpha (Alpha was of course opposed by Olson).

And in terms of other chip innovation, they did StrongARM, which had major potential to capture a new market.

They did have a good understanding of the internet and networking, the internet having been developed largely on PDP-10s, and of course their research division was doing great stuff like AltaVista and developing MP3 players.

DEC also developed pretty advanced media servers.

Lack of innovation was not the issue. And VMS didn't miss much from a server OS perspective. DEC was pretty consistently doing pretty amazing stuff, technically speaking. And being proprietary really wasn't an issue, as Windows NT and UNIX were too. IBM was. EMC was. Sun was, even while talking about open systems. Solaris wasn't really open. Sun machines sold because they built good SMP machines that ran Oracle well.

And sure DEC didn't 'win' every new growth market, nobody can and nobody did, but they were very good in a lot of places.

> Declining investment streams hamstrung proprietary CPU/system development efforts compared to higher-volume alternatives (PC or Unix),

Multiple things are wrong with this statement. Unix machines were not high volume really; Sun, IBM, SGI, HP and many others split the market, all with their own CPU and their own fab or fab partner. And DEC invested as much or more in chip development as those others did. And their chip design team and fabs were among the best in the world in the 90s.

Sure, literally nobody could match Intel and the PC and eventually Intel would 'win' for sure.

> Proprietary environments weren't designed to ever escape their home fields, nor were their companies set up to profit from opening up / spreading out.

Massive SMP servers weren't Sun's 'home field', nor that of Unix, and yet Sun made that its business through most of the second half of the 90s.

HP's home field was not PCs and printers, yet they made it their business. You as a company can sell PCs, Unix workstations, and proprietary minicomputer-derived systems. Your OS doesn't have to conquer all.

> Perl and VMS were awesome in their original home fields, but were not quick to, or even able to, capitalize on the new directions and workloads customers grew to want.

I would point out that VMSCluster backed by DEC storage systems was exactly a growth field that SHOULD have continued to be a great seller in the age of massive growth for internet/networking.

Had VMS run on MIPS and by 1990 DEC had sold MIPS Server that could run single or multi-core Unix or VMS (with support for VMSCluster over commodity Ethernet), then I think VMS could have done very well. But VMS was trapped on under-performing insanely expensive VAX systems.

> Proprietary environments weren't designed to ever escape their home fields, nor were their companies set up to profit from opening up / spreading out.

The market for VAX/VMS was large server systems. And you don't need to 'escape' that to make money, just as IBM made money without escaping their mid-range and high-end mainframe business. What you need to do is execute and have a best-in-class product in that segment. Continue to serve the massive customer base you have and compete for those you don't have.

That segment was almost continuously growing, so even if you just keep market share, you're going to do really well.

The issue was that from about 1986 the competition, namely Sun and HP, started to compete much better, and later IBM joined. Meanwhile DEC continued to execute worse and worse, fluttering around without a clear strategy.

And when their core systems didn't do well, their other systems, like storage, suffered too, because those were VAX only. DEC's storage division was actually successful once it started to become more commodity. Their workstation products and PCs did well when they were closer to commodity.

So I think it's wrong to suggest these companies couldn't have profited from more openness and less vertical integration. DEC Storage, DEC Printers, DEC Networking, DEC Tape should all have embraced spreading out. But in many cases Olson refused many ideas in that direction.

Here is my list of the biggest issues in execution:

1. Complete mishandling of micro computers. Olson's idea of 3 competing internal products launched on the same day.

2. Complete mishandling of workstations. Refusing to authorize a VMS-based workstation despite many in the company pushing for something like it. Of course Apollo were former DEC people, and many DEC people ended up at Sun for that exact reason (including Bernard Lacroute, essentially Sun's CTO).

3. Failure to develop RISC despite MANY internal teams all convinced that it was the right decision. Then trying to unify on one thing, PRISM. Then changing what PRISM was many times over, never making a decision. Then eventually deciding on 32-bit PRISM specifically only for workstations, narrowly focused on beating Sun, rather than revamping the VAX line around this new architecture.

4. Then canceling PRISM in favor of MIPS, but then not making 32-bit MIPS their new 32-bit standard and porting VMS to it. They even had the license to develop their own MIPS chip, or could have even just acquired MIPS (like SGI later did). MIPS had a customer base and, with DEC's engineering teams and fab power behind it, could have done very well. MIPS was Ultrix only, leading to a situation where their Ultrix product was better price/performance than their VMS product.

5. Believing high-end computers would continue not to be built from microchips. Despite their own VAX CMOS teams putting out very high quality chips, they had like 3-4 different high-end VAX teams all producing machines that were noncompetitive already by 1988. Literally billions of $ wasted on machines that had no market. VAX 9000 being the worst offender, but not the only offender. Ironically they had the exact VMSCluster software you needed to sell clusters of mid-range RISC servers, rather than individual mainframes.

6. After the initial micro-computer failure, they also didn't handle the PC ecosystem well. Olson didn't like the PC, and despite DEC having lots of knowledge and the infrastructure to become a clone builder, they didn't do great with it. No reason DEC couldn't have done what HP did; HP was also a minicomputer company that started getting into clones. DEC had more experience mass manufacturing thanks to their massive terminal business.

7. Refusing to see reality that was clear. Not down-sizing or adjusting to reality in any way. And then eventually downsizing in a way that cost literally billions, giving people deals that were actually insane in how generous they were (at least in the first few waves). This essentially burned their 80s VAX war-chest to the ground.

8. While Alpha was amazing and a success story, it was also 'too early' to market. Most people simply didn't need 64-bit yet. HP for example did an analysis of the market and came to the conclusion that 32-bit would be fine for another 1-2 generations. DEC continuing not to do RISC-based VMS on 32-bit killed their market in that range. And Alpha wasn't optimized for Unix because, again, this VMS-Unix split thinking. They bet on Alpha becoming the standard, but there was zero chance of Sun, HP, IBM, or SGI adopting it. And even when they had the opportunity to move it toward being widely adopted, by selling to Apple, they didn't really even want to do that. Instead of AIM we could have had ADM. Gordon Moore also tried to get Intel to adopt Alpha for their 64-bit, but again, no deal. Intel went with HP and Itanium.

9. This is basically what led to Olson finally being removed. But then they gave the job to Robert Palmer. And he was just the wrong person for the job; he got the job because he really wanted the job. He was a semiconductor guy who really had no clue whatsoever on how to turn around a systems company. He invested too much in semiconductors and not enough in figuring out what the key issue with their core product line was or how to come up with new products. And he quickly pivoted not to saving the company, but to restructuring it to sell it. And the board was complicit in this 'strategy', selling off long-term profitable units for short-term cash.

10. They had amazing legal leverage on Microsoft and Intel, literally had them dead to rights on major, major violations, and literally fumbled the bag on both of them. Two of the most successful companies in the 90s, and DEC was absolutely vital to their success, and DEC failed to do much with either. HP got a deal with Intel that was 10,000x better with no legal leverage.


Asking about Y (or Z, or some other problem a few layers down) is common when yak shaving. Aka doing the thing that's needed to do the thing that's needed to do X. Not to be confused with the also-present problem of ADHD sequential distraction by some other unrelated problem (possibly one sighted along the way to eventually get X done).

It's a gross idealization that every problem can be directly solved, or is "shovel ready." In my world there are often oodles of blockers, dependencies, and preparations that have to be put in place to even start to solve X. Asking about Y and Z along the way? Par for the course.


I had exactly this reaction when gradual typing came to Python. "Do we really need this??"

But over time, I've grown to love it. Programming is communication—not just with the machine, but with other developers and/or future me. Communicating what types are expected, what types are delivered, and doing so in a natural, inline, graceful way? Feels like a big win.


But you use types not to communicate with other people - you use them to give more hints to the python interpreter. Otherwise you could use comments. :)


type annotations in python are essentially structured comments with special syntactic support and runtime introspection facilities (i.e. you can get at the annotations from within the code). they are explicitly not "types" as far as the interpreter is concerned, you can say e.g. `x: str = 42` and python will be fine with it. the value comes from tooling like type checkers and LSPs that work with the annotations, and from metaprogramming libraries like dataclasses and pydantic that introspect them at runtime and use them to create classes etc.
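a tiny sketch of both halves of that (nothing below is enforced by the interpreter itself):

    from dataclasses import dataclass

    x: str = 42                  # runs fine; the annotation is never enforced
    print(__annotations__)       # {'x': <class 'str'>} -- module-level introspection

    @dataclass
    class Point:                 # dataclass reads the annotations to generate
        x: int                   # __init__, __repr__, __eq__, ...
        y: int

    print(Point(1, 2))           # Point(x=1, y=2)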


Sure you could just use in-program comments and/or external annotation files. There were a handful of standards for doing exactly that... NumPy, Google, RST/Sphinx, Epydoc, Zope, ...probably a few others I'm forgetting. And there were external "stub" definition files like `.pyi`. Not exactly code, but closely-linked metadata. Ruby seems to be trying something like that with its `.rbs` files.

IME not until there was a sufficiently clean, direct, and common way to add typing (PEP 484 and its follow-ons) did the community actually coalesce, standardize, and practically start to use type information on a broad basis. We needed there to be one default way of sufficient attractiveness, not 14 competing, often incompatible, and rather inconvenient options.

You may look at built-in typing as a way to communicate with interpreters, compilers, type-checking and linting tools, and/or editor language support engines. Certainly the old purely static typing languages looked at them this way, out of necessity. The compiler, linker, loader, ABI chain has to know the bit-width of your objects, and pronto, because they're going right on the stack!

But in a highly dynamic language that doesn't strictly need types to run the code, typing is as much a signal to future me and my colleagues and collaborators as it is to any mechanism or tool. It signals structure and intent that we'd otherwise have to intuit or decipher. That some tools can use it too, just as they do in fully static languages, is great and useful but not the whole enchilada.


Wayland is the IPv6 of the windowed display world.

The bright, complete, unfettered future always just a few more versions and a few more years over the horizon.


> I was pretty appalled to see such a basic mistake from a security company, but then again it is Okta.

Oh. Em. Gee.

Is this a common take on Okta? The article and comments suggest...maybe? That is frightening considering how many customers depend on Okta and Auth0.


We evaluated them a while ago but concluded it was amateur-hour all the way down. They seem to be one of those classic tech companies where 90% of resources go to sales/marketing, and engineering remains "minimum viable" hoping they get an exit before anyone notices.


I'm convinced Okta's entire business model is undercutting everyone with a worse product with worse engineering that checks more boxes on the feature page, knowing IT procurement people aren't technical and think more checkboxes means it's better.


"Enterprise Software" is what Tobi Lutke called that in a keynote once. A focus on hitting as many feature checkboxes as possible at the cost of quality.


When I was working at Auth0 the repeated phrase about the value of getting bought by Okta was that they had the best sales org in the industry. It was implied that this was why we were getting bought by them, instead of the reverse.


Okta is already public and has been for years. They had an exit already. For whatever reason, many large organizations trust them.


Okta sucks balls. That's from my perspective as a poor sod who's responsible for some sliver of security at this S&P listed megacorp that makes its purchasing decisions based on golf partners.


Among the reasons to leave my last job was a CISO and his minion who insisted spending $50k+ on Okta for their b2b customer and employee authentication was a bulletproof move.

When I brought it up, they said they didn't have anyone smart enough to host an identity solution.

They didn't have anyone smart enough to use Okta either. I had caught multiple dealbreakers-for-me, such as dubious / conflicting config settings resulting in exposures, actual outages caused by forced upgrades, not to mention their lackluster responses to bona fide incidents over the years.

I use Authentik for SSO in my homelab, fwiw.


Keycloak is a great authentication suite, not that hard to configure and rock solid.

I'll never understand this thinking.


Auth providers are among the hardest systems to secure. It's not just a question of the underlying code having vulnerabilities - for companies with Internet logins, auth systems (a) are exposed to the internet, (b) are not cache-friendly static content, (c) come under heavy expected load, both malicious (the DDoS kind) and non-malicious (the viral product launch kind), (d) if they ever go down, the rest of the system is offline (failsafe closed).

It's hardly surprising that the market prefers to offload that responsibility to players it thinks it can trust, who operate at a scale where concerns about high traffic go away.


I rather disagree on the difficulty of pulling it off. The problem space is well-defined and there aren't that many degrees of freedom in functional design.

I'll concede there is some complexity in integrating with everything and putting up with the associated confusion. And granted the stakes are a little raised due to the nature of identity and access, and like you point out what could go wrong. Implementation is annoying, both writing the identity solution and then deploying and operating it. But the deployment & operation part is still there if you go with Okta or 1Login or Cognito or whomever.

The implementation is a capital type thing that is substantially solved already with the various F/OSS solutions people are mentioning - it's just a docker pull and some config work to get it going into a POC.

There are much harder problems in tech IMO, anything ill-defined for starters.

The C-level folks seem to think they are buying some kind of indemnity with these "enterprise" grade solutions, but there is no such thing. They'll even turn it around and take Okta's limitations as existential--"if even Okta doesn't get it right, there is no way we could pull it off". Out of touch, or less politely, delusional.


> The C-level folks seem to think they are buying some kind of indemnity with these "enterprise" grade solutions, but there is no such thing.

Something you need to understand about executives, is that they're not really individual God-like figures ruling the world; at the end of the day they answer to their CEO, to their Boards, and want to look good to executive recruiters who might consider them for a C-level role at a larger company for higher pay; and a good many of them lead not-so-affordable lifestyles to keep up appearances among aforementioned folk and might be worse off in their personal finances than you.

All of which is just to say - "nobody got fired for buying IBM." It might be tragic, but going with peer consensus is what helps them stay with their in-crowd. The risks for departing from the herd (holding up deals on compliance concerns, possibly higher downtime for whatever reason, difficulty of hiring people who demand cheaper salaries but already know an Industry Standard Solution) are too high compared to the potential benefits (lower total cost of ownership, increased agility, better security/engineering quality, higher availability assuming for the sake of argument that is actually the case), particularly when increased agility and better quality are difficult to quantify, higher availability is hard to prove (Okta and peers don't exactly publish their real availability figures), and the difference in TCO is not enough to move the needle.

It's very rare to find executives who care more about their company's engineering than their peer group - folks who care that much rarely become executives in the first place.


Keycloak has various vulnerabilities they haven't even responded to after a month of reporting them.


Disclose publicly then, if you haven't already?

Definitely makes things safer than users not knowing about them.


Are these documented anywhere? A full month with no response at all puts you firmly in “responsible disclosure” territory if they are not already publicly known. I'm pretty sure DayJob uses keycloak (or at least is assessing it - I'm a bit removed from that side of things these days) so that information could be pertinent to us.


Yeah, I have the misfortune of inheriting a SaaS that built on auth0, and the whole stack is rather clownish. But they tick all the regulatory boxes, so we're probably stuck with them (until they suffer a newsworthy breach, at any rate...)


Okta and auth0 are, fundamentally, two distinct products – conceived, designed, and engineered by entirely separate entities.

auth0, as a product, distinguished itself with a modern, streamlined architecture and a commendable focus on developer experience. As an organisation, auth0 further cemented its reputation through the publication of a consistently high-calibre technical blog. Its content goes deeply into advanced subjects such as fine-grained API access control via OIDC scopes, RBAC, ABAC and LBAC models – a level of discourse rare amongst vendors in this space.

It was, therefore, something of a jolt – though in retrospect, not entirely unexpected – when Okta acquired auth0 in 2021. Whether this move was intended to subsume a superior product under the mediocrity of its own offering or to force a consolidation of the two remains speculative. As for the fate of the auth0 product itself, I must admit I am not in possession of definitive information – though history offers little comfort when innovation is placed under the heel of corporate, IPO driven strategy.


Apart from auth0 getting hacked, before getting acquired by Okta. [0]

[0] https://auth0.com/blog/auth0-code-repository-archives-from-2...


What is the point that you are trying to make?

Okta has committed to, and has had a consistent track record of, delivering at least one full-scale security breach and consistent user experience degradation to their customers every year – and completely free of charge.


Absolutely. And auth0 was also delivering that, before acquisition. It isn't a change of routine.


Auth0 spent more time documenting and blogging about standards than documenting their own software. It was a bit bizarre. Their documentation was absent and/or terrible, IIRC.


Indeed, although I am in no position to make comments on the quality of their own product specific documentation.

Surprisingly, I have found that many people struggle to wrap their heads around the relatively simple concepts of RBAC, ABAC and, more recently, LBAC. auth0 did a great job at unfolding such less trivial concepts into a language that made them accessible to a wider audience, which, in my books, is a great feat and accomplishment.


> until they suffer a newsworthy breach, at any rate...

I suppose it has been a couple years since the last... [0]

[0] https://techcrunch.com/2023/11/29/okta-admits-hackers-access...


Yep. They're an Enterprise™ company. That means they prioritize features purchasing departments want, not functionality.


And when something doesn't work well like their super custom LDAP endpoint, talking to support is really painful.


We've recently moved to Auth0. I'm no security expert. What's the recommended alternative that provides the same features and price, but without the risks suggested here?


Heya, I work for FusionAuth. We have a comparable product for many use cases.

Happy to chat (email in profile), or you can visit our comparison page[0] or detailed technical migration guide[1].

0: https://fusionauth.io/compare/fusionauth-vs-auth0

1: https://fusionauth.io/docs/lifecycle/migrate-users/provider-...


It's not difficult to implement OAuth2. There are good libraries, and even the spec is not complicated. Or use AWS Cognito.


Constructing a new OAuth2/OIDC Identity Provider from the ground up is an undertaking fraught with complexity – and not of the elegant variety. The reasons are numerous, entrenched, and maddeningly persistent.

1. OAuth2 and OIDC are inherently intricate and alarmingly brittle – the specifications, whilst theoretically robust, leave sufficient ambiguity to spawn implementation chaos.

2. The proliferation of standards results in the absence of any true standard – token formats and claim structures vary so wildly that the notion of consistency becomes a farce – a case study in design by committee with no enforcement mechanism.

3. ID tokens and claims lack uniformity across providers – interoperability, far from being an achievable objective, has become an exercise in futility. Every integration must contend with the peculiarities – or outright misbehaviours – of each vendor’s interpretation of the protocol. What ought to be a cohesive interface degenerates into a swamp of bespoke accommodations.

4. There is no consensus on data placement – some providers, either out of ignorance or expedience, attempt to embed excessive user and group metadata within query string parameters – a mechanism limited to roughly 2k characters. The technically rational alternative – the UserInfo endpoint – is inconsistently implemented or left out entirely, rendering the most obvious solution functionally unreliable.

Each of these deficiencies necessitates a separate layer of abstraction – a bespoke «adapter» for every Identity Provider, capable of interpreting token formats, claim nomenclature, pagination models, directory synchronisation behaviour, and the inevitable, undocumented bugs. Such adapters must then be ceaselessly maintained, as vendors alter behaviour, break compatibility, or introduce yet another poorly thought-out feature under the guise of progress.

All of this – the mess, the madness, and the maintenance burden – is exhaustively documented[0]. A resource, I might add, that reads less like a standard and more like a survival manual.

[0] https://www.pomerium.com/blog/5-lessons-learned-connecting-e...


None of this rings true, and I've implemented both OAuth2 and OpenID Connect multiple times, also reading the specs, which are quite direct. I'm sure you're right that vendors take liberties -- that is almost always the case, and delinquency of e.g. Okta is what started this thread.


It's an AI bot. One for @dang


By the same token, if one can use the keyboard, it does not make them a human. Parrots (the non-stochastic kind) and monkeys spring to mind.


Why do you suspect that?


I have also designed and implemented enterprise grade OAuth2 / OIDC IdP's.

Beyond the aforementioned concerns, one encounters yet another quagmire – the semantics of OIDC claims, the obligations ostensibly imposed by the standard, and the rather imaginative ways in which various implementations choose to interpret or neglect those obligations.

Please allow me to illustrate with a common and persistently exasperating example: user group handling, particularly as implemented by Okta and Cognito. The OIDC spec, in its infinite wisdom, declines to define a dedicated claim for group membership. Instead, it offers a mere suggestion – that implementers utilise unique namespaces. A recommendation, not a mandate – and predictably, it has been treated as such.

In perfect accordance with the standard’s ambiguity, Okta provides no native «groups» claim. The burden, as always, is placed squarely upon the customer to define a custom claim with an arbitrary name and appropriate mapping. User group memberships (roles) are typically sourced from an identity management system – not infrequently, and regrettably, from an ageing Active Directory instance or, more recently, a new and shiny Entra instance.

Cognito, by contrast, does define a claim – «cognito:groups» – to represent group membership as understood by Cognito. It is rigid, internally coherent, and entirely incompatible with anything beyond its own boundaries.

Now, consider a federated identity scenario – Okta as the upstream identity provider, federated into Cognito. In this scenario, Cognito permits rudimentary claim mapping – simple KV rewrites. However, such mappings do not extend to the «cognito:groups» structure, nor do they support anything approaching a nuanced translation. The result is a predictable and preventable failure of interoperability.

Thus, despite both platforms ostensibly conforming to the same OIDC standard, they fail to interoperate in one of the most critical domains for medium to large-scale enterprises: user group (role) resolution. The standard has become a canvas – and each vendor paints what they will. The outcome, invariably, is less a federation and more a fragmentation – dressed in the language of protocol compliance.
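A rough sketch of the sort of per-provider shim this forces upon one, assuming the decoded ID-token claims arrive as a plain dict; note that the Okta claim name ("groups" below) is a hypothetical customer-configured custom claim, not anything mandated by the standard:

    def extract_groups(claims: dict, provider: str) -> list[str]:
        """Normalise group membership across providers that all 'speak OIDC'."""
        if provider == "cognito":
            return list(claims.get("cognito:groups", []))
        if provider == "okta":
            # Okta ships no built-in groups claim; "groups" here stands in for
            # whatever custom claim name the customer happened to define.
            return list(claims.get("groups", []))
        raise ValueError(f"no group-claim mapping for provider {provider!r}")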

> I've implemented both OAuth2 and OpenID Connect multiple times

Whilst I do not doubt that you have made multiple earnest attempts to implement the specification, I must express serious reservations as to whether the providers in question have ever delivered comprehensive, interoperable support for the standard in its entirety. It is far more plausible that they focused on a constrained subset of client requirements, tailoring their implementation to satisfy those expectations alone at the IdP level and nothing else. Or, they may have delivered only the bare minimum functionality required to align themselves, nominally, with OAuth2 and OIDC.

Please allow me to make it abundantly clear: this is neither an insult aimed at you nor an indictment of your professional capabilities. Rather, it is a sober acknowledgement of the reality – that the standard itself is both convoluted and maddeningly imprecise, making it extraordinarily difficult for even seasoned engineers to produce a high-quality, truly interoperable implementation.

> I'm sure you're right that vendors take liberties -- that is almost always the case, and delinquency of e.g. Okta is what started this thread.

This, quite precisely, underscores the fundamental purpose of a standard – to establish a clear, concise, and unambiguous definition of that which is being standardised. When a standard permits five divergent interpretations, one does not possess a standard at all – one has five competing standards masquerading under a single name.

Regrettably, this is the exact predicament we face with OAuth2 and OIDC. What should be a singular foundation for interoperability has devolved into a fragmented set of behaviours, each shaped more by vendor discretion than by protocol fidelity. In effect, we are navigating a battlefield of pluralities under the illusion of unity – and paying dearly for the inconsistency.

Needless to say, OAuth2 and OIDC are still the best that we have had, especially compared to their predecessors, and by a large margin.


https://goauthentik.io/#comparison

They have an enterprise version now (mostly for support and bleeding edge features that later make it into the open source product.)

It's pretty easy to self-host. I have been doing it for a small site for years, and I couldn't even get any other open source solution to work. They are mostly huge with fewer features.


No provider has been able to match Auth0 actions unfortunately. Auth0 allows you to execute custom code at any point in the auth lifecycle and allow/deny based on that or enrich user attributes. Super useful when you have a legacy system that is hard to migrate away from. If anyone has any recommendations I'm all ears


I work for FusionAuth.

We have lambdas (basically JavaScript code that can make API calls[0] and be managed and tested[1]) that execute at fixed points in the auth lifecycle:

- before a login is allowed

- before a token is created

- after a user returns from a federated login (SAML, OIDC, etc)

- before a user registers

And more[2].

And we're currently working on one for "before an MFA challenge is issued"[3].

There are some limitations[4]. We don't allow, for instance, loading of arbitrary JavaScript libraries.

Not sure if that meets all your needs, but thought it was worth mentioning.

0: https://fusionauth.io/docs/extend/code/lambdas/lambda-remote...

1: https://fusionauth.io/docs/extend/code/lambdas/testing

2: full list here: https://fusionauth.io/docs/extend/code/lambdas/

3: https://github.com/FusionAuth/fusionauth-issues/issues/2309

4: https://fusionauth.io/docs/extend/code/lambdas/#limitations


thank you I will check you guys out


I am not qualified to say whether Authentik can do all of what you need but it does allow custom python code in a lot of places. Perhaps you can ask whether what you need is available directly. They are very active in Discord.


(authentik maintainer here) It does! Also, not only in the authentication process, but also during individual authorization flows, and in a few other places as well, like when a user edits their settings, or whenever an event occurs (basically whenever something happens in authentik), but that's more a reactive process than inline.


Thanks for the mention! (Authentik Security CEO here.) We've become something of Okta migration experts at this point... Cloudflare moved to us a couple years back after they had to be the ones to let Okta know it'd been breached yet again. [1]

[1] https://blog.cloudflare.com/how-cloudflare-mitigated-yet-ano...


Cloudflare??? Damn, that is HUGE! Congratulations. You guys have a super solid product full of features and a decent founder. Maybe enterprises don't care about my favorite feature, but it makes securing EVERYTHING a breeze. Embedded proxy! That is GOAT.


If you’re looking for b2b identity, I’m the founder of WorkOS and we power this for a bunch of apps. Feel free to email me, mg@workos.com


We use WorkOS to support some of our offerings but not for our own corporate identity/authentication. I’m not close to the project so I don’t have experience using WorkOS but definitely curious about replacing Okta.


It's not the same as Auth0, but you might be interested in Zitadel, if only because it's open source and you can use it hosted or self-hosted.

(Disclaimer: I work for Zitadel).


Okta is the worst. Their support is the worst (we always got someone overseas who barely seemed to understand anything; probably they were trained on some corpus) and would take forever to loop in anyone that could actually help.


Honestly, I'm expressly not a big fan of outsourcing authentication/authorization... and even then, my personal list of trust is VERY limited. For the most part, I'll use Azure Entra (formerly Azure AD) and Windows AD only because of their entrenchment with other systems, and generally don't have much need to build more on top of what they already provide in the box.

That said, a lot of these things are very well documented... there are options both open-source and paid, and combinations of the two, not to mention self-hosted options for both.

I've worked on auth systems used in banking and govt applications as well as integration with a number of platforms including Okta/Auth0. And while I can understand some of the appeal, it's just such a critical point of potential failure, I don't have that much trust in me.

I wish I could have open-sourced the auth platform I wrote a few years ago, as it is pretty simple in terms of both what it can do and how to setup/configure/integrate into applications. Most such systems are just excessively complex for very little reason with no reasonable easy path.


Yea auth0 is an absolute clown show.


"I always want to say to people who want to be rich and famous: 'try being rich first'. See if that doesn't cover most of it. There's not much downside to being rich, other than paying taxes and having your relatives ask you for money. But when you become famous, you end up with a 24-hour job." -- Bill Murray


I've heard a similar maxim that being rich is fantastic, rich and famous is good, poor is bad, poor and famous is a nightmare.


I was poor and recognizable in a tiny niche related to open source tools and 100% this was true. The recognition creates envy and ambitious people invest extra effort to sabotage you... Often, people who are rich see you as a threat and go all out war on you... And you don't have any buffer or support so you have to be 10x better just to stay afloat. Convincing people to work with you is much harder since you can't offer them any money and must offer pure equity... And your reputation, which fades over time, is the only thing that makes such equity potentially valuable.

OP has the problem that his product is much more well known than he is. That's probably why he is not richer. Though at least his product is a mainstream brand by now. He can get recognition by association once he does the reveal "I'm the guy who created Mastodon"; this creates opportunities... though perhaps not as big opportunities as one may think. It depends on the degree of control he has over the product. In general, with open source or other community-oriented products, the control is limited.


The quote from the “jackass” crew is the opposite. If you’re famous you don’t need money. You just walk up and ask.


In addition to that: With money you can always buy popularity easily, but converting popularity into money is hard work at least. I'd even say that turning fame into significant wealth is an art only few have truly mastered.


And if you can't buy popularity you can always buy a really high wall.


> With money you can always buy popularity easily

I don’t know if Elon Musk is an example or a counter-example. Maybe both?


What he can’t buy is being at peace and content with the popularity he already has


Well he was doing a good job at buying popularity until he fired his PR team so..


Elon Musk has never had a PR team... Which is maybe the better point. (Or if he did, he hasn't had one in the 15+ years I've been watching him.)


After the taking Tesla private tweet that got him in trouble with the SEC he hired some people, but that didn't last long. Tesla had a PR team until a few years ago but he probably did not listen to them very much.


His companies have certainly had PR teams at various points in time. Tesla even does advertising now. But the topic is specifically about the man himself, which is independent of his companies. Most very rich but (un)popular people have personal PR teams.


Sadly, I suspect he's reasonably successful at being popular amongst the people he wants to be popular with.

Taylor Swift is super popular in the demographic she plays to, while being unpopular with, say, techno or metal fans.

Musk is super popular in the outspoken nazi demographic. (And has fallen way way out of popularity with huge parts of demographics that he used to be popular in, like electric car people, home solar/battery people, and spaceflight fans.)


> Musk is super popular in the outspoken nazi demographic.

It's sad seeing such poor misinformed takes like this on hacker news. I guess Marc Andreessen and the President/Co-Founder of Stripe, among many others, are nazis now. It's well known that among the group that I would call "pro-America technologists" that he's highly appreciated and many want to figure out how to replicate him.

> and spaceflight fans.

As a spaceflight fan who was a fan of Musk all the way back in ~2012, I'm still a fan of him today, even if I have more issues with him today than I did back then. I can confidently say that many spaceflight fans feel the same as I on this. People overstate his controversial opinions (and being a nazi is not one of them) and understate his past achievements (and continued achievements).


> It's well known that among the group that I would call "pro-America technologists" that he's highly appreciated and many want to figure out how to replicate him.

> As a spaceflight fan who was a fan of Musk all the way back in ~2012, I'm still a fan of him today

Elon is a rare human being.

He is pretty much what his haters think of him (a political/social troll/child).

And he is also what his worshipers think (a generationally incredible technical and business visionary).

Most people, whether ordinary or extraordinary themselves, have trouble with dissonance. Elon is dissonance. They see a joke or a god.

A small segment sees both sides clearly. I find it a painful experience. Overlapping extremes of inspiration and damage. But reality isn't all bubblegum and glitter go pops.


> It's well known that among the group that I would call "pro-America technologists"

You should start calling them “pro-India technologists”


[flagged]


Yes this is what people like you do, go around calling everyone nazis and making up hitler salutes.



He’s an example. He has to burn massive amounts of money to counteract the fact that he wants to be the town asshole in public constantly.

If someone who had 5 dollars to their name acted like Elon Musk no one on this forum would question hating the fucker, but he’s got cash so some set of people think he might be right


[flagged]


It's effective for buying popularity with a crowd that likes nazi salutes.


It’s certainly a good way to get people like you to talk about him.

