
For a good example of this sort of pattern in the real world, take a look at the Zig compiler source code. I'm sure others might do it but Zig definitely does. I have a now very outdated series on some of the Zig internals: https://mitchellh.com/zig/parser And Andrew's old DoD talk is very good and relevant to this: https://vimeo.com/649009599

More generally, I believe it's fair to call this a form of handle-based design: https://en.wikipedia.org/wiki/Handle_(computing) Which is EXTREMELY useful for a variety of reasons and imo woefully underused above the lowest system level.


My hypothesis is that handles are underused because programming languages make it very easy to dereference a pointer (you just need the pointer) whereas "dereferencing" a handle requires also having the lookup table in hand at the same time, and that little bit of extra friction is too much for most people. It's not that pointers don't require extra machinery to be dereferenced, it's just that that machinery (virtual memory) is managed by the operating system, and so it's invisible in the language.
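To make the pattern concrete, here's a minimal sketch of a generational handle table (a "slot map") in Rust; the names are illustrative, not from any particular library. A handle is an index plus a generation counter, so a stale handle is detected instead of silently aliasing a reused slot, and note how every "dereference" needs the pool in hand, which is exactly the friction described above:

```rust
// Sketch of a generational handle table: handles carry an index plus a
// generation, so freed-and-reused slots don't alias old handles.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Handle {
    index: usize,
    gen: u32,
}

struct Pool<T> {
    slots: Vec<(u32, Option<T>)>, // (generation, value)
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { slots: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> Handle {
        // Reuse a freed slot if one exists, otherwise grow.
        if let Some(i) = self.slots.iter().position(|(_, v)| v.is_none()) {
            self.slots[i].1 = Some(value);
            Handle { index: i, gen: self.slots[i].0 }
        } else {
            self.slots.push((0, Some(value)));
            Handle { index: self.slots.len() - 1, gen: 0 }
        }
    }

    // "Dereferencing" requires both the handle AND the pool.
    fn get(&self, h: Handle) -> Option<&T> {
        match self.slots.get(h.index) {
            Some((gen, Some(v))) if *gen == h.gen => Some(v),
            _ => None,
        }
    }

    fn free(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index) {
            if slot.0 == h.gen && slot.1.is_some() {
                slot.1 = None;
                slot.0 += 1; // invalidate all outstanding handles to this slot
            }
        }
    }
}
```

After a free, the slot's generation is bumped, so an old handle to a reused slot returns `None` instead of the new occupant.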

My current research is about how to make handles just as convenient to use as pointers are, via a form of context: like a souped-up version of context in Odin or Jai if one is familiar with those, or like a souped-up version of coeffects if one has a more academic background.


I think it's a generic programming problem: pointers are easier because given a pointer you can easily get both the pointee's type (a deref) and its location (memory). With index-based handles into containers you can no longer say that, given a handle `H` (type H = u32), you can use it to get a type `T`. Not only that, you've also introduced the notion of "where": even if for each type `T` there exists a unique handle type `H`, you don't know which container instance the handle belongs to. What you need is a unique handle type per container instance. So "Handle of Pool<T>" != "Handle of Pool<T>" unless the Pool is bound to the same variable.

As far as I know no language allows expressing that kind of thing.


I think actually Scala does exactly this style of inferring the container instance from its type: https://docs.scala-lang.org/scala3/book/ca-context-parameter...

But from what I understand (being a nonexpert on Scala), this scheme actually causes a lot of problems. I think I've even heard that it adds more undecidability to the type system? So I'm exploring ways of managing context that don't depend on inferring backward from the type.


> What you need is a unique handle type per container instance.

You can do this with path-dependent types in Scala, or more verbosely with modules in OCaml. The hard part is keeping the container name in scope wherever these handle types are used: many type definitions will need to reference the container handle types. I'm currently trying to structure code this way in my pet compiler written in OCaml.
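A rough Rust analogue of this per-instance typing uses an invariant lifetime "brand" (the technique behind crates like `generativity` and GhostCell). This is only a sketch under that assumption, and the names are made up for illustration:

```rust
use std::marker::PhantomData;

// Invariant lifetime marker: 'brand can neither shrink nor grow, so two
// differently-branded pools have incompatible handle types.
type Brand<'brand> = PhantomData<fn(&'brand ()) -> &'brand ()>;

#[derive(Clone, Copy)]
struct Handle<'brand> {
    index: usize,
    _brand: Brand<'brand>,
}

struct Pool<'brand, T> {
    items: Vec<T>,
    _brand: Brand<'brand>,
}

impl<'brand, T> Pool<'brand, T> {
    fn alloc(&mut self, value: T) -> Handle<'brand> {
        self.items.push(value);
        Handle { index: self.items.len() - 1, _brand: PhantomData }
    }

    // Only handles carrying THIS pool's brand typecheck here.
    fn get(&self, h: Handle<'brand>) -> &T {
        &self.items[h.index]
    }
}

// The higher-ranked closure forces a fresh brand per call, so handles
// from one with_pool call cannot be used with another pool.
fn with_pool<T, R>(f: impl for<'brand> FnOnce(Pool<'brand, T>) -> R) -> R {
    f(Pool { items: Vec::new(), _brand: PhantomData })
}
```

Inside one `with_pool` call, handles typecheck only against that pool; passing a handle from a nested `with_pool` call to the outer pool fails to compile, which is the "Handle of Pool<T> != Handle of Pool<T>" distinction described above.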


Great summary and I think your argument is sound.

Non-AMD, but Metal actually has a [relatively] excellent debugger and general dev tooling. It's why I prefer to do all my GPU work Metal-first and then adapt/port to other systems after that: https://developer.apple.com/documentation/Xcode/Metal-debugg...

I'm not like a AAA game developer or anything so I don't know how it holds up in intense 3D environments, but for my use cases it's been absolutely amazing. To the point where I recommend people who are dabbling in GPU work grab a Mac (Apple Silicon often required) since it's such a better learning and experimentation environment.

I'm sure it's linked somewhere there, but in addition to traditional debugging, you can actually emit formatted log strings from your shaders and they show up interleaved with your app logs. Absolutely bonkers.

The app I develop is GPU-powered on both Metal and OpenGL systems, and I haven't been able to find anything that comes near the quality of Metal's tooling in the OpenGL world. People claim a lot of the tooling is equivalent, but as someone who has actively used both, I strongly feel it doesn't hold a candle to what Apple has done.


My initiation into shaders was porting some graphics code from OpenGL on Windows to PS5 and Xbox, and (for your NDA and devkit fees) they give you some very nice debuggers on both platforms.

But yes, when you're stumbling around a black screen, tooling is everything. Porting bits of shader code between syntaxes is the easy bit.

Can you get better tooling on Windows if you stick to DirectX rather than OpenGL?


> Can you get better tooling on Windows if you stick to DirectX rather than OpenGL?

My app doesn't currently support Windows. My plan was to use the full DirectX suite when I get there and go straight to D3D and friends. I have no experience at all on Windows, so I'd love it if someone who knows both macOS and Windows could compare GPU debugging!


Windows has PIX; PIX has been the name of Microsoft's GPU debugging tooling since the Xbox 360. The Windows version is similar, but it relies on GPU-specific debug layers, which are usually handled automatically. Because of that it's not as deep as the console version, but it lets you get by. Most people use RenderDoc on supported platforms though (Linux and Windows); it supports most APIs you can find on those platforms.

PIX predates the Xbox.

Yes, Pix.

https://devblogs.microsoft.com/pix/

This is yet another problem with Khronos APIs: they expect each vendor to come up with a debugger, and some do, some don't.

At least nowadays there is RenderDoc.

On the Web, after a decade, it is still pixel debugging, or trying to reproduce the bug on a native API, because why bother with such devtools.


On web there is SpectorJS https://spector.babylonjs.com/

It offers the basics, but at least it works across devices. You can also trigger the traces from code and save the output, then load it in the extension. Very useful for debugging mobile.

You can just about run Chrome through Nvidia's Nsight (of course you're not debugging WebGL, but whatever it's translated to on the platform), although I recently tried again and it seems to fail...

These were the command-line args I got Nsight to pass to Chrome to make it work:

" --disable-gpu-sandbox --disable-gpu-watchdog --enable-dawn-features=emit_hlsl_debug_symbols,disable_symbol_renaming --no-sandbox --disable-direct-composition --use-angle=vulkan <URL> "

But yeah, I really, really wish the tooling was better, especially for performance tracing; currently it's just disabling and enabling things and guessing...


SpectorJS is kind of abandoned nowadays, it hardly has changed and doesn't support WebGPU.

Running the whole browser rendering stack through it is a masochistic exercise; I'd rather re-code the algorithm in native code, or go back to pixel debugging.

I would argue the state of bad tooling, and how browsers blacklist users' systems, is a big reason studios would rather try out streaming instead of rendering in the browser.


Yeah... I tried to extend Spector's UI; the code base is "interesting", and simple changes seemed way harder than they should have been. A shame though, as it's really the only tool like it for the web.

My favourite though is Safari: the graphics driver crashes all the time, the dev tools normally crash as well, so you have zero idea what is happening.

And I've found that when the graphics crash, the whole browser's graphics state becomes unreliable until you force close Safari and reopen it.


It's a full-featured and beautifully designed experience, and when it works it's amazing. However, it regularly freezes or hangs for me, and I've lost count of the number of times I've had to 'force quit' Xcode or it has just outright crashed. Also, for anything non-trivial it often refuses to profile, and I have to try to write a minimal repro to get it to capture anything.

I am writing compute shaders though, where one command buffer can run for seconds repeatedly processing over a 1GB buffer, and it seems the tools are heavily geared towards graphics work where the workload per frame is much lighter. (With all the AI focus, hopefully they'll start addressing this use case more.)


> However it regularly freezes of hangs for me, and I've lost count of the number of times I've had to 'force quit' Xcode or it's just outright crashed.

This has been my experience too. It isn't often enough to diminish its value for me since I have basically no comparable options on other platforms, but it definitely has some sharp (crashy!) edges.


I didn't even notice who I was replying to at first - so let me start by saying thank you for Ghostty. I spend a great deal of my day in it, and it's a beautifully put together piece of software. I appreciate the work you do and admire your attitude to software and life in general. Enjoy your windfall, ignore the haters, and my best wishes to you and your family with the upcoming addition.

The project I'm mostly working on uses the wgpu crate, https://github.com/gfx-rs/wgpu, which may be of interest if writing cross-platform GPU code. (Though obviously if using Rust, not Zig). With it my project easily runs on Windows (via DX12), Linux (via Vulkan), macOS (via Metal), and directly on the web via Wasm/WebGPU. It is a "lowest common denominator", but good enough for most use-cases.

That said, even with simple shaders I had to implement some workarounds for Xcode issues (e.g. https://github.com/gfx-rs/wgpu/issues/8111). But it's still vastly preferable to other debugging approaches and has been indispensable in tracking down a few bugs.


Yeah, Xcode's Metal debugger is fantastic, and Metal itself is imo a really nice API :]. For whatever reason it clicked much better for me compared to OpenGL.

Have you tried RenderDoc for the OpenGL side? Afaik that's the equivalent of Xcode's debugger for Vulkan/OpenGL.


> To the point where I recommend people who are dabbling in GPU work grab a Mac (Apple Silicon often required) since it's such a better learning and experimentation environment.

I don't know, buying a ridiculously overpriced computer with the least relevant OS on it just to debug graphics code written in an API not usable anywhere else doesn't seem like a good idea to me.

For anyone who seriously does want to get into this stuff, just go with Windows, learn DX12 or Vulkan, and use RenderDoc to help you out. (Or Linux, if you're tired of what Microsoft is turning Windows into; you can still write Win32 applications and just use VK for your rendering, or even DX12 and have it be translated, but then you have to debug VK code while using DX12.) It's not nearly as difficult as people make it seem.

If you've got time you can learn OpenGL (4.6) with DSA to get a bit of perspective on why people might feel the lower-level APIs are tedious, but if you just want to get into graphics programming, just learn DX12/VK and move on. It's a lower-level endeavor, and that'll help you out in the long run anyway, since you've got more control, better validation, and the drivers have less of a say in how things happen (trust me, you don't want the driver vendors to decide how things happen, especially Intel).

P.S.: I like Metal as an API; I think it's the closest any modern API got to OpenGL while still being acceptable in other ways (I think it has pretty meh API validation, though). The problem is really that they never exported the API so it's useless on the actual relevant platforms for games and real interactive graphics experiences.


Is your code easy to transfer to other environments? The Apple vendor lock-in is not a great place for development if the end product runs on servers, unlike AMD GPUs, which can be found on the backend. The same goes for games: most gamers have either an AMD or an Nvidia graphics card, as playing on a Mac is still rare, so the priority should be supporting those platforms.

It's probably awesome to use Metal and everything, but the vendor lock-in sounds like an issue.


It has been easy. All modern GPU APIs are basically the same now unless you're relying on the most cutting edge features. I've found that converting between MSL, OpenGL (4.3+), and WebGPU to be trivial. Also, LLMs are pretty good at it on first pass.

That's pretty cool then!

Same, Metal is a clean and modern API.

Is anyone here doing Metal compute shaders on iPad? Any tips?


> (I think as they were gearing up to be a more attractive target for an exit).

A common conspiracy theory, but not true.


Source: the guy the company was named after

Where did you read that?

Then why move away from open source?

Yeah how would you know?

j/k Love ghostty!


> while that shown in blue is the stapled notarisation ticket (optional)

This is correct, but practically speaking, non-notarized apps are terrible enough for users that this isn't really optional, and you're going to pay your $99/yr Apple tax.

(This only applies to distributed software; if you are only building and running apps for your own personal use, it's not bad, because macOS lets you do that without the scary warnings.)

For users who aren't aware of notarization, your app looks straight up broken. See screenshots in the Apple support site here: https://support.apple.com/en-us/102445

For users who are aware, you used to be able to right click and "run" apps and nowadays you need to actually go all the way into system settings to allow it: https://developer.apple.com/news/?id=saqachfa

I'm generally a fan of what Apple does for security but I think notarization specifically for apps outside the App Store has been a net negative for all parties involved. I'd love to hear a refutation to that because I've tried to find concrete evidence that notarization has helped prevent real issues and haven't been able to yet.


I thought the macOS notarization process was annoying until we started shipping Windows releases.

It’s basically pay to play to get in the good graces of Windows Defender.

I think all-in it was over $1k upfront to get the various certs. The cert company has to do a pretty invasive verification process for both you and your company.

Then — you are required to use a hardware token to sign the releases. This effectively means we have one team member who can publish a release currently.

The cert company can lock your key as well for arbitrary reasons which prevents you from being able to make a release! Scary if the release you’re putting out is a security patch.

I’ll take the macOS ecosystem any day of the week.


The situation on Windows got remarkably better and cheaper recently-ish with the addition of Azure code signing. Instead of hundreds or thousands for a cert it’s $10/month, if you meet the requirements (I think the business must have existed for some number of years first, and some other things).

If you go this route I highly recommend this article, because navigating through Azure to actually set it up is like getting through a maze. https://melatonin.dev/blog/code-signing-on-windows-with-azur...


Thanks for the link. I see it's only available to basically the US, Canada, and the EU though.


That's not easier and cheaper than before. That's how it's always been, only now you can buy the cert through Azure.

For an individual the Apple code signing process is a lot easier and more accessible since I couldn't buy a code signing certificate for Windows without being registered as a business.


> That's how it's always been only now you can buy the cert through Azure.

Where can you get an EV cert for $120/year? Last time I checked, all the places were more expensive and then you also had to deal with a hardware token.

Lest we talk past each other: it's true that it used to be sufficient to buy a non-EV cert for around the same money, where it didn't require a hardware token, and that was good enough... but they changed the rules in 2023.


> it’s $10/month

So $120 a year but no it's only Apple with a "tAx"


Millions of Windows power users are accustomed to bypassing SmartScreen.

A macOS app distributed without a trusted signature will reach a far smaller audience, even of the proportionately smaller macOS user base, and that's largely due to deliberate design decisions by Apple in recent releases.


As you said, you need to have a proper legal entity for about 2 years before this becomes an option.

My low-stakes conspiracy theory is that MS is deliberately making this process awful to encourage submission of apps to the Microsoft Store since you only have to pay a one-time $100 fee there for code-signing. The downside is of course that you can only distribute via the MS store.


The EV cert system is truly terrible on Windows. Worst of all, getting an EV cert isn’t even enough to remove the scary warnings popping up for users! For that you still need to convince windows defender that you’re not a bad actor by getting installs on a large number of devices, which of course is a chicken-and-egg problem for software with a small number of users.

At least paying your dues to Apple guarantees a smooth user experience.


No, this information is wrong (unless it’s changed in the last 7 years). EV code signing certs are instantly trusted by Windows Defender.

Source: We tried a non-EV code signing certificate for our product used by only dozens of users at the time, never stopped showing scary warnings. When we got an EV, no more issues.

In case it makes a difference, we use DigiCert.


Not true for us. We sign with an EV cert (the more expensive one), and my CEO (the only one left who uses Windows) had this very problem. Apparently the first time a newly signed binary is run, it can take up to 15 minutes for Defender to allow it. The first time I saw this it was really annoying and confusing.


Interesting.

I regularly download our signed installer often within a minute of it being made available, never noticed a delay.

Maybe it’s only the very first time Windows Defender sees a particular org on a cert.

I renewed our cert literally on Friday, tested by making a new build of our installer and could instantly install it fine.

Are you sure there was no other non-default security software on your boss's Windows machine?


They did change it, I think after some debacle with Nvidia pushing an update. They seem to want devs to submit their files via their portal now to get rid of the screen: https://www.microsoft.com/en-us/wdsi/filesubmission

I've never submitted our installers to there (or anywhere). I'm often the very first to install new builds (particularly our nightlies) and never had a delay or anything.

Did you install it on the same machine or a different one?

I was always able to install immediately on the same machine.


Wow. I haven't written software for Windows in over a decade. I always thought Apple was alone in its invasive treatment of developers on their platform. Windows used to be "just post the exe on your web site, and you're good to go." I guess Microsoft has finally managed to aggressively insert themselves into the distribution process there, too. Sad to see.


> Windows used to be "just post the exe on your web site, and you're good to go."

That's also one of the main reasons why Windows was such a malware-ridden hellspace. Microsoft went the Apple route to security and it worked out.

At least Microsoft doesn't require you to dismiss the popup, open the system settings, click the "run anyway" button, and enter a password to run an unsigned executable. Just clicking "more details -> run anyway" still exists on the SmartScreen popup, even if they've hidden it well.

Despite Microsoft's best attempts, macOS still beats Windows when it comes to terribleness for running an executable.


I just wish these companies could solve the malware problem in a way that doesn't always involve inserting themselves as gatekeepers over what the user runs or doesn't run on the user's computer. I don't want any kind of ongoing relationship with my OS vendor once I buy their product, let alone have them decide for me what I can and cannot run.

I get that if you're distributing software to the wider public, you have to make sure these scary alerts don't pop up regardless of platform. But as a savvy user, I think the situation is still better on Windows. As far as I've seen there's still always a (small) link in these popups (I think it's SmartScreen?) to run anyway - no need to dig into settings before even trying to run it.


Are you sure? I had not used Windows for years and assumed "Run Anyway" would work. Last month, I tested running an unsigned (self-signed) .MSIX on a different Windows machine. It's a 9-step process to get through the warnings: https://www.advancedinstaller.com/install-test-certificate-f...

Perhaps .exe is easier, but I wouldn't subject the wider public (or even power users) to that.

So yeah, Azure Trusted Signing or EV certificate is the way to go on Windows.


I solved it by putting a "How to install.rtf" file alongside the program.

Another alternative would be to bundle this app: https://github.com/alienator88/Sentinel

It lets you easily unlock an app by drag and drop.


What is the subset of users who are going to investigate and read an rtf file but don’t know how to approve an application via system settings (or google to do so)?


I would say quite a lot of users, because even the previous simple method of right-clicking wasn't that well known, even by power users. A lot of them just selected "allow applications from anyone" in the settings (most likely just temporarily).

In one application I also offered an alternative via a web app, in case they were not comfortable with any of the options.

Also, it's presented in a .dmg file where you have two icons, the app and the "How to install" file. I would say that's quite inviting for investigation :)


You certainly don't need a hardware token; you can store it in any FIPS 140 Level 2+ store. This includes things like Azure Key Vault and AWS KMS.

Azure Trusted Signing is 100% the best choice, but if for whatever reason you cannot use it, you can still use your own cloud store and hook in the signing tools. I wrote an article on using AWS KMS earlier this year: https://moonbase.sh/articles/signing-windows-binaries-using-...

TLDR: Doing this yourself requires a ~$400-500/year EV cert and minuscule cloud costs.


Can confirm this, we use Azure KeyVault and are able to have Azure Pipelines use it to sign our release builds.

We’re (for the moment) a South African entity, so can’t use Azure Trusted Signing, but DigiCert has no issue with us using Azure KeyVault for our EV code signing certificate.

I had ours renewed just this week as it happens. Cost something like USD 840 before tax, don’t have a choice though and in the grand scheme of things it’s not a huge expense for a company.


I have been trying to get people to realize that this is the same or worse for like a year now.

It’s unfortunate it’s come to this but Apple is hardly the worst of the two now.


That's right, there's a similar comparison between the iOS App Store and Android Play Store. Although the annual $99 fee is indeed expensive, the Play Store requires every app to find 12 users for 14 days of internal testing before submission for review, which is utterly incomprehensible, not to mention the constant warnings about inactive accounts potentially being disabled.


In my case, as a developer of a programming language that can compile to all supported platforms from any platform, the signing (and notarization) is simply incompatible with the process.

Not only is such signing all about control (the Epic case is a great example of misuse and a reminder that anyone can be blocked by Apple) it is also anti-competitive to other programming languages.

I treat a platform as open only when it allows running unsigned binaries in a reasonable way (or self-signed, though that already has some baggage of needing to maintain the key). When it doesn't, I simply don't support that platform.

Some closed platforms (iOS and Android[1]) can still be supported pretty well using PWAs, because the apps are fullscreen and self-contained, unlike on the desktop.

[1] depending on if Google will provide a reasonable way to run self-signed apps, but the trust that it will remain open in the future is already severely damaged


The signing is definitely about control, as is all things with Apple, but there are security benefits. It's a pretty standard flow for dev tools to ad-hoc (self) sign binaries on macOS (either shelling out to codesign, or using a cross-platform tool like https://github.com/indygreg/apple-platform-rs). Nix handles that for me, for example.

It makes it easy for tools like Santa or Little Snitch to identify binaries, and gives the kernel/userspace a common language to talk about process identity. You can configure something similar for Linux: https://www.redhat.com/en/blog/how-use-linux-kernels-integri...

But Apple's system is centralized. It would be nice if you could add your own root keys! They stay pretty close to standard X.509.


I’m only aware of two times that Apple has revoked certificates for apps distributed outside of the App Store. One was for Facebook’s Research App. The other was for Google’s Screenwise Meter. Both apps were basically spyware for young teens.

In each case, Apple revoked the enterprise certificate for the company, which caused a lot of internal fallout beyond just the offending app, because internal tools were distributed the same way.

Something may have changed, though, because I see Screenwise Meter listed on the App Store for iOS.

https://www.wired.com/story/facebook-research-app-root-certi...

https://www.eff.org/deeplinks/2019/02/google-screenwise-unwi...


The article is about macOS apps, but you're talking about iOS apps.

Apple revokes macOS Developer ID code signing certificates all the time, mostly for malware, but occasionally for goodware, e.g., Charlie Monroe and HP printer drivers.

Also, infamously, Apple revoked the macOS Developer ID cert of Epic Games, as punishment for their iOS App Store dispute.


The problem is not that it’s $99/year. The problem is that it requires strong ID, and if you are doing it as a company (ie if you don’t want Apple to publicize your ID name to everyone who uses your app) then you have to go through an invasive company verification process that you can fail for opaque reasons unrelated to fraud or anything bad.

The system sucks. I’d love to be able to sign my legitimate apps with my legitimate company, but I don’t wish to put the name on my passport onto the screens of millions of people, and my company (around and operating for 20-ish years now) doesn’t pass the Apple verification for some reason.

I also can’t use auto-enroll (DEP) MDM for this reason.


I think the lack of any human to talk to is the worst part of modern tech. Especially for business, where your income may depend on it. It's beyond cruel to prevent people from operating with no explanation of why and no way to find out how to fix it.


Well, what can I say except that the 80s, with their little independent app vendors shipping floppy disks in little baggies, are long behind us. Computers are now commonplace enough, with all the attendant dangers, that platform vendors are demanding a bit of accountability if you want to ship for their platforms, and unfortunately accountability means money and paperwork. The platform vendors are well within their rights to do so. They have a right to protect their reputations, and when malicious or buggy software appears on their platform, their reputation suffers. Half or more of the blue screens on Windows in the late 90s and early 2000s for instance, were due to buggy third-party drivers, yet Microsoft caught the blame for Windows crashing. It took a new driver model, standards on how drivers are expected to behave, and signed drivers to bring this under control.

The future is signed code with deep identity verification for every instruction that runs on a consumer device, from boot loader through to application code. Maybe web site JavaScript will be granted an exception (if it isn't JIT-compiled). This will be a good thing for most consumers. Until Nintendo cleaned out all the garbage and implemented strict controls on who may publish what on their console, the North American video game market was a ruin. The rest of computing is likely to follow suit, for similar reasons.


Congratulations on writing the most servile corporate apologia I've seen all week. This is a masterpiece of Stockholm syndrome.

"Accountability means money and paperwork." Beautiful. Just beautiful. You know what else means money and paperwork? A protection racket. "Nice app you got there, shame if something happened to it before it reached customers. That'll be 30% please." But sure, let's call extortion "accountability" because Tim Apple said so.

Your driver signing example is chef's kiss levels of missing the point. Microsoft said "hey, sign your drivers so we know they're not malware" they didn't say "only drivers we approve can run, and also we get a cut." You're comparing a bouncer checking IDs to a mafia don enforcing territory. These are not the same thing.

And oh my god, the Nintendo argument. You're seriously holding up Nintendo's lockout chip as consumer protection? The same lockout chip they used to squeeze third-party developers, control game production, and maintain an iron grip on pricing? "Until Nintendo cleaned out the garbage" yeah, they cleaned it out alright, straight into their own pockets. The video game crash was caused by publishers like Atari flooding the market with garbage like E.T., not by independent developers needing more "accountability."

"The future is signed code with deep identity verification for every instruction." Holy hell. You're not describing a security feature, you're describing a prison. You're literally fantasising about a world where every line of code needs corporate permission to execute. That's techno feudalism with RGB lighting.

This isn't about protecting anyone from bugs. It's about trillion-dollar companies convincing people like you that you need their permission to use the computer you bought. And somehow, SOMEHOW, you've decided this is good actually, and the 1980s with its freedom and innovation was the problem.

The fact that you think general-purpose computing is a "danger" that needs to be locked down says everything about how effectively these corporations have trained you to beg for your own chains.


> "The future is signed code with deep identity verification for every instruction." Holy hell. You're not describing a security feature, you're describing a prison. You're literally fantasising about a world where every line of code needs corporate permission to execute. That's techno feudalism with RGB lighting.

Yeah. It's gonna suck for us but the consumer market will eat it up. An Xbox that runs Excel. It's not a fantasy. What do you think the Windows 11 hardware requirements were all about? It's Microsoft's way of getting people to get rid of their old PCs without the necessary security hardware, so that when Windows 12 comes out the PC will be a fully locked down platform.

Again, consumers ate up the NES. They ate up the iPhone. This happened partially because of, not in spite of, the iron grip the vendor had over the platform, because they came with a guarantee (a golden seal even, in Nintendo's case!) that no bad stuff would slip through. It filtered out a lot of good stuff, too, but the market has shown that's a price it's willing to pay for some measure of assurance that the bad stuff will be stopped at the source. It's a business strategy that works in the broader market, even though it harms techies. Techies are a tiny, tiny minority, and it's time they learned their place in the grand scheme of things.


At least you can use your ID. If you want to get a code signing certificate for Microsoft, at least in Switzerland, all the CAs I tried required me to be incorporated. I'm not sure how it is now, but at least a few years ago I couldn't get a code signing certificate as an individual.

Maybe half of the third-party apps in my Applications folder right now are not notarized. It’s really not that big of a deal.


It’s a friction point for potential customers, so we do it with our Electron-based app.

The USD 99 annual fee is almost inconsequential, the painful part was getting a DUNS number (we’re a South African entity) and then getting it to work in a completely automated manner on our build server.

Fortunately, once set up it’s been almost no work since.


It is a big deal. You can no longer just right-click apps to run them; you have to take a trip to a subpanel of System Settings, after clicking through two different dialogs that are designed to scare you into thinking something is wrong (one mentions malware by name).

For normal users this might as well be impossible.

Remember, your average user needs a shortcut to /Applications inside the .dmg image, otherwise they won’t know where to drag the app to install it.
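For anyone comfortable with a terminal, the dialogs described above can usually be skipped by stripping the quarantine attribute that macOS attaches to downloaded apps (the app path here is a placeholder, and exact behavior varies across macOS versions):

```shell
# Remove the com.apple.quarantine extended attribute so Gatekeeper
# stops treating the bundle as an untrusted download.
xattr -d com.apple.quarantine /Applications/Example.app
```

Which, of course, is exactly the kind of step a normal user will never take.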


The stapled ticket is optional beyond notarization itself. If you notarize but don’t staple the ticket, users may need an internet connection to check the notarization status.
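As a rough sketch of the difference, the usual flow with Apple's notarytool and stapler tools looks something like this (the archive, app, and keychain profile names are placeholders):

```shell
# Submit the signed, zipped app for notarization and block until
# Apple returns a verdict.
xcrun notarytool submit Example.zip \
  --keychain-profile "notary-profile" --wait

# Optional: staple the resulting ticket into the bundle so Gatekeeper
# can verify notarization offline. Without this step, the first-launch
# check may require an internet connection.
xcrun stapler staple Example.app
```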


Apple’s Mac security team in general kind of sucks at their job. They are ineffectual at stopping real issues and make the flow for most users more annoying for little benefit.


> notarization has been a net negative for all parties involved

Notarization made it significantly harder to cross-compile apps for macOS from linux, which means people have to buy a lot of macOS hardware to run in CI instead of just using their existing linux CI to build mac binaries.

You also need to pay $99/year to notarize.

As such, I believe it's resulted in profit for Apple, so at least one of the parties involved has had some benefit from this setup.

Frankly I think Apple should keep going, developer licenses should cost $99 + 15% of your app's profit each year, and notarization should be a pro feature that requires a macbook pro or a mac pro to unlock.


There are second order effects. You definitely attract different types of talent depending on the technology stack of choice. And building the right group of talent around an early stage product/company is an extremely impactful thing on the product. And blogs are an impactful talent marketing source.

This doesn't guarantee any sort of commercial success because there are so many follow on things that are important (product/market fit, sales, customer success, etc.) but it's pretty rough to succeed in the follow ons when the product itself is shit.

For first order effects, if a product's target market is developer oriented, then marketing to things developers care about such as a programming language will help initial adoption. It can also help the tool get talked about more organically via user blogs, social media, word of mouth, etc.

Basically, yeah, it matters, but as a cog in a big machine like all things.


What does it say about your latest project that it attracts the most toxic types from Germany to China^? Are you even aware? Do you consider this "building the right group of talent" for your project?

^https://xcancel.com/QULuseslignux/status/1918296149724692968


I did a for-profit course registration tool called uwrobot too if you or any of your friends were customers of that...


My intention is that the project isn't wholly dependent on me, so that I can move on (one day) and refocus my efforts elsewhere. I think no matter who the donor is, any charity dependent on the welfare of a single large whale is not a healthy organization. I intend to resolve this over time.

That all being said, everyone should give where they want, and if you don't want to give to a terminal emulator non-profit project, then don't! Don't let anyone bully you (me, the person I'm responding to, or anyone else) into what you should and shouldn't charitably support. Enjoy.

(Also, I don't want to repeat this everywhere, but I paid taxes and I lost a comma, so no need to worry about that anymore! Everyone please pull out your most microscopic violins!)


> Also, I don't want to repeat this everywhere but I paid taxes and I lost a comma, so no need to worry about that anymore! Everyone please pull out your most microscopic violins!

Well, since we're talking about it, maybe you're down to answer a question I've always wondered about: money in the hundreds of millions, let alone billions, is to me an unfathomable amount of capital for one person to wield. I've always thought, if I ever had that kind of power to swing around, I'd spend it all trying to solve every problem I could get my hands on, until there was nothing left but my retirement fund (which could be 10 million and still let me spend hundreds of millions while retiring in permanent wealthy comfort). Hunger in specific areas, housing crises, underfunded education: across the world there are many issues that, at least locally, one individual with that kind of money could, so far as I can tell, independently resolve.

Why aren't the ultra rich doing it? You seem to have a more philanthropic mind than most, you're doing this cool project and nobody can deny your FOSS contributions. But even you are still holding that count in the hundreds rather than the tens - is there some quality-of-life aspect hidden from us that's just really difficult to imagine giving up, or something? Yacht life? Private flights? Chumming it up with Gabe and Zuck?

Becoming that wealthy won't happen to me but if it did, what would change about me that'd make me not want to spend it all anymore?


While I understand that people might downvote the parent post because it seems in bad taste and touches on a culturally sensitive thing, haven't we all wondered this? Why is it that the poor give relatively more generously than the rich?

It's such an interesting phenomenon that so many ultra rich people are essentially just hoarding wealth beyond what they should reasonably be able to even have use of in multiple generations. Worse, some of them simply cannot seem to get enough and will literally commit crimes and/or do indisputably morally wrong things to get even more.

I would personally never ask anyone this, and I wouldn't expect anyone who could answer it to actually answer it, but I think what komali2 asked is one of the most interesting questions out there.


I think it might be because I'm autistic but can you help me understand why it's in bad taste to ask it? I see YouTube videos of people talking about how they became really wealthy or showing off their houses or cars, and this person was talking about his bank account directly and has mentioned the 3 comma thing before, so I'm a bit confused why it's not ok to ask more about it.

You did mention something I didn't think of which is lifetimes, I guess if someone wanted to guarantee an ultra wealthy lifestyle for all generations of their kids and grandkids forever, that would be a reason to hoard wealth into the hundreds of millions.


GTK is also merged. Main branch has search. It's also exposed via libghostty for embedders.


You can't build a house without the foundation (pun intended).

I said in the linked post that I remain the largest donor, but this helps lay bricks such that we can build a sustainable community that doesn't rely on me financially or technically. There simply wasn't a vehicle before that others could even join in financially. Now there is.

All of the above was mentioned in the post. If you want more details, please read it. I assume you didn't.

I'll begin some donor outreach and donor-relationship work eventually. The past few months have been enough work simply coordinating this process: meeting with accountants and lawyers to figure out the right path forward, meeting with other software foundations to determine proper processes, etc. I'm going to take a breather, then hop back in. :)


Note that search has landed in main, and the core of it is cross-platform and exposed via libghostty (Zig API, C to follow).


Woohoo! Thank you!

(Edit) Download it here: https://github.com/ghostty-org/ghostty/releases/tag/tip

