I thought the macOS notarization process was annoying until we started shipping Windows releases.
It’s basically pay to play to get in the good graces of Windows Defender.
I think all-in it was over $1k upfront to get the various certs. The cert company has to do a pretty invasive verification process for both you and your company.
Then — you are required to use a hardware token to sign the releases. This effectively means we have one team member who can publish a release currently.
The cert company can lock your key as well for arbitrary reasons which prevents you from being able to make a release! Scary if the release you’re putting out is a security patch.
I’ll take the macOS ecosystem any day of the week.
The situation on Windows got remarkably better and cheaper recently-ish with the addition of Azure Trusted Signing. Instead of hundreds or thousands of dollars for a cert, it’s $10/month if you meet the requirements (I think the business must have existed for some number of years, among other things).
That's not easier and cheaper than before. That's how it's always been, only now you can buy the cert through Azure.
For an individual the Apple code signing process is a lot easier and more accessible since I couldn't buy a code signing certificate for Windows without being registered as a business.
> That's how it's always been only now you can buy the cert through Azure.
Where can you get an EV cert for $120/year? Last time I checked, all the places were more expensive and then you also had to deal with a hardware token.
Lest we talk past each other: it's true that it used to be sufficient to buy a non-EV cert for around the same money, where it didn't require a hardware token, and that was good enough... but they changed the rules in 2023.
Millions of Windows power users are accustomed to bypassing SmartScreen.
A macOS app distributed without a trusted signature will reach a far smaller share of an already proportionately smaller macOS user base, and that's largely due to deliberate design decisions by Apple in recent releases.
As you said, you need to have a proper legal entity for about 2 years before this becomes an option.
My low-stakes conspiracy theory is that MS is deliberately making this process awful to encourage submission of apps to the Microsoft Store since you only have to pay a one-time $100 fee there for code-signing. The downside is of course that you can only distribute via the MS store.
The EV cert system is truly terrible on Windows. Worst of all, getting an EV cert isn’t even enough to remove the scary warnings popping up for users! For that you still need to convince Windows Defender that you’re not a bad actor by getting installs on a large number of devices, which of course is a chicken-and-egg problem for software with a small number of users.
At least paying your dues to Apple guarantees a smooth user experience.
No, this information is wrong (unless it’s changed in the last 7 years). EV code signing certs are instantly trusted by Windows Defender.
Source: We tried a non-EV code signing certificate for our product used by only dozens of users at the time, never stopped showing scary warnings. When we got an EV, no more issues.
Not true for us. We sign with an EV cert (the more expensive one) and my CEO (the only one left who uses Windows) had this very problem. Apparently the first time a newly signed binary is run, it can take up to 15 minutes for Defender to allow it. The first time I saw this, it was really annoying and confusing.
They did change it, I think after some debacle with Nvidia pushing an update. They seem to want devs to submit their files via their portal now to get rid of the screen: https://www.microsoft.com/en-us/wdsi/filesubmission
I've never submitted our installers to there (or anywhere). I'm often the very first to install new builds (particularly our nightlies) and never had a delay or anything.
Wow. I haven't written software for Windows in over a decade. I always thought Apple was alone in its invasive treatment of developers on their platform. Windows used to be "just post the exe on your web site, and you're good to go." I guess Microsoft has finally managed to aggressively insert themselves into the distribution process there, too. Sad to see.
> Windows used to be "just post the exe on your web site, and you're good to go."
That's also one of the main reasons why Windows was such a malware-ridden hellspace. Microsoft went the Apple route to security and it worked out.
At least Microsoft doesn't require you to dismiss the popup, open the system settings, click the "run anyway" button, and enter a password to run an unsigned executable. Just clicking "more details -> run anyway" still exists on the SmartScreen popup, even if they've hidden it well.
Despite Microsoft's best attempts, macOS still beats Windows when it comes to terribleness for running an executable.
I just wish these companies could solve the malware problem in a way that doesn't always involve inserting themselves as gatekeepers over what the user runs or doesn't run on the user's computer. I don't want any kind of ongoing relationship with my OS vendor once I buy their product, let alone have them decide for me what I can and cannot run.
I get that if you're distributing software to the wider public, you have to make sure these scary alerts don't pop up regardless of platform. But as a savvy user, I think the situation is still better on Windows. As far as I've seen there's still always a (small) link in these popups (I think it's SmartScreen?) to run anyway - no need to dig into settings before even trying to run it.
Are you sure? I had not used Windows for years and assumed "Run Anyway" would work. Last month, I tested running an unsigned (self-signed) .MSIX on a different Windows machine. It's a 9-step process to get through the warnings: https://www.advancedinstaller.com/install-test-certificate-f...
Perhaps .exe is easier, but I wouldn't subject the wider public (or even power users) to that.
So yeah, Azure Trusted Signing or EV certificate is the way to go on Windows.
What is the subset of users who are going to investigate and read an RTF file but don’t know how to approve an application via system settings (or google how to do so)?
I would say quite a lot of users, because even the previous simple method of right-clicking wasn't that well known, even among power users. A lot of them just selected "allow applications from anyone" in the settings (most likely just temporarily).
In one application I also offered an alternative in the form of a web app, in case they were not comfortable with any of the options.
Also it's presented in a .dmg file where you have two icons, the app and the "How to install". I would say that's quite inviting for investigation :)
You certainly don't need a hardware token; you can store the key in any FIPS 140 Level 2+ store. This includes stuff like Azure KeyVault and AWS KMS.
Azure Trusted Signing is 100% the best choice, but if for whatever reason you cannot use it, you can still use your own cloud store and hook in the signing tools. I wrote an article on using AWS KMS earlier this year: https://moonbase.sh/articles/signing-windows-binaries-using-...
TL;DR: Doing this yourself requires a ~$400-500/year EV cert and minuscule cloud costs
Can confirm this, we use Azure KeyVault and are able to have Azure Pipelines use it to sign our release builds.
We’re (for the moment) a South African entity, so can’t use Azure Trusted Signing, but DigiCert has no issue with us using Azure KeyVault for our EV code signing certificate.
I had ours renewed just this week as it happens. It cost something like USD 840 before tax; we don’t have a choice though, and in the grand scheme of things it’s not a huge expense for a company.
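For anyone wondering what the no-hardware-token flow actually looks like: the signing step is just a CLI call to a tool that can use a Key Vault-hosted certificate (the open-source AzureSignTool is one option). Below is a minimal sketch of a helper that shells out to it; the vault URL, certificate name, and credential variables are placeholders, and this isn't our exact pipeline.

    // sign.cpp -- minimal sketch: shell out to the open-source AzureSignTool CLI
    // to sign a binary with a code signing cert held in Azure Key Vault.
    // The vault URL and cert name are placeholders; credentials come from the
    // environment (e.g. set by the CI pipeline).
    #include <cstdlib>
    #include <iostream>
    #include <string>

    static std::string Env(const char* name) {
      const char* value = std::getenv(name);
      return value ? value : "";
    }

    int main(int argc, char** argv) {
      if (argc < 2) { std::cerr << "usage: sign <file>\n"; return 1; }
      const std::string cmd =
          std::string("azuresigntool sign") +
          " --azure-key-vault-url https://example-vault.vault.azure.net" +
          " --azure-key-vault-certificate ev-codesign" +
          " --azure-key-vault-client-id " + Env("AZURE_CLIENT_ID") +
          " --azure-key-vault-client-secret " + Env("AZURE_CLIENT_SECRET") +
          " --azure-key-vault-tenant-id " + Env("AZURE_TENANT_ID") +
          " --timestamp-rfc3161 http://timestamp.digicert.com" +
          " --file-digest sha256 \"" + argv[1] + "\"";
      return std::system(cmd.c_str());  // non-zero exit fails the release job
    }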
That's right, there's a similar comparison between the iOS App Store and the Android Play Store. Although Apple's annual $99 fee is indeed expensive, the Play Store requires every app to find 12 users for 14 days of internal testing before submission for review, which is utterly incomprehensible, not to mention the constant warnings about inactive accounts potentially being disabled.
Exciting work! I’ve often wondered if an LLM with the right harness could restore and optimize an aging C/C++ codebase. It would be quite compelling to get an old game engine running again on a modern system.
I would expect most of these systems come with very carefully guarded access controls. It also strikes me as a uniquely difficult challenge to track down the decision maker who is willing to take the risk on revamping these systems (AI or not). Curious to hear more about what you’ve learned here.
Also curious to hear how LLMs perform on a language like COBOL that likely doesn’t have many quality samples in the training data.
The decision makers we work with are typically modernization leaders and mainframe owners — usually director or VP level and above. There are a few major tailwinds helping us get into these enterprises:
1. The SMEs who understand these systems are retiring, so every year that passes makes the systems more opaque.
2. There’s intense top-down pressure across Fortune 500s to adopt AI initiatives.
3. Many of these companies are paying IBM 7–9 figures annually just to keep their mainframes running.
Modernization has always been a priority, but the perceived risk was enormous. With today’s LLMs, we’re finally able to reduce that risk in a meaningful way and make modernization feasible at scale.
You’re absolutely right about COBOL’s limited presence in training data compared to languages like Java or Python. Given COBOL is highly structured and readable, the current reasoning models get us to an acceptable level of performance where it's now valuable to use them for these tasks. For near-perfect accuracy (95%+), that is where we see a large opportunity to build domain-specific frontier models purpose-built for these legacy systems.
In terms of training your own models, is there enough COBOL available for training or are you going to have to convince your customers to let you train on their data (do you think banks would push back against that?)
> It also strikes me as a uniquely difficult challenge to track down the decision maker who is willing to take the risk on revamping these systems (AI or not).
Here that person is a manager who got demoted from ~500 reports to ~40 and then convinced his new boss that it's a good idea to reuse his team for his personal AI strategy, which will make him great again.
The libs in the bench don’t really have any external deps. It will be much more interesting to see the results with ffmpeg, Qt, etc. The original source releases from any repo here would also be great candidates: https://github.com/id-software
This looks great. I would love something that can just incrementally ingest all of the unread messages across various platforms locally and give a summary of what needs a reply most urgently.
I find the Chromium codebase to be one of the best resources for learning large-scale, high-quality modern C++ best practices. They stay on a bleeding-edge toolchain: near-tip-of-tree clang, lld, and libc++ are used on all platforms. They stay pretty current on C++ standards; they've been on C++20 for the last year or so.
I think they are also contributing a lot of the new libc++ hardening features that have been landing: https://libcxx.llvm.org/Hardening.html IIRC, production Chrome ships with extensive hardening enabled.
They also roll out massive code changes via automated clang rewrites. Recent examples are replacing all raw pointer members with their raw_ptr wrappers that bring some additional safety. They also switched from absl::optional to std::optional in one swoop.
If you haven’t done it before, take a browse through //base sometime; it’s got a lot of well-commented code to learn from. You’ll see they’re already using C++20 concepts and requires clauses in a number of places.
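For a flavor of what that looks like, here's a tiny standalone C++20 sketch in the spirit of the constrained templates you'll run into there; the concept and function names are made up for illustration, not actual Chromium APIs.

    // Standalone C++20 sketch (illustrative names, not real Chromium code):
    // constrain a helper to types that can describe themselves, with a
    // requires-clause fallback for anything merely streamable.
    #include <concepts>
    #include <iostream>
    #include <string>

    template <typename T>
    concept Describable = requires(const T& t) {
      { t.Describe() } -> std::convertible_to<std::string>;
    };

    template <Describable T>
    void LogDescription(const T& value) {
      std::cout << value.Describe() << '\n';
    }

    template <typename T>
      requires (!Describable<T>) &&
               requires(std::ostream& os, const T& t) { os << t; }
    void LogDescription(const T& value) {
      std::cout << value << '\n';
    }

    struct Widget {
      std::string Describe() const { return "Widget(42)"; }
    };

    int main() {
      LogDescription(Widget{});  // picks the Describable overload
      LogDescription(123);       // falls back to the streamable overload
    }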
I agree. Topics like “how is the lifetime of a QObject exposed to the QML engine managed?” lack coverage and clarity. On the surface it is simple, but in practice it’s an extremely complex topic that has been a source for many bugs in our medium to large QML desktop app.
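For anyone who hasn't hit this: the classic footgun is that a QObject returned to QML from a Q_INVOKABLE method defaults to JavaScriptOwnership, so the QML garbage collector is allowed to delete an object the C++ side still thinks it owns. A minimal sketch of the defensive pattern (illustrative names, not code from our app):

    // Illustrative sketch: a long-lived, C++-owned QObject handed to QML.
    #include <QObject>
    #include <QQmlEngine>

    class Backend : public QObject {
      Q_OBJECT
    public:
      using QObject::QObject;
      ~Backend() override { delete settings_; }  // the C++ side owns it

      Q_INVOKABLE QObject* settings() {
        if (!settings_) {
          settings_ = new QObject();  // unparented, created lazily
        }
        // Returned from a Q_INVOKABLE, so QML would otherwise assume
        // JavaScriptOwnership and its GC could delete it once JS drops the
        // last reference; pin ownership on the C++ side explicitly.
        QQmlEngine::setObjectOwnership(settings_, QQmlEngine::CppOwnership);
        return settings_;
      }

    private:
      QObject* settings_ = nullptr;
    };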
We contribute upstream a bit and we had documentation updates to clarify the stability of pointers to QHash elements rejected. ¯\_(ツ)_/¯
Personally, we superbuild bleeding edge Qt with the source code in tree so that it’s trivial to go to definition and inspect behavior. Makes the experience substantially better.
I was seeing constant instability when running basically any large C++ build that saturated all of the cores. I was getting odd clang segfaults indicating an invalid AST, which would succeed on a re-run.
This was getting very frustrating; at various points I tried every other option online (including restoring the BIOS to Intel Baseline settings), etc.
I came across Keean's investigations into the matter on the Intel forums:
> I think there is an easy solution for Intel, and that is to limit p-cores with both hyper-threads busy to 5.8GHz and allow cores with only one hyper-thread active to boost up to 5.9/6.2 they would then have a chip that matched advertised multi-core and single-thread performance, and would be stable without any specific power limits.
> I still think the real reason for this problem is that hyper-threading creates a hot-spot somewhere in the address arithmetic part of the core, and this was missed in the design of the chip. Had a thermal sensor been placed there the chip could throttle back the core ratio to remain stable automatically, or perhaps the transistors needed to be bigger for higher current - not sure that would solve the heat problem. Ultimately an extra pipeline stage might be needed, and this would be a problem, because it would slow down when only one hyper-thread is in use too. I wonder if this has something to do with why intel are getting rid hyper-threading in 15th gen?
Based on this, I set a P-core limit of 5.8 GHz in my BIOS, and after several months of daily-use building Chromium I can say this machine is now completely stable.
If you're seeing instability on an i9-14900K or 13900K, see the above forum post for more details and try setting the all-core limit. I've now seen this fix instability on 3+ of the build machines we use so far.
I'll also add that I was never able to get the instability to show up when running the classic stress testing tools: MemBench, Prime95, and Intel's own stability tests could all run for hours and pass.
There's something unique about the workload of ninja launching a bunch of clang processes that draws this out.
On my machine, a clean build of the llvm-project would consistently fail to complete, so that may be a reasonable workload to A/B test with if you're looking into this.
The user quoted above was running gentoo builds on specific p-cores to test various solutions, ultimately finding that the p-core limit was the only fix that yielded stability.
Just to provide a not-related-at-all but IMHO still interesting case: I used to have a Kioxia CM6 U.2 SSD that would pass all sorts of benchmarks the reseller was willing to run, but whenever I tried to clean-compile Rust on it, the drive would almost certainly fail somewhere in the build process. While there are configurations where you can compile Rust against a pre-built LLVM, in my tests I was compiling LLVM along the way. So I can agree with the comment here that there might be some unique property of multi-core compilation workloads, though my tests point to a potentially faulty drive, while the comment above is about an Intel CPU.
I’ve kicked around with the C++ interop a few times. Projects like Qt that use raw pointers extensively seem hard to expose. I ended up having to wrap Qt classes to model them as value types, which was straightforward, but it’s not nearly as seamless as the Objective-C++ experience.