Why not take the best of both worlds?
Use pre-commit hooks for client-side validation, and run the same checks in CI as well.
I’ve been using this setup for years without any issues.
One key requirement in my setup is that every hook is hermetic and idempotent. I don’t use Rust in production, so I can’t comment on it in depth, but for most other languages—from clang-format to swift-format—I always download precompiled binaries from trusted sources (for example, the team’s S3 storage). This ensures that the tools run in a controlled environment and consistently produce the same results.
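To make the "pinned binaries, verified before use" idea concrete, here is a minimal sketch of a hook wrapper. Everything specific in it is hypothetical: the tool table, the version, the placeholder digest, and the `base_url` layout are stand-ins for whatever your team's storage actually looks like; only the hashing logic is the point.

```python
# Sketch of a hermetic hook wrapper: download a pinned binary once,
# verify its sha256, and refuse to run anything that doesn't match.
# The pinned table and URL layout below are hypothetical placeholders.
import hashlib
import os
import stat
import urllib.request

# (version, sha256) per tool. The digest is what makes the hook
# hermetic: a silently swapped binary fails loudly instead of
# formatting code differently on different machines.
PINNED = {
    "clang-format": (
        "18.1.8",
        "0" * 64,  # placeholder; a real setup pins the actual digest
    ),
}

def sha256_of(path: str) -> str:
    """Stream a file through sha256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def fetch_tool(name: str, cache_dir: str, base_url: str) -> str:
    """Fetch a pinned binary into a local cache and verify it."""
    version, expected = PINNED[name]
    dest = os.path.join(cache_dir, f"{name}-{version}")
    if not os.path.exists(dest):
        urllib.request.urlretrieve(f"{base_url}/{name}/{version}", dest)
    if sha256_of(dest) != expected:
        raise RuntimeError(f"{name} digest mismatch; refusing to run")
    os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)
    return dest
```

Since the same script runs in CI, both sides are guaranteed to execute byte-identical tools.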
Suppressing car usage isn’t about punishing individuals; it’s about correcting urban systems that made car dependency the default in the first place. The Lewis–Mogridge position is well established, and making driving less convenient while improving proximity and alternatives is a core principle of sustainable urban planning.
A lifestyle that requires burning large amounts of fuel just to buy groceries, or maintaining water-intensive lawns at scale, only works under very specific economic and environmental conditions. As those conditions disappear, cities have to adapt—even if the cultural shift feels uncomfortable at first.
I'll take my sprawling suburb with a big yard to grow ample food any day over a densely populated and carefully planned cityscape. With the advent of cheaper solar panels and electric vehicles, it's not a big issue.
While reading the README and related documentation, I noticed that Samsung Exynos NPU acceleration was listed, which immediately caught my attention. According to
https://docs.pytorch.org/executorch/main/backends/samsung/sa..., Samsung has finally built and released an NPU SDK—so I followed the link to check it out.
Unfortunately, the experience was disappointing.
The so-called “version 1.0” SDK is available only for Ubuntu 22.04 / 20.04. There is no release date information per version, nor any visible roadmap. Even worse, downloading the SDK requires logging in. The product description page itself https://soc-developer.semiconductor.samsung.com/global/devel... does contain explanations, but they are provided almost entirely as images rather than text—presented in a style more reminiscent of corporate PR material than developer-facing technical documentation.
This is, regrettably, very typical of Samsung’s software support: opaque documentation, gated access, and little consideration for external developers. At this point, it is hard not to conclude that Exynos remains a poor choice, regardless of its theoretical hardware capabilities.
For comparison, Qualcomm and MediaTek actively collaborate with existing ecosystems, and their SDKs are generally available without artificial barriers. As a concrete example, see how LiteRT distributes its artifacts and references in this commit:
https://github.com/google-ai-edge/LiteRT/commit/eaf7d635e1bc...
They're charging you for orchestration, log storage, artifact storage, continued development of the runner binary itself, and the features available to self-hosted machines. What would your own machine do without the runner and the service it connects to?
Qualcomm talked a lot about Snapdragon X Elite as the future of Windows and Linux on ARM, but results so far are mixed. Windows on ARM is finally usable on recent laptops, yet compatibility gaps remain, and Linux support is still far from mature.
The high idle power on the Framework ARM upgrade board shouldn’t be blamed solely on MetaComputing or CIX. Poor idle power efficiency is a long-standing issue on Linux laptops, especially with new platforms, so this looks more like an ecosystem-level power-management problem than a single-vendor failure.
What stands out to me is that Chinese companies are actually shipping hardware and pushing into every possible market segment. Their decentralized, diversified corporate ecosystem seems to enable fast experimentation and broad market penetration.
While it makes sense that LLMs and machine translation systems rely primarily on English Wikipedia as a data source, depending on smaller-language Wikipedias for training is far more problematic. English Wikipedia is generally well-regulated by its community, but many other language editions are not, so treating all of Wikipedia as an authoritative source is misguided.
For instance, my mother tongue’s Wikipedia (Korean Wikipedia) suffers from serious governance issues. The community often rejects outside contributors, and many experienced editors have already moved to alternative platforms. As a result, I sometimes get mixed, low-quality responses in my native language when using LLMs.
Ultimately, we need high-quality open data. Yet most Korean-language content is locked behind walled gardens run by chaebols like Naver and Kakao — and now they’re lobbying the government to fund their own “sovereign AI” projects. It’s a lose-lose situation.
Whether you use Bazel or not, this is a well-known issue with an equally well-known solution. There’s no need for such a lengthy write-up: just use a consistent sysroot across your entire build environment.
If you can’t create your own sysroot image, you can simply download Chromium’s prebuilt one and configure your C++ compile rules correctly. Problem solved.
We also have a Dockerfile for clang/LLVM in that repo, so the whole thing is hermetic. It's a bit of a shame Bazel doesn't ship stronger options/defaults here, because I find myself wanting to reproduce this same toolchain on every C++ project that uses Bazel.
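For anyone wondering how small the wiring actually is once a sysroot tarball exists: a rough `.bazelrc` sketch is below. The flags are standard Bazel options and `--sysroot` is the standard clang/gcc flag, but the path is a placeholder for wherever you unpack the sysroot; a fully hermetic setup would register a toolchain instead of using global flags.

```
# .bazelrc sketch: point every C++ action at one sysroot so host
# library versions stop leaking into the build. Path is a placeholder.
build --copt=--sysroot=third_party/sysroot
build --linkopt=--sysroot=third_party/sysroot
build --host_copt=--sysroot=third_party/sysroot
build --host_linkopt=--sysroot=third_party/sysroot
```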
It seems I’m not the only one who feels that Google’s ad tracking has improved a lot — to the point where it makes me a bit nervous. A few days ago, I searched for a medicine just once using Firefox’s private mode, and now both Google search ads and YouTube ads are filled with related topics. Maybe I need to reset all my Google ad tracking IDs and stop using Google for anything sensitive.
If I remember correctly, the issue was related to newer APIs like MediaStreamTrackProcessor, offscreen surfaces, and WebRTC–WebCodecs interoperability, as well as the ability to run ML inference efficiently in the browser. At the time, Firefox hadn’t fully implemented some of these features, which impacted Google Meet’s ability to apply effects like background blur or leverage hardware-accelerated video processing.
I know many people are dissatisfied with existing C++ build systems like CMake, and I understand why. However, before creating a new build system, one must first understand C++’s current position. These days, C++ is chosen less frequently due to the wide range of alternatives available. Still, C++ shines when it comes to bridging distinct components. While it’s tempting to optimize for the simplest use cases and make things as concise as possible, real-world C++ build systems often need to handle a wide variety of tasks—many of which are not even C++-specific. That’s where CMake proves its value: it can do almost anything. If a new build system can’t handle those cases, it’s likely to be dead on arrival.
An excellent point! I hope I'll be able to handle those cases in the future. I do keep running into this situation: people don't seem to understand that this is _the first alpha pre-release of 0.1.0 software._ It isn't going to do everything. It started as a test of my skills and only later turned into a serious project, which it wasn't really intended to be.