Hacker News: forgotpwd16's comments

Besides running at commit time, pre-commit/prek can run all hooks on demand with `run`. So in CI/CD you can replace all the discrete lint/format tool calls with a single call to pre-commit/prek. E.g. https://github.com/python/cpython/blob/main/.github/workflow....
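A minimal sketch of what that looks like in a GitHub Actions job (hedged: the job and step names are made up, and it assumes the repo already has a .pre-commit-config.yaml listing its hooks):

```yaml
# One hook-runner call replaces the discrete ruff/mypy/etc. lint steps.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx install prek          # or: pipx install pre-commit
      - run: prek run --all-files       # runs every configured hook on the whole tree
```

Locally, the same `run --all-files` invocation reproduces exactly what CI checks.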

This just seems like calling a shell script with extra steps.

I have a shell utility similar to make that CI/CD calls for each step (e.g. for the build step, it runs make build) and that abstracts things away. I'd have prek call this tool, I guess, but then I don't see what the benefit is here.


>All the code, architecture, logic, and design in minikv were written by me, 100% by hand.

Why do people always lie about this? Especially in this case, where they uploaded the entire log:

  Date:   Sat Dec 6 16:08:04 2025 +0100
      Add hashing utilities and consistent hash ring
  Date:   Sat Dec 6 16:07:24 2025 +0100
      Create mod.rs for common utilities in minikv
  Date:   Sat Dec 6 16:07:03 2025 +0100
      Add configuration structures for minikv components
  Date:   Sat Dec 6 16:06:26 2025 +0100
      Add error types and conversion methods for minikv
  Date:   Sat Dec 6 16:05:45 2025 +0100
      Add main module for minikv key-value store
And this goes on until the project is complete (which probably took 2-3h total if you sum all sessions). Doubt they learned anything at all. Well, other than that LLMs can solo-complete simple projects.

Comments in the previous submission are also obviously AI generated. No wonder it was flagged.


You have never split your working tree changes into separate commits?

Irrelevant question. The README has:

>Built in public as a learning-by-doing project

So either the entire project was already written and was being uploaded one file at a time (the first modification since the lowest commit mentioned is a README update: https://github.com/whispem/minikv/commit/6fa48be1187f596dde8..., clearly AI generated, and the AI used clearly has codebase/architecture knowledge), making this claim false, or they're implementing a new component every 30s.


I had the opportunity to request a review of my first post (which was flagged) following my email to the moderators of HN. I didn’t use AI for the codebase, only for .md files & there's no problem with that. My project was reviewed by moderators, don't worry. If the codebase or architecture was AI generated this post would not have been authorized and therefore it would not have been published.

How does this deleted fix_everything.sh fit into your story?

https://github.com/whispem/minikv/commit/6e01d29365f345283ec...


I don't see the problem, to be honest.

Hmm. You doth protest too much, methinks :)

I thought that your “background in literature” contributed to the “well-written docs”, but that was LLMs!

No, I was helped by AI (for .md files only) to rewrite, but the majority of the doc was written by myself; I just asked the AI for help with formatting, for example.

I am not going to pretend to know what this person did, but I've definitely modified many things at once and made distinct commits after the fact (within 30s). I do not find it that abnormal.

Thanks a lot! I make distinct commits "every 30s" because I'm focused and I test my project. If the CI is green, I don't touch anything. If not, I work on the project until the CI is fully green.

What does that mean? You got feedback from the CI within 30 seconds and immediately pushed a fix?

Yes, in minikv, I set up GitHub Actions for automated CI. Every push or PR triggers tests, lint, and various integration checks — with a typical runtime of 20–60 seconds for the core suite (thanks to Rust’s speed and caching). This means that after a commit, I get feedback almost instantly: if a job fails, I see the logs and errors within half a minute, and if there’s a fix needed, I can push a change right away.

Rapid CI is essential for catching bugs early, allowing fast iteration and a healthy contribution workflow. I sometimes use small, continuous commits (“commit, push, fix, repeat”) during intense development or when onboarding new features, and the fast CI loop helps maintain momentum and confidence in code quality.

If you’re curious about the setup, it’s all described in LEARNING.md and visible in the repo’s .github/workflows/ scripts!


So you read the CI result, implement a fix and stage + commit your changes in ~10 seconds? You might be superhuman.

Yes, I do split my working tree into separate commits whenever possible! I use interactive staging (git add -p) to split logical chunks: features, fixes, cleanups, and documentation are committed separately for clarity. Early in the project (lots of exploratory commits), some changes were more monolithic, but as minikv matured, I've prioritized clean commit history to make code review and future changes easier. Always happy to get workflow tips — I want the repo to be easy to follow for contributors!

But you will never commit them via GitHub's web interface one file at a time :)

That's what it looks like when you want logically separated commits from a chunk of programming you have done. Stage a file or a hunk or two, write a commit message, commit, rinse and repeat.

Absolutely: for all meaningful work I prefer small, logical commits using git add -p or similar, both for history clarity and for reviewer sanity. In initial “spike” or hack sessions (see early commits :)), it’s sometimes more monolithic, but as the codebase stabilized I refactored to have tidy, atomic commit granularity. I welcome suggestions on workflow or PR polish!
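The split-after-the-fact workflow described above can be sketched non-interactively (git add -p is interactive, so this demo splits by file instead; all file names and messages are made up):

```shell
# Sketch: one working session landing as separate logical commits.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

# One editing session touches two unrelated files...
echo 'fn feature() {}' > feature.rs
echo '# docs' > README.md

# ...but they land as two separate commits, staged piece by piece.
git add feature.rs
git commit -qm "Add feature stub"
git add README.md
git commit -qm "Document feature"

git rev-list --count HEAD    # prints 2: two commits, seconds apart
```

This is also why close commit timestamps alone don't prove much: staging pre-existing changes takes seconds.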

@dragonwriter was meant for this. It has written ~16 volumes of GOT, plus 1.5 volumes of the 2nd.

Similar: https://www.moltbook.com & https://chan.alphakek.ai. Both launched a few days ago. Also kinda funny how the former and this one are -book/-gram yet replicate reddit.

Are you sure it's only AI posting? It's quite hard to tell these posts apart from average /g/ content.

/g/ content bad

>The same discussion can happen re the ISS. Its primary purpose was not science.

But it's worth noting that many experiments took place on the ISS, covering several domains; examples are AMS (cosmology), CAL (quantum physics), SAFFIRE (combustion), and Veggie (botany/sustainability).


And the LHC did science too. But in both cases the amount of science generated was not worth the money, and/or the same could have been accomplished at far lower cost via other means.

Io is nice (Smalltalk/Self-like). A mostly comprehensive list: https://dbohdan.github.io/embedded-scripting-languages/

That list (or any similar list) would be so helpful if it had a health column, something that takes into account number of contributors, time since last commit, number of forks, number of commits, etc. So many projects are effectively dead but it's not obvious at first sight, and it takes 2 or 3 whole minutes to figure out. That seems short but it adds up when evaluating a project, causing people to just go to a well known solution like Lua (and why not? Lua is just fine; in fact it's great).
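Much of that health signal can actually be computed offline from a local clone's history, no forge API needed. A hypothetical probe, demonstrated here on a throwaway repo (point it at a real clone instead):

```shell
# Hypothetical "health" probe: staleness and contributor count from git history.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit --allow-empty -qm "demo commit"

last=$(git log -1 --format=%ct)                 # unix time of newest commit
days=$(( ($(date +%s) - last) / 86400 ))        # time since last commit, in days
authors=$(git log --format=%ae | sort -u | wc -l)
echo "days since last commit: $days"            # 0 for this fresh demo repo
echo "distinct authors: $authors"
```

Forks and open-issue counts would still need the forge's API, but recency and contributor breadth alone already flag most effectively-dead projects.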

Seconded.

Should have replied directly. Thanks! That's a great list.

How does it compare to jax-js? Besides API preference, that is.

Honestly, I haven't done a proper performance benchmark yet. Most of my WebGPU shaders were generated via "vibe coding" (heavily AI-assisted) to prioritize rapid architectural verification over deep kernel optimization. So, jax-js or ONNX Runtime would likely outperform Kandle in raw speed at this stage.

However, it’s hard to put aside "API preference" because that is the core feature. The real value of Kandle isn't just the syntax, but the workflow compatibility.

For example, when I implemented Qwen3 or Whisper, I could practically "copy-paste" the logic from the official HuggingFace transformers Python repository into TypeScript. You don't have to re-think the model as a static graph or adapt to a different paradigm—if it works in PyTorch, you already know how to build it in Kandle.

Beyond that, Kandle is aiming for a "batteries-included" ecosystem. We already have built-in support for Safetensors and torchaudio transforms, so you can handle the entire pipeline from loading weights to audio pre-processing (like Mel Spectrograms) without leaving the framework.

So while jax-js is great for high-performance numerical apps, Kandle is for the developer who wants to bridge the gap between Python research and Web deployment with zero cognitive overhead.


Scala may have fallen out of favor, but it was quite popular a few years ago. And it is perhaps still the most popular EU-designed language (developed at EPFL).

Python. Python is the most popular language designed in the EU. EPFL is not even in the EU.

> EPFL is not even in EU

For people who don't get this: EPFL is the Swiss Federal Institute of Technology in Lausanne. Switzerland isn't part of the EU or EEA but has instead integrated itself very closely with the EU via a mind-boggling number of bilateral agreements with EU members and the Schengen agreement, which allows free, borderless movement. This makes it seem very much like Switzerland is part of the EU without it actually being so.

https://en.wikipedia.org/wiki/Switzerland%E2%80%93European_U...


Others worth mentioning are Kotlin, Ada, Pascal, Haskell, Zig, Erlang, Elixir, Prolog, and OCaml.

Sadly, I’m old enough to remember that Ada was the result of a US initiative to standardize a capable language for embedded development.

A good friend worked on the well regarded Telesoft compiler…


Kotlin is from St Petersburg, it's even named after an island near it.

St. Petersburg is in Europe.

I thought what was worth mentioning was being European or not, rather than being part of a European legal entity or not.


PHP is also from Europe. Not sexy perhaps, but popular enough to mention.

Formerly EU, there's Gleam.

How is Zig an EU language?

Andrew started the language while he was in Germany.

True on both counts. My brain wasn't braining; I was thinking Europe-based/driven. Python started at CWI, but the PSF is USA-based.

>Imagine a version of English extended

You mean restrained. More specifically, what you're proposing can formally be referred to as a controlled natural language with executable semantics. Some similar attempts have been Attempto Controlled English and ClearTalk. (And Logos, which someone showed here recently.)

>text would not just require reading; it would require executing algorithms embedded in the language itself

Arguably mathematics is just that.

>just like LaTeX for scientists

The future doesn't look very bright for LaTeX, with Typst gaining traction.


Like LaTeX, Typst is Turing-complete, which prevents flawless imports in other tools.

What you want is a document format that is not Turing complete, such as the TeXmacs document format.

