Hacker News | sestep's comments


I tried this five years ago back when I was an engineer on the PyTorch project, and it didn't work well enough to be worth it. Has it improved since then?

For me, no. I spent days trying to get it to recreate a production environment workflow. It is too different from production.

It works well enough that I didn’t realize this wasn’t first party till right now.

It works, but there are a fair amount of caveats, especially for someone working on things like PyTorch: the runtime is close but not the same, and its support for certain architectures etc. can create annoying bugs.

It has. It's improved to work with ~75% of steps, and it's fast enough to be worth trying before pushing.

This sounds cool but is extremely uninteresting without performance measurements. Are there any?

Same question but for Jai.

Jai does not compile to C. It has a bytecode representation that is used primarily for compile-time execution of code, a native backend used mostly for iteration speed and debug builds, and an LLVM target for optimized release builds.

Noob question: if it just compiles to threads, is there any need for special syntax in the first place? My understanding was that no language support should be required for blocking on a thread.

One advantage is that it gives you the opportunity to move to a more sophisticated implementation later without breaking backwards compatibility (assuming the abstraction does not leak).

Async/await should do a little more under the hood than what the typical OS threading APIs provide, for example forwarding function parameters and return values automatically instead of making the user write their own boilerplate structs for that.
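That difference can be sketched in Python with the stdlib `threading` and `asyncio` modules (the function names here are just illustrative): with a raw thread you have to hand-write the plumbing to pass arguments in and get a return value back out, whereas an async runtime forwards both for you.

```python
import asyncio
import threading

def add(a, b):
    return a + b

# Raw OS thread: the user writes their own "boilerplate" to carry the
# return value out of the thread (here, a one-element result slot).
result = []
t = threading.Thread(target=lambda: result.append(add(2, 3)))
t.start()
t.join()
print(result[0])  # 5

# Async/await: parameters and the return value are forwarded
# automatically by the runtime.
async def add_async(a, b):
    return a + b

print(asyncio.run(add_async(2, 3)))  # 5
```

The same argument-passing and result-forwarding could be wrapped over threads too (that's roughly what thread-pool "future" APIs do), which is why a language can compile async/await down to threads without losing the ergonomic benefit.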

Hey Eric, great to see you've now published this! I know we chatted about this briefly last year, but it would be awesome to see how the performance of jax-js compares against that of other autodiff tools on a broader and more standard set of benchmarks: https://github.com/gradbench/gradbench

For sure! It looks like this is benchmarking the autodiff cpu time, not the actual kernels though, which (correct me if I’m wrong) isn’t really relevant for an ML library — it’s more for if you have a really complex scientific expression

Nope, both are measured! In fact, the time to do the autodiff transformation isn't even reflected in the charts shown on the README and the website; those charts only show the time to actually run the computations.

Hm okay, seems like an interesting set of benchmarks — let me know if there’s anything I can do to help make jax-js more compatible with your docker setup

It should be fairly straightforward; feel free to open a PR following the instructions in CONTRIBUTING.md :)

I don’t think this is straightforward, but it may be a skill issue on my part. It would require dockerizing headless Chrome with WebGPU support, dynamically injecting custom bundled JavaScript into the page, and then extracting the results over Chrome IPC.

Ahh no you're right, I forgot about the difficulties for GPU specifically; apologies for my overly curt earlier message. More accurately: I think this is definitely possible (Troels and I have talked a bit about this previously) and I'd be happy to work together if this is something you're interested in. I probably won't work on this if you're not interested on your end, though.

I'm a big fan of svg-term myself: https://github.com/marionebl/svg-term-cli


Hm, very interesting! This only converts asciinema recordings, though, right? It doesn't automatically record anything?


If you have asciinema already installed then you can invoke it through svg-term like this!

  svg-term --command 'cowsay hey there'
But that has the aforementioned issues about not pausing enough, so I usually just record with asciinema first and then invoke svg-term.


Alternatively you can use Nix! :P https://github.com/pranshuparmar/witr/pull/5


That's a 404; here's a working link: https://algassert.com/post/2500


Oops, updated. Thanks!


For fiction I read a lot of Brandon Sanderson: the second Mistborn series, plus a few of the Secret Projects. I quite liked Tress of the Emerald Sea. Also currently reading R. F. Kuang's Katabasis which I'm really enjoying so far.

For nonfiction, I found Amanda Ripley's High Conflict to be excellent and insightful. I also finally got around to reading The Selfish Gene by Richard Dawkins; I expected it to be fine, but it far exceeded my expectations! On top of that, the edition I read also had "end notes" interspersed throughout the book with retrospectives from decades later, which only added to the book's richness.


Sanderson really writes incredible stuff. I read the entire cosmere last year and got a tattoo =)


Wow, that's a lot of reading! Cosmere is, what, 20-ish books at this point?

What was the tattoo if you don't mind sharing?


Yeah, it helps that I became obsessed =)

I got an American traditional rendition of the symbol of the Almighty with a sword through it and the Knights Radiant motto encircled

“life before death, strength before weakness, journey before destination”

It’s on my forearm


That's awesome! Sounds like a cool tattoo.

And I totally relate on the obsession, I've been a big Sanderson fan for 15 years at this point and read almost everything he's ever written :)

