


TinyChat is an efficient, lightweight, Python-native serving framework for 4-bit LLMs quantized with AWQ. It delivers a 2.3x generation speedup on an RTX 4090.

Code: https://github.com/mit-han-lab/llm-awq/tree/main/tinychat
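
Not TinyChat's own API -- just a minimal sketch of what serving an AWQ 4-bit checkpoint looks like through the standard Hugging Face transformers interface (the model id below is illustrative only; the autoawq package and a CUDA GPU are assumed):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Any AWQ-quantized 4-bit checkpoint; this id is just an example.
    model_id = "TheBloke/Llama-2-7B-AWQ"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # transformers detects the AWQ quantization config in the checkpoint
    # and loads the 4-bit weights (requires the autoawq backend).
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer("Explain AWQ in one sentence.",
                       return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))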


I find myself using J [0] for this type of ideation/programming exercise. Klong/K/APL lend themselves to similar productivity.

[0] https://www.hillelwayne.com/handwriting-j/



Kagi search is brilliant in this respect. There was a similar service [0] not too long ago that blocked many spam sites from the search results.

[0] http://millionshort.com/


Mine is https://densebit.com

I have a tendency to jot down brief thoughts or raw ideas :-)


The code-correctness issue is certainly a big problem -- it is simply not enough to get 90% right. Real-world problems often lie in the remaining 10% of edge cases.

I have a very different take on how AI can produce correct-by-construction code, not necessarily using a probabilistic model (deep learning, for example). I have written it up as a blog post [0]. The sketch of the idea is that any problem is a data problem: an algorithm can be discovered, and new code can be generated, by projecting the problem into a topological space, finding the code there, and mapping back down into program space. It could well be a decent application of abstract algebra/algebraic topology to AI and code-generation problems.

[0] https://densebit.com/posts/24
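
To make the projection idea slightly more tangible, here is a deliberately toy sketch in Python: a bag-of-tokens embedding stands in for the topological projection the post argues for (which this does not implement), and all program snippets and names are hypothetical:

    import numpy as np

    # A tiny "program space": candidate snippets, all hypothetical.
    programs = {
        "sum":     "def f(xs): return sum(xs)",
        "maximum": "def f(xs): return max(xs)",
        "reverse": "def f(xs): return xs[::-1]",
    }

    def embed(text, vocab):
        # Project text into a fixed vector space via token counts.
        tokens = text.replace("(", " ").replace(")", " ").split()
        return np.array([tokens.count(w) for w in vocab], dtype=float)

    vocab = sorted({w for src in programs.values()
                      for w in src.replace("(", " ").replace(")", " ").split()})

    # Embed every candidate program, embed a "problem description",
    # then recover the nearest program by cosine similarity.
    P = np.stack([embed(src, vocab) for src in programs.values()])
    query = embed("return the max of xs", vocab)

    scores = P @ query / (np.linalg.norm(P, axis=1)
                          * (np.linalg.norm(query) + 1e-9))
    best = list(programs)[int(np.argmax(scores))]
    print(best, "->", programs[best])   # expected: maximum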


A new way of looking at supervised learning: as a generalization of variational calculus.
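
One way to make that concrete (my gloss, not necessarily the article's notation): learning a predictor means minimizing a risk functional over a function space, which is exactly the shape of a variational problem,

    f^* = \arg\min_{f} \mathcal{R}[f], \qquad
    \mathcal{R}[f] = \int L\big(f(x), y\big)\, d\mu(x, y),

and for squared loss the pointwise stationarity condition -- the analogue of the Euler-Lagrange equation -- recovers the classical result f^*(x) = E[y | x].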


REBOL Parse maybe?

