Hacker News | new | past | comments | ask | show | jobs | submit | jerpint's comments

Nice! I made my own version of this many years ago, with a very basic manim animation

https://www.jerpint.io/blog/2021-03-18-cnn-cheatsheet/


I did a post [0] about this last year, and vanilla LLMs didn’t do nearly as well as I’d expected on Advent of Code, though I’d be curious to try again with Claude Code and Codex

[0] https://www.jerpint.io/blog/2024-12-30-advent-of-code-llms/


LLMs, and especially coding focused models, have come a very long way in the past year.

The difference when working on larger tasks that require reasoning is night and day.

In theory it would be very interesting to go back and retry the 2024 tasks, but those will likely have ended up in the training data by now...


> LLMs, and especially coding focused models, have come a very long way in the past year.

I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.

I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.


Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler and it one-shotted solutions for parts 1 and 2(!)

It's true this was 4 months after AoC 2024 came out, so it may have been trained on the answers, but I think that's too short a window for them to have made it into training data.

Day 3 in 2024 isn't a Math Olympiad tier problem or anything, but it seems novel enough, and my prior experience with LLMs was that they were absolutely atrocious at assembler.

https://adventofcode.com/2024/day/3


Last year, I saw LLMs do well on the first week and accuracy drop off after that.

But as others have said, it’s a night and day difference now, particularly with code execution.


Current frontier agents can one-shot all of the 2024 AoC puzzles, just by pasting in the puzzle description and the input data.

From watching them work, they read the spec, write the code, run it on the examples, refine the code until it passes, and so on.
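The loop described above (read the spec, write code, run it on the examples, refine until it passes) can be sketched in miniature. This is a toy illustration, not any agent's real implementation: `propose_solution` would be an LLM call in practice, and here it just cycles through hypothetical candidate programs, keeping the first one that reproduces the worked example.

```python
# Toy sketch of the agent loop: draft a solution, check it against the
# puzzle's worked example, and keep refining until it passes.

EXAMPLE_INPUT = "3 4\n5 6"
EXAMPLE_ANSWER = 18  # hypothetical: sum of all numbers in the example

# Stands in for successive LLM drafts: the first is buggy, the second works.
CANDIDATES = [
    "result = 0",
    "result = sum(int(x) for x in text.split())",
]

def run_candidate(code: str, text: str):
    """Execute a candidate program against the example input."""
    scope = {"text": text}
    exec(code, scope)
    return scope.get("result")

def solve_with_retries(example_input, example_answer, candidates):
    """Return the first candidate (and attempt count) that passes the example."""
    for attempt, code in enumerate(candidates, start=1):
        if run_candidate(code, example_input) == example_answer:
            return code, attempt
    raise RuntimeError("no candidate passed the example")

code, attempts = solve_with_retries(EXAMPLE_INPUT, EXAMPLE_ANSWER, CANDIDATES)
print(attempts)  # the second draft passes
```

The real agents do the same thing with a filesystem and an interpreter in the loop, which is why access to code execution matters so much.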

But we can’t tell whether the puzzle solutions are in the training data.

I’m looking forward to seeing how well current agents perform on 2025’s puzzles.


They obviously have the puzzles in the training data; why are you acting like this is uncertain?


"Best" is very subjective: it depends on what you want it to do, whether you want to fine-tune, and how big you consider "small".


Let me ask the same with:

- runs on a laptop CPU

- decide if a long article is relevant to a specified topic. Maybe even a summary of the article or picking the interesting part as specified in prompt instructions.

- no fine tuning please.
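For the relevance-check part, most of the work is in the prompt and the parsing rather than the model. A rough sketch of that scaffolding, assuming some small instruct model served locally (e.g. via llama.cpp or Ollama); the template, function names, and one-word-verdict convention here are all my own assumptions, and the actual model call is a placeholder since it depends on your setup:

```python
# Sketch of a CPU-only relevance filter: build a constrained prompt,
# send it to a local model (placeholder), parse a one-word verdict.

PROMPT_TEMPLATE = """You are a strict filter.
Topic: {topic}
Article:
{article}

Answer with exactly one word, RELEVANT or IRRELEVANT,
then one sentence quoting the most on-topic passage."""

def build_prompt(topic: str, article: str, max_chars: int = 8000) -> str:
    # Truncate long articles so they fit a small model's context window.
    return PROMPT_TEMPLATE.format(topic=topic, article=article[:max_chars])

def parse_verdict(completion: str) -> bool:
    # Be lenient about trailing punctuation in the model's first word.
    first_word = completion.strip().split()[0].upper().strip(".,")
    return first_word == "RELEVANT"

prompt = build_prompt("local LLMs", "A long article about running models on CPUs...")
# completion = local_model(prompt)  # placeholder: your llama.cpp/Ollama call here
print(parse_verdict("RELEVANT. The article discusses CPU inference."))  # True
```

Constraining the output to a single leading keyword keeps parsing trivial, which matters more with small models that drift on free-form instructions.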

Thank you for any response!


Runs on a laptop. Good at friendly conversational dialogue.


Hey, this is awesome! I built something very similar, context-llemur: a CLI and MCP interface for managing context

https://github.com/jerpint/context-llemur

The major difference is that the conversation itself doesn’t get stored; the LLM (or you) uses the MCP/CLI to write only the relevant context updates


I like the concept and have built my own context management tool for this very purpose!

https://github.com/jerpint/context-llemur

Though instead of being a single file, you and LLMs shape your context to be easily searchable (folders and files). It’s all version controlled too, so you can easily update context as the project evolves.

I made a video showing how easy it is to pull in context to whatever IDE/desktop app/CLI tool you use: https://m.youtube.com/watch?v=DgqlUpnC3uw


This is great! Gonna try this on my next project


I also developed yet another memory system!

https://github.com/jerpint/context-llemur

Although I developed it explicitly without search, catering to the latest agents, which are all really good at searching and reading files. Instead, you and LLMs shape your context to be easily searchable (folders and files). It’s meant for dev workflows (i.e. a project’s context, a user’s context)

I made a video showing how easy it is to pull in context to whatever IDE/desktop app/CLI tool you use

https://m.youtube.com/watch?v=DgqlUpnC3uw


I’ve been building a tool to help me co-manage context better with LLMs

When you load it into your favourite agents, you can safely assume whatever agent you’re interacting with is immediately up to speed with what needs to get done, and it too can update the context via MCP

https://github.com/jerpint/context-llemur


I built a tool with this exact workflow in mind, but with MCP and versioning included, so you can easily track and edit the files on any platform, including Cursor, Claude Desktop, etc.

https://github.com/jerpint/context-llemur


I wrote a tool for this, which allows you to co-create and maintain a context repository with your LLMs

https://github.com/jerpint/context-llemur

CLI for the humans, MCP for the LLMs. Whatever is in the context repository should be used by the LLM for its next steps, and you are both responsible for maintaining it as tasks get done and the project evolves

I’ve been having good success with it so far


I have a similar flow to this, but using files which are part of the repository. For your tool: what made you choose to version it with git when context is not code? Wouldn't you end up with multiple versions of, say, your backlog, and to what benefit?


The way I see it, context evolves somewhat orthogonally to code, but you still want to track it in similar ways. Having git under the hood makes it easy to track its evolution and undo/diff things LLMs might decide to change, and it also means that tracking your todos and new feature ideas doesn't pollute your codebase
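A minimal sketch of that idea with plain git, separate from context-llemur itself: the context lives in its own repository, so diffs and undo apply to todos and notes without touching the codebase. The directory and file names here are made up for illustration.

```shell
set -e
dir=$(mktemp -d)          # stand-in for a dedicated context repo
cd "$dir"
git init -q
echo "- [ ] ship the parser rewrite" > backlog.md
git add backlog.md
git -c user.email=ctx@example.com -c user.name=ctx commit -qm "initial backlog"

# An LLM (or you) edits the context...
echo "- [x] ship the parser rewrite" > backlog.md

# ...and git lets you review or roll back the change:
git diff --stat              # shows backlog.md changed
git checkout -- backlog.md   # undo the change entirely
grep -- "- \[ \]" backlog.md # the original entry is restored
```

Because the context repo has its own history, an agent's edits to the backlog never show up in the code repo's `git log`.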

