
We dogfood our own product to build itself. The post walks through the actual workflow: a one-liner becomes a structured spec with acceptance criteria, the spec becomes implementation tasks scoped to specific files, and an AI agent executes them sequentially with validation after each step.

The interesting part is the review layer — AI traces each acceptance criterion to specific lines in the PR diff, catching semantic gaps that lint and type-check miss.

We also run browser tests against the live deployment with database-level verification. Happy to answer questions about the tooling or where it breaks down.
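To make the review layer concrete, here is a minimal sketch of the criterion-tracing idea (all names are hypothetical, and a naive keyword overlap stands in for the LLM judgment the real tooling would make): each acceptance criterion is matched against hunks of the PR diff, and criteria with no matching hunk are flagged as semantic gaps.

```python
# Toy sketch: trace acceptance criteria to PR diff hunks.
# A real implementation would ask an LLM to judge each pairing;
# here a naive keyword overlap stands in for that call.

def trace_criteria(criteria, diff_hunks):
    """Return {criterion: [matching hunk indices]}; an empty list is a gap."""
    report = {}
    for criterion in criteria:
        words = {w.lower().strip(".,") for w in criterion.split() if len(w) > 3}
        matches = [
            i for i, hunk in enumerate(diff_hunks)
            if any(w in hunk.lower() for w in words)
        ]
        report[criterion] = matches
    return report

criteria = [
    "Retry failed uploads three times",
    "Log every deletion with a user id",
]
hunks = [
    "+    for attempt in range(3):  # retry failed upload",
    "+    cache.clear()",
]
report = trace_criteria(criteria, hunks)
gaps = [c for c, m in report.items() if not m]
print(gaps)  # the deletion-logging criterion has no matching hunk
```

The point is the shape of the check, not the matcher: lint and type-check never see the acceptance criteria at all, so even a crude criterion-to-diff pass catches a different class of miss.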


Crazy it's only been a year since Karpathy coined the term.


Built this out of frustration. AI can write your code, but it doesn't know your brand.

We added /brand.json and /brand.txt to our website: structured files that define how we sound, which words we use and which we avoid, what colors to use, and where to find our logos. Now AI tools have context instead of guessing.

Feels like this should be standard. Curious what others think.


I ran into a recurring problem when working with LLMs and coding agents: it is surprisingly hard to consistently communicate a product’s brand.

When we rebranded BrainGrid, I wanted a simple, repeatable way to tell any LLM or coding agent what the brand is, without re-explaining it in prompts every time.

I ended up creating two files:

https://www.braingrid.ai/brand.json

https://www.braingrid.ai/brand.txt

Together, they describe tone, voice, terminology, naming conventions, and visual guidelines in a way that is easy for both humans and LLMs to consume.
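For illustration, a brand.json along those lines might look like this (the field names here are my own guesses, not the actual BrainGrid schema; see the URLs above for the real files):

```python
import json

# Hypothetical brand.json shape; the real braingrid.ai files may differ.
brand = {
    "name": "ExampleCo",
    "voice": {
        "tone": "confident, plainspoken",
        "avoid": ["synergy", "leverage"],
    },
    "terminology": {"preferred": {"sign in": "log in"}},
    "visual": {
        "primary_color": "#0A84FF",
        "logo_url": "https://example.com/logo.svg",
    },
}
print(json.dumps(brand, indent=2))
```

Keeping the machine-readable version (brand.json) and the prose version (brand.txt) in sync means an agent can load whichever form its context window handles best.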

I tested this by having Claude Code update the branding across our docs site: https://docs.braingrid.ai/. The experience was smooth and required very little back and forth. The agent had the context it needed up front.

This made me wonder if we should treat brand context the same way we treat things like README files or API specs.

Would it make sense to standardize something like /brand.json or /brand.txt as a common convention for LLM-assisted development?

Curious if others have run into the same issue, or are solving brand consistency with AI in a different way.


Author here. I grew increasingly frustrated by the mess coding agents made of the design system, so I took a crack at creating a tighter structure with AI agent instructions in the form of a Claude.md and a Claude Skill to hopefully enforce it better.

Curious to hear any thoughts. What's working / not working for folks?




We are getting hit with exactly the same issue at a much greater scale: $260K in our case.

When you create a Gemini Flash cache with a TTL of 1 or 3 hours, the cache is created and expires correctly, but the billing system keeps charging the hourly storage rate after expiry, so the charges keep growing long after the cache is gone.

We've seen charges go up since 9/19 even though we turned off all the services on that account.

Struggling to get the attention of anyone at Google (ticket, account manager, sales engineer: no one responds).
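For context on why this adds up so fast, a back-of-envelope sketch (the hourly rate below is made up; substitute the actual per-token-hour storage price for your cache size):

```python
# Hypothetical numbers: what a correctly TTL'd cache should cost
# versus a cache whose hourly storage charge never stops accruing.
rate_per_hour = 4.50   # made-up storage cost for one cached context
ttl_hours = 3

expected = rate_per_hour * ttl_hours   # billed only while the cache is alive
runaway = rate_per_hour * 24 * 30      # billed for a whole month instead
print(expected, runaway)               # 13.5 vs 3240.0
```

One stuck cache is a ~240x overcharge on those numbers, and a workload that creates caches continuously multiplies that again, which is how a bill reaches six figures.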


Interesting deep dive into how Adobe built a streaming ingestion layer on top of Apache Iceberg to handle massive volumes of Experience Platform data, addressing challenges like the small‑file problem and commit bottlenecks with asynchronous writes and compaction. All stuff I've had to deal with in the past.

Good nuggets on how they partition tables by time, stage writes in separate ingestion and reporting tables, and tune snapshot and metadata handling to deliver a lakehouse-style pipeline that scales without melting the object store.
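The small-file problem they're tackling is easy to picture with a toy model (purely illustrative, not Adobe's actual compaction code): streaming writers leave many tiny data files per time partition, and a compaction pass greedily bin-packs them into fewer, target-sized output files.

```python
from collections import defaultdict

TARGET_FILE_BYTES = 128 * 1024 * 1024  # aim for ~128 MB output files

def compact(files):
    """files: list of (partition, size_bytes) tuples. Greedily bin-pack
    small files within each time partition into target-sized outputs."""
    by_partition = defaultdict(list)
    for partition, size in files:
        by_partition[partition].append(size)
    plan = {}
    for partition, sizes in by_partition.items():
        groups, current, current_size = [], [], 0
        for size in sorted(sizes):
            if current and current_size + size > TARGET_FILE_BYTES:
                groups.append(current)
                current, current_size = [], 0
            current.append(size)
            current_size += size
        if current:
            groups.append(current)
        plan[partition] = groups
    return plan

# 1000 one-megabyte files in one hourly partition collapse into 8 outputs
files = [("2024-01-01-00", 1024 * 1024)] * 1000
plan = compact(files)
print(len(plan["2024-01-01-00"]))  # 8
```

Scoping compaction to a partition is the key trick: it keeps the rewrite bounded and avoids touching partitions that are still receiving writes.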


The Eclipse Foundation just opened up its Theia AI platform and an alpha Theia IDE that let you bolt the LLM of your choice into your workflow and actually see what it's doing. You get complete control over prompt engineering and agent behavior, can plug in a local model or a cloud model, and can even wire up external tools via the Model Context Protocol. The AI-powered Theia IDE bakes in coding agents, an AI terminal, and context-sensitive assistants, while giving you license-compliance scanning via SCANOSS. Instead of being locked into a proprietary copilot, you can customize the entire AI stack to your needs and still keep your code private, which is the kind of hackable openness Hacker News loves.

