
I'm having trouble reconciling "30 sheet mind numbingly complicated Excel financial model" and "Two or three prompts got it there, using plan mode to figure out the structure of the Excel sheet, then prompting to implement it. It even added unit tests to the Python model itself, which I was impressed with!"

"1 or 2 plan mode prompts" to fully describe a 30-sheet complicated doc suggests a massively higher level of granularity than Opus initial plans on existing codebases give me or a less-than-expected level of Excel craziness.

And the tooling harnesses have been telling the models to add testing to things they make for months now, so why's that impressive or surprising?


No, it didn't make a giant plan of every detail. It made a plan of the core concepts, and then when it was in implementation mode it kept checking the Excel file to get more info. It took around 30 minutes in implementation mode to build it.

I was impressed because the prompt didn't ask it to do that. It doesn't normally add tests for me without asking, YMMV.
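To be concrete, the tests are on the Python model, not the workbook itself. A minimal sketch of the shape of them (pytest; the formula and names here are made up, not the real sheets):

    # revenue_model.py -- a hypothetical one-formula slice of the Python port
    def gross_margin(revenue: float, cogs: float) -> float:
        """Margin as a fraction of revenue, mirroring the sheet's formula."""
        if revenue == 0:
            return 0.0
        return (revenue - cogs) / revenue

    # test_revenue_model.py
    from revenue_model import gross_margin

    def test_gross_margin_basic():
        assert gross_margin(100.0, 60.0) == 0.4

    def test_gross_margin_handles_zero_revenue():
        assert gross_margin(0.0, 50.0) == 0.0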


Ah, I see.

Did it build a test suite for the Excel side? A fuzzer or such?

It's the cross-concern interactions that still get me.

80% of what I think about these days when writing software is how to test more exhaustively without build times being absolute shit (and not necessarily actually being exhaustive anyway).
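The closest thing I've found for the Excel side is a cross-check: load the workbook with openpyxl's data_only=True so you get the values Excel last calculated, and assert the Python port reproduces them for every row that's actually in the file. A rough sketch, with the filename, sheet, and column layout all assumed:

    import openpyxl

    def gross_margin(revenue, cogs):
        # stand-in for the Python port of the sheet's formula (hypothetical)
        return 0.0 if revenue == 0 else (revenue - cogs) / revenue

    def test_port_matches_workbook_cached_values():
        # data_only=True returns the values Excel last calculated, not formulas
        wb = openpyxl.load_workbook("model.xlsx", data_only=True)
        ws = wb["Margins"]  # sheet name and column order are assumptions
        for revenue, cogs, expected in ws.iter_rows(min_row=2, max_col=3,
                                                    values_only=True):
            if revenue is None or expected is None:
                continue
            assert abs(gross_margin(revenue, cogs) - expected) < 1e-9

It only catches divergence between the two implementations, not errors baked into the workbook itself, but that's already more verification than most of these models ever get.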


One of the dirty secrets of a lot of these "code adjacent" areas is that they have very little testing.

If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output? Or maybe you'll be too worried about getting chided for "not being data driven" enough.

If an exec tells an intern or temp to vibecode that thing instead, then you definitely won't have any checkpoints in the process to make sure the human-language prompt describing the process was properly turned into the right simulation. But unlike in coding, you don't have a user-facing product that someone can click around in, or send requests to, and verify. Is there a test suite for the giant Excel doc? I'm assuming no, maybe I'm wrong.

It feels like it's going to be very hard for anyone working in areas with less black-and-white verifiability or correctness, like that sort of financial modeling.


> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?

I recently watched a demo from a data science guy about the impending proliferation of AI in just about all related fields; his position was highly sceptical, but with a "let's make the most of it while we can" attitude.

The part that stood out to me which I have repeated to colleagues since, was a demo where the guy fed his tame robot a .csv of price trends for apples and bananas, and asked it to visualise this. Sure enough, out comes a nice looking graph with two jagged lines. Pack it ship it move on..

But then he reveals that, as he wrote the data himself, he knows that both lines should just be an upward trend. Expands the axis labels - the LLM has alphabetized the months but said nothing of it in any of the outputs.
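For what it's worth, the guard on the human side is cheap: pin the month order before plotting so an alphabetized input either gets re-sorted or fails loudly. A rough pandas sketch, with the CSV and column names assumed:

    import pandas as pd

    MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

    df = pd.read_csv("prices.csv")  # assumed columns: month, apples, bananas
    # Ordered categorical: sorting is chronological, and any label that isn't
    # a real month (typo, or already-mangled input) becomes NaN and fails loudly.
    df["month"] = pd.Categorical(df["month"], categories=MONTHS, ordered=True)
    assert df["month"].notna().all(), "unrecognized month labels in the data"
    df = df.sort_values("month")

    df.plot(x="month", y=["apples", "bananas"])  # x-axis order is now fixed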


Always a good idea to spot check the labels and make sure you've got JFMAMJ..JASON Derulo

Like every anecdote out there where an LLM makes a basic mistake, this one is worthless without knowing the model and prompt.

If choosing the "wrong" model, or not wording your prompt in just the right way, is sufficient to not just degrade your output but make it actively misleading and worse than useless, then what does that say about the narrative that all this sort of work is about to be replaced?

I don't recall the bot he was using; it was a rushed portion of the presentation to make the point that "yes these tools exist, but be mindful of the output - they're not a magic wand".

This has had tremendous real-world consequences. The European austerity wave of the early 2010s was largely downstream of an Excel spreadsheet error that changed the result of a major study on the impact of debt/GDP.

https://www.newscientist.com/article/dn23448-how-to-stop-exc...


This is a pet peeve of mine at work.

Any, and I mean any, statistic someone throws at me I will try to dig into. And if I'm able to, I will usually find that something is very wrong somewhere. As in, the underlying data is usually just wrong, invalidating the whole thing, or the data is reasonably sound but the person doing the analysis is making incorrect assumptions about parts of the data and then drawing incorrect conclusions.


It seems to be an ever-present trait of modern business. There is no rigor, probably partly because most business professionals have never learned how to properly approach and analyze data.

Can't tell you how many times I've seen product managers making decisions based on a few hundred analytics events, trying to glean insight where there is none.


Also rigor is slow. Looks like a waste of time.

What are you optimizing all that code for, it works doesn't it? Don't let perfect be the enemy of good. If it works 80% of the time, that's enough, just push it. What is technical debt?


If what you're saying 1) is true and 2) does matter in the success of a business, then wouldn't anyone be able to displace an incumbent trivially by applying a bit of rigor?

I think 1) holds (as my experience matches your cynicism :), but I have a feeling that data minded people tend to overestimate the importance of 2)...


> does matter in the success of a business

In my experience, many of the statistics these people use don't matter to the success of a business --- they are vanity metrics. But people use statistics, and especially the wrong statistics, to push their agenda. Regardless, it's important to fix the statistics.


Rigor helps for better insights about data. That can help for entrepreneurship.

What also can help for entrepreneurship is having a bias for action. So even if your insights are wrong, if you act and keep acting, you will partially shape reality to your will and bend to its will.

So there are certain forces where you can compensate for your lack of rigor.

The best companies have both of those things on their side.


I've frequently found, over a few decades, that numerical systems are cyclically 'corrected' until results and performance match prior expectations.

There are often more errors. Sometimes the actual results are wildly different in reality to what a model expects .. but the data treatment has been bug hunted until it does what was expected .. and then attention fades away.


Or the company just changes the definition of success, so that the metrics (that used to be bad last quarter) are suddenly good

> Any and I mean any statistic someone throws at me I will try and dig in.

I bet you do this only 75% of the time.


This is, unfortunately, a feature of a lot of these systems. The sponsors don’t want truth, they want validation. Generative AI means there don’t even have to be data engineers in the mix to create fake numbers.

> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?

The local statistics office here recently presented salary statistics claiming that teachers' salaries had unexpectedly increased by 50%. All the press releases went out, and it was only questions raised by the public that forced the statistics office to review and correct the data.


Lies, damn lies, and (unsourced) statistics.

> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late.

Back in my data scientist days I used to push for testing and verification of models. Got told off for reducing the team's speed. If the model works well enough to get money in, and the managers that make the final calls do not understand the implications of being wrong, that's where it ends, and that would be the majority of cases.


I would say that although Claude may hallucinate, at least it can be told to test the scripts. Many data scientists will just balk at the idea of testing a crazy Excel workbook with lots of formulas that they themselves inherited.

Excel doesn't have any standard tooling for testing or verification; it's all just "trust me bro".
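You can script basic audits from the outside, though. One cheap check catches the classic "someone overwrote a formula cell with a hard-coded number" bug. A sketch with openpyxl, where the path, sheet, and column are made up:

    import openpyxl

    def hardcoded_cells(path, sheet_name, column, first_row, last_row):
        """Return coordinates of cells in a formula column that hold literals
        instead of formulas (names here are hypothetical)."""
        wb = openpyxl.load_workbook(path)  # default: formulas, not cached values
        ws = wb[sheet_name]
        bad = []
        for row in range(first_row, last_row + 1):
            cell = ws[f"{column}{row}"]
            if cell.data_type != "f":  # 'f' means the cell holds a formula
                bad.append(cell.coordinate)
        return bad

    print(hardcoded_cells("model.xlsx", "Margins", "D", 2, 200))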

I did a fair amount of data analysis, and deciding when or if my report was correct was a huge adrenaline rush.

A huge test for me was to have people review my analyses and poke holes. You feel good when your last 50 reports didn’t have a single thing anyone could point out.

I’ve been seeing a lot of people try to build analyses with AI who haven’t been burned by the “just because it sounds correct doesn’t mean it’s right” dilemma, and who haven’t realized what it takes before you can stamp your name on an analysis.


The usefulness of specialized interfaces is apparent if you compare Photoshop with Gemini Pro/Nano Banana for targeted image editing.

I can select exactly where I want changes and have targeted element removal in Photoshop. If I submit the image and try to describe my desired changes textually, I get less easily-controllable output. (And I might still get scrambled text, for instance, in parts of the image that it didn't even need to touch.)

I think this sort of task-specific specialization will have a long future; it's hard to imagine pure text once again being the dominant information-transfer method for 90% of the things we do with computers after 40 years of building specialized non-text interfaces.


One reasonable niche application I've seen of image models is in real estate, as a way to produce "staged" photos of houses without shipping in a bunch of furniture for a photo shoot (and/or removing a current tenant's furniture for a clean photo). It has to be used carefully to avoid misrepresenting the property, of course, but it's a decent way of avoiding what is otherwise a fairly toilsome and wasteful process.

This sort of thing (not for real estate, but for "what would this furniture actually look like in this room") is definitely somewhere the open-ended interface is fantastic vs targeted-remove in Photoshop (but could also easily be integrated into a Photoshop-like tool to let me be more specific about placement and such).

I was a bit surprised by how it still resulted in gibberish text on posters in the background, in an unaffected part of the image that at first glance didn't change at all. So even just the GUI-style "masking" ability of "anything outside of this range should not be touched" would be a godsend.
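Even without the model supporting masks natively, you can enforce that constraint after the fact: composite the model's output back over the original so only the selected region can change. A rough sketch with Pillow, filenames assumed:

    from PIL import Image

    # original: the untouched photo; edited: whatever the model returned;
    # mask: white where edits are allowed, black where pixels must not change.
    original = Image.open("room.png").convert("RGB")
    edited = Image.open("model_output.png").convert("RGB").resize(original.size)
    mask = Image.open("mask.png").convert("L").resize(original.size)

    # Takes `edited` where the mask is white and `original` everywhere else,
    # so posters in the background literally cannot be scrambled.
    result = Image.composite(edited, original, mask)
    result.save("result.png")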


It baffles me that Gemini et al don't have these standard video editing tools. Do the engineers seriously think prompting by text is the way people want videos to be generated? Nope. People want to customise. E.g. check out CapCut in the context of social media.

I've been trying to create a quick and dirty marketing promo via an LLM to visualise how a product will fit into the world of people - it is incredibly painful to 'hope and pray' that by refining the prompt via text you can make slight adjustments come through.

The models are good enough if you are half-decent at prompting and have some patience. But given the amount invested, I would argue they are pretty disappointing. I've had to chunk the marketing promo into almost a frame-by-frame play to make it somewhat work.


Speaking as someone who doesn't like the idea of AI art so take my words with a grain of salt, but my theory is that this input method exclusivity is intentional on their part, for exactly the reason you want the change. If you only let people making AI art communicate what they want through text or reference attachments (the latter of which they usually won't have), then they have to spend time figuring out how to put it into words. It IS painful to ask for those refinements, because any human would clearly understand it. In the end, those people get to say that they spent hours, days, or weeks refining "their prompt" to get a consistent and somewhat-okay looking image; the engineers get to train their AI to better understand the context of what someone is saying; and all the while the company gets to further legitimize a false art form.

Yeah, any time you're translating "user args" and "system state" to actions + execution and supporting a "dry run" preview, it seems like you only really have two options: the "ad-hoc quick and dirty informal implementation", or the "let's actually separate the planning and assumption checking and state checking from the execution" design.
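The second option doesn't have to be heavy. A minimal sketch of the shape (all names hypothetical): a pure plan() that turns args plus current state into a list of described actions, and an execute() that either prints them (dry run) or runs them.

    from dataclasses import dataclass
    from typing import Callable, List

    def create(name: str) -> None:
        # placeholder side effect, standing in for whatever the real tool does
        print(f"created {name}")

    @dataclass
    class Action:
        description: str
        run: Callable[[], None]  # the side effect, deferred until execution

    def plan(user_args: dict, system_state: dict) -> List[Action]:
        """Pure: look at args + current state, return what *would* be done."""
        actions = []
        if user_args["target"] not in system_state["existing"]:
            actions.append(Action(f"create {user_args['target']}",
                                  lambda: create(user_args["target"])))
        return actions

    def execute(actions: List[Action], dry_run: bool = True) -> None:
        for a in actions:
            print(("would: " if dry_run else "doing: ") + a.description)
            if not dry_run:
                a.run()

    acts = plan({"target": "report.csv"}, {"existing": set()})
    execute(acts, dry_run=True)   # preview
    execute(acts, dry_run=False)  # apply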

"it's nothing new and it's a lot of scams and garbage, but it's just bigger than before, but I still think there will be something transformative there eventually"

Seems like a Rorschach test. If you think this sort of thing is gonna change the world in a good way: here's evidence of it getting to scale. If you think it's gonna be scams, garbage, and destruction: here's evidence of that.


Agents are many things but they are definitely not "scams" - if you think this you've probably stubbornly avoided using Claude Code etc.

What the fuck are you talking about, man. What are you even saying. You are the mark.

This feels like the "prompt engineering" wave of 2023 all over again. A bunch of hype about a specific point-in-time activity based on a lot of manual setup of prompts compared to naive "do this thing for me" that eventually faded as the tooling started integrating all the lessons learned directly.

I'd expect that if there is a usable quality of output from these approaches it will get rolled into existing tools similarly, like how multi-agent setups using worktrees already were.


2023 was the year of “look at this dank prompt I wrote yo”-weekly demos.

And 2026 is shaping up to be the year of "look at this prompt my middle manager agent wrote for his direct reports" :)

It's the sort of thing where you'd expect true believers (or hype-masters looking to sell something) would try very hard to nudge it in certain directions.

Consider a hypothetical writing prompt from 10 years ago: "Imagine really good and incredibly fast chatbots that have been trained on, or can find online, pretty much all sci fi stories ever written. What happens when they talk to each other?"

Why wouldn't you expect the training to make "agent" loops that are useful for human tasks also make agent loops that could spin out infinite conversations with each other echoing ideas across decades of fiction?


None of that addresses FB's motivation to look the other way on kids clicking the "I'm an adult" button while pocketing money from advertisers buying un-targeted ads for snacks, clothes, makeup, computers/gaming, and a million other things that are equally as aimed at kids as they are at anyone else.

(Remember how many kids bought car magazines before they even had drivers' licenses? Advertising has never been "oh, ads for things adults will buy will be completely boring to children.")


Or a ton of "personalized agents" will start bugging upstream to complain about suspected issues with all those forks all the time...
