Years ago, I often struggled to choose between Amazon products with high ratings from a few reviews and those with slightly lower ratings but a large volume of reviews. I used the Laplace Rule of Succession in a browser extension I wrote that calculates Laplacian scores for products, which helped me make better decisions by weighing high ratings against low review counts. https://greasyfork.org/en/scripts/443773-amazon-ranking-lapl...
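The core idea is tiny. As a rough illustration (a minimal Python sketch, not the extension's actual code; mapping the 1-5 star average onto a success fraction is my own simplification), apply the Rule of Succession, which pulls low-review-count products toward 0.5:

    # Minimal sketch: Rule-of-Succession score for a star-rated product.
    def laplace_score(avg_rating: float, num_reviews: int) -> float:
        # Map the 1-5 star average onto an implied count of "positive" reviews,
        # then apply the Rule of Succession: (successes + 1) / (trials + 2).
        successes = (avg_rating - 1) / 4 * num_reviews
        return (successes + 1) / (num_reviews + 2)

    # A perfect 5.0 from 3 reviews loses to 4.6 stars from 400 reviews:
    print(laplace_score(5.0, 3))    # 0.80
    print(laplace_score(4.6, 400))  # ~0.90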


Just for reference, in case you find yourself in an optimization-under-uncertainty situation again: the decision-theoretically correct way to do this is to generate a Bayesian posterior over the true rating, given the review count and a prior on true ratings, add a loss function (for simplicity, it can just be the difference between the true rating of the selected item and the true rating of the non-selected item), then choose the option that minimizes the expected loss. This produces exactly the correct answer.
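Here is a toy Monte Carlo version of that recipe in Python (my own sketch, assuming thumbs-up/down reviews so a Beta-Bernoulli model applies; the review counts are made up):

    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_samples(positives, total, a=1, b=1, n=100_000):
        # Beta(a, b) prior on the true positive rate; after observing
        # `positives` out of `total` reviews the posterior is also Beta.
        return rng.beta(a + positives, b + total - positives, size=n)

    theta_a = posterior_samples(10, 10)    # product A: 10/10 positive reviews
    theta_b = posterior_samples(480, 500)  # product B: 480/500 positive reviews

    # Expected loss of picking each item, where the loss is the true rating
    # of the item you didn't pick minus the true rating of the one you did.
    loss_a = np.mean(theta_b - theta_a)
    loss_b = np.mean(theta_a - theta_b)
    print("choose", "A" if loss_a < loss_b else "B")  # -> B

With this simple loss, minimizing expected loss reduces to picking the higher posterior mean, (1 + positives) / (2 + total), which is exactly the Rule of Succession score above; an asymmetric loss (say, one that penalizes ending up with a dud more heavily) would need the full posterior.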


Can you please provide an example or a link to read more? This seems very interesting.



I always assume that all the ratings are fake when there is a low count of ratings, since it's easy for a seller to place a bunch of fake orders to game the system when they're starting out.


A bigger problem I find is that many Amazon listings have a large number of genuine positive reviews, but for a completely different product than the one currently for sale.

Recently I was buying a Chromecast dongle, and one of the listings had some kind of "Amazon recommends" badge on it, from the platform. It had hundreds of 5-star reviews, but if you read them, they were all for a jar of guava jam from Mexico.

I'm baffled that Amazon permits, and even seemingly endorses, this kind of rating farming.


While this is a good idea, I think it's unrelated to the Laplace transform except that they're named after the same dude?


I referenced 3B1B for the name: youtube.com/watch?v=8idr1WZ1A7Q


Took a while to notice it's xbow and not xbox.


Nice job. It's exciting that the quality is approaching human level, but I still think we are spending way too many tokens, and the automation speed-up isn't really worth the total token price yet (unless you have very high-end GPUs and don't care how fast your tasks complete).


Thanks! I agree with your sentiment for a lot of basic, mundane tasks, but there are a number of tasks today that are very high value yet still mundane and require manual work.

Examples include form filling, sales prospecting, lead enrichment, and even just keeping track of the prices of important items.

Over time, we do expect the cost of tokens on these models to decrease drastically. Powerful vision models are still relatively new compared to generic text LLMs. There's definitely a lot of room for optimizations, which we expect will come quickly!


I wish AI products competed on token efficiency.


Wrote out the whole word `internationalization` instead of `i18n`. Definitely not a developer.

