The full title was supposed to be "we asked an AI to do something AIs aren't intended to do and feigned shock when it didn't do a good job."
As I've always said... AI isn't going to suddenly gain sapience and decide to kill us. What's going to kill us is our misplaced trust in LLMs.
And there's something cynical in Littlefoot's business model. If their business were actually about recommending interesting places to see in various cities, they would have collected lists of locations and their features and then done sophomore-level constraint programming to match location and features (like dog-friendly or open for lunch), then layered on a schedule based on typical travel times. But they had to throw an AI in the mix because 18 months ago that was how you guaranteed round B financing.
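To be concrete about how boring that non-AI version would be: here's a rough sketch of the filter-then-schedule approach. All the names and data are made up for illustration; a real system would use actual place data and per-leg travel estimates instead of a flat buffer.

```python
# Hypothetical sketch: filter places by required features, then lay out
# a simple itinerary with a fixed travel-time buffer between stops.
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    features: set       # e.g. {"dog_friendly", "open_for_lunch"}
    visit_minutes: int  # typical time spent at this stop

def matching_places(places, required):
    """Keep only places that satisfy every required feature."""
    return [p for p in places if required <= p.features]

def build_schedule(places, start_minutes, travel_minutes):
    """Lay visits end to end, inserting a travel buffer between stops."""
    schedule, clock = [], start_minutes
    for p in places:
        schedule.append((clock, p.name))
        clock += p.visit_minutes + travel_minutes
    return schedule

places = [
    Place("Riverside Cafe", {"open_for_lunch", "dog_friendly"}, 60),
    Place("City Museum", {"indoor"}, 90),
    Place("Harbor Park", {"dog_friendly"}, 45),
]

picks = matching_places(places, {"dog_friendly"})
itinerary = build_schedule(picks, start_minutes=9 * 60, travel_minutes=20)
```

No model weights, no hallucinated restaurants: just set containment and addition. That's the point.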