It seems like a lot of the difficulty is in finding arrangements that satisfy constraints. I wonder if an alternative approach would be to use a SAT solver. I suppose the problem with that approach would be that the solver might always find an 'easy' solution that doesn't look random. I know that some SAT solvers let you randomly assign the initial assignments of the variables, but that doesn't mean you get a random solution. Has anyone tried a similar approach?
I think the problem with SAT solvers is that they’re complicated, both computationally and conceptually, at least for anyone who didn’t study formal methods.
WFC is brute-force-simple, and because it’s simple it’s quite computationally inexpensive (unless it hits a lot of dead ends). I wouldn’t be surprised if it could often find an adequate solution faster than a SAT solver, at least for games, where a result doesn’t need to be perfect, just good enough.
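To make the "brute-force-simple" point concrete, here is a minimal 1-D sketch of the WFC idea. The tile set and adjacency rules are made up purely for illustration (not from any particular implementation): each cell starts with every tile possible, we repeatedly collapse the lowest-entropy cell and propagate adjacency constraints, restarting on a dead end.

```python
import random

ALLOWED = {  # hypothetical rules: which tiles may sit to the right of each tile
    "sea": {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land": {"coast", "land"},
}
TILES = set(ALLOWED)

def collapse(n, rng):
    cells = [set(TILES) for _ in range(n)]
    while any(len(c) > 1 for c in cells):
        # collapse the undecided cell with the fewest remaining options
        i = min((j for j, c in enumerate(cells) if len(c) > 1),
                key=lambda j: len(cells[j]))
        cells[i] = {rng.choice(sorted(cells[i]))}
        # propagate constraints in both directions until stable
        changed = True
        while changed:
            changed = False
            for j in range(n - 1):
                ok = {t for t in cells[j + 1]
                      if any(t in ALLOWED[s] for s in cells[j])}
                if ok != cells[j + 1]:
                    cells[j + 1], changed = ok, True
                ok = {s for s in cells[j]
                      if any(t in ALLOWED[s] for t in cells[j + 1])}
                if ok != cells[j]:
                    cells[j], changed = ok, True
            if any(not c for c in cells):
                return None  # dead end: contradiction, caller restarts
    return [next(iter(c)) for c in cells]

def generate(n, seed=0):
    rng = random.Random(seed)
    while True:
        out = collapse(n, rng)
        if out is not None:
            return out
```

The whole algorithm is "pick most-constrained cell, pick a value, rule out neighbours' incompatible values, retry on contradiction", which is why it's so approachable compared to a SAT encoding.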
Less-than-perfect solutions can make certain types of video games more interesting, because the domain of potential results is larger and can include many more variations of challenges for the player.
This is a deliberate choice by Congress to give the Fed a mandate to target 2% inflation. In particular, Congress hasn't given it any instruction to make up for past misses: if inflation overshoots in one year, the Fed doesn't try to undershoot the next year. It just keeps trying to hit 2% inflation.
So if retailers tried to lower prices to pre-COVID levels then they would fail. The Fed would see the falling prices and cut rates until 2% inflation was achieved.
Does this mean that if Earth stays a fixed distance from the Sun, then its equilibrium temperature is fixed? I remember people saying that the albedo of the ice caps affects the Earth's temperature.
Is it known which algorithms produce 'difficult' mazes? I'm imagining you could run all the maze solving algorithms against all the maze generating algorithms many times, and then calculate what the Nash equilibrium would be if the solver is trying to minimise expected time and the generator is trying to maximise it.
There is another old site ("since September 23, 1996"), my second-favorite maze site, which has some articles about things like that, like the page below ("Tips on how to create difficult and fun Mazes, and how to solve and analyze them").
I think there is a difference between making a maze expensive to solve for popular maze-solving algorithms and making it difficult for a human to solve. Many of the recommendations on that page make a maze harder for humans, but won't always matter to an algorithm that just mechanically tries paths in some order.
Enclosing a settlement gets easier as it gets larger. The amount of work is proportional to the perimeter, while the number of people to do the work is proportional to the area. The area is proportional to the square of the perimeter, so the work per person is inversely proportional to the length of the wall.
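The scaling argument above can be checked with a toy calculation for a square settlement of side s (illustrative units only):

```python
# Square settlement of side s: wall length 4s, enclosed area s^2.
# If population scales with area, work per person ~ 4s / s^2 = 4/s,
# i.e. inversely proportional to the wall length, as claimed above.
def work_per_person(side, people_per_unit_area=1.0):
    perimeter = 4 * side
    area = side * side
    return perimeter / (people_per_unit_area * area)
```

Doubling the side doubles the wall to build but quadruples the builders, so each person's share of the work halves.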
That only applies if you have the people. The limit on village population is the ability to grow enough food within walking distance of the gate, so your village cannot get a large enough population to overcome this.
Cities can only pull off a large population if there are a lot of villages growing surplus food (which is not what the villages want: in the medieval world, cities were not making luxury goods that farmers wanted, so farmers preferred to work less), and cities needed good bulk transport for all that food. Rome was a massive city in its day with just over a million people; today that is considered a small city, and there are many much bigger ones.
Another example: ears are excellent at breaking down the frequency of sounds, but are imprecise about where the sound is coming from; whereas eyes are excellent at telling you where light is coming from, but imprecise about how its frequencies break down.
Right. Interesting small patterns can be found using clever search algorithms. There's also the approach of running trillions of random 'soups' and scanning the results for interesting patterns. These small patterns are then pieced together to build the larger structures.
One thing this doesn't take into account (and the paper acknowledges this) is that the characters are assigned by picking cards from a deck. So the two players cannot have the same character.
Taking this into account would make the game much more complicated, because it can introduce an element of bluff.
For a simple example, imagine that there are only 5 characters. On your first turn you know the opponent doesn't have the same card as you, so you've got 4 options remaining. You'd like to ask a question that splits them into 2+2, but if you do this then the card you're holding will make one of the groups into a 3. Your opponent will know that your card is one of the 3, so you've effectively given them a head start. Instead you might sometimes want to split the options 3+2 with your card in the 2, as a bluff.
How often you want to do this must be described by some Nash equilibrium probabilities. It would be interesting to set up a linear programming solver to find these exactly, but so far I haven't had time to set this up. I don't know if it would be practical to solve the full version of the game with 24 characters.
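Before tackling the equilibrium bluffing probabilities, a useful baseline is the no-bluffing version of the question-splitting problem: the minimum expected number of yes/no questions to identify one of n equally likely candidates. This small dynamic program (my own illustrative sketch, ignoring the card-in-hand complication discussed above) shows why even splits are optimal in the baseline game:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expected_questions(n):
    """Minimum expected yes/no questions to identify 1 of n equally
    likely candidates, trying every way to split them into k and n-k."""
    if n == 1:
        return 0.0
    best = float("inf")
    for k in range(1, n // 2 + 1):
        e = (1
             + (k / n) * expected_questions(k)
             + ((n - k) / n) * expected_questions(n - k))
        best = min(best, e)
    return best
```

For n = 5 the best split is 2+3, giving 2.4 expected questions; the bluffing version of the game would then be a matter of how much of this efficiency you trade away to avoid signalling which group holds your own card.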
Doesn’t that strategy only work in games like Clue, where everyone is trying to uncover the same hidden character?
In Guess Who, you’re identifying your opponent’s character, not a shared one, so misleading them gains you nothing: misdirection doesn’t deny your opponent any signal, so there’s no strategic benefit to it, and it only slows you down.
The problem is when you bisect an odd-sized group: you necessarily have to make one half larger than the other. So you're not trying to misdirect, you're trying to avoid creating a signal. But to do that you sometimes have to put your own character in the smaller half, which trades off against your other goal of shrinking the pool as fast as possible.