Xiao-Li Meng writes about the trade-off between efficiency and robustness (bottom of page 208, left side). A solution that works all the time is likely to be inefficient. You can greatly optimize it by making a few assumptions, but then it only works when those assumptions hold. His example is finding a parked car. If you have ever forgotten where you parked at a mall, you know the problem. The robust solution is always to park in the same spot (or very near it), and the way you guarantee that spot is available is to pick the worst one. No one is competing for the back of the lot or the furthest point in the parking structure. You could park at the spot closest to your destination, but that will vary with not only your destination but also lot crowding, who just left, etc.
In our games, we refer to mobs having an AI, but we mean that in a very broad sense of AI. They have a few basic behavioral commands and the equivalent of a few buttons to push. Really fancy fights involve unvarying, scripted dances. A few even aspire to pre-planned reactions to certain events, but let’s not tax the system too much.
This is far from an artificial general intelligence that could hold a conversation, but it usually works just fine. The goblin is not expected to do much: close and stab. There are some details about its aggro range and its use of the standard aggro system, but there is no depth, and it really does not matter for the 10 seconds the goblin will be alive. More complex encounters maintain their fidelity by limiting the variables: they fight in limited arenas with closed doors, reset conditions, and things like rage timers to sweep up problems.
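That "close and stab" loop is simple enough to sketch. The following is a hypothetical toy state machine, not the code of any real game engine; the names (`Goblin`, `AGGRO_RANGE`, `MELEE_RANGE`) and the two-state design are my own illustration of the kind of AI described above:

```python
from dataclasses import dataclass
import math

# Hypothetical sketch of a mob "AI": a tiny state machine with an
# aggro radius and two behaviors. All names and numbers here are
# illustrative assumptions, not from any actual engine.

AGGRO_RANGE = 8.0   # player distance that wakes the goblin up
MELEE_RANGE = 1.5   # close enough to stab

@dataclass
class Goblin:
    x: float = 0.0
    y: float = 0.0
    state: str = "idle"

    def tick(self, px: float, py: float) -> str:
        """One AI tick: idle until the player enters aggro range,
        then close and stab. Note there are no fallback states and
        no sanity checks -- which is exactly why this kind of AI
        breaks the moment you step outside its assumptions."""
        dist = math.hypot(px - self.x, py - self.y)
        if self.state == "idle" and dist <= AGGRO_RANGE:
            self.state = "chase"
        if self.state == "chase":
            if dist <= MELEE_RANGE:
                return "stab"
            # Walk one unit straight at the player; a rock or an
            # ankle-high ledge in the way is not our problem.
            step = 1.0 / dist
            self.x += (px - self.x) * step
            self.y += (py - self.y) * step
            return "move"
        return "wait"
```

The point of the sketch is what is missing: there is no "I am stuck" state, no "this is suicidal" check, and no concept of the player doing something outside the aggro-chase-stab script. Within its arena, that is all the fidelity a 10-second goblin needs.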
Take a step or two outside the assumed parameters, however, and the simple AI has no idea how to vary its behavior. It sphexishly follows its programming even if that programming works against its ostensible goals. You can kite enemies right past your perfectly safe allies. They get caught on rocks or try to run laps on buildings instead of making an ankle-high hop. You can turn their powers against them, and they will not stop following a script that has become suicidal.
I occasionally wonder how Deep Blue or one of the other chess supercomputers would react to blatant cheating. Replace one of your pawns with a rook mid-game or take two moves in a row. A human player will smack you and tell you to stop being an idiot. Does the computer even have the parameters to deal with that? I would expect an error and refusal to continue.
H/T to Andrew Gelman for the Xiao-Li Meng link.