# A somewhat natural average-case NP-complete problem

A sampling NP problem is determined by an NP language $L$ and a distribution $D$ that can be sampled in probabilistic polynomial time. The problem is to determine, for an $x$ sampled from $D$, whether $x \in L$. This is a model for the average-case complexity of an NP problem. It is a little-known fact that there are “sampling-NP-complete” problems in a certain sense (definition here). Such problems are rarer and less likely to arise naturally than NP-complete problems, and so are less well known. An example of such a problem is: given a Turing machine $M$, sampled from an exponential distribution on codes for Turing machines, and a uniformly random bitstring $x$, solve the 3SAT instance produced by running $M$ on $x$ for $|x|^2$ steps.
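To make the shape of such an instance concrete, here is a minimal Python sketch of the sampling side only. The machine encoding, its execution, and the emitted 3SAT instance are left abstract, and using a geometric length distribution is one arbitrary way to realize "exponential distribution on codes"; the function name is mine.

```python
import random

def sample_instance(n, seed=None):
    """Sketch: sample one instance of the complete problem -- a
    Turing-machine code M drawn from an exponentially decaying
    distribution over bitstrings, plus a uniformly random input x
    of length n.  Running M on x for n**2 steps (not done here)
    would produce the 3SAT instance to be solved."""
    rng = random.Random(seed)
    # Geometric length distribution: each extra bit of machine
    # code halves the probability, giving an exponential tail.
    m_len = 1
    while rng.random() < 0.5:
        m_len += 1
    machine_code = [rng.randint(0, 1) for _ in range(m_len)]
    # Uniformly random input bitstring of length n.
    x = [rng.randint(0, 1) for _ in range(n)]
    step_budget = n ** 2
    return machine_code, x, step_budget
```

Because short machine codes dominate the distribution, most sampled instances come from one of a few simple machines; the hardness of the problem lives in the exponentially rare long codes.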

It strikes me that idealized forms of roguelike games such as Nethack could provide a somewhat natural example of a sampling-NP-complete problem, and this could give an intuitive picture of what such problems look like. In such a game, the player explores a procedurally generated map, trying to reach a particular target. The player must overcome a diverse variety of obstacles and can acquire a diverse variety of special items to help them. To make this fit the sampling-NP framework, assume the game is deterministic with perfect information, and that the player has a limited number of turns to reach the target (none of these hold in most real roguelike games). Then in an arbitrarily sized world, the player has a fixed probability of getting into the following situation: the player is trapped in some region of the map with only a fixed particular set of items. The effects of these items interact in a peculiar way that imitates running a nondeterministic Turing machine with features of the map as input. If the player can make the Turing machine accept, then they can get to the next stage of the map, after which winning is routine.

Here’s a rough outline of a more formal model that fits this schema:

Consider a finite 2D lattice of locked boxes (this generalizes easily to the infinite lattice). You have keys to boxes $(0,0)$ and $(1,0)$, and you want to open box $(2,0)$. Within each box is either a white or a black token, sampled uniformly. Each box also has two compartments containing a blue token and a red token, built such that opening one compartment destroys the contents of the other. Finally, boxes $(x,y)$ with $x \geq 0$ each have special keys that can open any box under suitable conditions: specifically, for each such key there’s a sequence $(x_0, y_0, c_0), \dots, (x_{n-1}, y_{n-1}, c_{n-1})$, where $x_i, y_i \in \mathbb{Z}$ and $c_i \in \{\mathrm{black}, \mathrm{white}, \mathrm{blue}, \mathrm{red}\}$, such that the key opens box $(x,y)$ when given the $c_i$-colored token from box $(x+x_i, y+y_i)$ for each $i$. Tokens are reusable across multiple keys.

At each box the key is sampled from some probability distribution supported on all the (countably many) possible keys; for example, $n$ could be sampled from an exponential distribution, each $c_i$ sampled uniformly, and each $x_i, y_i$ sampled from a discretized Gaussian distribution.

Then the following happens with positive probability: the first two boxes have white tokens and the keys $[(0, -1, \mathrm{white}), (1, -1, \mathrm{white})]$ and $[(-1, -1, \mathrm{white}), (0, -1, \mathrm{white})]$. This allows opening boxes $(0,k)$ and $(1,k)$ as long as both have white tokens. This continues for $n$ layers, granting certain predetermined keys. Conditional on these keys being just right, the black token nucleates a pattern of open boxes with $x<0$ encoding the history of a 1D cellular automaton with both random and controllable inputs. This can emulate any NP problem with any sampling procedure. An ‘accept’ state can create a pattern of open boxes that spreads back to the start site and to $(2,0)$.
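As one concrete instantiation of the model above, a key and its opening rule might be sampled and checked as in the following Python sketch. The geometric length distribution and the $\sigma = 3$ Gaussian spread are arbitrary illustrative choices, and all names are mine.

```python
import random

COLORS = ("black", "white", "blue", "red")

def sample_key(rng):
    """Sample one key: the length n is geometric (an exponential
    distribution on the positive integers), each color c_i is
    uniform over the four colors, and each offset (x_i, y_i) is a
    rounded Gaussian with sigma = 3 (an arbitrary choice)."""
    n = 1
    while rng.random() < 0.5:
        n += 1
    return [(round(rng.gauss(0, 3)), round(rng.gauss(0, 3)),
             rng.choice(COLORS)) for _ in range(n)]

def key_opens(key, box, tokens):
    """A key opens `box` = (x, y) iff, for every entry (xi, yi, ci),
    the ci-colored token from box (x + xi, y + yi) is available.
    `tokens` maps box coordinates to the set of token colors the
    player holds from that box (tokens are reusable)."""
    x, y = box
    return all(ci in tokens.get((x + xi, y + yi), set())
               for xi, yi, ci in key)
```

For instance, the key $[(0, -1, \mathrm{white}), (1, -1, \mathrm{white})]$ from the text opens box $(0,1)$ exactly when white tokens are held from boxes $(0,0)$ and $(1,0)$, which is how the white-token ladder climbs one layer at a time.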