I already talked about MCMC methods before, but today I want to cover one of the best known methods of all: Metropolis-Hastings. The goal is to obtain samples according to the equilibrium distribution of a given physical system, the Boltzmann distribution. Incidentally, arbitrary probability distributions can also be rewritten in this form, which is what allows for the cross-pollination of methods between probabilistic inference and statistical mechanics (see my older post on this). Since in general we don’t know how to sample directly from the Boltzmann distribution, we need to use some sampling method. Continue reading “An introduction to the metropolis method with python”

# Month: February 2014

## Pinch to zoom in libgdx

So I was a bit confused about how to reproduce the multitouch gesture you often see in mobile gallery apps using libgdx. The idea is to zoom and recenter the viewport such that the points where your fingers are anchored always stay the same (in game coordinates). Assuming you don’t need to rotate, here is the code I came up with:

```java
public class MyGestures implements GestureListener {

    /* more stuff.... */

    private OrthographicCamera camera;  // set up elsewhere
    private final Vector3 touchPos = new Vector3();
    private boolean zooming = false;
    private float cx, cy;      // anchor point: midpoint of the initial fingers, in game coordinates
    private float px, py;      // camera position when the pinch started
    private float initZoom;    // camera zoom when the pinch started

    @Override
    public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2,
                         Vector2 pointer1, Vector2 pointer2) {
        // unproject all four pointer positions into game coordinates
        touchPos.set(initialPointer1.x, initialPointer1.y, 0);
        camera.unproject(touchPos);
        float x1n = touchPos.x;
        float y1n = touchPos.y;

        touchPos.set(initialPointer2.x, initialPointer2.y, 0);
        camera.unproject(touchPos);
        float x2n = touchPos.x;
        float y2n = touchPos.y;

        touchPos.set(pointer1.x, pointer1.y, 0);
        camera.unproject(touchPos);
        float x1p = touchPos.x;
        float y1p = touchPos.y;

        touchPos.set(pointer2.x, pointer2.y, 0);
        camera.unproject(touchPos);
        float x2p = touchPos.x;
        float y2p = touchPos.y;

        float dx1 = x1n - x2n;
        float dy1 = y1n - y2n;
        float initialDistance = (float) Math.sqrt(dx1 * dx1 + dy1 * dy1);

        float dx2 = x1p - x2p;
        float dy2 = y1p - y2p;
        float distance = (float) Math.sqrt(dx2 * dx2 + dy2 * dy2);

        if (!zooming) {
            // first pinch event: store the anchor point and the initial camera state
            zooming = true;  // reset to false in pinchStop()
            cx = (x1n + x2n) / 2;
            cy = (y1n + y2n) / 2;
            px = camera.position.x;
            py = camera.position.y;
            initZoom = camera.zoom;
        } else {
            float nextZoom = (initialDistance / distance) * initZoom;
            /* do some ifs here to check if nextZoom is too zoomed in or out */
            camera.zoom = nextZoom;
            camera.update();

            // recenter so the midpoint of the fingers stays anchored at (cx, cy)
            Vector3 pos = new Vector3((pointer1.x + pointer2.x) / 2,
                                      (pointer1.y + pointer2.y) / 2, 0f);
            camera.unproject(pos);
            float dx = cx - pos.x;
            float dy = cy - pos.y;
            /* do some ifs here to check if we are in bounds */
            camera.translate(dx, dy);
            camera.update();
        }
        return false;
    }
}
```

Of course, you shouldn’t put all of this into a single method: each logical piece of code should live in its own method (and in minesweeper most of it is actually on another object, since I like to keep only code relating to gesture handling on the gesture handler object).

## Simple pattern formation with cellular automata

A cellular automaton is a dynamical system where space, time and the dynamic variable are all discrete. The system is thus composed of a lattice of cells (discrete space), each described by a state (discrete dynamic variable) which evolves to the next time step (discrete time) according to a dynamic rule:

\begin{equation}

x_i^{t+1} = f(x_i^t, \Omega_i^t, \xi)

\end{equation}

This rule generally depends on the state of the target cell $x_i^t$, the state of its neighbors $\Omega_i^t$, and a number of auxiliary external variables $\xi$. Since all these inputs are discrete, we can enumerate them and then define the dynamic rule by a transition table. The transition table maps each possible input to the next state for the cell. As an example consider the elementary 1D cellular automaton. In this case the neighborhood consists of only the 2 nearest neighbors $\Omega_i^t = \{x_{i-1}^t, x_{i+1}^t\}$ and no external variables.
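To make the transition-table idea concrete, here is a minimal sketch (my own, not from the original post) of an elementary 1D automaton where the table maps each possible (left, self, right) input to the next state; rule 110 is used as an example, but any of the 256 elementary rules fits this scheme:

```python
# Elementary 1D cellular automaton driven by an explicit transition table.
# The table maps each (left, self, right) neighborhood to the next state.
rule_number = 110  # any value in 0..255 defines an elementary rule
table = {
    (l, c, r): (rule_number >> (l * 4 + c * 2 + r)) & 1
    for l in (0, 1) for c in (0, 1) for r in (0, 1)
}

def step(cells):
    # apply the rule to every cell, with periodic boundary conditions
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

state = [0] * 10 + [1] + [0] * 10  # a single seed cell
state = step(state)
```

Enumerating the eight inputs and reading the rule number's bits is exactly the transition table described above, just written as a dictionary.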

In general, there are two types of neighborhoods, commonly classified as Moore or Von Neumann. A Moore neighborhood of radius $r$ corresponds to all cells within a hypercube of side $2r+1$ centered at the current cell, i.e. within Chebyshev distance $r$. In 2D we can write it as $\Omega_{ij}^t = \{x^t_{kl}:|i-k|\leq r \wedge |j-l|\leq r\}\setminus x^t_{ij}$. The Von Neumann neighborhood is more restrictive: only cells within a Manhattan distance of $r$ belong to the neighborhood. In 2D we write $\Omega_{ij}^t = \{x^t_{kl}:|i-k|+|j-l| \leq r\}\setminus x^t_{ij}$.
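A quick way to see the difference between the two is as membership tests (an illustrative sketch; the function names are mine):

```python
def in_moore(i, j, k, l, r):
    # Moore: Chebyshev distance at most r, excluding the cell itself
    return (i, j) != (k, l) and max(abs(i - k), abs(j - l)) <= r

def in_von_neumann(i, j, k, l, r):
    # Von Neumann: Manhattan distance at most r, excluding the cell itself
    return (i, j) != (k, l) and abs(i - k) + abs(j - l) <= r

# With r = 1 in 2D, Moore has 8 neighbors and Von Neumann only 4.
moore = sum(in_moore(0, 0, k, l, 1)
            for k in range(-1, 2) for l in range(-1, 2))
vn = sum(in_von_neumann(0, 0, k, l, 1)
         for k in range(-1, 2) for l in range(-1, 2))
```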

Finally it is worth elucidating the concept of totalistic automata. In high dimensional spaces, the number of possible configurations of the neighborhood $\Omega$ can be quite large. As a simplification, we may consider instead as an input to the transition table the number of neighbors in a specific state, $N_k = \sum_{x \in \Omega}\delta(x = k)$. If there are only 2 states, we need only consider $N_1$, since $N_0 = |\Omega| - N_1$, with $|\Omega|$ the number of neighbors. For an arbitrary number $m$ of states, we will obviously need to consider $m-1$ such inputs to fully characterize the neighborhood. Even then, each input $N_k$ can take $|\Omega|+1$ different values, which might be too much. In such cases we may consider only whether $N_k$ is above some threshold. Then we can define as an input the boolean variable

\begin{equation}

P_{k,T}=\begin{cases}

1& \text{if $N_k \geq T$},\\

0& \text{if $N_k < T$}.

\end{cases}

\end{equation}
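The totalistic inputs $N_k$ and the thresholded variable $P_{k,T}$ can be sketched directly from their definitions (helper names are mine):

```python
def neighbor_counts(neighborhood, m):
    # N_k = number of neighbors in state k, for k = 0 .. m-1
    return [sum(1 for x in neighborhood if x == k) for k in range(m)]

def threshold_input(neighborhood, k, T):
    # P_{k,T} = 1 if at least T neighbors are in state k, else 0
    return 1 if sum(1 for x in neighborhood if x == k) >= T else 0

omega = [0, 1, 1, 0, 1, 0, 0, 1]  # e.g. a radius-1 Moore neighborhood (8 cells)
counts = neighbold = neighbor_counts(omega, 2)
p = threshold_input(omega, 1, 3)
```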

In the simulation you can find here, I considered a cellular automaton with the following properties: number of states $m=2$; Moore neighborhood with radius $r=1$; lattice size $L_x \times L_y$; and 3 inputs for the transition table:

- Current state $x_{ij}^t$
- Neighborhood state $P_{1,T}$ with $T$ unspecified
- One external input $\xi$:

\begin{equation}

\xi_{ij}=\begin{cases}

1& \text{if $i \geq L_x/2$},\\

0& \text{if $i < L_x/2$}.

\end{cases}

\end{equation}

- Initial condition $x_{ij} = 0 \; \forall_{ij}$

A deterministic simulation under these conditions yields only a few steady states: homogeneous 1 or 0, half the lattice 1 and the other half 0, and oscillations between combinations of the previous.
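The update step of such an automaton could be sketched as follows. Note that the post leaves both $T$ and the transition-table entries unspecified, so the values below are placeholders of my own choosing, not the rule used in the actual simulation:

```python
import itertools

Lx, Ly = 16, 16
T = 3  # threshold for P_{1,T}; the post leaves T unspecified

# Transition table indexed by (current state, P_{1,T}, external input xi).
# These entries are arbitrary placeholders, not the post's actual rule.
table = {inp: inp[1] ^ inp[2] for inp in itertools.product((0, 1), repeat=3)}

def xi(i, j):
    # external input: 1 on the right half of the lattice
    return 1 if i >= Lx // 2 else 0

def step(grid):
    nxt = [[0] * Ly for _ in range(Lx)]
    for i in range(Lx):
        for j in range(Ly):
            # count 1s in the radius-1 Moore neighborhood, periodic boundaries
            n1 = sum(grid[(i + di) % Lx][(j + dj) % Ly]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0))
            p = 1 if n1 >= T else 0
            nxt[i][j] = table[(grid[i][j], p, xi(i, j))]
    return nxt

grid = [[0] * Ly for _ in range(Lx)]  # initial condition: all zeros
grid = step(grid)
```

With this placeholder table, one step from the all-zero initial condition already imprints the external input: the right half of the lattice flips to 1.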

One possibility would be to add noise to the cellular automaton in order to provide more interesting dynamics. There are two ways to add noise to a cellular automaton:

The most straightforward way is to perform the following procedure at each time step:

- Apply the deterministic dynamics to the whole lattice
- For each lattice site $ij$, invert the state $x_{ij}$ with probability $p$

This procedure of course only works for $m=2$. With more states there is no obvious way to generalize the inversion, and we need to use a proper Monte Carlo method to get the dynamics.
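The two-step noisy update above can be sketched like this (my own sketch; any deterministic update function can be plugged in):

```python
import random

def noisy_step(grid, deterministic_step, p):
    # 1) apply the deterministic CA dynamics to the whole lattice
    grid = deterministic_step(grid)
    # 2) flip each site independently with probability p
    #    (the XOR flip only makes sense for m = 2 states)
    return [[cell ^ 1 if random.random() < p else cell for cell in row]
            for row in grid]

identity = lambda g: g  # trivial dynamics, to isolate the noise step

g_all = noisy_step([[0] * 4 for _ in range(4)], identity, 1.0)   # every site flips
g_none = noisy_step([[0] * 4 for _ in range(4)], identity, 0.0)  # no site flips
```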

A second way is to implement a probabilistic cellular automaton. In this case the transition table is generalized to a Markov matrix: each input is mapped not to a specific state but to a set of $m$ probabilities, one for a transition to each state. Naturally, for each input these probabilities sum to one. We then have $m$ times more parameters than before.
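A probabilistic transition table and the corresponding sampling step could look like this (the probability values are made up for illustration; for simplicity the input here is just the current state):

```python
import random

# Markov-matrix transition table for m = 3 states: each input maps to a
# probability vector over the m possible next states (each row sums to one).
m = 3
table = {
    (0,): [0.8, 0.1, 0.1],
    (1,): [0.2, 0.5, 0.3],
    (2,): [0.0, 0.4, 0.6],
}

def sample_next(inp, rng):
    # draw the next state by inverting the cumulative distribution
    probs = table[inp]
    u = rng.random()
    acc = 0.0
    for state, p in enumerate(probs):
        acc += p
        if u < acc:
            return state
    return m - 1  # guard against floating-point round-off

rng = random.Random(42)
draws = [sample_next((0,), rng) for _ in range(1000)]
```

Starting from input state 0, roughly 80% of the draws should stay at 0, matching the first row of the table.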

## Pilot waves in fluid dynamics

This is so cool. Source