Simulating networks of nonlinear stochastic systems

arXiv:1209.3700

In this paper we attempt to find a computationally efficient way to numerically simulate networks with nonlinear stochastic dynamics. By this I mean a continuous dynamical model where the differential equation for each variable depends nonlinearly on some or all variables of the system and has additive noise. If $x$ is a vector with all variables and $\eta$ is a random vector of the same size as $x$ with some unspecified distribution, the dynamics can be compactly described as $$\frac{dx}{dt}=f(x,t)+\eta$$
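
To make the setup concrete, here is a minimal sketch of integrating one path of such a system with the Euler-Maruyama scheme, under the simplifying assumption that $\eta$ is Gaussian white noise of amplitude `sigma`; the drift `f`, the step size and all other parameters are placeholders rather than anything specified in the paper.

```python
import numpy as np

def euler_maruyama(f, x0, t_span, dt, sigma, rng=None):
    """Integrate one path of dx/dt = f(x, t) + eta, with eta Gaussian."""
    rng = rng or np.random.default_rng()
    n_steps = int((t_span[1] - t_span[0]) / dt)
    x = np.asarray(x0, dtype=float)
    t = t_span[0]
    path = np.empty((n_steps + 1, x.size))
    path[0] = x
    for i in range(n_steps):
        # Deterministic drift step plus noise scaled by sqrt(dt),
        # the standard Euler-Maruyama discretisation.
        x = x + f(x, t) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.size)
        t += dt
        path[i + 1] = x
    return path
```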

The challenge lies in the nonlinearity combined with the stochasticity. Were only one of them present, the problem would be simple. A deterministic nonlinear problem can be straightforwardly integrated with an ODE package, while a linear stochastic system can be reduced to a system of ODEs for the moments of the probability distribution function (PDF). A full solution would require a Monte Carlo algorithm, simulating enough paths to estimate the PDF of $x$ at each time point. For networks with many nodes we are haunted by the curse of dimensionality: the volume to be sampled grows exponentially with the number of nodes, and so does the number of simulated paths required to get a good approximation of the distribution at later time points. In systems where there is a well-defined mode around which most of the probability mass is concentrated, we should be able to derive an analytic approximation which is more tractable. This is exactly what we try to do in the paper.
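
To see what the brute-force baseline looks like, the sketch below reuses the `euler_maruyama` function from above to simulate an ensemble of paths for a hypothetical two-node nonlinear network and histogram the marginal of one variable at the final time; the drift, noise amplitude and path count are illustrative choices, not values from the paper.

```python
# Hypothetical two-node network with a cubic nonlinearity,
# chosen purely for illustration.
def f(x, t):
    return np.array([x[1] - x[0] ** 3,
                     -x[0] - 0.5 * x[1]])

# Simulate an ensemble of paths; shape (n_paths, n_steps + 1, n_vars).
paths = np.stack([
    euler_maruyama(f, x0=[1.0, 0.0], t_span=(0.0, 10.0), dt=0.01, sigma=0.2)
    for _ in range(1000)
])

# Histogram estimate of the marginal PDF of the first variable at the
# final time; with more nodes, covering the state space this way
# quickly becomes infeasible.
final_x0 = paths[:, -1, 0]
hist, edges = np.histogram(final_x0, bins=50, density=True)
```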

Information processing systems post-mortem

Slides from the talk

Yesterday I gave an informal talk about information processing systems and lessons learned from the fields of AI and biology. This was a mix of introductory information theory and some philosophical ramblings.

While creating this talk I took the time to review several concepts from machine learning and AI. In Jaynes’ book on probability theory, Bayesian inference is presented as a completely general system for logic under uncertainty. The gist of the argument is that an inference system which obeys certain internal consistency requirements must use probability theory as its formal framework. A hypothetical information processing system should obey such consistency requirements when assigning levels of plausibility to all pieces of information, which means its workings should be built upon probability theory. As a bonus, all the theory is already developed, so we need only apply it!
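
As a minimal illustration of the bookkeeping this entails, here is a single Bayesian update of the plausibility of a hypothesis; the prior and likelihoods are invented numbers, not anything from the talk.

```python
# Toy Bayesian update: plausibility of hypothesis H after observing data D.
# All numbers are made up for illustration.
p_H = 0.3              # prior plausibility P(H)
p_D_given_H = 0.8      # likelihood P(D | H)
p_D_given_not_H = 0.1  # likelihood P(D | not H)

# Bayes' rule: P(H | D) = P(D | H) P(H) / P(D)
p_D = p_D_given_H * p_H + p_D_given_not_H * (1 - p_H)
p_H_given_D = p_D_given_H * p_H / p_D
print(p_H_given_D)     # ~0.774: the data has raised the plausibility of H
```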

To implement such a system we make a connection with biology. I started by arguing that an organism which wants to maximise its long-term population growth must be efficient at decoding environmental inputs and responding to them. Thus if we define the long-term viability of an organism implementing a given information processing system as a fitness function, we can obtain good implementations of our system by maximising that function.
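
Read operationally, this suggests a search loop: parameterise the information processing system, score each parameter setting with a fitness function standing in for long-term viability, and keep what improves. The sketch below is one such loop, a simple (1+lambda) evolution strategy; the names, parameters and toy fitness function are all hypothetical, and the talk does not commit to this particular algorithm.

```python
import numpy as np

def evolve(fitness, theta0, n_generations=200, pop_size=20,
           mutation_scale=0.1, rng=None):
    """Maximise a fitness function with a simple (1 + lambda) strategy."""
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    best = fitness(theta)
    for _ in range(n_generations):
        # Mutate the current parent to produce a population of offspring.
        offspring = theta + mutation_scale * rng.standard_normal(
            (pop_size, theta.size))
        scores = np.array([fitness(o) for o in offspring])
        # Keep the best offspring only if it beats the parent.
        if scores.max() > best:
            best = scores.max()
            theta = offspring[scores.argmax()]
    return theta, best

# Toy example: the 'system' is a quadratic bump whose peak we want to find.
theta_opt, fit = evolve(lambda th: -np.sum((th - 2.0) ** 2),
                        theta0=[0.0, 0.0])
```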

Opening the labs again

After a few weeks of musing over what to do with this old domain, I decided it should be a place to keep a decent weblog. Since lately I’ve been boiling over with half-complete ideas and projects, perhaps it will be helpful to write them down with some structure.

As usual, the hardest part about creating an internet presence is devising a witty name. Having started with the inspiring name of ‘blog’, I settled on prototype: seemingly a fitting description for most of my projects, past and future.