The Computational Structure of Spike Trains

Robert Haslinger,1,2 Kristina Lisa Klinkner,3 and Cosma Rohilla Shalizi3,4

1 Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown MA
2 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge MA
3 Department of Statistics, Carnegie Mellon University, Pittsburgh PA
4 Santa Fe Institute, Santa Fe NM

(Dated: September 2008; January 2009)
|
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically-identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically, (2) the randomness (internal entropy rate) of the minimal spike-generating process, and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred non-parametrically from the data, making only mild regularity assumptions, via the Causal State Splitting Reconstruction (CSSR) algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
|
I. INTRODUCTION
|
The recognition that neurons are computational devices is one of the foundations of modern neuroscience (McCulloch & Pitts, 1943). However, determining the functional form of such computation is extremely difficult, if only because, while one often knows the output (the spikes), the input (synaptic activity) is almost always unknown. Often, therefore, scientists must draw inferences about the computation from its results, namely the output spike trains and their statistics. In this vein, many researchers have used information theory to determine, via calculation of the entropy rate, a neuron's channel capacity, i.e., how much information the neuron could conceivably transmit, given the distribution of observed spikes (Rieke et al., 1997). However, entropy quantifies randomness, and says little about how much structure a spike train has, or the amount and type of computation which must, at a minimum, have taken place to produce this structure. Here, and throughout this paper, we mean "computational structure" information-theoretically, i.e., the most compact effective description of a process capable of statistically reproducing the observed spike trains. The complexity of this structure is the number of bits needed to describe it. This is different from the algorithmic information content of a spike train, which is the number of bits needed to reproduce the latter exactly, describing not only its regularities, but also its accidental, noisy details.
|
Our goal is to develop rigorous yet practical methods for determining the minimal computational structure necessary and sufficient to generate neural spike trains. We are able to do this through non-parametric analysis of the directly-observable spike trains, without resorting to a priori assumptions about what kind of structure they have. We do this by identifying the minimal hidden Markov model (HMM) which can statistically predict the future of the spike train without loss of information. This HMM also generates spike trains with the same statistics as the observed train. It thus defines a program which describes the spike train's computational structure, letting us quantify, in bits, the structure's complexity.
|
From multiple directions, several groups, including our own, have shown that minimal generative models of time series can be discovered by clustering histories into "states", based on their conditional distributions over future events (Crutchfield & Young, 1989; Grassberger, 1986; Jaeger, 2000; Knight, 1975; Littman et al., 2002; Shalizi & Crutchfield, 2001). The observed time series need not be Markovian (few spike trains are), but the construction always yields the minimal HMM capable of generating and predicting the original process. Following Shalizi (2001); Shalizi & Crutchfield (2001), we will call such an HMM a "Causal State Model" (CSM). Within this framework, the model discovery algorithm called Causal State Splitting Reconstruction, or CSSR (Shalizi & Klinkner, 2004), is an adaptive non-parametric method which consistently estimates a system's CSM from time-series data. In this paper we adapt CSSR for use in spike train analysis.
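To make the flavor of the splitting procedure concrete, here is a minimal Python sketch of a CSSR-style reconstruction for a binary spike train. It is an illustration of the idea only, not the published CSSR implementation: the chi-square homogeneity test, the significance level, the smoothing constant, and all function names are our own assumptions.

```python
import numpy as np
from collections import Counter
from scipy.stats import chi2_contingency

def next_symbol_counts(spikes, history):
    """Counts of the symbol (0 or 1) that follows each occurrence of `history`."""
    L = len(history)
    counts = Counter()
    for t in range(L, len(spikes)):
        if tuple(spikes[t - L:t]) == history:
            counts[spikes[t]] += 1
    return np.array([counts[0], counts[1]], dtype=float)

def consistent(c_hist, c_state, alpha=0.01):
    """Crude chi-square homogeneity test on two next-symbol count vectors."""
    if c_hist.sum() == 0:
        return True                                  # unseen history: no evidence to split
    table = np.vstack([c_hist, c_state]) + 1e-9      # tiny smoothing avoids zero cells
    _, p, _, _ = chi2_contingency(table)
    return p >= alpha

def cssr_sketch(spikes, max_history=4):
    """CSSR-style state splitting for a binary spike train (illustrative only).
    Each state pools the next-symbol counts of its member histories."""
    states = [{"histories": [()], "counts": next_symbol_counts(spikes, ())}]
    for ell in range(1, max_history + 1):
        for state in list(states):                   # snapshot: new states join next round
            for hist in list(state["histories"]):
                if len(hist) != ell - 1:
                    continue
                for symbol in (0, 1):                # extend one step further into the past
                    child = (symbol,) + hist
                    c = next_symbol_counts(spikes, child)
                    if c.sum() == 0:
                        continue                     # this history never occurs in the data
                    if consistent(c, state["counts"]):
                        home = state                 # child predicts like its parent's state
                    else:                            # otherwise find a matching state...
                        home = next((s for s in states
                                     if consistent(c, s["counts"])), None)
                        if home is None:             # ...or split off a brand-new state
                            home = {"histories": [], "counts": np.zeros(2)}
                            states.append(home)
                    home["histories"].append(child)
                    home["counts"] = home["counts"] + c
    return states

# a strictly alternating train: histories ending in 0 vs 1 split into two states
states = cssr_sketch([0, 1] * 5000, max_history=3)
for s in states:
    print(sorted(s["histories"]), s["counts"])
```

Histories are grown one symbol at a time into the past; a history stays with its parent state when its estimated next-symbol distribution is statistically indistinguishable from that state's, and otherwise migrates to a matching state or founds a new one. This splitting is what produces the minimal set of predictive states.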
|
CSSR provides us with non-parametric estimates of the time- and history-dependent spiking probabilities found by more familiar parametric analyses. Unlike those analyses, it is also capable, in the limit of infinite data, of capturing all the information about the computational structure of the spike-generating process contained in the spikes themselves. In particular, the CSM quantifies the complexity of the spike-generating process by showing how much information about the history of the spikes is relevant to their future, i.e., how much information is needed to reproduce the spike train statistically. This is equivalent to the log of the effective number of statistically-distinct states of the process (Crutchfield & Young, 1989; Grassberger, 1986; Shalizi & Crutchfield, 2001). While this is not the same as the algorithmic information content, we show that CSMs can also approximate the average algorithmic information content, splitting it into three parts: (1) the generative process's complexity in our sense; (2) the internal entropy rate of the generative process, the extra information needed to describe the exact state transitions undergone while generating the spike train; and (3) the residual randomness in the spikes, unconstrained by the generative process. The first of these quantifies the spike train's structure, the last two its randomness.
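In symbols, the decomposition has the schematic form below. The notation here is ours for this sketch only (K for the algorithmic information content of a length-T spike train, C for the complexity of the minimal generative process, h_int for its internal entropy rate, and h_res for the residual noise rate); the precise definitions follow in §II.C.

```latex
\mathbf{E}\left[ K\!\left( X_1^T \right) \right] \;\approx\;
  \underbrace{C}_{\text{(1) structure}}
  \;+\; \underbrace{T\, h_{\mathrm{int}}}_{\text{(2) internal entropy}}
  \;+\; \underbrace{T\, h_{\mathrm{res}}}_{\text{(3) residual noise}}
```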
|
Below, we give precise definitions of these quantities, both their ensemble averages (§II.C) and their functional dependence on time (§II.D). The time-dependent versions allow us to determine when the neuron is traversing states requiring complex descriptions. Our methods put hard numerical lower bounds on the amount of computational structure which must be present to generate the observed spikes. They also quantify, in bits, the extent to which the neuron is driven by external forces. We demonstrate our approach using both simulated and experimentally recorded single-neuron spike trains. We discuss the interpretation of our measures, and how they add to our understanding of neuronal computation.
|
II. THEORY AND METHODS
|
Throughout this paper we treat spike trains as stochastic binary time series, with time divided into discrete, equal-duration bins (typically at one millisecond resolution); "1" corresponds to a spike and "0" to no spike. Our aim is to find a minimal description of the computational structure present in such a time series. Heuristically, the structure present in a spike train can be described by a "program" which can reproduce the spikes statistically. The information needed to describe this program (loosely speaking, the program length) quantifies the structure's complexity. Our approach uses minimal, optimally predictive HMMs, or Causal State Models (CSMs), reconstructed from the data, to describe the program. (We clarify our use of "minimal" below.) The CSMs are then used to calculate various measures of the computational structure, such as its complexity.
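For concreteness, the following short Python sketch shows this binning convention; the function name and the choice to collapse multiple spikes in one bin to a single "1" are our illustrative conventions, not prescribed by the method.

```python
import numpy as np

def bin_spike_train(spike_times_s, duration_s, bin_ms=1.0):
    """Convert spike times (in seconds) into a binary series at bin_ms resolution.
    A bin containing one or more spikes becomes 1; an empty bin becomes 0."""
    n_bins = int(np.ceil(duration_s * 1000.0 / bin_ms))
    idx = np.floor(np.asarray(spike_times_s) * 1000.0 / bin_ms).astype(int)
    idx = idx[(idx >= 0) & (idx < n_bins)]
    binary = np.zeros(n_bins, dtype=np.uint8)
    binary[idx] = 1          # multiple spikes in a bin collapse to a single 1
    return binary

# e.g. spikes at 1.2 ms, 3.7 ms, and 3.8 ms in a 10 ms window
print(bin_spike_train([0.0012, 0.0037, 0.0038], duration_s=0.010))
# -> [0 1 0 1 0 0 0 0 0 0]
```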
|
The states are chosen so that they are optimal predictors of the spike train's future, using only the information available from the train's history. (We discuss the limitations of this below.) Specifically, the states S_t are defined by grouping the histories of past spiking activity X_t into equivalence classes, each class collecting all histories that yield the same conditional distribution over the train's future.
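Written schematically in our notation (x^- and x'^- for particular observed histories, X^- for the past and X^+ for the future of the train), two histories fall in the same causal state exactly when they predict the same distribution of futures:

```latex
x^{-} \sim x'^{-}
\quad\Longleftrightarrow\quad
\Pr\!\left( X^{+} \mid X^{-} = x^{-} \right)
  = \Pr\!\left( X^{+} \mid X^{-} = x'^{-} \right)
```

Each equivalence class of histories is one causal state; this is the construction formalized in Shalizi & Crutchfield (2001).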
|