BCPNN

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations, and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology.[1] This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.

The basic model is a recurrent neural network, losing the strict interpretation of its activations as probabilities[4] but becoming a possible abstract model of biological neural networks and associative memory.[5][6][7][8][9]

BCPNN has been used for machine learning classification[10] and data mining, for example for discovery of adverse drug reactions.[11]  The BCPNN learning rule has also been used to model biological synaptic plasticity and intrinsic excitability in large-scale spiking neural network (SNN) models of cortical associative memory[12][13] and reward learning in Basal ganglia.[14]

Network architecture

The BCPNN network architecture is modular in terms of hypercolumns and minicolumns. This modular structure is inspired by and generalized from the modular structure of the mammalian cortex. In abstract models, the minicolumns serve as the smallest units, and they typically feature a membrane time constant and adaptation. In spiking models of cortex, a layer 2/3 minicolumn is typically represented by some 30 pyramidal cells and one double bouquet cell.[15] The latter turns the negative BCPNN weights formed between neurons with anti-correlated activity into di-synaptic inhibition.

Lateral inhibition within the hypercolumn makes it a soft winner-take-all module. Looking at real cortex, the number of minicolumns within a hypercolumn is on the order of a hundred, which makes the activity sparse, at the level of 1% or less, given that hypercolumns can also be silent.[16] A BCPNN network the size of the human neocortex would have a couple of million hypercolumns, partitioned into some hundred areas. In addition to sparse activity, a large-scale BCPNN would also have very sparse connectivity, given that the real cortex is sparsely connected, at the level of 0.01–0.001% on average.
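
The soft winner-take-all behaviour of a hypercolumn can be abstracted as a softmax normalization over the support values of its minicolumn units. The following is a minimal illustrative sketch of that abstraction; the function name, array shapes and temperature parameter are illustrative choices rather than part of any published BCPNN implementation.

    import numpy as np

    def soft_wta(support, n_hypercolumns, n_minicolumns, temperature=1.0):
        # Softmax within each hypercolumn: an abstraction of the local lateral
        # inhibition that turns the hypercolumn into a soft winner-take-all module.
        s = support.reshape(n_hypercolumns, n_minicolumns) / temperature
        s = s - s.max(axis=1, keepdims=True)      # subtract max for numerical stability
        e = np.exp(s)
        return (e / e.sum(axis=1, keepdims=True)).reshape(-1)

    # Example: 3 hypercolumns with 4 minicolumn units each.
    rng = np.random.default_rng(0)
    activity = soft_wta(rng.normal(size=12), n_hypercolumns=3, n_minicolumns=4)
    print(activity.reshape(3, 4).sum(axis=1))     # activity in each hypercolumn sums to 1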

Bayesian-Hebbian learning rule

The BCPNN learning rule was derived from Bayes' rule and is Hebbian, such that neural units with activity correlated over time get excitatory connections between them, whereas anti-correlation generates inhibition and lack of correlation gives zero connections. The independence assumptions are the same as in the naïve Bayes formalism. BCPNN represents a straightforward way of deriving a neural network from Bayes' rule.[2][3][17] In order to allow the use of the standard equation for propagating activity between neurons, a transformation to log space was necessary. The basic equations for the intrinsic excitability β_j of a postsynaptic unit j and the synaptic weight w_ij between a presynaptic unit i and a postsynaptic unit j are:

    β_j = log(P_j)
    w_ij = log(P_ij / (P_i · P_j))

Figure: Schematic flow of BCPNN update equations reformulated as spike-based plasticity. (A) The pre- (A–D, red) and postsynaptic (A–D, blue) neuron spike trains are presented as arbitrary example input patterns. Each subsequent row (B–D) corresponds to a single stage in the exponentially weighted moving average (EWMA) estimate of the terms used in the incremental Bayesian weight update. (B) Z traces low-pass filter the input spike trains. (C) E traces compute a low-pass filtered representation of the Z traces at a slower time scale. Co-activity now enters in a mutual trace (C,D, black). (D) E traces feed into P traces, which have the slowest plasticity and the longest memory. κ represents a "print-now" signal that modulates the learning rate.

where the activation and co-activation probabilities P_i, P_j and P_ij are estimated from the training set, which can be done e.g. by exponentially weighted moving averages (see Figure).
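
A minimal sketch of this estimation step is given below, assuming unit activations in [0, 1], a single fixed EWMA smoothing factor and a small epsilon to avoid taking the logarithm of zero; the function and variable names are illustrative rather than taken from a reference implementation.

    import numpy as np

    def bcpnn_weights(patterns, alpha=0.01, eps=1e-8):
        # Estimate activation (P_i) and co-activation (P_ij) probabilities with an
        # exponentially weighted moving average over the training patterns, then
        # form the bias and weight terms:
        #   beta_j = log(P_j),   w_ij = log(P_ij / (P_i * P_j)).
        n_units = patterns.shape[1]
        p = np.full(n_units, eps)               # running P_i estimates
        pij = np.full((n_units, n_units), eps)  # running P_ij estimates
        for x in patterns:                      # x: unit activations in [0, 1]
            p = (1 - alpha) * p + alpha * x
            pij = (1 - alpha) * pij + alpha * np.outer(x, x)
        beta = np.log(p)                        # intrinsic excitability of each unit
        w = np.log(pij / np.outer(p, p))        # synaptic weight matrix
        return beta, w

    # Example: weights learned from random binary patterns over 8 units.
    rng = np.random.default_rng(1)
    beta, w = bcpnn_weights((rng.random((500, 8)) < 0.3).astype(float))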

There have been proposals for a biological interpretation of the BCPNN learning rule. The presynaptic Z_i trace may represent binding of glutamate to NMDA receptors, whereas the postsynaptic Z_j trace could represent a back-propagating action potential reaching the synapse. The conjunction of these events leads to Ca2+ influx via NMDA channels, CaMKII activation, AMPA channel phosphorylation, and eventually enhanced synaptic conductance.

The Z traces are further filtered into the E and P traces, which serve as temporal buffers for activity-dependent plasticity and learning.
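
The cascade of traces can be sketched as a chain of exponentially weighted moving averages updated once per time step, as below. The time constants, the discrete Euler updates and the way the print-now signal κ gates the P traces are illustrative assumptions rather than parameters of a specific published model.

    import numpy as np

    def update_traces(spike_pre, spike_post, tr, dt=1.0,
                      tau_z=10.0, tau_e=100.0, tau_p=1000.0, kappa=1.0):
        # One Euler step of the Z -> E -> P cascade of exponentially weighted
        # moving averages; kappa is the "print-now" signal gating the slow P traces.
        tr['zi'] += dt * (spike_pre - tr['zi']) / tau_z          # fast presynaptic trace
        tr['zj'] += dt * (spike_post - tr['zj']) / tau_z         # fast postsynaptic trace
        tr['ei'] += dt * (tr['zi'] - tr['ei']) / tau_e           # slower filtering of Z traces
        tr['ej'] += dt * (tr['zj'] - tr['ej']) / tau_e
        tr['eij'] += dt * (tr['zi'] * tr['zj'] - tr['eij']) / tau_e   # mutual co-activity trace
        tr['pi'] += dt * kappa * (tr['ei'] - tr['pi']) / tau_p   # slowest traces, longest memory
        tr['pj'] += dt * kappa * (tr['ej'] - tr['pj']) / tau_p
        tr['pij'] += dt * kappa * (tr['eij'] - tr['pij']) / tau_p
        return tr

    # Example: repeated near-coincident pre/post spikes build up a positive weight.
    tr = {k: 1e-6 for k in ('zi', 'zj', 'ei', 'ej', 'eij', 'pi', 'pj', 'pij')}
    for t in range(5000):
        tr = update_traces(spike_pre=float(t % 50 == 0), spike_post=float(t % 50 == 1), tr=tr)
    w_ij = np.log(tr['pij'] / (tr['pi'] * tr['pj']))             # weight from the P traces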

Models of brain systems and functions

The cortex-inspired modular architecture of BCPNN has been the basis for several spiking neural network models of cortex aimed at studying its associative memory functions. In these models, a minicolumn comprises about 30 model pyramidal cells, and a hypercolumn comprises ten or more such minicolumns together with a population of basket cells that mediate local feedback inhibition. A modelled network is composed of about ten or more such hypercolumns. Connectivity is excitatory within minicolumns, while feedback inhibition between minicolumns in the same hypercolumn is mediated via the model basket cells. Long-range connectivity between hypercolumns is sparse and excitatory and is typically set up to form a number of distributed cell assemblies representing earlier encoded memories. Neuron and synapse properties have been tuned to represent their real counterparts in terms of e.g. spike frequency adaptation and fast non-Hebbian synaptic plasticity.
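
The modular layout described above can be sketched at the population level as follows; the population counts and the 10% long-range connection probability are illustrative stand-ins rather than parameters of any particular published model.

    import numpy as np

    # Population-level sketch: 10 hypercolumns, each with 10 minicolumns
    # (each minicolumn standing in for ~30 pyramidal cells) plus one basket-cell
    # population per hypercolumn mediating local feedback inhibition.
    n_hc, n_mc = 10, 10
    n_units = n_hc * n_mc
    hc_of = np.repeat(np.arange(n_hc), n_mc)     # hypercolumn index of every minicolumn

    same_hc = hc_of[:, None] == hc_of[None, :]

    # Within a hypercolumn: feedback inhibition between different minicolumns,
    # relayed through the local basket-cell population.
    local_inhibition = same_hc & ~np.eye(n_units, dtype=bool)

    # Between hypercolumns: sparse, excitatory long-range connectivity that the
    # learning rule shapes into distributed cell assemblies.
    rng = np.random.default_rng(2)
    long_range_excitation = ~same_hc & (rng.random((n_units, n_units)) < 0.1)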

These cortical models have mainly been used to provide a better understanding of the mechanisms underlying cortical dynamics and the oscillatory structure associated with different activity states.[18] Cortical oscillations in the range from theta over alpha and beta to gamma are generated by this model. The embedded memories can be recalled from partial input, and when activated they show signs of fixed-point attractor dynamics, though neural adaptation and synaptic depression terminate the activity within some hundred milliseconds. Notably, a few cycles of gamma oscillations are generated during such a brief memory recall. Cognitive phenomena like attentional blink and its modulation by benzodiazepine have also been replicated in this model.[19]

In recent years, Hebbian plasticity has been incorporated into this cortex model and simulated with abstract non-spiking as well as spiking neural units.[17] This made it possible to demonstrate online learning of temporal sequences[20] as well as one-shot encoding and immediate recall in human word-list learning.[12] These findings further led to the proposal and investigation of a novel theory of working memory based on fast Hebbian synaptic plasticity.[13]

A similar approach was applied to model reward learning and behavior selection in Go-NoGo connected non-spiking and spiking neural network models of the basal ganglia.[14][21]

Machine learning applications

The point-wise mutual information weights of BCPNN have long been one of the standard methods for detection of adverse drug reactions.[11]

BCPNN has recently been successfully applied to machine learning classification benchmarks, most notably the handwritten digits of the MNIST database. The BCPNN approach uses biologically plausible learning and structural plasticity for unsupervised generation of a sparse hidden representation, followed by a one-layer classifier that associates this representation with the output layer.[10] It achieves a classification performance of around 98% on the full MNIST test set, comparable to other methods based on unsupervised representation learning.[22] The performance is slightly lower than that of the best methods, which employ end-to-end error back-propagation. However, their higher performance comes at the cost of lower biological plausibility and higher complexity of the learning machinery. The BCPNN method is also quite well suited for semi-supervised learning.
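
As a rough sketch of the second stage, a one-layer classifier can associate a given sparse hidden representation with class labels using the same probability-based weight rule. The unsupervised generation of the hidden layer itself is not shown, and all names and sizes below are illustrative.

    import numpy as np

    def train_readout(hidden, labels, n_classes, eps=1e-8):
        # One-layer classifier: BCPNN-style weights between hidden units and class
        # units, estimated from (co-)activation frequencies over the training set.
        y = np.eye(n_classes)[labels]                   # one-hot class activations
        p_h = hidden.mean(axis=0) + eps                 # P(hidden unit active)
        p_y = y.mean(axis=0) + eps                      # P(class)
        p_hy = hidden.T @ y / len(hidden) + eps         # P(hidden unit and class co-active)
        w = np.log(p_hy / np.outer(p_h, p_y))           # weights: hidden -> class units
        beta = np.log(p_y)                              # class unit biases
        return w, beta

    def classify(hidden, w, beta):
        # Log-domain support summed over active hidden units, argmax over classes.
        return np.argmax(hidden @ w + beta, axis=1)

    # Example with a random sparse "hidden representation" standing in for the
    # unsupervised BCPNN layer.
    rng = np.random.default_rng(3)
    hidden = (rng.random((200, 50)) < 0.1).astype(float)
    labels = rng.integers(0, 10, size=200)
    w, beta = train_readout(hidden, labels, n_classes=10)
    pred = classify(hidden, w, beta)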

Hardware designs for BCPNN

The structure of BCPNN, with its cortex-like modular architecture and massively parallel correlation-based Hebbian learning, makes it quite hardware friendly. Implementations with a reduced number of bits in the synaptic state variables have been shown to be feasible.[23] BCPNN has further been the target for parallel simulators on cluster computers and GPUs. It was recently implemented on the SpiNNaker compute platform[24] as well as in a series of dedicated neuromorphic VLSI designs.[25][26][27][28] From these it has been estimated that a human-cortex-sized BCPNN with continuous learning could be executed in real time with a power dissipation on the order of a few kW.

References

  1.
  2.
  3.
  4. Lansner A (June 1991). "A recurrent bayesian ANN capable of extracting prototypes from unlabeled and noisy examples". Artificial Neural Networks: Proceedings of the 1991 International Conference on Artificial Neural Networks (ICANN-91). Vol. 1–2. Espoo, Finland: Elsevier.
  5. Lansner, Anders (1986). Investigations into the Pattern Processing Capabilities of Associative Nets. KTH Royal Institute of Technology.
  6. PMID 9861988.
  7.
  8.
  9.
  10. S2CID 214692985.
  11.
  12.
  13.
  14.
  15.
  16.
  17.
  18.
  19.
  20.
  21.
  22.
  23.
  24.
  25.
  26.
  27.
  28.
This page is based on the copyrighted Wikipedia article "BCPNN". The article is available under the CC BY-SA 3.0 license; additional terms may apply.