Hopfield Neural Networks – memorize information
Since the neocortex operation relies on associations between concepts, artificial neural networks for associative memory have also been proposed. The concept of associative memory is introduced in Section 1.7. The most prominent model in this category was proposed by Hopfield (Hopfield, 1982). Throughout this document, we consider that neuro-inspired associative memories store messages (also called patterns) that they are later capable of retrieving given a sufficiently large part of their content. The message definition is detailed in the following section. Hopfield Neural Networks (HNNs) consist of neurons that are all interconnected with each other, except that no neuron is connected to itself.
The connections are weighted and their values are restricted to integers. Each neuron in the network of $n$ neurons has its own index. Supposing that the set of messages to store contains $M$ messages $m^1, m^2, \ldots, m^M$ with bipolar components $m_i^{\mu} \in \{-1, +1\}$, the weight between neurons $i$ and $j$ is obtained with the Hebbian rule:
$$w_{ij} = \sum_{\mu=1}^{M} m_i^{\mu}\, m_j^{\mu} \quad (i \neq j), \qquad w_{ii} = 0.$$
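As an illustration, the following is a minimal Python sketch of this storing rule together with the usual iterative retrieval procedure (the sketch is ours, not taken from the thesis; it assumes bipolar patterns, synchronous sign updates, and the hypothetical function names store and retrieve):

```python
import numpy as np

def store(patterns):
    """Hebbian storing: patterns is an (M, n) array of +/-1 values."""
    W = patterns.T @ patterns        # integer weights w_ij = sum_mu m_i^mu * m_j^mu
    np.fill_diagonal(W, 0)           # no neuron is connected to itself
    return W

def retrieve(W, probe, steps=10):
    """Iterate the sign rule until the state is stable or steps run out."""
    state = probe.copy()
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1    # break ties deterministically
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Example: store two 8-bit patterns and retrieve one from a corrupted probe.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = store(patterns)
probe = patterns[0].copy()
probe[:2] = -probe[:2]               # flip two bits of the stored message
print(retrieve(W, probe))            # converges back to the first pattern
```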
Spiking Neural Networks – model biological networks and compute
Spiking Neural Networks (SNNs) form another family of artificial neural networks. The spikes represent the action potentials of biological neurons. The main aim of SNNs is to realize neural computation, which implies that spikes are related to the quantities relevant to the computations. The main assumption underlying SNNs is that the behavior of neurons depends on the timing of spikes rather than on their specific shape or amplitude (Gerstner and Kistler, 2002). Numerous experiments on animals show that the timing of spikes is a means for coding information (Bialek et al., 1991, Heiligenberg, 1991, Kuwabara and Suga, 1993). A few models describing the behavior of spiking neurons have been proposed. The most biologically realistic is the Hodgkin-Huxley model (Hodgkin and Huxley, 1952). Although it compares well to data from biological experiments, its complexity entails difficulties in simulations of large networks. That is why several simplified models, such as the Leaky-Integrate-and-Fire (Lapicque, 1907, Stein, 1965) or Izhikevich's (Izhikevich, 2003) neurons, have been proposed. The connection weights of SNNs are modified based on the coincidence of pre- and postsynaptic spikes. Spike Timing-Dependent Plasticity (STDP) is a commonly used approach (Song et al., 2000).
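For concreteness, here is a minimal Python sketch of a Leaky-Integrate-and-Fire neuron (our own illustration; the time constants and threshold values are illustrative, not taken from the thesis):

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate a Leaky-Integrate-and-Fire neuron with Euler integration.

    The membrane potential v leaks toward v_rest with time constant tau,
    integrates the input current, and emits a spike (recorded as a time
    stamp) whenever it crosses v_threshold, after which it is reset.
    """
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_threshold:
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold current produces a regular spike train.
current = np.full(1000, 1.5)          # 1 s of input at dt = 1 ms
print(lif_simulate(current))
```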
Figure 2.5 illustrates the STDP principle. In this rule, the weight of a connection is increased if a presynaptic spike is followed by a postsynaptic spike. If a presynaptic spike fires after the postsynaptic spike, the weight of the connection is decreased. The change in the weight depends exponentially on the time difference between the spikes. Comprehensive descriptions of SNNs are given in (Paugam-Moisy and Bohte, 2009, Vreeken, 2003, Ponulak and Kasinski, 2011, Grüning and Bohte, 2014). Neural computation relying on SNNs is used, for instance, to design hardware accelerators (Belhadj et al., 2014a). Such an architecture can be used in multiple computing applications thanks to the possibility of adapting the weights of the connections.
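The exponential STDP window can be sketched in a few lines of Python (our own illustration; the amplitudes a_plus, a_minus and time constants are placeholder values in the spirit of Song et al., 2000, not values from the thesis):

```python
import numpy as np

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                       tau_plus=20e-3, tau_minus=20e-3):
    """Exponential STDP window.

    If the presynaptic spike precedes the postsynaptic one (t_pre < t_post),
    the weight is potentiated; otherwise it is depressed. The magnitude
    decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:                                 # pre before post: potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:                                      # pre at or after post: depression
        return -a_minus * np.exp(dt / tau_minus)

# Pre 5 ms before post: small potentiation; pre 5 ms after: small depression.
print(stdp_weight_change(0.000, 0.005))   # > 0
print(stdp_weight_change(0.005, 0.000))   # < 0
```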
Deep learning – learn information
Currently, in many domains (e.g., vision), state-of-the-art methods for learning are based on deep learning. Deep learning allows adapting the weights of neural networks with multiple layers, typically around ten hidden layers (a hidden layer is a layer that is used neither as input nor as output of the network). Networks made of multiple hidden layers prove to be more efficient than shallow networks (with a single hidden layer) (Bengio, 2009). Moreover, a network with a depth insufficient for the targeted task requires more neurons than a network with a depth matched to the problem. In addition, some studies show that the human brain processes information through multiple stages of transformation and representation, i.e. a type of deep architecture (Serre et al., 2007).
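To make the notion of hidden layers concrete, here is a minimal Python sketch of the forward pass through a deep fully connected network with ten hidden layers (our own illustration with random, untrained parameters; in deep learning these weights and biases would be adapted by gradient descent on a loss):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def deep_forward(x, weights, biases):
    """Forward pass: every layer except the last applies a nonlinearity."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)                  # hidden layers
    return weights[-1] @ a + biases[-1]      # linear output layer

# Input of size 16, ten hidden layers of width 32, output of size 4.
rng = np.random.default_rng(0)
sizes = [16] + [32] * 10 + [4]
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(deep_forward(rng.standard_normal(16), weights, biases))
```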
Table of contents:
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Abbreviations
Symbols
Introduction
Context and motivation
Objective
Contribution
Report organization
1 MPSoC power management
1.1 Introduction
1.2 MPSoC architecture
1.2.1 Generalities and definitions
1.2.2 Communication schemes for MPSoCs
1.2.3 Dividing MPSoC into Voltage/Frequency Islands
1.3 Power management on MPSoC
1.4 State-of-the-art of power management decision units
1.4.1 Low-level decision units
1.4.2 High-level decision units
1.5 MPSoC power model and optimization formulation
1.6 Game theory for power management on MPSoC
1.7 CAM-SRAM associative memory for power management on MPSoC
1.7.1 Generalities and definitions
1.7.2 CAM-SRAM as a decision unit
1.8 Conclusion
2 Introduction to neural networks and networks of neural cliques
2.1 Introduction
2.2 Biological neural networks
2.3 Artificial neural networks
2.3.1 McCulloch-Pitts model
2.3.2 Hopfield Neural Networks – memorize information
2.3.3 Spiking Neural Networks – model biological networks and compute
2.3.4 Deep learning – learn information
2.4 Networks of neural cliques
2.4.1 Message definition
2.4.2 Network structure
2.4.3 Message storing procedure
2.4.4 Message retrieval procedure
2.4.5 Density and error probability definitions
2.4.6 Neural cliques as associative memory
2.4.7 Network dimensioning guidelines
2.5 Conclusion
3 Non-uniformly distributed data in networks of neural cliques
3.1 Introduction
3.2 Non-uniform distribution problem positioning
3.3 Strategies to store non-uniform data
3.3.1 Random clusters
3.3.2 Random bits
3.3.3 Using compression codes
3.3.4 Performance comparison
3.4 Twin neurons for efficient real-world data distribution in networks of neural cliques
3.4.1 Introducing twin neurons
3.4.2 Theoretical analysis
3.4.3 Performance comparison
3.4.3.1 Comments on Huffman coding technique
3.4.3.2 Comparison
3.4.4 Influence of distribution's standard deviation
3.5 Real-world data in two practical applications
3.5.1 MPSoC power management for LTE receiver
3.5.1.1 LTE receiver implemented on MAGALI platform
3.5.1.2 Network of neural cliques used as power management unit
3.5.1.3 Simulation results
3.5.2 Dynamic management of PVT variations
3.5.2.1 Introduction
3.5.2.2 Multiprobe sensor for PVT variations
3.5.2.3 Network of neural cliques used as dynamic management unit
3.5.2.4 Network of neural cliques dimensions
3.5.2.5 Simulation results
3.6 Conclusion
4 Hardware neural cliques in practical applications
4.1 Introduction
4.2 Analog and digital ASIC implementation
4.2.1 Analog circuit
4.2.2 Digital circuit
4.2.3 Comparison
4.3 Hardware 3D considerations
4.3.1 General introduction to 3D neural networks
4.3.2 3D technology
4.3.3 3D neural cliques
4.3.4 Methodology
4.3.5 Simulation model
4.3.6 General study results
4.3.7 Case study simulation results
4.4 MPSoC power management: comparison with game theory decision unit
4.4.1 Generic neural cliques structure
4.4.2 General comparison with game theory decision unit
4.4.3 MPSoC power management for MC-CDMA transmitter
4.4.3.1 MC-CDMA transmitter implemented on FAUST platform
4.4.3.2 Network of neural cliques used as power management unit
4.4.3.3 Energy gains
4.5 MPSoC power management: comparison with CAM-SRAM associative memory
4.5.1 Neural cliques-based associative memory – implementation complexity
4.5.2 CAM-based associative memory – implementation complexity
4.5.3 Implementation complexity comparison
4.5.4 LTE receiver implemented on MAGALI platform
4.5.4.1 Dimensions of CAM and SRAM
4.5.4.2 Dimensions of neural cliques
4.5.4.3 Simulation results
4.6 Conclusion
Conclusion and perspectives
Contribution and conclusion
Perspectives
Implementation
Applications
A Process variability in neural cliques analog circuits
B Programming the synapses
List of Publications
Bibliography