Study on Various Related Processing Architectures
A Comparative Study on Processing Architectures: Neural Networks vs. Processors
Keywords:
processing architectures, processing units, NN, processor units, multi-input, dynamical systems, linear interconnect, connections, neurons, feed-forward NN, Hopfield networks, weights, system dynamics, interconnectivity, locality, data set, redundant systems, fault-tolerant behavior, catastrophic errors, Ratio Memory, RM, cell interconnect, topologically invariant, long-term memory

Abstract
CNN processing units and neural networks (NN) are similar. In both cases, the processor units are multi-input, dynamical systems, and the behavior of the overall system is driven primarily through the weights of the processing unit's linear interconnect. The main discriminator is that in CNN processors, connections are made locally, whereas in ANN, connections are global. For example, neurons in one layer are fully connected to the next layer in feed-forward NN, and all the neurons are fully interconnected in Hopfield networks. In ANN, the weights contain information on the processing system's previous state or feedback, but in CNN processors, the weights are used to determine the dynamics of the system. Furthermore, due to the high interconnectivity of ANN, they tend not to exploit locality in either the data set or the processing; as a result, they are usually highly redundant systems that allow for robust, fault-tolerant behavior without catastrophic errors. A cross between an ANN and a CNN processor is a Ratio Memory CNN (RMCNN). In RM processors, the cell interconnect is local and topologically invariant, but the weights are used to store previous states rather than to control dynamics. The weights of the cells are modified during a learning stage, creating long-term memory.
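The connectivity contrast described in the abstract can be sketched in code. The snippet below is a minimal illustration, not taken from the paper: `W_global`, `template`, the grid size, and the `tanh` activation are all assumptions chosen for clarity. It contrasts an ANN-style fully connected update, where every output depends on every input, with a CNN-style cell update, where each cell is coupled only to its 3x3 neighborhood through one shared, position-independent (topologically invariant) template.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- ANN-style layer: global (fully connected) interconnect ---
# Every output neuron receives input from every neuron in the previous layer.
n_in, n_out = 16, 16
W_global = rng.standard_normal((n_out, n_in)) * 0.1

def ann_step(x):
    """Feed-forward update: each output depends on ALL inputs via W_global."""
    return np.tanh(W_global @ x)

# --- CNN-style cell array: local, topologically invariant interconnect ---
# Each cell is coupled only to its 3x3 neighborhood, and the same small
# weight template is reused at every cell position (illustrative values).
template = np.array([[0.0, 0.1, 0.0],
                     [0.1, 0.5, 0.1],
                     [0.0, 0.1, 0.0]])

def cnn_step(state):
    """One local update of a 2-D cell grid (zero-padded borders)."""
    padded = np.pad(state, 1)
    out = np.zeros_like(state)
    rows, cols = state.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.tanh(np.sum(template * padded[i:i + 3, j:j + 3]))
    return out

x = rng.standard_normal(16)
grid = rng.standard_normal((4, 4))
y_ann = ann_step(x)     # every entry depends on all 16 inputs
y_cnn = cnn_step(grid)  # each entry depends on at most 9 neighboring cells
```

Perturbing a single distant cell leaves far-away CNN outputs unchanged, which is exactly the locality property the abstract attributes to CNN processors; no such locality holds for the fully connected update.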
Published
2012-11-01
Section
Articles