ASME Press Select Proceedings

Intelligent Engineering Systems through Artificial Neural Networks, Volume 20

Cihan H. Dagli
ASME Press

Recently, a number of research groups have presented transistor-based designs exhibiting behavior similar to that of biological synapses, facilitating the creation of a tangible artificial neuron. Hardware neural networks would offer great advantages in information-processing tasks that are inherently parallel (such as image processing), that require learning (such as handwriting recognition), or that must operate in environments where the processing unit is susceptible to physical damage. A number of different approaches to realizing hardware neural networks currently exist. This paper presents an analysis of the performance degradation of various artificial neural network architectures when subjected to neural damage. Unoptimized and optimized, feed-forward and recurrent networks, trained with uncorrelated and correlated data sets, are analyzed, and networks with one, two, three, and four hidden layers are compared quantitatively. The main finding is that when damage occurs to cells in the hidden layer(s), the architecture that sustains the least degradation is the one with a single hidden layer. When the damage is administered to the input layer, however, the opposite holds: arranging cells in multiple hidden layers offers the most resilience. Additionally, recurrent networks offer improved resilience to damage compared with feed-forward networks.
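The kind of experiment the abstract describes can be sketched in a few lines of NumPy. The following is a minimal, hypothetical illustration (not the paper's code, and all names, sizes, and the XOR task are assumptions): a small single-hidden-layer feed-forward network is trained, then "neural damage" is simulated by zeroing out hidden cells and remeasuring the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (XOR), standing in for the paper's data sets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 cells (an arbitrary illustrative size).
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

# Plain full-batch backpropagation on mean-squared error.
lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

def mse(damaged_units=()):
    """Forward pass with the listed hidden cells zeroed out ('damaged')."""
    h = sigmoid(X @ W1 + b1)
    h[:, list(damaged_units)] = 0.0  # simulate dead cells
    out = sigmoid(h @ W2 + b2)
    return float(np.mean((out - y) ** 2))

print("intact network MSE:", mse())
for k in range(1, 9):
    print(f"MSE with {k} damaged hidden cell(s):", mse(damaged_units=range(k)))
```

Extending this sketch to the paper's comparisons would mean repeating the damage sweep across networks with one to four hidden layers, applying the zeroing to input cells as well, and averaging over many random damage patterns rather than the single fixed pattern used here.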
