Automatic nonlinear-system identification is useful in many disciplines, including automatic control, mechanical diagnostics, and financial market prediction. This paper describes a fully automatic structural and weight learning method for recurrent neural networks (RNNs). The basic idea is training with residuals: a single-hidden-neuron RNN is trained to track the residuals of an existing network and is then added to that network to form a larger, more accurate one. The network continues to grow until either a desired level of accuracy or a preset maximum number of neurons is reached. The method requires the user to supply neither initial weight values nor the number of hidden-layer neurons. This new structural and weight learning algorithm is used to find RNN models for a two-degree-of-freedom planar robot, a Van der Pol oscillator, and a Mackey-Glass equation from their simulated responses to excitations. In addition, an RNN model is obtained for a real robot from its input and output measurements. The algorithm is effective in all four cases, and the RNN models were shown to be superior to linear models and hybrid models wherever the comparison was made.
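The growth loop described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a scalar single-neuron recurrent unit, fits it by simple finite-difference gradient descent (the paper's actual training procedure is not specified in the abstract), and the function names and hyperparameters are invented for the example.

```python
import numpy as np

def run_unit(params, x):
    """Simulate one recurrent neuron (tanh state, linear readout) over sequence x."""
    w_in, w_rec, b, w_out = params
    h, y = 0.0, np.zeros_like(x)
    for t in range(len(x)):
        h = np.tanh(w_in * x[t] + w_rec * h + b)
        y[t] = w_out * h
    return y

def train_unit(x, target, steps=300, lr=0.05):
    """Fit a single-neuron unit to `target` by finite-difference gradient descent
    on mean-squared error (a stand-in for whatever training rule the paper uses)."""
    rng = np.random.default_rng(0)
    p = rng.normal(scale=0.5, size=4)
    eps = 1e-5
    for _ in range(steps):
        base = np.mean((run_unit(p, x) - target) ** 2)
        grad = np.zeros(4)
        for i in range(4):
            q = p.copy()
            q[i] += eps
            grad[i] = (np.mean((run_unit(q, x) - target) ** 2) - base) / eps
        p -= lr * grad
    return p

def grow_network(x, target, max_units=5, tol=1e-3):
    """Add one residual-trained unit at a time until the error tolerance
    or the preset maximum number of neurons is reached."""
    units, y = [], np.zeros_like(target)
    for _ in range(max_units):
        residual = target - y            # what the current network fails to explain
        p = train_unit(x, residual)      # new neuron tracks the residuals
        units.append(p)
        y = y + run_unit(p, x)           # augment the network with the new neuron
        if np.mean((target - y) ** 2) < tol:
            break
    return units, y
```

Because each new neuron is trained only on the residuals, no initial weights or hidden-layer size need to be guessed in advance; the structure emerges from the stopping criteria.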
