The paper proposes an approach to intelligent control of dynamic systems that integrates the prediction and optimization capabilities of brain-like intelligence in a noisy, unknown and uncertain environment. A new hierarchical learning architecture based on adaptive dynamic programming (ADP) and reinforcement learning (RL) is considered, utilizing information from the system's interactions with the environment, i.e., situation awareness. The situation information is represented in the form of a reinforcement signal generated by a reference module within the framework of the adaptive critic design (ACD). The action and critic modules in ACD are generally implemented using traditional multilayer perceptron (MLP) type artificial neural networks (ANN). In this paper, an alternative form of ANN, namely the single multiplicative neuron (SMN) model, is considered in place of the MLP for representing the action, critic and reference modules. The network modules are trained using a variation of particle swarm optimization. The effectiveness of the proposed approach is illustrated through heading control of a realistic nonlinear ship dynamics model.
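As a rough illustration of the ingredients named above, the sketch below implements a single multiplicative neuron of the standard form y = f(∏ᵢ(wᵢxᵢ + bᵢ)) and trains its weight/bias vector with a plain global-best particle swarm optimizer. This is a minimal sketch only: the sigmoid activation, mean-squared-error fitness, PSO hyperparameters and toy data are illustrative assumptions, not the paper's actual action/critic/reference modules or its specific PSO variant.

```python
# Minimal sketch: single multiplicative neuron (SMN) + plain global-best PSO.
# The SMN form y = f(prod_i(w_i*x_i + b_i)) and the basic PSO update are standard;
# fitness function, hyperparameters and toy target below are illustrative assumptions.
import numpy as np

def smn_forward(x, w, b):
    """SMN output: sigmoid of the product of (w_i * x_i + b_i) over all inputs."""
    u = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-u))

def fitness(params, X, y_target):
    """Mean squared error of the SMN over a batch; params = [w | b]."""
    n = X.shape[1]
    w, b = params[:n], params[n:]
    y_pred = np.array([smn_forward(x, w, b) for x in X])
    return np.mean((y_pred - y_target) ** 2)

def pso_train(X, y_target, n_particles=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO over the SMN weight/bias vector (assumed variant)."""
    dim = 2 * X.shape[1]
    pos = np.random.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, X, y_target) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p, X, y_target) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy usage: fit the SMN to an arbitrary nonlinear mapping on 2-D inputs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (50, 2))
    y_target = 1.0 / (1.0 + np.exp(-(X[:, 0] * X[:, 1])))  # arbitrary toy target
    best = pso_train(X, y_target)
    print("final MSE:", fitness(best, X, y_target))
```

In an ACD setting, one such neuron (or a small bank of them) would stand in for each of the action, critic and reference modules, with the PSO fitness driven by the reinforcement signal rather than the toy regression target used here.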
