Fitting a Function and Its Derivative
Published: 2007
This paper introduces a new procedure for gradient-based training of multilayer perceptron neural networks to simultaneously approximate both a function and its first derivatives. It is assumed that the true function values and the true derivatives are available at the training points. An algorithm is then derived to compute the gradient of a new performance function that combines both squared function error and squared derivative error. Experimental results show that the neural networks trained by the new procedure yield more accurate approximations for both the functions and their first derivatives than networks trained by standard methods. In addition, it is shown that the generalization capabilities of networks trained using this new procedure are better than those trained with early stopping or Bayesian regularization, even though no validation set is used.
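The combined performance index described above can be sketched in a toy setting. The sketch below is illustrative only, not the paper's algorithm: the paper derives the gradient of the combined index analytically, whereas this example substitutes a finite-difference gradient, and the network size, learning rate, and derivative-error weight `rho` are all assumptions chosen for brevity.

```python
import numpy as np

# Illustrative sketch -- NOT the paper's exact algorithm. The paper derives the
# gradient of the combined index analytically; here a finite-difference
# gradient stands in for it, and H, the learning rate, and the derivative-error
# weight `rho` are assumptions made for this toy example.

rng = np.random.default_rng(0)

# Training data where both targets are known: f(x) = sin(x), f'(x) = cos(x).
x = np.linspace(-np.pi, np.pi, 20)
t = np.sin(x)    # true function values at the training points
dt = np.cos(x)   # true first derivatives at the training points

H = 10           # hidden units in a one-hidden-layer tanh MLP
rho = 1.0        # weight on the squared derivative error (assumed)

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    return p[:H], p[H:2*H], p[2*H:3*H], p[3*H]

def forward(p, x):
    """Return network outputs y(x) and their input derivatives dy/dx."""
    W1, b1, W2, b2 = unpack(p)
    a = np.tanh(np.outer(x, W1) + b1)   # hidden activations, shape (N, H)
    y = a @ W2 + b2                     # network output, shape (N,)
    dy = ((1.0 - a**2) * W1) @ W2       # analytic dy/dx via the chain rule
    return y, dy

def loss(p):
    """Combined index: squared function error + rho * squared derivative error."""
    y, dy = forward(p, x)
    return np.mean((y - t)**2) + rho * np.mean((dy - dt)**2)

def num_grad(p, eps=1e-6):
    """Central-difference gradient of the combined index (for illustration)."""
    g = np.empty_like(p)
    for i in range(p.size):
        pp, pm = p.copy(), p.copy()
        pp[i] += eps
        pm[i] -= eps
        g[i] = (loss(pp) - loss(pm)) / (2 * eps)
    return g

p = 0.5 * rng.standard_normal(3 * H + 1)   # small random initial parameters
loss_before = loss(p)
for _ in range(2000):                      # plain full-batch gradient descent
    p -= 0.05 * num_grad(p)
loss_after = loss(p)

print(f"combined loss: {loss_before:.3f} -> {loss_after:.3f}")
```

Setting `rho = 0` recovers ordinary squared-error training on the function values alone; the derivative term injects extra information at each training point, which is the mechanism behind the abstract's claim of improved accuracy for both the function and its first derivatives.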