Humans can readily recognize mixtures of speech signals produced by two or more simultaneous speakers, an ability known as the cocktail party effect. Applying this effect to engineering enables novel blind source separation systems, such as automatic speech recognition systems and active noise control systems operating in noisy environments. A variety of methods have been developed to improve the performance of blind source separation in the presence of background noise or interfering speech. Because blind source separation is a characteristically human ability, artificial neural networks are well suited to the task. In this paper, we propose a method of blind source separation using a neural network that adaptively separates sound sources by training its internal parameters. The network was three-layered. Sound pressure was emitted from two sound sources, and the mixed sound was measured with two microphones. The time history of the microphone signals was fed to the input layer of the neural network. The two outputs of the hidden layer corresponded to the two separated sound pressures, respectively. The two outputs of the output layer corresponded to the microphone signals expected at the next time step and were compared with the actual microphone signals at that step to train the network by backpropagation. Through this procedure, the signal from each sound source was adaptively separated. Two sound source conditions were used: sinusoidal signals of 440 and 1000 Hz. To assess the performance of the neural network numerically and experimentally, a basic independent component analysis (ICA) was conducted for comparison. The results obtained are as follows. The blind separation performance of the neural network was higher than that of the basic ICA. In addition, the neural network successfully separated the sound sources regardless of their positions.
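The described architecture can be illustrated with a minimal sketch. The exact network sizes, mixing conditions, and training schedule of the paper are not given in the abstract, so everything below (history length, mixing matrix, learning rate) is an assumption; the sketch only shows the general scheme of a three-layer network whose hidden units act as candidate separated sources and whose outputs predict the next-step microphone samples, trained by backpropagation of the prediction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic setup: two sinusoidal sources (440 Hz and 1000 Hz, as in
# the abstract) mixed instantaneously into two microphone signals.
fs = 8000
t = np.arange(2000) / fs
s = np.vstack([np.sin(2 * np.pi * 440 * t),
               np.sin(2 * np.pi * 1000 * t)])   # true sources, shape (2, T)
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                      # hypothetical mixing matrix
x = A @ s                                       # microphone signals, (2, T)

# Three-layer predictor: a window of past microphone samples -> two hidden
# units (candidate separated sources) -> predicted next-step mic samples.
L = 8                                            # assumed history length per mic
W1 = rng.normal(0.0, 0.1, (2, 2 * L))            # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (2, 2))                # hidden -> output weights
lr = 0.02                                        # assumed learning rate

losses = []
for epoch in range(20):
    ep = 0.0
    for n in range(L, x.shape[1] - 1):
        u = np.concatenate([x[0, n - L:n], x[1, n - L:n]])  # input history
        h = np.tanh(W1 @ u)                                  # hidden activations
        y = W2 @ h                                           # predicted mics
        e = y - x[:, n]                                      # next-step error
        ep += float(e @ e)
        # backpropagation of the squared prediction error
        W2 -= lr * np.outer(e, h)
        W1 -= lr * np.outer((W2.T @ e) * (1.0 - h ** 2), u)
    losses.append(ep)

# After training, compare each hidden unit against each true source by
# absolute correlation; strong diagonal (or anti-diagonal) structure would
# indicate separation.
H = np.array([np.tanh(W1 @ np.concatenate([x[0, n - L:n], x[1, n - L:n]]))
              for n in range(L, x.shape[1])]).T
C = np.abs(np.corrcoef(np.vstack([H, s[:, L:]]))[:2, 2:])
print("final prediction loss:", losses[-1])
print("hidden/source correlation matrix:\n", C)
```

Note that next-step prediction alone does not guarantee that the hidden units converge to the individual sources; the paper's actual method may impose additional structure that the abstract does not detail.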
