Using a neural network, individual down-axis images of combustion waves in a rotating detonation engine (RDE) can be classified by the number of detonation waves present and their direction of travel. While identifying the number of waves in a single image may seem intuitive, further classification of rotational direction is possible because the detonation wave's profile indicates its angular direction of motion. Deep learning is highly adaptable and can therefore be trained on a variety of image collection methods across RDE study platforms. In this study, a supervised approach is employed: a series of manually classified images is provided to a neural network to optimize its classification performance. These images, referred to as the training set, are individually labeled as one of ten modes observed in an experimental RDE. Possible classifications include deflagration; clockwise and counterclockwise variants of co-rotating detonation waves numbering from one to three; and single, double, and triple counter-rotating detonation waves. After training, a second set of manually classified images, referred to as the validation set, is used to evaluate the performance of the model. Predicting the detonation wave mode from a single image with a trained neural network substantially reduces computational complexity by circumventing the need to evaluate the temporal behavior of individual pixel regions over time. Results suggest that, although image quality is critical, the modal behavior of detonation waves can be identified accurately from only a single image rather than from a sequence of images or signal processing.
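The supervised train-then-validate workflow described above can be sketched in miniature. The following is an illustrative stand-in only, not the study's actual network or data: a linear softmax classifier trained on synthetic flattened "images" with ten mode labels. The mode names, image size, and all hyperparameters are assumptions chosen for the sketch.

```python
import numpy as np

# Hypothetical labels for the ten RDE modes (illustrative names only).
MODES = ["deflagration",
         "1-wave-CW", "2-wave-CW", "3-wave-CW",
         "1-wave-CCW", "2-wave-CCW", "3-wave-CCW",
         "1-counter", "2-counter", "3-counter"]

rng = np.random.default_rng(0)
PIXELS = 64                                   # flattened image size (assumed)
BASES = rng.normal(size=(len(MODES), PIXELS)) # one mean pattern per mode

def make_synthetic_set(n_per_class, noise=0.3):
    """Synthetic stand-in for a manually classified image set."""
    X = np.vstack([BASES[k] + noise * rng.normal(size=(n_per_class, PIXELS))
                   for k in range(len(MODES))])
    y = np.repeat(np.arange(len(MODES)), n_per_class)
    return X, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, n_classes, lr=0.1, epochs=300):
    """Cross-entropy gradient descent on a linear softmax model."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        p = softmax(X @ W + b)
        grad = (p - onehot) / len(X)          # dL/dlogits, averaged
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(W, b, X):
    return softmax(X @ W + b).argmax(axis=1)

# Train on one labeled set, then evaluate on a held-out validation set,
# mirroring the training/validation split described in the text.
X_train, y_train = make_synthetic_set(40)
X_val, y_val = make_synthetic_set(10)
W, b = train(X_train, y_train, len(MODES))
acc = (predict(W, b, X_val) == y_val).mean()
print(f"validation accuracy: {acc:.2f}")
```

The single-image prediction step is the key point: once trained, `predict` assigns a mode to each image independently, with no temporal tracking of pixel regions.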
Successful identification of wave behavior through image classification serves as a stepping stone toward further machine learning integration in RDE research and the development of comprehensive real-time diagnostics.