This paper presents a machine learning approach to building an object detector for the interactive simulation of planar linkages from hand-drawn sketches and drawings found in patents and texts. Touch- and pen-input devices and interfaces have made sketching a natural way for designers to express their ideas, especially during early design stages, but sketching existing complex mechanisms can be tedious and error-prone. While software applications exist to help users make drawings, including those of linkage mechanisms, it is both educational and instructive to see existing sketches come to life through automated simulation. However, texts and patents present rich and diverse styles of mechanism drawings, which makes automated recognition difficult. Modern machine learning algorithms for object recognition require a large number of training images, yet no data sets of planar linkages are available online. Therefore, our first goal was to generate images of sketches similar to hand-drawn ones and to use state-of-the-art deep generative models, such as the β-VAE, to produce additional training data from a limited set of images. The latent space of the β-VAE was explored by linear and spherical interpolation between latent representations and by varying the dimensionality of the latent space. This served a twofold objective: (1) to examine the possibility of generating new synthesized images via interpolation, and (2) to develop insight into the relationship between latent-space dimensions and bar-linkage parameters. The t-SNE dimensionality-reduction technique was used to visualize the latent space of the β-VAE in 2D. Finally, training images produced by animation rendering were used to fine-tune a real-time object detection system, YOLOv3.
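The linear and spherical interpolations mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the latent vectors `z_a` and `z_b` are hypothetical stand-ins for codes produced by the β-VAE encoder, and in practice each interpolated point would be passed through the decoder to synthesize a new sketch image.

```python
import numpy as np

def lerp(z0, z1, t):
    """Plain linear interpolation between two latent vectors."""
    return (1.0 - t) * np.asarray(z0, dtype=float) + t * np.asarray(z1, dtype=float)

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Falls back to linear interpolation when the vectors are nearly
    collinear, where the slerp formula is numerically unstable.
    """
    z0 = np.asarray(z0, dtype=float)
    z1 = np.asarray(z1, dtype=float)
    # Angle between the two vectors.
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.sin(omega) < 1e-6:
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical latent codes of two encoded sketches; an 8-step
# interpolation path between them yields candidate new samples.
z_a = np.random.default_rng(0).normal(size=32)
z_b = np.random.default_rng(1).normal(size=32)
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

Slerp is often preferred over lerp for Gaussian latent spaces because it follows the curved high-density region of the prior instead of cutting through the low-density interior.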
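The t-SNE step can be illustrated with a short sketch. This assumes the scikit-learn implementation of t-SNE, and the `latents` array here is random stand-in data for the β-VAE latent codes of the training sketches (one row per encoded image); the paper does not specify its tooling or dimensions.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in latent codes: 200 encoded sketches, 32-D latent space.
latents = np.random.default_rng(42).normal(size=(200, 32))

# Project the latent codes to 2-D for visualization; the resulting
# embedding can be scatter-plotted, colored by linkage type.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latents)
```

t-SNE preserves local neighborhood structure, so clusters in the 2-D plot suggest groups of sketches the β-VAE encodes similarly, but distances between clusters are not meaningful.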