Computational methods for kinematic synthesis of mechanisms for motion generation problems require input in the form of precision positions. Given the highly non-linear nature of the problem, solutions to these methods are overly sensitive to the input — a small perturbation to even a single position of a given motion can drastically change the topology and dimensions of the synthesized mechanisms. Thus, the synthesis becomes a blind iterative process of maneuvering precision positions in the hope of finding good solutions. In this paper, we present a deep-learning-based framework that manages the uncertain user input and provides the user with higher-level control of the design process. The framework also imputes the input with missing information required by the computational algorithms. The approach starts by learning the probability distribution of possible linkage parameters with a deep generative modeling technique called the Variational Auto-Encoder (VAE). This facilitates capturing salient features of the user input and relating them to possible linkage parameters. Then, input samples resembling the inferred salient features are generated and fed to the computational methods of kinematic synthesis. The framework post-processes the solutions and presents the concepts to the user, along with a handle to visualize the variants of each concept. We define this approach as Variational Synthesis of Mechanisms. In addition, we present an alternative end-to-end deep neural network architecture for Variational Synthesis of linkages. This end-to-end architecture is a Conditional VAE (C-VAE), which approximates the conditional distribution of linkage parameters with respect to the coupler trajectory distribution. The outcome is a probability distribution of kinematic linkages for an unknown coupler path or motion.
This framework functions as a bridge between current state-of-the-art theoretical and computational kinematic methods and machine learning, enabling designers to create practical mechanism design solutions.
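To make the C-VAE idea concrete, the following is a minimal NumPy sketch of a conditional encoder–decoder with the reparameterization trick. All dimensions, layer sizes, and names here are illustrative assumptions, not the paper's architecture, and training (ELBO optimization) is omitted entirely; only the forward structure of conditioning linkage parameters on a coupler trajectory is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a coupler trajectory sampled at 100 planar points
# (flattened to 200 values) conditions a distribution over 8 linkage
# parameters (e.g., four-bar link lengths and pivot coordinates).
TRAJ_DIM, PARAM_DIM, LATENT_DIM, HIDDEN = 200, 8, 4, 64

def dense(in_dim, out_dim):
    """Randomly initialized dense layer (weights, bias) — untrained stand-in."""
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

def relu(x):
    return np.maximum(x, 0.0)

# Encoder q(z | params, trajectory): maps the concatenated linkage parameters
# and conditioning trajectory to the mean and log-variance of a Gaussian latent.
W_enc, b_enc = dense(PARAM_DIM + TRAJ_DIM, HIDDEN)
W_mu, b_mu = dense(HIDDEN, LATENT_DIM)
W_lv, b_lv = dense(HIDDEN, LATENT_DIM)

# Decoder p(params | z, trajectory): reconstructs linkage parameters from the
# latent code together with the same conditioning trajectory.
W_dec, b_dec = dense(LATENT_DIM + TRAJ_DIM, HIDDEN)
W_out, b_out = dense(HIDDEN, PARAM_DIM)

def encode(params, traj):
    h = relu(np.concatenate([params, traj]) @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable during training.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, traj):
    h = relu(np.concatenate([z, traj]) @ W_dec + b_dec)
    return h @ W_out + b_out

def sample_linkages(traj, n=5):
    """At synthesis time, draw z from the prior and decode n candidate
    linkage parameter vectors conditioned on an unseen coupler trajectory."""
    return np.stack([decode(rng.standard_normal(LATENT_DIM), traj)
                     for _ in range(n)])

traj = rng.standard_normal(TRAJ_DIM)      # stand-in coupler trajectory
params = rng.standard_normal(PARAM_DIM)   # stand-in linkage parameters
mu, logvar = encode(params, traj)
recon = decode(reparameterize(mu, logvar), traj)
candidates = sample_linkages(traj)
```

The key design point the abstract describes is visible in `sample_linkages`: because the decoder is conditioned on the trajectory, repeated draws from the latent prior yield a distribution of candidate linkages for one input motion rather than a single deterministic answer.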
