Many engineering design tasks involve creating early conceptual sketches that do not require exact dimensions. Although several prior works automatically generate sketches from reference images, many of them reproduce exactly the objects in those references. Other models generate sketches from scratch and can be divided into pixel-based and stroke-based methods. Pixel-based methods generate a sketch as a whole image, without any information about individual strokes, whereas stroke-based methods generate a sketch by outputting strokes sequentially. Pixel-based methods are frequently used to generate realistic color images and are the more popular of the two, but stroke-based methods have the advantage of scaling to larger resolutions without loss of fidelity: an image rendered from strokes contains only those strokes on the canvas, so the blank regions of the canvas remain free of random noise. However, one challenge in the engineering design community is that most sketches are stored as pixel-based images. Moreover, most stroke-based methods rely on stroke-based training data, making them ill-suited for generating conceptual design sketches. To overcome these limitations, the authors proposed an agent that can learn from pixel-based images and generate stroke-based images. An advantage of such an agent is its ability to exploit the pixel-based training data that is abundant in design repositories to train stroke-based methods, which are typically constrained by the lack of stroke-based training data.
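To make the representational distinction concrete, the following minimal sketch (an illustration, not the authors' implementation) rasterizes a sequence of straight-line strokes onto a blank canvas. Because only pixels touched by a stroke are marked, the blank regions stay exactly zero, which is the property noted above for stroke-based renderings. The function name and stroke format are hypothetical choices for this example.

```python
import numpy as np

def render_strokes(strokes, size=64):
    """Rasterize a sequence of straight-line strokes onto a blank canvas.

    Each stroke is ((x0, y0), (x1, y1)) in pixel coordinates. Only pixels
    lying on a stroke are set, so untouched areas remain exactly zero --
    there is no background noise, unlike a pixel-based generator's output.
    """
    canvas = np.zeros((size, size), dtype=np.uint8)
    for (x0, y0), (x1, y1) in strokes:
        # Sample enough points along the segment to cover every pixel.
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        canvas[ys, xs] = 255  # rows index y, columns index x
    return canvas

# A tiny "sketch": a square drawn as four sequential strokes.
square = [((10, 10), (50, 10)), ((50, 10), (50, 50)),
          ((50, 50), (10, 50)), ((10, 50), (10, 10))]
img = render_strokes(square)
```

Note that the same stroke list could be re-rendered at any canvas size simply by scaling the coordinates, which is the scaling advantage the paragraph attributes to stroke-based methods.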