To detect potential problems in a cutter path, cutting force simulation of the NC milling process is necessary prior to actual machining. A milling operation is geometrically equivalent to a Boolean subtraction of the swept volume of the cutter, moving along its path, from a solid model representing the stock shape. To estimate the cutting force precisely, the subtraction must be executed for every small motion of the cutter. The performance and cost of the polygon-rendering LSI known as the GPU have improved dramatically in recent years, and using a GPU can drastically reduce the time required for the critical computations in geometric milling simulation. In this paper, the computation speeds of two known GPU-accelerated milling simulation methods, the depth-buffer-based method and the parallel-processing-based method written in the CUDA language, are compared. Computational experiments with complex milling simulations show that the CUDA implementation is several times faster than the depth-buffer-based method when the cutter motion per simulation step is sufficiently small.
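As a rough illustration of the per-motion Boolean subtraction described above, the following is a minimal sketch of a CUDA kernel that mills a Z-map (height-field) stock model with a flat end mill, one thread per grid cell. The grid resolution, tool shape, kernel name, and all parameters are illustrative assumptions, not the paper's actual data structures or kernels.

```cuda
// Hedged sketch: Z-map milling step in CUDA. All names and values are
// assumed for illustration; the paper's implementation may differ.
#include <vector>
#include <cuda_runtime.h>

// One thread per grid cell: lower the stock height wherever the cell lies
// inside the cutter's circular footprint at this motion step.
__global__ void millStep(float* stock, int nx, int ny, float cell,
                         float cx, float cy, float cz, float radius)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= nx || iy >= ny) return;

    float dx = ix * cell - cx;
    float dy = iy * cell - cy;
    if (dx * dx + dy * dy <= radius * radius) {
        int idx = iy * nx + ix;
        // Boolean subtraction per cell: keep the lower of the current
        // stock height and the tool tip height (flat end mill assumed).
        stock[idx] = fminf(stock[idx], cz);
    }
}

int main()
{
    const int nx = 1024, ny = 1024;
    const float cell = 0.1f;                       // grid spacing [mm]
    std::vector<float> hStock(nx * ny, 50.0f);     // 50 mm tall stock block

    float* dStock;
    cudaMalloc(&dStock, hStock.size() * sizeof(float));
    cudaMemcpy(dStock, hStock.data(), hStock.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);

    // One kernel launch per small cutter motion, stepping along a
    // straight example path; a real simulator would interpolate the
    // NC tool path into such small increments.
    for (int step = 0; step < 1000; ++step) {
        float cx = step * 0.1f, cy = 51.2f, cz = 45.0f, radius = 5.0f;
        millStep<<<grid, block>>>(dStock, nx, ny, cell, cx, cy, cz, radius);
    }
    cudaDeviceSynchronize();

    cudaMemcpy(hStock.data(), dStock, hStock.size() * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaFree(dStock);
    return 0;
}
```

Because every cell update is independent, the subtraction for each small cutter motion maps directly onto the GPU's thread grid, which is the essence of the parallel-processing approach compared in the paper.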
