This paper considers a manufacturing work cell in which a gantry moves materials and parts between machines and buffers. Because of the gantry's movement, the system's performance differs substantially from that of a traditional serial production line. Reinforcement learning is used to develop a gantry scheduling policy that improves system production: using the Q-Learning algorithm, the gantry learns to take appropriate actions in different situations to reduce production loss and to find an optimal moving policy. A two-machine one-buffer work cell with a gantry serves as the case study to which reinforcement learning is applied. Compared with a first-come-first-served (FCFS) policy, the fidelity and effectiveness of the reinforcement learning method are demonstrated.
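The tabular Q-Learning approach the abstract describes can be illustrated with a minimal sketch. The state encoding (gantry position, buffer level), the simplified dynamics, and all reward and learning parameters below are illustrative assumptions for a toy two-machine one-buffer cell, not the model used in the paper.

```python
import random

# Toy environment: two machines, one buffer, one gantry.
# State: (gantry_position, buffer_level). All dynamics and rewards
# here are illustrative assumptions, not the paper's model.
ACTIONS = ("serve_m1", "serve_m2", "idle")
BUFFER_CAP = 3

def step(state, action):
    """Return (next_state, reward) under simplified dynamics."""
    pos, buf = state
    if action == "serve_m1" and buf < BUFFER_CAP:
        # Gantry moves a finished part from machine 1 into the buffer.
        return (0, buf + 1), 1.0
    if action == "serve_m2" and buf > 0:
        # Gantry feeds a buffered part to machine 2 (a completed part).
        return (1, buf - 1), 2.0
    # Blocked/starved moves and idling incur a small production loss.
    return (pos, buf), -0.5

def q_learning(episodes=2000, horizon=20, alpha=0.1, gamma=0.9,
               epsilon=0.1, seed=0):
    """Learn a Q-table with epsilon-greedy tabular Q-Learning."""
    rng = random.Random(seed)
    q = {}  # maps (state, action) -> estimated value
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(horizon):
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            # Standard Q-Learning update rule.
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

q = q_learning()
# Greedy action when the buffer is empty: the gantry should serve
# machine 1 first, since machine 2 would be starved.
policy_empty = max(ACTIONS, key=lambda a: q.get(((0, 0), a), 0.0))
print(policy_empty)
```

In this toy setting the learned greedy policy alternates between loading the buffer from machine 1 and feeding machine 2, which mirrors the intuition that the gantry's policy should avoid starving or blocking either machine.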
Gantry Scheduling for Two-Machine One-Buffer Composite Work Cell by Reinforcement Learning
Arinez, J, Ou, X, & Chang, Q. "Gantry Scheduling for Two-Machine One-Buffer Composite Work Cell by Reinforcement Learning." Proceedings of the ASME 2017 12th International Manufacturing Science and Engineering Conference collocated with the JSME/ASME 2017 6th International Conference on Materials and Processing. Volume 4: Bio and Sustainable Manufacturing. Los Angeles, California, USA. June 4–8, 2017. V004T05A025. ASME. https://doi.org/10.1115/MSEC2017-2854