Deep Q-Learning versus Proximal Policy Optimization: Performance Comparison in a Material Sorting Task

Reuf Kozlica, Stefan Wegenkittl, Simon Hirländer

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper presents a comparison between two well-known deep Reinforcement Learning (RL) algorithms, Deep Q-Learning (DQN) and Proximal Policy Optimization (PPO), in a simulated production system. We utilize a Petri Net (PN)-based simulation environment, which was previously proposed in related work. The performance of the two algorithms is compared on several evaluation metrics, including the average percentage of correctly assembled and sorted products, the average episode length, and the percentage of successful episodes. The results show that PPO outperforms DQN on all evaluation metrics. The study highlights the advantages of policy-based algorithms for problems with high-dimensional state and action spaces, and contributes to the field of deep RL in the context of production systems by providing insights into the effectiveness of different algorithms and their suitability for different tasks.
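The three evaluation metrics named above can be computed directly from per-episode logs. The following is a minimal sketch; the `Episode` fields and the `summarize` helper are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    # Hypothetical per-episode log; field names are illustrative.
    correct_fraction: float  # fraction of products correctly assembled and sorted
    length: int              # number of environment steps in the episode
    success: bool            # whether the episode finished successfully

def summarize(episodes):
    """Compute the three evaluation metrics mentioned in the abstract."""
    n = len(episodes)
    return {
        "avg_correct_pct": 100.0 * sum(e.correct_fraction for e in episodes) / n,
        "avg_episode_length": sum(e.length for e in episodes) / n,
        "success_pct": 100.0 * sum(e.success for e in episodes) / n,
    }

# Toy usage with two fabricated episodes
logs = [Episode(0.9, 120, True), Episode(0.5, 200, False)]
print(summarize(logs))
```

Such a summary would be computed separately for the DQN and PPO agents to produce the comparison the abstract reports.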
Original language: English
Journal: IEEE ISIE
Publication status: Published - 31 Aug 2023

Keywords

  • Reinforcement Learning
  • Machine Learning

Fields of Science and Technology Classification 2012

  • 102 Computer Sciences
