TY - CPAPER
T1 - ASSESSING SCHEDULING STRATEGIES FOR A SHARED RESOURCE FOR MULTIPLE SYNCHRONOUS LINES
AU - Parasrampuria, Harshita
AU - Barton, Russell R.
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - This study uses discrete-event simulation to explore scheduling policies for a shared resource across three synchronous manufacturing lines. The objective is to enhance operational efficiency and reduce blocking and starving downtime. Scheduling for synchronous environments is a less explored area compared to asynchronous systems. Simulation experiments compare the performance of five easy-to-implement scheduling strategies: First-In-First-Out (FIFO), Upstream Priority, Downstream Priority, Random Selection, and Round Robin. The Round-Robin method is commonly used in CPU and computer network scheduling. Scenarios include random station breakdowns. Statistical analysis identifies FIFO and Round Robin strategies as notably effective. Such an offline study could be used to set policies for a digital twin model to determine real-time decisions based on system state, potentially updating the policies using reinforcement learning based on resulting actual performance.
UR - https://www.scopus.com/pages/publications/85217619084
UR - https://www.scopus.com/inward/citedby.url?scp=85217619084&partnerID=8YFLogxK
U2 - 10.1109/WSC63780.2024.10838832
DO - 10.1109/WSC63780.2024.10838832
M3 - Conference contribution
AN - SCOPUS:85217619084
T3 - Proceedings - Winter Simulation Conference
SP - 1728
EP - 1739
BT - 2024 Winter Simulation Conference, WSC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 Winter Simulation Conference, WSC 2024
Y2 - 15 December 2024 through 18 December 2024
ER -