TY - CHAP
T1 - Parallel Processing and Large-Scale Datasets in Data Envelopment Analysis
AU - Khezrimotlagh, Dariush
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - To evaluate the performance of a set of decision-making units (DMUs), a general data envelopment analysis (DEA) model must be solved once for each DMU. In data-enabled analytics, when a large-scale dataset is evaluated, the elapsed time to apply a DEA model increases substantially. Parallel processing allows the task to be split into several parts so that each part can be executed simultaneously on a different processor. This study explores the impact of parallel processing when applying a DEA model to a large-scale dataset. The existing methods are clearly explained, including their pros and cons. The methods are compared on different datasets according to three parameters: cardinality, dimension, and density. The relative strength of each existing method changes as the cardinality, dimension, density, and number of processors in parallel change. A new methodology is proposed that combines two of the existing methods. In general, the proposed method is faster than all existing methods regardless of cardinality, dimension, and density.
AB - To evaluate the performance of a set of decision-making units (DMUs), a general data envelopment analysis (DEA) model must be solved once for each DMU. In data-enabled analytics, when a large-scale dataset is evaluated, the elapsed time to apply a DEA model increases substantially. Parallel processing allows the task to be split into several parts so that each part can be executed simultaneously on a different processor. This study explores the impact of parallel processing when applying a DEA model to a large-scale dataset. The existing methods are clearly explained, including their pros and cons. The methods are compared on different datasets according to three parameters: cardinality, dimension, and density. The relative strength of each existing method changes as the cardinality, dimension, density, and number of processors in parallel change. A new methodology is proposed that combines two of the existing methods. In general, the proposed method is faster than all existing methods regardless of cardinality, dimension, and density.
UR - http://www.scopus.com/inward/record.url?scp=85122472793&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85122472793&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-75162-3_6
DO - 10.1007/978-3-030-75162-3_6
M3 - Chapter
AN - SCOPUS:85122472793
T3 - International Series in Operations Research and Management Science
SP - 159
EP - 174
BT - International Series in Operations Research and Management Science
PB - Springer
ER -