Predicting the performance measures of a 2-dimensional message passing multiprocessor architecture by using machine learning methods

Mehmet Fatih Akay, Çiğdem İnan Acı, Fatih Abut


The 2-dimensional Simultaneous Optical Multiprocessor Exchange Bus (2D SOME-Bus) is a reliable and robust implementation of a petaflops-performance computer architecture. In this paper, we develop models to predict the performance measures (i.e. average channel utilization, average channel waiting time, average network latency, average processor utilization and average input waiting time) of a message passing architecture interconnected by the 2D SOME-Bus, using Multi-layer Feed-forward Artificial Neural Network (MFANN), Support Vector Regression (SVR) and Multiple Linear Regression (MLR). OPNET Modeler is used to simulate the message passing 2D SOME-Bus multiprocessor architecture and to create the training and testing datasets. Using 10-fold cross validation, the performance of the prediction models has been evaluated using several performance metrics. The results show that the SVR model using the radial basis function kernel (SVR-RBF) yields the lowest prediction error among all models.
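The evaluation protocol described above can be sketched as follows. This is a minimal illustrative sketch using scikit-learn, not the authors' implementation: the synthetic data stands in for the (non-public) OPNET-generated dataset, and all feature names and hyperparameters are assumptions for demonstration only.

```python
# Illustrative sketch of the comparison in the abstract: SVR with an RBF
# kernel, a feed-forward neural network, and multiple linear regression,
# each scored by 10-fold cross validation. Synthetic data stands in for
# the simulated 2D SOME-Bus performance measures; hyperparameters are
# illustrative assumptions, not the paper's tuned values.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical inputs (e.g. node count, traffic rate, message size, ...)
X = rng.uniform(size=(200, 4))
# Hypothetical target (e.g. average channel utilization), nonlinear in X
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

models = {
    "SVR-RBF": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    "MFANN": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=2000, random_state=0),
    ),
    "MLR": LinearRegression(),
}

for name, model in models.items():
    # 10-fold cross validation, scored by RMSE (lower is better)
    scores = cross_val_score(model, X, y, cv=10,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.3f}")
```

On a nonlinear target such as this one, the kernel and neural models would be expected to outperform plain linear regression, mirroring the abstract's finding that SVR-RBF achieves the lowest prediction error.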


support vector regression; neural networks; multiprocessors; message passing




