Please use this identifier to cite or link to this item: http://repositorio.ufla.br/jspui/handle/1/48110
Full metadata record
dc.creator: Cabral, Frederico L.
dc.creator: Oliveira, Sanderson L. Gonzaga de
dc.creator: Osthoff, Carla
dc.creator: Costa, Gabriel P.
dc.creator: Brandão, Diego N.
dc.creator: Kischinhevsky, Mauricio
dc.date.accessioned: 2021-09-13T17:56:55Z
dc.date.available: 2021-09-13T17:56:55Z
dc.date.issued: 2020-10-25
dc.identifier.citation: CABRAL, F. L. et al. An evaluation of MPI and OpenMP paradigms in finite-difference explicit methods for PDEs on shared-memory multi- and manycore systems. Concurrency and Computation: Practice and Experience, Chichester, v. 32, n. 20, e5642, 25 Oct. 2020. Special Issue. DOI: 10.1002/cpe.5642.
dc.identifier.uri: https://doi.org/10.1002/cpe.5642
dc.identifier.uri: http://repositorio.ufla.br/jspui/handle/1/48110
dc.description.abstract: This paper focuses on parallel implementations of three two-dimensional explicit numerical methods on the Intel® Xeon® Scalable Processor and the Knights Landing coprocessor. In this study, the performance of a hybrid parallel implementation combining the message passing interface (MPI) and Open Multi-Processing (OpenMP), and of a pure MPI implementation used with two thread-binding policies, is compared with that of an improved OpenMP-based implementation in three explicit finite-difference methods for solving partial differential equations on shared-memory multicore and manycore systems. Specifically, the improved OpenMP-based version is a strategy that synchronizes adjacent threads and eliminates the implicit barriers of a naïve OpenMP-based implementation. The experiments show that the most suitable approach depends on several characteristics related to the nonuniform memory access (NUMA) effect and load balancing, such as the size of the MPI domain and the number of synchronization points used in the parallel implementation. In algorithms that use four and five synchronization points, hybrid MPI/OpenMP approaches yielded better speedups than the other versions in runs performed on both systems. The pure MPI-based strategy, however, achieved better results than the other proposed approaches in the method that employs only one synchronization point.
dc.language: en_US
dc.publisher: Wiley
dc.rights: restrictAccess
dc.source: Concurrency and Computation: Practice and Experience
dc.subject: High-performance computing
dc.subject: Multicore architectures
dc.subject: Parallelism
dc.subject: Parallel processing
dc.subject: Computação de alto desempenho
dc.subject: Arquiteturas multicore
dc.subject: Paralelismo
dc.subject: Processamento paralelo
dc.title: An evaluation of MPI and OpenMP paradigms in finite-difference explicit methods for PDEs on shared-memory multi- and manycore systems
dc.type: Artigo (Article)
Appears in Collections:DCC - Artigos publicados em periódicos

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.