The COO assembly is entirely based on Thrust primitives. I don't have enough experience to say whether we would get a serious speedup by writing our own kernels, but it is definitely worth a try if we end up adopting COO as the entry point for GPU irregular assembly.

Jed, you mentioned BDDC deluxe; what do you mean by that? Porting the setup/application of deluxe scaling onto the GPU?

The timing is not so bad for me to join the hackathon.

> On Mar 13, 2021, at 8:17 AM, Barry Smith <bsmith@petsc.dev> wrote:
>
>> On Mar 12, 2021, at 10:49 PM, Jed Brown <jed@jedbrown.org> wrote:
>>
>> Barry Smith <bsmith@petsc.dev> writes:
>>
>>>> On Mar 12, 2021, at 6:58 PM, Jed Brown <jed@jedbrown.org> wrote:
>>>>
>>>> Barry Smith <bsmith@petsc.dev> writes:
>>>>
>>>>> I think we should start porting the PetscFE infrastructure, numerical integrations, and vector and matrix assembly to GPUs soon. It is dog slow on CPUs and should be able to deliver higher performance on GPUs.
>>>>
>>>> IMO, this comes via interfaces to libCEED, not rolling yet another way to invoke quadrature routines on GPUs.
>>>
>>> I am not talking about the matrix-free stuff; that definitely belongs in libCEED, and there is no reason to rewrite it.
>>>
>>> But does libCEED also support the traditional finite element construction process, where the matrices are built explicitly? Or does it provide some of the code, integration points, integration formulas, etc. that could be shared and used as a starting point? If it includes all of these "traditional" things, then we should definitely get it all hooked into PetscFE/DMPLEX and go to town. (But then, yes, there is not so much need for the GPU hackathon, since the work is more wiring than GPU code.) I have always heard libCEED described as a matrix-free engine, so I may have misunderstood. It is definitely not my intention to start a project that reproduces functionality we can just use.
>>
>> MFEM wants this too, and it's in a draft libCEED PR right now. My intent is to ensure it's compatible with Stefano's split-phase COO assembly.
>
> Cool, would this be something that, in combination with perhaps some libCEED folks, could be incorporated into the hackathon? Anyone can join our hackathon group; they don't have to have any financial connection with "PETSc".
>
>>> We do need solid support for traditional finite element assembly on GPUs; matrix-free finite elements alone are not enough.
>>
>> Agreed, and while libCEED could be further optimized for lowest order, even naive assembly will be faster than what's in DMPlex.
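
For reference, here is a minimal host-side sketch of the split-phase COO assembly interface discussed above, assuming the MatSetPreallocationCOO()/MatSetValuesCOO() pair and a PETSc build/matrix type that implements it. The 4x4 toy data and the plain AIJ type are illustrative only; the GPU path would instead select a device matrix type such as aijcusparse at runtime.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    /* Five COO entries of a toy 4x4 matrix; repeated (i,j) pairs, if any, are summed. */
    PetscInt       coo_i[] = {0, 0, 1, 2, 3};
    PetscInt       coo_j[] = {0, 1, 1, 2, 3};
    PetscScalar    coo_v[] = {2.0, -1.0, 2.0, 2.0, 2.0};
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 4, 4);CHKERRQ(ierr);
    ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);   /* e.g. -mat_type aijcusparse for the GPU path */
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);

    /* Phase 1: hand over the (i,j) pattern once; the nonzero structure is built here. */
    ierr = MatSetPreallocationCOO(A, 5, coo_i, coo_j);CHKERRQ(ierr);

    /* Phase 2: set (and later re-set, e.g. each Newton step) all values in one call;
       for device matrix types the value array may live in GPU memory. */
    ierr = MatSetValuesCOO(A, coo_v, ADD_VALUES);CHKERRQ(ierr);

    ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }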