On Fri, Oct 5, 2012 at 6:50 PM, Karl Rupp <rupp@mcs.anl.gov> wrote:
> Dear petsc-dev'ers,
>
> I'll start my undertaking of a common infrastructure for linear algebra
> operations with a first look at managing memory. Even though this is
> presumably the less complex part compared to the actual execution model,
> there are still a number of subtleties involved. Some introductory
> information is also given in order to provide the necessary context (and
> to make sure I haven't misinterpreted anything).
>
> -- 1. Introduction --
>
> Let's begin with the current data structure of a Vec (some comments
> shortened to make everything fit on one line):
>
>   struct _p_Vec {
>     PETSCHEADER(struct _VecOps);
>     PetscLayout   map;
>     void          *data;           /* implementation-specific data */
>     PetscBool     array_gotten;
>     VecStash      stash,bstash;    /* storing off-proc values      */
>     PetscBool     petscnative;     /* ->data: VecGetArrayFast()    */
>   #if defined(PETSC_HAVE_CUSP)
>     PetscCUSPFlag valid_GPU_array; /* GPU data up-to-date?         */
>     void          *spptr;          /* GPU data handler             */
>   #endif
>   };
>
> In a purely CPU-driven execution there is a pointer to the data (*data),
> which is assumed to reside in a single linear piece of memory (please
> correct me if I'm wrong), yet may be managed by some external routines
> (VecOps).
No, 'data' is actually a pointer to the implementation class (it is helpful
to compare this to the other class headers, which all have the data
pointer). In this case it would be Vec_Seq or Vec_MPI:

  http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/0b92fc173218/src/vec/vec/impls/dvecimpl.h#l14

In fact it is VECHEADER that holds the array:

  http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/0b92fc173218/include/petsc-private/vecimpl.h#l435

Jed started the practice of linking to code, and I think it's the bee's
knees. You are correct that all these implementations assume a piece of
linear memory on the CPU. On the GPU, we synchronize some linear memory
with Cusp vectors.
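As a rough illustration of that layering (the helper function and its name
are invented for illustration; the member names follow the linked headers),
a native vector reaches its contiguous CPU storage through the 'data'
pointer:

  /* The implementation struct begins with VECHEADER (quoted further below),
     so for a native Vec the entries live in one linear CPU buffer.          */
  typedef struct {
    VECHEADER                               /* PetscScalar *array; ...       */
  } Vec_Seq;

  static PetscScalar *VecSeqArray(Vec v)    /* hypothetical helper           */
  {
    return ((Vec_Seq*)v->data)->array;      /* the single linear CPU array   */
  }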
> As accelerators enter the game (indicated by PETSC_HAVE_CUSP), the concept
> of a vector having one pointer to its data is undermined. Now a Vec can
> have data in CPU RAM and on one (or, with txpetscgpu, multiple) CUDA
> accelerator. 'valid_GPU_array' indicates which of the two memory domains
> holds the most recent data, possibly both.
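(For reference, the flag in question is an enum roughly along the following
lines; the exact spelling in petsc-dev may differ:)

  typedef enum {PETSC_CUSP_UNALLOCATED,  /* no GPU buffer exists yet       */
                PETSC_CUSP_GPU,          /* GPU copy holds the latest data */
                PETSC_CUSP_CPU,          /* CPU copy holds the latest data */
                PETSC_CUSP_BOTH          /* both copies are up to date     */
               } PetscCUSPFlag;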
There is an implementation of PETSc Vecs with non-contiguous memory for
SAMRAI.
> -- 2. Shortcomings of the Current Model --
>
> First, the additional preprocessor directive for supporting a
> dual-memory-domain setup is a clear sign that this is an add-on to a
> single-memory-domain model rather than a well-designed multi-memory-domain
> model. If OpenCL support is to be added, one would end up overloading
> 'valid_GPU_array' and 'spptr', and thus with a model supporting either
> OpenCL or CUDA, but not both.
>
> The second subtlety involves the physical location of data. OpenCL and
> CUDA provide options for CPU-mapped memory, i.e. the synchronization logic
> can be deferred to the respective drivers. Still, one would have to manage
> a pair of {CPU pointer; GPU handle} rather than a single pointer. Also,
> the current welding of CPU and GPU in AMD's Llano and Trinity ultimately
> leads to a single pointer into main RAM for both parts of the device.
> Either way, splitting the storage of memory handles between *data and
> *spptr would clearly prevent any such unified handling of memory
> locations.
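(To make the second point concrete, a hypothetical sketch of what one would
have to track per buffer once mapped memory is involved; the type and
member names are invented for illustration:)

  typedef struct {
    PetscScalar *host_ptr;    /* CPU-visible address of the mapped buffer  */
    void        *dev_handle;  /* CUdeviceptr or cl_mem backing the mapping */
  } MappedBufferPair;         /* a single pointer is no longer sufficient  */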
>
> Third, *spptr does not actually refer to a GPU memory handle, but points
> to a full memory handler (GPUarray in the single-GPU case, GPUvector with
> txpetscgpu). Such functionality would better be placed in VecOps rather
> than out-sourced via *spptr, particularly as VecOps is intended to
> accomplish exactly that.
>
>
> -- 3. Proposed Modifications --
>
> I'm proposing to drop the lines
>
>   #if defined(PETSC_HAVE_CUSP)
>     PetscCUSPFlag valid_GPU_array; /* GPU data up-to-date? */
>   #endif
>
> from the definition of a Vec, and similarly for Mat. As *spptr seems to be
> in use for other purposes in Mat, one may keep *spptr in Vec for reasons
> of uniformity, but leave it unused for accelerator purposes.
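(For concreteness, the trimmed Vec header would then be the struct quoted
above minus the CUSP-specific members, roughly:)

  struct _p_Vec {
    PETSCHEADER(struct _VecOps);
    PetscLayout   map;
    void          *data;          /* implementation-specific data          */
    PetscBool     array_gotten;
    VecStash      stash,bstash;   /* storing off-proc values               */
    PetscBool     petscnative;    /* ->data: VecGetArrayFast()             */
    void          *spptr;         /* kept for uniformity, unused for GPUs  */
  };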
>
> As for the handling of data, I suggest an extension of the current data
> container, currently defined by
>
>   #define VECHEADER                 \
>     PetscScalar *array;             \
>     PetscScalar *array_allocated;   \
>     PetscScalar *unplacedarray;
>
> The first option is to use preprocessor magic to inject pointers to
> accelerator handles (and appropriate use-flags) one after another directly
> into VECHEADER.
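(Roughly, that first option would amount to something like the following;
the helper-macro name is invented for illustration, and a helper macro is
needed at all because #if cannot appear inside a #define:)

  #if defined(PETSC_HAVE_CUDA)
  #  define VECHEADER_CUDA  void *cuda_handle; PetscBool cuda_valid;
  #else
  #  define VECHEADER_CUDA
  #endif

  #define VECHEADER                 \
    PetscScalar *array;             \
    PetscScalar *array_allocated;   \
    PetscScalar *unplacedarray;     \
    VECHEADER_CUDA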
>
> However, as nested preprocessor magic is detrimental to code legibility, I
> prefer the second option, which is to add a generic pointer to a struct
> PetscAcceleratorData. One is then free to handle all meta-information for
> accelerators in PetscAcceleratorData and to place suitable
> enabler-#defines therein. The additional indirection from *data into
> PetscAcceleratorData is not problematic for accelerators, because kernel
> launch overheads are on the order of 10 microseconds. Host-based
> executions such as OpenMP or a thread pool don't need to access the
> accelerator handles anyway, as they operate on the main memory provided by
> *array.
>
> The projected definition of PetscAcceleratorData will be something similar
> to
>
>   struct PetscAcceleratorData {
>   #if defined(PETSC_HAVE_CUDA)
>     PetscCUDAHandleDescriptor   *cuda_handles;
>     PetscInt                    cuda_handle_num;
>   #endif
>   #if defined(PETSC_HAVE_OPENCL)
>     PetscOpenCLHandleDescriptor *opencl_handles;
>     PetscInt                    opencl_handle_num;
>   #endif
>     /* etc. */
>   };
>
> Here, the PetscXYZHandleDescriptor holds
>  - the memory handle,
>  - the device ID the handle is valid for, and
>  - a flag indicating whether the data is valid
>    (cf. valid_GPU_array, but with a much finer granularity).
> Additional meta-information such as index ranges can be added as needed,
> cf. Vec_Seq vs. Vec_MPI. Different Petsc*HandleDescriptor types are
> expected to be required because the various memory handle types are not
> guaranteed to have a common maximum size across accelerator platforms; a
> rough sketch follows below.
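(A minimal sketch of two such descriptors, with member names chosen purely
for illustration; it also shows why separate types are needed, since
CUdeviceptr and cl_mem are platform-specific handle types of different
nature:)

  typedef struct {
    CUdeviceptr handle;      /* CUDA driver-API memory handle                */
    PetscInt    device_id;   /* device this handle is valid for              */
    PetscBool   valid;       /* does this buffer hold the most recent data?  */
  } PetscCUDAHandleDescriptor;

  typedef struct {
    cl_mem      handle;      /* OpenCL memory handle (opaque pointer type)   */
    PetscInt    device_id;   /* device this handle is valid for              */
    PetscBool   valid;       /* does this buffer hold the most recent data?  */
  } PetscOpenCLHandleDescriptor;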
>
> At this point I have to admit that a few more implementation details may
> show up, yet the proposed model is able to cover the case of multiple
> accelerators from different vendors and provides fine-grained
> meta-information for each buffer.
>
> Similar modifications would be applied to Mat, where the data ultimately
> needs to be mapped to linear pieces of memory again for use on
> accelerators.
>
>
> -- 4. Concluding Remarks --
>
> Even though the mere question of how to hold memory handles is certainly
> less complex than a full unification of the actual operations at runtime,
> this first step needs to be done right in order to have a solid foundation
> to build on. Thus, if you guys spot any weaknesses in the proposed
> modifications, please let me know. I have tried to align everything so
> that it integrates nicely into PETSc, yet I don't know many of the
> implementation details yet...
I can't tell from the above how we would synchronize memory. Perhaps it
would be easiest to show, with an example, how this would work as opposed
to the current system.

   Matt
> Thanks and best regards,
> Karli
>
>
> PS: The reverse-lookup of the vector initialization routines revealed a
> remarkably sophisticated initialization system... Chapeau!

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
  -- Norbert Wiener