<div id="_htmlarea_default_style_" style="font:10pt arial,helvetica,sans-serif">Hi Victor,<br><br>Thanks a lot for explanation and comments. We hope
it would be possible to fix this issue. Please keep me informed.<br>If you need something to be tested just let me
know.<br><br>Regards,<br>Alexander<br><br><br>On Wed, 18 May 2011 20:01:26 -0400<br> Victor Minden <victorminden@gmail.com> wrote:<br>>
> Hi Alexander,
>
> Looking through the runs for CPU and GPU with only 1 process, I'm seeing the following oddity which you pointed out:
>
> CPU, 1 process
> minden@bb45:~/petsc-dev/src/snes/examples/tutorials$ /home/balay/soft/mvapich2-1.5-lucid/bin/mpiexec.hydra -machinefile /home/balay/machinefile -n 1 ./ex47cu -da_grid_x 65535 -snes_monitor -ksp_monitor
>   0 SNES Function norm 3.906279802209e-03
>     0 KSP Residual norm 2.600060425819e+01
>     1 KSP Residual norm 1.727316216725e-09
>   1 SNES Function norm 2.518839280713e-05
>     0 KSP Residual norm 1.864270710157e-01
>     1 KSP Residual norm 1.518456989028e-11
>   2 SNES Function norm 1.475794371713e-09
>     0 KSP Residual norm 1.065102315659e-05
>     1 KSP Residual norm 1.258453455440e-15
>   3 SNES Function norm 2.207728411745e-10
>     0 KSP Residual norm 6.963755704792e-12
>     1 KSP Residual norm 1.188067869190e-21
>   4 SNES Function norm 2.199244040060e-10
>
> GPU, 1 process
> minden@bb45:~/petsc-dev/src/snes/examples/tutorials$ /home/balay/soft/mvapich2-1.5-lucid/bin/mpiexec.hydra -machinefile /home/balay/machinefile -n 1 ./ex47cu -da_grid_x 65535 -snes_monitor -ksp_monitor -da_vec_type cusp
>   0 SNES Function norm 3.906279802209e-03
>     0 KSP Residual norm 2.600060425819e+01
>     1 KSP Residual norm 1.711173401491e-09
>   1 SNES Function norm 2.518839283204e-05
>     0 KSP Residual norm 1.864270712051e-01
>     1 KSP Residual norm 1.123567613474e-11
>   2 SNES Function norm 1.475752536169e-09
>     0 KSP Residual norm 1.065095925089e-05
>     1 KSP Residual norm 8.918344224261e-16
>   3 SNES Function norm 2.186342855894e-10
>     0 KSP Residual norm 6.313874615230e-11
>     1 KSP Residual norm 2.338370003621e-21
>
> As you noted, the CPU version terminates on SNES function norm whereas the GPU version stops on a KSP residual norm. Looking through the exact numbers, I found that the small differences in values between the GPU and CPU versions cause the convergence criterion for the SNES to be off by about 2e-23, so it goes for another round of line search before concluding it has found a local minimum and terminating. By using a GPU matrix as well,
>
> GPU, 1 process with cusp matrix
> minden@bb45:~/petsc-dev/src/snes/examples/tutorials$ /home/balay/soft/mvapich2-1.5-lucid/bin/mpiexec.hydra -machinefile /home/balay/machinefile -n 1 ./ex47cu -da_grid_x 65535 -snes_monitor -ksp_monitor -da_vec_type cusp -da_mat_type aijcusp
>   0 SNES Function norm 3.906279802209e-03
>     0 KSP Residual norm 2.600060425819e+01
>     1 KSP Residual norm 8.745056654228e-10
>   1 SNES Function norm 2.518839297589e-05
>     0 KSP Residual norm 1.864270723743e-01
>     1 KSP Residual norm 1.265482694189e-11
>   2 SNES Function norm 1.475659976840e-09
>     0 KSP Residual norm 1.065091221064e-05
>     1 KSP Residual norm 8.245135443599e-16
>   3 SNES Function norm 2.200530918322e-10
>     0 KSP Residual norm 7.730316189302e-11
>     1 KSP Residual norm 1.115126544733e-21
>   4 SNES Function norm 2.192093087025e-10
>
> it changes the values again just enough to push it to the right side of the convergence check.
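A minimal sketch, assuming the standard PETSc C API (SNESGetConvergedReason, SNESGetKSP, KSPGetConvergedReason): called after SNESSolve(), it reports which stopping criterion actually fired in each build; the -snes_converged_reason and -ksp_converged_reason command-line options print the same information without code changes. The helper name is hypothetical.

    /* Hypothetical helper, not part of ex47cu: call after SNESSolve() to see
       whether the nonlinear solver or the last linear solve triggered
       termination. */
    #include <petscsnes.h>

    static PetscErrorCode ReportConvergence(SNES snes)
    {
      SNESConvergedReason sreason;
      KSPConvergedReason  kreason;
      KSP                 ksp;
      PetscErrorCode      ierr;

      PetscFunctionBegin;
      ierr = SNESGetConvergedReason(snes,&sreason);CHKERRQ(ierr);   /* why SNES stopped */
      ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
      ierr = KSPGetConvergedReason(ksp,&kreason);CHKERRQ(ierr);     /* why last KSP stopped */
      ierr = PetscPrintf(PETSC_COMM_WORLD,"SNES converged reason %d, last KSP converged reason %d\n",(int)sreason,(int)kreason);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }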
>
> I am still looking into the problems for 2 processes with GPU; it seems to be using old data somewhere, as you can see from the fact that the function norm is the same at the beginning of each SNES iteration:
>
> GPU, 2 processes
> [agraiver@tesla-cmc new]$ mpirun -np 2 ./lapexp -da_grid_x 65535 -da_vec_type cusp -snes_monitor -ksp_monitor
>
>   0 SNES Function norm 3.906279802209e-03   <-----
>     0 KSP Residual norm 5.994156809227e+00
>     1 KSP Residual norm 5.927247846249e-05
>   1 SNES Function norm 3.906225077938e-03   <------
>     0 KSP Residual norm 5.993813868985e+00
>     1 KSP Residual norm 5.927575078206e-05
>
> So, it's doing some good calculations, then throwing them away and starting over again. I will continue to look into this.
>
> Cheers,
>
> Victor
>
> ---
> Victor L. Minden
>
> Tufts University
> School of Engineering
> Class of 2012
>
>
> On Wed, May 11, 2011 at 8:31 AM, Alexander Grayver <agrayver@gfz-potsdam.de> wrote:
>
>> Hello,
>>
>> Victor, thanks. We've got the latest version and now it doesn't crash. However, it seems there is still a problem.
>>
>> Let's look at three different runs:
>>
>> [agraiver@tesla-cmc new]$ mpirun -np 2 ./lapexp -da_grid_x 65535 -snes_monitor -ksp_monitor
>>
>>   0 SNES Function norm 3.906279802209e-03
>>     0 KSP Residual norm 5.994156809227e+00
>>     1 KSP Residual norm 3.538158441448e-04
>>     2 KSP Residual norm 3.124431921666e-04
>>     3 KSP Residual norm 4.109213410989e-06
>>   1 SNES Function norm 7.201017610490e-04
>>     0 KSP Residual norm 3.317803708316e-02
>>     1 KSP Residual norm 2.447380361169e-06
>>     2 KSP Residual norm 2.164193969957e-06
>>     3 KSP Residual norm 2.124317398679e-08
>>   2 SNES Function norm 1.719678934825e-05
>>     0 KSP Residual norm 1.651586453143e-06
>>     1 KSP Residual norm 2.037037536868e-08
>>     2 KSP Residual norm 1.109736798274e-08
>>     3 KSP Residual norm 1.857218772156e-12
>>   3 SNES Function norm 1.159391068583e-09
>>     0 KSP Residual norm 3.116936044619e-11
>>     1 KSP Residual norm 1.366503312678e-12
>>     2 KSP Residual norm 6.598477672192e-13
>>     3 KSP Residual norm 5.306147277879e-17
>>   4 SNES Function norm 2.202297235559e-10
>> [agraiver@tesla-cmc new]$ mpirun -np 1 ./lapexp -da_grid_x 65535 -da_vec_type cusp -snes_monitor -ksp_monitor
>>
>>   0 SNES Function norm 3.906279802209e-03
>>     0 KSP Residual norm 2.600060425819e+01
>>     1 KSP Residual norm 1.711173401491e-09
>>   1 SNES Function norm 2.518839283204e-05
>>     0 KSP Residual norm 1.864270712051e-01
>>     1 KSP Residual norm 1.123567613474e-11
>>   2 SNES Function norm 1.475752536169e-09
>>     0 KSP Residual norm 1.065095925089e-05
>>     1 KSP Residual norm 8.918344224261e-16
>>   3 SNES Function norm 2.186342855894e-10
>>     0 KSP Residual norm 6.313874615230e-11
>>     1 KSP Residual norm 2.338370003621e-21
>> [agraiver@tesla-cmc new]$ mpirun -np 2 ./lapexp -da_grid_x 65535 -da_vec_type cusp -snes_monitor -ksp_monitor
>>
>>   0 SNES Function norm 3.906279802209e-03
>>     0 KSP Residual norm 5.994156809227e+00
>>     1 KSP Residual norm 5.927247846249e-05
>>   1 SNES Function norm 3.906225077938e-03
>>     0 KSP Residual norm 5.993813868985e+00
>>     1 KSP Residual norm 5.927575078206e-05
>> [agraiver@tesla-cmc new]$
>>
>> lapexp is the default example, just renamed. The first run used 2 CPUs, the second one used 1 GPU, and the third one ran with 2 processes using 1 GPU.
>> The first difference is that when we use the CPU, the last line of the output is always:
>>   4 SNES Function norm 2.202297235559e-10
>> whereas for the GPU the last line is "N KSP ...something...".
>> Then it seems that for 2 processes using 1 GPU the example doesn't converge; the norm stays quite big. The same situation happens when we use 2 processes and 2 GPUs. Can you explain this?
>> BTW, we can even give you access to our server with 6 CPUs and 8 GPUs within one node.
>>
>> Regards,
>> Alexander
>>
>>
>> On 11.05.2011 01:07, Victor Minden wrote:
>>
>> I pushed my change to petsc-dev, so hopefully a new pull of the latest mercurial repository should do it; let me know if not.
>> ---
>> Victor L. Minden
>>
>> Tufts University
>> School of Engineering
>> Class of 2012
>>
>>
>> On Tue, May 10, 2011 at 6:59 PM, Alexander Grayver <agrayver@gfz-potsdam.de> wrote:
>>
>>> Hi Victor,
>>>
>>> Thanks a lot!
>>> What should we do to get the new version?
>>>
>>> Regards,
>>> Alexander
>>>
>>>
>>> On 10.05.2011 02:02, Victor Minden wrote:
>>>
>>> I believe I've resolved this issue.
>>>
>>> Cheers,
>>>
>>> Victor
>>> ---
>>> Victor L. Minden
>>>
>>> Tufts University
>>> School of Engineering
>>> Class of 2012
>>>
>>>
>>> On Sun, May 8, 2011 at 5:26 PM, Victor Minden <victorminden@gmail.com> wrote:
>>>
>>>> Barry,
>>>>
>>>> I can verify this on breadboard now.
>>>>
>>>> With two processes, cuda:
>>>>
>>>> minden@bb45:~/petsc-dev/src/snes/examples/tutorials$ /home/balay/soft/mvapich2-1.5-lucid/bin/mpiexec.hydra -machinefile /home/balay/machinefile -n 2 ./ex47cu -da_grid_x 65535 -log_summary -snes_monitor -ksp_monitor -da_vec_type cusp
>>>>   0 SNES Function norm 3.906279802209e-03
>>>>     0 KSP Residual norm 5.994156809227e+00
>>>>     1 KSP Residual norm 5.927247846249e-05
>>>>   1 SNES Function norm 3.906225077938e-03
>>>>     0 KSP Residual norm 5.993813868985e+00
>>>>     1 KSP Residual norm 5.927575078206e-05
>>>> terminate called after throwing an instance of 'thrust::system::system_error'
>>>>   what():  invalid device pointer
>>>> terminate called after throwing an instance of 'thrust::system::system_error'
>>>>   what():  invalid device pointer
>>>> Aborted (signal 6)
>>>>
>>>>
>>>> Without cuda:
>>>>
>>>> minden@bb45:~/petsc-dev/src/snes/examples/tutorials$ /home/balay/soft/mvapich2-1.5-lucid/bin/mpiexec.hydra -machinefile /home/balay/machinefile -n 2 ./ex47cu -da_grid_x 65535 -log_summary -snes_monitor -ksp_monitor
>>>>   0 SNES Function norm 3.906279802209e-03
>>>>     0 KSP Residual norm 5.994156809227e+00
>>>>     1 KSP Residual norm 3.538158441448e-04
>>>>     2 KSP Residual norm 3.124431921666e-04
>>>>     3 KSP Residual norm 4.109213410989e-06
>>>>   1 SNES Function norm 7.201017610490e-04
>>>>     0 KSP Residual norm 3.317803708316e-02
>>>>     1 KSP Residual norm 2.447380361169e-06
>>>>     2 KSP Residual norm 2.164193969957e-06
>>>>     3 KSP Residual norm 2.124317398679e-08
>>>>   2 SNES Function norm 1.719678934825e-05
>>>>     0 KSP Residual norm 1.651586453143e-06
>>>>     1 KSP Residual norm 2.037037536868e-08
>>>>     2 KSP Residual norm 1.109736798274e-08
>>>>     3 KSP Residual norm 1.857218772156e-12
>>>>   3 SNES Function norm 1.159391068583e-09
>>>>     0 KSP Residual norm 3.116936044619e-11
>>>>     1 KSP Residual norm 1.366503312678e-12
>>>>     2 KSP Residual norm 6.598477672192e-13
>>>>     3 KSP Residual norm 5.306147277879e-17
>>>>   4 SNES Function norm 2.202297235559e-10
>>>>
>>>> Note the repeated norms when using cuda. Looks like I'll have to take a closer look at this.
>>>>
>>>> -Victor
>>>>
>>>> ---
>>>> Victor L. Minden
>>>>
>>>> Tufts University
>>>> School of Engineering
>>>> Class of 2012
>>>>
>>>>
>>>>
>>>> On Thu, May 5, 2011 at 2:57 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>>>> >
>>>> > Alexander,
>>>> >
>>>> >   Thank you for the sample code; it will be very useful.
>>>> >
>>>> >   We have run parallel jobs with CUDA where each node has only a single MPI process and uses a single GPU, without the crash that you get below. I cannot explain why it would not work in your situation. Do you have access to two nodes, each with a GPU, so you could try that?
>>>> >
>>>> >   It is crashing in the delete of a
>>>> >
>>>> > struct _p_PetscCUSPIndices {
>>>> >   CUSPINTARRAYCPU indicesCPU;
>>>> >   CUSPINTARRAYGPU indicesGPU;
>>>> > };
>>>> >
>>>> > where CUSPINTARRAYGPU is a cusp::array1d<PetscInt,cusp::device_memory>,
>>>> >
>>>> > thus it is crashing after it has completed actually doing the computation. If you run with -snes_monitor -ksp_monitor, with and without -da_vec_type cusp, on 2 processes, what do you get for output in the two cases? I want to see if it is running correctly on two processes.
>>>> >
>>>> >   Could the crash be due to memory corruption sometime during the computation?
>>>> >
>>>> >
>>>> >    Barry
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On May 5, 2011, at 3:38 AM, Alexander Grayver wrote:
>>>> >
>>>> >> Hello!
>>>> >>
>>>> >> We work with the petsc-dev branch and the ex47cu.cu example. Our platform is an Intel Quad processor and 8 identical Tesla GPUs. The CUDA 3.2 toolkit is installed.
>>>> >> Ideally we would like to make petsc work in a multi-GPU way within just one node, so that different GPUs could be attached to different processes.
>>>> >> Since that is not possible within the current PETSc implementation, we created a preload library (see LD_PRELOAD for details) for the CUBLAS function cublasInit().
>>>> >> When PETSc calls this function, our library gets control and we assign GPUs according to rank within the MPI communicator; then we call the original cublasInit().
>>>> >> This preload library is very simple, see petsc_mgpu.c attached.
>>>> >> This trick makes each process have its own context, and ideally all computations should be distributed over several GPUs.
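The attached petsc_mgpu.c is not reproduced in this thread; a minimal sketch of such a shim, assuming the legacy CUBLAS entry point cublasStatus cublasInit(void), the CUDA runtime API, and MPI already initialized by PETSc, could look like this (built as a shared object and activated through LD_PRELOAD):

    /* Illustrative sketch only -- not the attached petsc_mgpu.c.
       Interpose cublasInit() via LD_PRELOAD, bind each MPI rank to one GPU,
       then hand off to the real CUBLAS initialization. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <cublas.h>   /* assumed legacy API: cublasStatus cublasInit(void) */

    cublasStatus cublasInit(void)
    {
      typedef cublasStatus (*cublasInit_fn)(void);
      cublasInit_fn real_init = (cublasInit_fn)dlsym(RTLD_NEXT, "cublasInit");
      int initialized = 0, rank = 0, ndev = 1;

      MPI_Initialized(&initialized);      /* PETSc has already called MPI_Init */
      if (initialized) MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      cudaGetDeviceCount(&ndev);
      if (ndev < 1) ndev = 1;
      cudaSetDevice(rank % ndev);         /* one GPU per rank, round-robin */
      fprintf(stderr, "[petsc_mgpu] rank %d -> device %d\n", rank, rank % ndev);
      return real_init();                 /* call the real cublasInit() */
    }

Built with something like mpicc -shared -fPIC petsc_mgpu.c -o libpetsc_mgpu.so -lcudart -lcublas -ldl, and run with LD_PRELOAD pointing at the resulting library.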
>>>> >>
>>>> >> We managed to build petsc and the example (see makefile attached) and we tested it as follows:
>>>> >>
>>>> >> [agraiver@tesla-cmc new]$ ./lapexp -da_grid_x 65535 -info > cpu_1process.out
>>>> >> [agraiver@tesla-cmc new]$ mpirun -np 2 ./lapexp -da_grid_x 65535 -info > cpu_2processes.out
>>>> >> [agraiver@tesla-cmc new]$ ./lapexp -da_grid_x 65535 -da_vec_type cusp -info > gpu_1process.out
>>>> >> [agraiver@tesla-cmc new]$ mpirun -np 2 ./lapexp -da_grid_x 65535 -da_vec_type cusp -info > gpu_2processes.out
>>>> >>
>>>> >> Everything except the last configuration works well. The last one crashes with the following exception and call stack:
>>>> >> terminate called after throwing an instance of 'thrust::system::system_error'
>>>> >>   what():  invalid device pointer
>>>> >> [tesla-cmc:15549] *** Process received signal ***
>>>> >> [tesla-cmc:15549] Signal: Aborted (6)
>>>> >> [tesla-cmc:15549] Signal code:  (-6)
>>>> >> [tesla-cmc:15549] [ 0] /lib64/libpthread.so.0() [0x3de540eeb0]
>>>> >> [tesla-cmc:15549] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x3de50330c5]
>>>> >> [tesla-cmc:15549] [ 2] /lib64/libc.so.6(abort+0x186) [0x3de5034a76]
>>>> >> [tesla-cmc:15549] [ 3] /opt/llvm/dragonegg/lib64/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x11d) [0x7f0d3530b95d]
>>>> >> [tesla-cmc:15549] [ 4] /opt/llvm/dragonegg/lib64/libstdc++.so.6(+0xb7b76) [0x7f0d35309b76]
>>>> >> [tesla-cmc:15549] [ 5] /opt/llvm/dragonegg/lib64/libstdc++.so.6(+0xb7ba3) [0x7f0d35309ba3]
>>>> >> [tesla-cmc:15549] [ 6] /opt/llvm/dragonegg/lib64/libstdc++.so.6(+0xb7cae) [0x7f0d35309cae]
>>>> >> [tesla-cmc:15549] [ 7] ./lapexp(_ZN6thrust6detail6device4cuda4freeILj0EEEvNS_10device_ptrIvEE+0x69) [0x426320]
>>>> >> [tesla-cmc:15549] [ 8] ./lapexp(_ZN6thrust6detail6device8dispatch4freeILj0EEEvNS_10device_ptrIvEENS0_21cuda_device_space_tagE+0x2b) [0x4258b2]
>>>> >> [tesla-cmc:15549] [ 9] ./lapexp(_ZN6thrust11device_freeENS_10device_ptrIvEE+0x2f) [0x424f78]
>>>> >> [tesla-cmc:15549] [10] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(_ZN6thrust23device_malloc_allocatorIiE10deallocateENS_10device_ptrIiEEm+0x33) [0x7f0d36aeacff]
>>>> >> [tesla-cmc:15549] [11] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(_ZN6thrust6detail18contiguous_storageIiNS_23device_malloc_allocatorIiEEE10deallocateEv+0x6e) [0x7f0d36ae8e78]
>>>> >> [tesla-cmc:15549] [12] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(_ZN6thrust6detail18contiguous_storageIiNS_23device_malloc_allocatorIiEEED1Ev+0x19) [0x7f0d36ae75f7]
>>>> >> [tesla-cmc:15549] [13] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(_ZN6thrust6detail11vector_baseIiNS_23device_malloc_allocatorIiEEED1Ev+0x52) [0x7f0d36ae65f4]
>>>> >> [tesla-cmc:15549] [14] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(_ZN4cusp7array1dIiN6thrust6detail21cuda_device_space_tagEED1Ev+0x18) [0x7f0d36ae5c2e]
>>>> >> [tesla-cmc:15549] [15] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(_ZN19_p_PetscCUSPIndicesD1Ev+0x1d) [0x7f0d3751e45f]
>>>> >> [tesla-cmc:15549] [16] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(PetscCUSPIndicesDestroy+0x20f) [0x7f0d3750c840]
>>>> >> [tesla-cmc:15549] [17] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(VecScatterDestroy_PtoP+0x1bc8) [0x7f0d375af8af]
>>>> >> [tesla-cmc:15549] [18] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(VecScatterDestroy+0x586) [0x7f0d375e9ddf]
>>>> >> [tesla-cmc:15549] [19] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(MatDestroy_MPIAIJ+0x49f) [0x7f0d37191d24]
>>>> >> [tesla-cmc:15549] [20] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(MatDestroy+0x546) [0x7f0d370d54fe]
>>>> >> [tesla-cmc:15549] [21] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(SNESReset+0x5d1) [0x7f0d3746fac3]
>>>> >> [tesla-cmc:15549] [22] /opt/openmpi_gcc-1.4.3/lib/libpetsc.so(SNESDestroy+0x4b8) [0x7f0d37470210]
>>>> >> [tesla-cmc:15549] [23] ./lapexp(main+0x5ed) [0x420745]
>>>> >>
>>>> >> I've sent all detailed output files for the different execution configurations listed above, as well as configure.log and make.log, to petsc-maint@mcs.anl.gov, hoping that someone could recognize the problem.
>>>> >> Now we have one node with multiple GPUs, but I'm also wondering if someone has really tested usage of the GPU functionality over several nodes with one GPU each?
>>>> >>
>>>> >> Regards,
>>>> >> Alexander
>>>> >>
>>>> >> <petsc_mgpu.c><makefile.txt><configure.log>
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>
>>