[petsc-users] Vec sizing using DMDA

Antoine Côté Antoine.Cote3 at USherbrooke.ca
Thu Apr 23 11:02:25 CDT 2020


Hi,

I'm using a C++/PETSc program to do Topological Optimization. A finite element analysis is solved at every iteration of the optimization. The displacements U are obtained with a KSP solver. U is a Vec created from a 3D DMDA with 3 DOF (ux, uy, uz). Boundary conditions are stored in a Vec N, and forces in a Vec RHS; both also have 3 DOF, since they are created with VecDuplicate on U.
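For reference, here is roughly what that setup looks like (the grid sizes nx, ny, nz and the assembly of the stiffness matrix K are simplified placeholders, not my actual code):

    DM  da;
    Mat K;
    Vec U, N, RHS;
    KSP ksp;
    DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_BOX, nx, ny, nz, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                 3, 1, NULL, NULL, NULL, &da);   /* dof = 3 : ux, uy, uz */
    DMSetUp(da);
    DMCreateMatrix(da, &K);            /* stiffness matrix */
    DMCreateGlobalVector(da, &U);      /* displacements */
    VecDuplicate(U, &N);               /* boundary conditions */
    VecDuplicate(U, &RHS);             /* forces */
    /* ... assemble K, N and RHS, apply b.c. ... */
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, K, K);
    KSPSolve(ksp, RHS, U);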

My problem: I have multiple load cases (i.e., different sets of boundary conditions (b.c.) and forces). The displacements U are solved for each load case, and I need to extract the b.c. and forces for each load case quickly before solving.

One way would be to change the DOF of the DMDA (e.g., for 8 load cases, use 3*8 = 24 DOF). The problem is that, prior to solving, we would need to loop over the nodes to extract the b.c. and forces, for every node, for every load case and for every iteration of the optimization. This is a waste of time, since the b.c. and forces are constant for a given load case.

A better way would be to assemble the b.c. and forces for every load case once, and read them afterwards as needed. This is currently done using VecDuplicate on U to create multiple vectors N and RHS (N_0, N_1, RHS_0, RHS_1, etc.). Those vectors are hard coded, so the program can only handle a fixed number of load cases.
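Concretely, the current (hard coded) version looks like this for two load cases:

    Vec N_0, RHS_0, N_1, RHS_1;
    VecDuplicate(U, &N_0);  VecDuplicate(U, &RHS_0);
    VecDuplicate(U, &N_1);  VecDuplicate(U, &RHS_1);
    /* N_i and RHS_i are filled once, then reused at every optimization iteration */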

I'm looking for a way to allocate the N and RHS vectors dynamically. What I would like:
Given nlc, the number of load cases, and nn, the number of nodes in the DMDA, create matrices N and RHS of size (DOF*nn rows, nlc columns). While optimizing: for every load case, use N[all rows, current load case column] and RHS[all rows, current load case column], solve with KSP, and obtain the displacements U[all rows, current load case column].
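In code, I imagine something along these lines (VecDuplicateVecs is only my guess at a suitable tool; each "column" would be one Vec of size DOF*nn):

    Vec *Narr, *RHSarr, *Uarr;
    VecDuplicateVecs(U, nlc, &Narr);      /* one b.c. vector per load case      */
    VecDuplicateVecs(U, nlc, &RHSarr);    /* one force vector per load case     */
    VecDuplicateVecs(U, nlc, &Uarr);      /* one solution vector per load case  */
    /* fill Narr[lc] and RHSarr[lc] once, before the optimization loop */
    for (PetscInt lc = 0; lc < nlc; ++lc) {
      /* apply the b.c. stored in Narr[lc], then solve */
      KSPSolve(ksp, RHSarr[lc], Uarr[lc]);
    }
    /* ... */
    VecDestroyVecs(nlc, &Narr);
    VecDestroyVecs(nlc, &RHSarr);
    VecDestroyVecs(nlc, &Uarr);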

Would that be possible?

Best regards,

Antoine Côté

