[petsc-users] Extra Variable in DMCircuit

Abhyankar, Shrirang G. abhyshr at mcs.anl.gov
Fri Mar 14 14:22:19 CDT 2014



-----Original Message-----
From: Florian Meier <florian.meier at koalo.de>
Date: Fri, 14 Mar 2014 19:34:23 +0100
To: Shri <abhyshr at mcs.anl.gov>
Cc: petsc-users list <petsc-users at mcs.anl.gov>
Subject: Re: Extra Variable in DMCircuit

On 03/14/2014 06:24 PM, Abhyankar, Shrirang G. wrote:


That sounds great, although this solution seems to be very specific to
the equations. Does this approach still work when the equations get more
complex (e.g. handling multiple variables like PRODUCT(1+t*(a-1)) or
SUM(R*q*(1-f)))?

Catering to every custom need is difficult to maintain, hence what I'm planning to do is provide a way for the user to define their own custom MPI_Op that can be called
by DMLocalToGlobalXXX/DMGlobalToLocalXXX. So eventually you'll have to write the MPI_Op and we'll provide hooks to attach it to the DM.
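As a rough sketch (plain MPI, independent of the PETSc hooks mentioned above), a multiplicative MPI_Op could look like this; the Allreduce is only there to exercise the op:

    #include <mpi.h>
    #include <stdio.h>

    /* Multiply incoming contributions instead of adding them. */
    static void ProdOp(void *in, void *inout, int *len, MPI_Datatype *dtype)
    {
      double *a = (double *)in, *b = (double *)inout;
      for (int i = 0; i < *len; i++) b[i] *= a[i];
    }

    int main(int argc, char **argv)
    {
      MPI_Op prod;
      double local, global;
      int    rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Op_create(ProdOp, 1 /* commutative */, &prod);

      local = 1.0 - 0.1 * rank;  /* stand-in for a per-rank contribution */
      MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, prod, MPI_COMM_WORLD);
      if (!rank) printf("product = %g\n", global);

      MPI_Op_free(&prod);
      MPI_Finalize();
      return 0;
    }

The hook that would make DMLocalToGlobalXXX/DMGlobalToLocalXXX apply such an op in place of the built-in additive scatter is exactly the part that would still have to be provided.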


Now I would like to add a single global variable (and a single equation)
to the equation system. Is there an elegant way to do this with DMCircuit?
Is this akin to a "Ground" node for circuits? Is the variable value
constant?

Maybe...
The additional equation is the multiplication of the reliability over a
specific path in the network (a rather arbitrary, but small subset of
the links (e.g. 55 links for a problem with 10000 links)) minus a
constant predefined value.
This gives me the possibility to convert the formerly constant packet
generation (g_i) into a variable.
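In symbols (the names are mine, just to pin the idea down: P the set of path links, r_e the per-link reliability, R* the predefined target):

    \prod_{e \in P} r_e - R^{*} = 0

and the packet generation g_i, formerly a constant, becomes the matching extra unknown, so the system stays square.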

I see, so it is like an equality constraint on a subset of the links, not all of them. Presumably these links form a subnetwork that
may get assigned to one processor or to a set of neighboring processors.


Adding an additional vertex works quite well. We will see how it
works out when running in parallel.

After working on your example I realized that specifying a bidirectional
edge as two unidirectional edges in the data may cause problems for the
partitioner. I observed that the two unidirectional edges may be assigned
to different processors even though they connect the same vertices, which
may be a problem when communicating ghost values. Hence, I've modified
the data format in the attached links1.txt file to specify edges only via
their nodal connectivity and then to give the type information separately.
I've also reworked your source code accordingly, and it gives the same
answer as your original code. It gives a wrong answer for parallel runs
because of the incorrect ghost value exchanges; once we have the ADD_PROD
InsertMode, this code should work fine in parallel too. I think that going
forward you should use a similar data format.

Good idea, but unfortunately it is not always guaranteed that the edge
is bidirectional for the extended formulation of the problem.

Are you saying that the directionality could change during the calculation?
In your example, the INTERFERING edges are bidirectional
while the INFLOWING links are unidirectional. By setting up the appropriate relations in the
data attached to the edges, you can manage the equations for the
edges/vertices. If there is some specific case that cannot be handled, we can take a look at it.
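As a sketch of what I mean (assumed names, not code from the attached example), the component attached to each edge could carry both the type and an orientation flag, so a directed link never has to be split into two edges:

    #include <petscsys.h>

    typedef enum {EDGE_INFLOWING, EDGE_INTERFERING} EdgeType;

    typedef struct {
      EdgeType    type; /* INFLOWING (directed) or INTERFERING (bidirectional) */
      PetscInt    dir;  /* directed edges: +1 if oriented from the first vertex
                           of the connectivity pair to the second, -1 otherwise;
                           ignored for bidirectional edges */
      PetscScalar r;    /* per-link reliability entering the residual */
    } EdgeData;

The residual routine would then branch on type/dir when assembling the equations at the two end vertices, instead of relying on two separate unidirectional edges.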

What
exactly is the problem when the two unidirectional edges are assigned to
different processes?

I don't quite remember right now, but I recall seeing weird partitions and incorrect ghost exchanges. I'll have to run it once again
to produce specific details.

Shri

A hackish solution might be to add an additional imaginary vertex that
is excluded from all other calculations, but that does not seem to be
the right way to do it.
Greetings,
Florian
