[petsc-users] Extra Variable in DMCircuit
Florian Meier
florian.meier at koalo.de
Fri Mar 14 13:34:23 CDT 2014
On 03/14/2014 06:24 PM, Abhyankar, Shrirang G. wrote:
>
>> Hi,
>> I have gotten quite far with my project, although I still have not
>> managed (or rather, "have not tried...") to get the parallelization
>> running (Shri: any news about that?).
> We've figured out what needs to be done but haven't done it yet :-).
> Your application needs either a vertex distribution with an overlap or a
> custom MPI reduction scheme. After speaking with Barry last week, it
> seems to me that the latter option would be the best way to proceed. A
> custom MPI reduction scheme is needed because you have 2 equations for
> every vertex, with the first equation needing an ADD operation and the
> second needing a PROD. Thus, we would need an ADD_PROD insertmode for
> DMLocalToGlobalXXX, which we currently don't have.
That sounds great! However, this solution seems to be very specific to
the equations. Does this approach still work when the equations get more
complex, e.g. handling multiple variables like PRODUCT(1+t*(a-1)) or
SUM(R*q*(1-f))?
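
For concreteness, here is a plain-MPI sketch of the combine rule as I
understand it. The interleaved (sum, product) pair layout and all names
are my own assumptions for illustration, not the actual
DMLocalToGlobalXXX machinery:

#include <mpi.h>
#include <stdio.h>

/* Combine rule for a hypothetical ADD_PROD reduction: each vertex
   contributes a pair (a, p); the first component is summed across
   ranks, the second is multiplied. */
static void AddProd(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
{
  double *in = (double *)invec, *inout = (double *)inoutvec;
  for (int i = 0; i < *len; i++) {         /* *len counts pairs */
    inout[2*i]   += in[2*i];               /* ADD  (first equation)  */
    inout[2*i+1] *= in[2*i+1];             /* PROD (second equation) */
  }
}

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  MPI_Datatype pair;                       /* one (add, prod) pair */
  MPI_Type_contiguous(2, MPI_DOUBLE, &pair);
  MPI_Type_commit(&pair);

  MPI_Op addprod;
  MPI_Op_create(AddProd, 1 /* commutative */, &addprod);

  double local[2] = {1.0*(rank+1), 0.5};   /* this rank's contribution */
  double global[2];
  MPI_Allreduce(local, global, 1, pair, addprod, MPI_COMM_WORLD);
  if (!rank) printf("sum = %g, product = %g\n", global[0], global[1]);

  MPI_Op_free(&addprod);
  MPI_Type_free(&pair);
  MPI_Finalize();
  return 0;
}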
>> Now I would like to add a single global variable (and a single equation)
>> to the equation system. Is there an elegant way to do this with DMCircuit?
>
> Is this akin to a "Ground" node for circuits? Is the variable value
> constant?
Maybe...
The additional equation is the product of the reliabilities over a
specific path in the network (a rather arbitrary, but small, subset of
the links; e.g. 55 links for a problem with 10000 links) minus a
predefined constant value.
This gives me the possibility of converting the formerly constant packet
generation (g_i) into a variable.
Adding an additional vertex works quite well. We will see how it
works out when running in parallel.
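
For reference, the residual I attach to that extra vertex is essentially
the following (PathResidual, r, path, npath, and Rtarget are placeholder
names of mine; the vertex itself just gets one variable, e.g. via
DMCircuitAddNumVariables, if I remember the API correctly):

#include <petscsys.h>

/* Residual of the single extra equation: the product of the link
   reliabilities r[] over the chosen path (link indices
   path[0..npath-1]) minus the predefined value Rtarget; it should
   vanish at the solution. */
static PetscScalar PathResidual(const PetscScalar r[], const PetscInt path[],
                                PetscInt npath, PetscScalar Rtarget)
{
  PetscScalar prod = 1.0;
  for (PetscInt k = 0; k < npath; k++) prod *= r[path[k]];
  return prod - Rtarget;
}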
> After working on your example I realized that specifying a bidirectional
> edge as two unidirectional edges in the data may cause problems for the
> partitioner. I observed that the two unidirectional edges may be
> assigned to different processors although they are connected to the
> same vertices. This may be a problem when communicating ghost values.
> Hence, I've modified the data format in the attached links1.txt file to
> only specify edges via their nodal connectivity and then to specify the
> type information. I've also reworked your source code accordingly, and
> it gives the same answer as your original code. It gives a wrong answer
> for parallel runs because of the incorrect ghost value exchanges. Once
> we have the ADD_PROD insertmode, this code should work fine in parallel
> too. I think that going forward you should use a similar data format.
Good idea, but unfortunately it is not guaranteed that every edge is
bidirectional in the extended formulation of the problem. What exactly
is the problem when the two unidirectional edges are assigned to
different processes?
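
Just to check that I understand the reworked format: each physical link
would appear exactly once in the edge list, with its direction carried
as data on the edge rather than as a second edge, roughly along these
lines (a hypothetical layout of mine, not your actual links1.txt):

#include <petscsys.h>

/* Hypothetical edge data: each link is listed once by its two
   endpoints, so the partitioner cannot split a bidirectional link
   across ranks; the type/direction lives in an edge component. */
typedef struct {
  PetscInt  type;        /* e.g. 0 = bidirectional, 1 = forward only */
  PetscReal reliability; /* per-link data */
} LinkData;

static const int edgelist[] = { 0,1,  1,2,  2,3 };  /* each link once */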
>
>>
>> A hackish solution might be to add an additional imaginary vertex that
>> is excluded from all other calculations, but that does not seem to be
>> the right way to do it.
>>
>> Greetings,
>> Florian
>