The double loop isn't really a big deal.  But whether the overhead of
parallelism is worth it depends on how much work you can actually do
in parallel.
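
To make the double loop concrete, here's a minimal sketch of the
two-phase pattern (the Fish class, its movement rule, and the place()
method are made up for illustration; Steppable, SimState, and
SparseGrid2D are MASON's):

    import sim.engine.*;
    import sim.field.grid.SparseGrid2D;
    import sim.util.Int2D;

    // Hypothetical agent: the parallel/serial split is the point
    // here, not the movement rule.
    public class Fish implements Steppable {
        final SparseGrid2D grid;
        int newX, newY;   // staged move, set during the parallel phase

        public Fish(SparseGrid2D grid) { this.grid = grid; }

        // Phase 1, run in parallel: reads and computation only,
        // no writes to any shared structure.
        public void step(SimState state) {
            Int2D here = grid.getObjectLocation(this);
            newX = (here.x + 1) % grid.getWidth();   // stand-in rule
            newY = here.y;
        }

        // Phase 2, run serially: the only place shared state is
        // mutated.
        public void place() {
            grid.setObjectLocation(this, newX, newY);
        }
    }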

One gotcha: we recently did a very heavily parallelized model with
many agents, each of which kept big maps in its head.  It turned out
that our parallelized version was slower than a single-threaded
version when we ran it on multi-core Intel boxes (as opposed to actual
multi-CPU boxes, where it ran great).  The reason: though the code was
easily parallelized, the amount of data each parallel process had to
manipulate was larger than the cache.  This forced a lot of memory
contention, and though Intel's chips have multiple cores, they have
only one, generally crummy, memory controller, which turned out to be
the big bottleneck.  Long story short: you may or may not benefit from
parallelization.  You'll have to try it and see.

Do watch out for race conditions, though: they're really fun to debug.

Sean

On May 13, 2010, at 12:21 PM, Steven Saul wrote:

> Hi Sean,
>
> Thanks for your reply!  From the reading I have recently done on
> concurrent programming, that actually makes a lot of sense, and
> avoiding race conditions is my primary concern.  It seems like I
> don't have much of a choice but to parallelize my simulation, because
> it involves modeling millions of individual fish that are fished by a
> fishing fleet.  It sounds like I can process in parallel everything
> that each fish does by itself without depending on anything else
> (i.e., things like growth, maturity, etc.).  From what you said, it
> also sounds like the fish can go through their movement algorithm in
> parallel (which tends to take some processing juice) and store their
> new x and y locations; then, within that time step, after all the
> fish have gone through the parallel sequence, they can go serial
> again, at which point each fish is assigned to its new grid location.
> I wonder whether this would be computationally efficient, though, as
> you would need to loop through the objects twice, right?  Once in
> parallel and then once serially?
>
> Thanks,
> Steve
>
>> On May 12, 2010, at 3:34 PM, Steven Saul wrote:
>>
>>> After looking at HeatBugs and the ThreadedDiffuser class, I was
>>> wondering if there was a way to use ParallelSequence to split the
>>> processing of agents (in this case the actual heatbugs) in each
>>> time step.  For example, if you have 800 heatbugs in any given time
>>> step, a computer with 8 cores would process 100 bugs on each core
>>> simultaneously in that time step.
>>
>> Sort of.  You just schedule a ParallelSequence on the schedule.  In
>> the ParallelSequence you put, say, eight RandomSequences.  Each of
>> the RandomSequences holds 1/8 of all your bugs.
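>>
>> Something like this, very roughly (assuming bugs is your
>> HeatBug[800]; the chunking arithmetic is just for illustration, and
>> the boolean on the RandomSequence constructor -- if I'm remembering
>> it right -- tells it to synchronize on the random number generator,
>> which you want when it runs inside a ParallelSequence):
>>
>>     Steppable[] chunks = new Steppable[8];
>>     for (int i = 0; i < 8; i++) {
>>         Steppable[] slice = new Steppable[100];
>>         System.arraycopy(bugs, i * 100, slice, 0, 100);
>>         chunks[i] = new RandomSequence(slice, true);
>>     }
>>     schedule.scheduleRepeating(new ParallelSequence(chunks));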
>>
>> However you have to be careful there.  Each HeatBug reads
>> information, moves, and writes information.  This presents
>> opportunities for race conditions.  First, if one HeatBug is reading
>> a location while another is writing the same location, you'll get
>> messed up results.  Second, if two HeatBugs access the SparseGrid2D
>> at the same time to move, they'll break the hash table.  This means
>> that realistically you can only do the read portion of these
>> operations in parallel.  So you'd have the parallel sequence call
>> step() methods which do the reads and internal computation for each
>> bug in parallel; but after that you'd have to schedule the bugs
>> serially (as before, maybe at a later ordering) to move themselves
>> and then write to the heat array.
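>>
>> In schedule terms it'd look roughly like this, continuing the sketch
>> above and replacing its single scheduleRepeating() call (applyMove()
>> is a hypothetical method that does the setObjectLocation() and
>> heat-array writes that step() staged; Schedule.EPOCH and the
>> ordering argument are MASON's):
>>
>>     // phase 1: reads and internal computation, in parallel
>>     schedule.scheduleRepeating(Schedule.EPOCH, 0,
>>         new ParallelSequence(chunks));
>>
>>     // phase 2: moves and writes, serially, at a later ordering
>>     // within the same time step
>>     schedule.scheduleRepeating(Schedule.EPOCH, 1, new Steppable() {
>>         public void step(SimState state) {
>>             for (int i = 0; i < bugs.length; i++)
>>                 bugs[i].applyMove(state);  // hypothetical write phase
>>         }
>>     });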
>>
>> This is advanced threading stuff for people who are really familiar
>> with the perils of threaded coding.  If the above paragraph causes
>> you to say "huh?", then the answer is NO, you SHOULD NOT parallelize
>> your agents.  :-)
>>
>> Sean
>>
>
>
> -- 
> Steven Saul, M.A.
> Graduate Assistant, Marine Biology and Fisheries
> Cooperative Institute for Marine and Atmospheric Studies
> Cooperative Unit for Fisheries Education and Research
> University of Miami - RSMAS
> 4600 Rickenbacker Cswy.
> Miami, Florida  33149
> + 1 305-421-4831
> http://cufer.rsmas.miami.edu