I *thought* that "easily" might not be the right term. :-) :-)
MASON and Repast, and Swarm, and Netlogo, etc., are all single-memory-
space systems. Some (like MASON) can have multiple threads, but they
all share the same memory space -- that is, MASON provides for
symmetric multiprocessing only. The reason for this is simple: to
build a system which distributes the memory space across multiple
machines, you need to have data structures designed for this purpose,
and it's pretty nontrivial to create structures which are efficient
in both a single-memory-space format and also a multi-memory-space
format. On top of it, in a distributed simulator you have to deal
with synchronization, though that's often less of a hassle.
So, long story short, you *could* do a distributed thing in MASON (I
might look at JavaSpaces rather than MPI), but we have nothing built-
in. MASON was intended to be a fast, small SMP simulator because
that's the biggest function we've used it for: massively parallel
simultaneous runs. But certain NSF projects for MASON may push us to
develop a distributed version. We'll see: it's a huge overhead!
Here's the SMP support MASON has right now:
- MASON's model core is self-contained. This means you can run
simultaneous, independent MASON simulations in parallel in separate
threads in the same process. We do that a lot.
- MASON's model serializability allows us to build a model on one
processor, then ship it to a remote machine to be processed, then
ship it back.
- MASON has synchronous multithreaded agents: a Steppable can
subdivide itself into several Steppables, running each in parallel,
then gathering them at the end. See ParallelSequence.
- MASON has asynchronous agents: A Steppable can be fired off to
work independently of the Schedule, and be gathered at a later time
(or not at all). See AsynchronousSteppable.
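To make the ParallelSequence idea above concrete, here is a minimal sketch of the fork/gather pattern in plain Java -- no MASON classes, just threads. The Steppable interface and all names here are stand-ins I made up for illustration (MASON's real Steppable takes a SimState, and its ParallelSequence handles this bookkeeping for you); it only shows the shape of "subdivide into several Steppables, run each in parallel, gather them at the end."

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelSketch {
    // Stand-in for MASON's Steppable interface (simplified: no SimState).
    interface Steppable {
        void step();
    }

    // Fork each sub-step into its own thread, then join them all before
    // returning -- the gather step that ParallelSequence performs.
    static void stepInParallel(List<Steppable> steps) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (Steppable s : steps) {
            Thread t = new Thread(s::step);
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join(); // wait for every sub-step to finish
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Four toy "agents", each writing its own slot of a shared array.
        int[] results = new int[4];
        List<Steppable> steps = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int idx = i;
            steps.add(() -> results[idx] = idx * idx);
        }
        stepInParallel(steps);
        int sum = 0;
        for (int r : results) sum += r;
        System.out.println(sum); // 0 + 1 + 4 + 9 = 14
    }
}
```

Note that because the sub-steps are joined before the method returns, the schedule sees the whole group as one synchronous step -- which is exactly why this is SMP support rather than distribution: everything still lives in one memory space.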
On Oct 7, 2006, at 4:40 PM, Günther Greindl wrote:
> Hello Sean,
> sorry for taking so long to respond, but we started term this last
> week here
> in Vienna, and I barely had time to check my mail.
> What a week.
>>> I know that you can add distributed support to Repast Models quite easily
>> Gunther, could you elaborate on this?
> Hazel Parry (http://www.geog.leeds.ac.uk/people/h.parry/)
> gave a talk on this a couple of months ago here in Vienna - she
> used MPI from lam-mpi.org and a Java wrapper from hpjava.org.
> So it wasn't out-of-the-box Repast support; she probably had to
> adapt it considerably - but it works ;-)
> I guess you can do this with MASON too. Have you thought about building
> parallel support right into MASON? Are there reasons against doing so?
> Of course, considering the MPI overhead, using a parallel simulation
> only makes
> sense if you go real big ;-)
> Kind Regards and sorry again for the lag,