That is a very interesting issue you raise. I understand you are interested in
fine-grained distributed-memory applications: I have experimented recently
with ProActive. While the learning curve can be steep (which is not helped by
the rather unresponsive ProActive mailing list), once you work out the initial
deployment problems it is great fun to work with. The only drawback is
that after you transform your agents into ProActive ActiveObjects and build
your environment on ProActive Domains, there is no MASON left (ProActive
takes care of scheduling). It is possible to have a large-scale simulation
deployed to a cluster based on ProActive and a small-scale desktop MASON one
that share 90% of the code; in particular, you won't have to change either your
agent code or your environment code, just the parts that deal with instantiation and
deployment.
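To make the split concrete, here is a rough sketch of how instantiation can be isolated behind a factory so the agent code stays identical in both builds. The factory interface and class names here are hypothetical (not actual MASON or ProActive API), and the ProActive call shown in the comment is from memory and may differ by version:

```java
// Sketch: keep agent behaviour identical for a desktop MASON run and a
// ProActive cluster run by hiding instantiation behind a factory.
// Only the factory implementation differs between the two builds.

interface AgentFactory {
    Agent create(String name);
}

class Agent {
    private final String name;
    Agent(String name) { this.name = name; }
    // Shared behaviour: unchanged regardless of deployment target.
    String step() { return name + " stepped"; }
}

// Desktop build: plain local objects, scheduled by MASON.
class LocalAgentFactory implements AgentFactory {
    public Agent create(String name) { return new Agent(name); }
}

// A cluster build would swap in a ProActive-backed factory instead,
// roughly (hedged, from memory -- check your ProActive version):
//   (Agent) PAActiveObject.newActive(Agent.class.getName(),
//                                    new Object[]{name}, node);

public class Demo {
    public static void main(String[] args) {
        AgentFactory factory = new LocalAgentFactory();
        Agent a = factory.create("a1");
        System.out.println(a.step());
    }
}
```

Swapping the factory at startup is the only change between the two deployments; everything that calls `create` and `step` is part of the shared 90%.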

ProActive comes bundled with a tool called IC2D (Interactive Control and
Debugging of Distribution), which is a great help when you want to monitor or
improve the performance of your simulation. I have heard good things about
network-attached memory solutions like Terracotta, but have not yet tried
them out; they look interesting though. Check out the proceedings of the last
AGENT conference; there are some papers that discuss these issues.

What scale of simulation are we talking about, such that a decent 8-core
desktop with plenty of RAM is not enough?



On Dec 6, 2007 7:59 PM, Glen E. P. Ropella <[log in to unmask]> wrote:

> What toolchains do y'all use for parallelizing your MASON simulations?
> Note that I'm not talking about multi-processor machines but clusters or
> loose collections of networked machines.
> - --
> glen e. p. ropella, 971-219-3846,
> The only way we can win is to leave before the job is done. -- George W.
> Bush