MASON-INTEREST-L Archives

April 2010

MASON-INTEREST-L@LISTSERV.GMU.EDU

From: Sean Luke <[log in to unmask]>
Reply-To: MASON Multiagent Simulation Toolkit <[log in to unmask]>
Date: Wed, 14 Apr 2010 16:52:48 -0400

Martin, ParallelSequence maintains a pool of threads and semaphores in  
order to avoid reallocating new threads all the time (which is very  
expensive).  Normally these are supposed to be freed during finalize().

But I'm guessing that your VM is not calling finalize() at all -- it's  
not required of Java.  Hence those threads just sit around and start  
filling up all your memory.  It'd make things very slow I imagine.

If you look at the ParallelSequence code there's a non-public function  
called "gatherThreads".  This is called by finalize to clean up when  
the object goes away.  I suggest as a test you make gatherThreads  
public, then you manually call gatherThreads() yourself on every  
ParallelSequence you create before the next iteration of your model.   
This should force the ParallelSequences to kill their threads and get  
rid of their semaphores, at which point you can throw them away.
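
For example -- this is a rough, untested sketch, and it assumes you've made
gatherThreads() public and that you keep the ParallelSequence in a field of
MainModel (I'm calling it "update" here, after your code below; "update",
"params", and "numberOfRuns" are just placeholder names):

	for (int run = 0; run < numberOfRuns; run++)
		{
		MainModel model = new MainModel(seed, params);  // your parameter specifications
		model.start();
		long steps;
		do
			{
			if (!model.schedule.step(model))
				break;
			steps = model.schedule.getSteps();
			}
		while (steps < 240);
		model.finish();

		// Manually shut down the ParallelSequence's pooled worker threads and
		// release its semaphores rather than waiting for a finalize() call
		// that may never happen; after this the whole run can be collected.
		model.update.gatherThreads();
		}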

Tell me if this fixes things.  If it does I need to think about what  
the best way would be to enable this in MASON.

Sean

On Apr 14, 2010, at 4:34 PM, Martin Pokropp wrote:

> Dear Mr. Luke, dear "Masonists",
>
> I'm working on a model where agents (consumers) are connected by a  
> social network. On every time step they take a new decision, based  
> on their individual parameters and the parameters of their direct  
> neighbors in the network.
> In order to implement multiple runs with different parameter  
> specifications, I wrote a wrapper class from which the model  
> ("MainModel") is called. The basic structure of the wrapper class,  
> which holds the main(String[] args) method, is:
>
> 	for # runs
> 		MainModel model = new MainModel(seed, parameter specifications...);	
> 		model.start();
> 		long steps;
> 		do
> 			{
> 			if (!model.schedule.step(model))
> 				{
> 				break;
> 				}
> 			steps = model.schedule.getSteps();
> 			}
> 		while(steps < 240);
> 		model.finish();
> 	
> 	... do some file writing etc.
> 	System.exit(0);
>
> In order to speed up the simulation I decided to let  
> ParallelSequences do the job of stepping the model agents  
> (contained in the Bag networkAgents), so that at each time step they  
> do their lookups in their neighborhoods and make their new  
> decisions. The structure is as follows:
>
> Steppable[] steppable = new Steppable[numberOfCPUs];
> 	
> 	for(int j = 0; j<numberOfCPUs;j++)
> 		{
> 		final int p = j;
> 		steppable[j] = new Steppable ()
> 				{
> 				public void step(SimState state)
> 					{
> 					for(int i = p*NUMBER_OF_AGENTS/numberOfCPUs; i < (p+1)*NUMBER_OF_AGENTS/numberOfCPUs; i++)
> 						{
> 						Agent a = (Agent)(networkAgents.get(i));
> 						a does something...
> 						}
> 					}
> 				};
> 		}
> 	
> 	ParallelSequence update = new ParallelSequence(steppable);
> 	schedule.scheduleRepeating(Schedule.EPOCH,0,update,1);
> 		
> The problem I'm facing now is that after some simulation runs,  
> speed drops sharply until the run finally breaks down and Java throws  
> "Exception in thread "main" java.lang.OutOfMemoryError: Java heap  
> space". The problem occurs sooner if the number of model agents is  
> higher. I also tried giving Java more space with -Xmx200m, 400m,  
> even 800m; unfortunately the breakdown is only postponed but still  
> occurs. The problem occurs both under Linux and Mac OS X (10.4); I am  
> using ordinary personal computers with two or four CPUs.
> The Apple Activity Monitor shows an increasing number of threads  
> for every new sim run. It also shows that the space the simulation  
> uses is constantly growing, but at breakdown it is still far below the  
> virtual space reserved by Java. The heap space problem only comes  
> up with ParallelSequences, not with Sequences.
> I'm not an experienced programmer, but this seems to tell me that  
> after a sim run is over and the model instance has been finished, the  
> threads that were created for or by the ParallelSequences are not  
> terminated and therefore not garbage collected. Apparently creating  
> the sequence alone is sufficient to provoke these problems, because  
> when I commented out "schedule.scheduleRepeating(Schedule.EPOCH,0,update,1);"  
> and the simulation made only "empty" steps without any real actions  
> by the agents, the problem did not go away. The idea of putting the  
> Sequences into a OneShotStoppable wrapper (see  
> https://listserv.gmu.edu/cgi-bin/wa?A2=ind0410&L=MASON-INTEREST-L&P=R236&I=-3)  
> did not do the job either.
>
> Do you have an idea what could be done to avoid these problems?  
> Help is really appreciated...
>
> With Best Regards
>
> Martin
