On Oct 12, 2013, at 5:50 AM, Ralf Buschermöhle wrote:

> On Oct 11, 2013, at 23:47, Sean Luke wrote:
> 
>> On Oct 11, 2013, at 4:55 PM, Ralf Buschermöhle wrote:
>> 
>>> in order to make it even more complex ... support for clients with different numbers of threads would also be ... great! :)
>> 
>> That one would be too tough to implement anytime soon.
>> But as to waiting until the job has all come in: my analysis of the current code suggests that this is NOT what happens.
>> 
>> Slaves can run in one of two modes:
>> 
>> 1. "run-evolve mode".  Here the slave loads the job into a population, then evaluates them, and if there's more time, does some evolution on that population, eventually returning the revised population as new versions of the original individuals.
> 
> The different modes are triggered by ... 
> 
> # return complete individuals
> eval.return-inds = true

No, by eval.run-evolve
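
Off the top of my head, the slave-side settings look roughly like this (double-check slave.params for the exact semantics of eval.return-inds -- the comments below are just my shorthand):

# mode switch: run-evolve (as described above) vs. plain evaluation;
# this, not eval.return-inds, is what selects the mode
eval.run-evolve = true

# return complete individuals (as opposed, presumably, to just their
# updated fitnesses)
eval.return-inds = true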


> // just to make sure I understand (some) implications correctly ...
> 
> Let's assume there is a multithreaded slave with 4 cores, with eval.masterproblem.job-size = 1 and eval.masterproblem.max-jobs-per-slave = 8, and a total population of a few hundred individuals ...
> 
> 0. The server fills the queue of the client (completely) with 8 jobs
> 1. The slave would evaluate 4 jobs concurrently 

From my reading of the code, I think that at present the slave would evaluate all 8 jobs concurrently.  No, I don't think ignoring evalthreads is optimal either.  But it'll take me a little bit to fix it.
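
In parameter terms, the scenario you're describing is roughly this (just a sketch of where each knob lives):

# master side
eval.masterproblem.job-size = 1
eval.masterproblem.max-jobs-per-slave = 8

# slave side -- you'd expect this to cap concurrent evaluations at 4,
# but right now it's not consulted (unless run-evolve is on)
evalthreads = 4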

> 2. After finishing the first job (the others take significantly longer), the slave sends the result to the server and starts processing the 5th job in the queue

At present the slave has to return the individuals in the order they arrived.  If individual 0 finishes first, it'll be returned immediately.  But if individual 4 finishes first, it has to wait until the earlier ones have been returned.

I know what you're trying for -- perhaps a better option for the time being would be to set the job size to 1, and create FOUR ECJ slave processes on your slave machine.  Then they'd be totally asynchronous.
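
On the slave machine that'd look something like this (assuming the usual ec.eval.Slave entry point and -file flag -- adjust to however you launch your slave now):

java ec.eval.Slave -file slave.params &
java ec.eval.Slave -file slave.params &
java ec.eval.Slave -file slave.params &
java ec.eval.Slave -file slave.params &

With eval.masterproblem.job-size = 1 on the master, each process gets and returns one individual at a time, independently of the others.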

> 3. The server would fill up the job queue of the slave after receiving the evaluated individual (if eval.masterproblem.job-size = 2, the server would wait until 2 individuals have been received and then refill the queue), as long as individuals still need to be processed.

This is assuming that the job queue is only 1 in length.

> Meaning that eval.masterproblem.max-jobs-per-slave defines the maximum concurrency level for each client, and eval.masterproblem.job-size defines the "chunk" size of the communication fragments between client and server.

job-size is effectively a chunk size.  It's meant to maximize packet utilization.  But if you have big GP individuals it won't matter much.

max-jobs-per-slave is not the concurrency level: it's how many jobs are pushed out to the slave at a time.  Basically it takes advantage of network bandwidth while the slave is busy processing.  I'd perhaps keep it at 1.

I think the thing that modulates concurrency should be the slave's evalthreads parameter.  But at present it isn't (unless run-evolve is being used).  That should and can be fixed fairly easily when I get to it.
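
To sum up, the settings I'd eventually aim for on a 4-core slave, once evalthreads is respected (values are just suggestions):

# master side: job-size is just a chunk size (1 is fine for big
# individuals); don't queue extra jobs ahead on the slave
eval.masterproblem.job-size = 1
eval.masterproblem.max-jobs-per-slave = 1

# slave side: what should govern concurrency once that fix is in
evalthreads = 4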

Also, it's not fully asynchronous -- the individuals have to be returned in the order they were received.  That'd be a harder thing to hack around: instead, I'd run 4 separate ECJ slave processes, each with a job size of 1.

Sean