On Oct 16, 2013, at 7:24 PM, Sean Luke wrote:
> On Oct 12, 2013, at 5:50 AM, Ralf Buschermöhle wrote:
>> Meaning that eval.masterproblem.max-jobs-per-slave defines the maximal concurrency level for each client and eval.masterproblem.job-size defines the "chunk" size of communication fragments between client and server.
> job-size is effectively a chunk size. It's meant to maximize packet utilization. But if you have big GP individuals it won't matter much.
> max-jobs-per-slave is not the concurrency level: it's how many jobs are pushed out to the slave. Basically it's taking advantage of network bandwidth while the slave is processing. I'd keep it at 1 perhaps.
Let me revise this a bit more in the context of your assumption. I'd say that at present eval.masterproblem.job-size defines the degree of concurrency in a slave (as well as the job packing), and eval.masterproblem.max-jobs-per-slave is meant to keep the network buffer warm.
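For concreteness, under that reading the two parameters might be set like this in an ECJ params file (the parameter names come from this thread; the values are purely illustrative, not recommendations):

```properties
# Each network message to a slave carries up to 10 individuals;
# this also sets how many the slave can work on concurrently.
eval.masterproblem.job-size = 10

# Keep one extra job buffered at the slave so the network stays warm.
eval.masterproblem.max-jobs-per-slave = 2
```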
In truth I think that eval.masterproblem.max-jobs-per-slave should be either 1 or 2, and that's it. If it's 1, then a new job isn't sent out until the previous one has finished. This would be appropriate, for example, if the total number of jobs is only slightly larger than the number of slaves, or if individual evaluation time is highly variable. Otherwise 2 makes the most sense: send out a job, then while the slave is chewing on it, send out another job so there's no lag. But >2 makes little sense.
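The 1-versus-2 tradeoff can be seen with a back-of-the-envelope model (this is not ECJ code, just a sketch; the function name and timing parameters are made up for illustration). With one outstanding job you pay the network latency on every job; with two, each job is already waiting at the slave when the previous one finishes, so latency is paid roughly once. A third outstanding job adds nothing in this model:

```java
// Toy model of job pipelining between a master and one slave.
public class PipelineModel {
    // Estimated wall-clock time for one slave to evaluate nJobs jobs,
    // given a per-job network latency and a per-job evaluation time,
    // when at most maxOutstanding jobs may be in flight at once.
    static double totalTime(int nJobs, double evalTime,
                            double latency, int maxOutstanding) {
        if (maxOutstanding <= 1)
            // Serial: wait for each result before sending the next job.
            return nJobs * (latency + evalTime);
        // Pipelined: after the first send, the next job is already queued
        // at the slave, so latency is effectively paid only once.
        return latency + nJobs * evalTime;
    }

    public static void main(String[] args) {
        System.out.printf("1 outstanding: %.1f s%n",
                totalTime(100, 1.0, 0.2, 1));
        System.out.printf("2 outstanding: %.1f s%n",
                totalTime(100, 1.0, 0.2, 2));
        System.out.printf("3 outstanding: %.1f s (no better than 2)%n",
                totalTime(100, 1.0, 0.2, 3));
    }
}
```

Real evaluation times are variable, of course, which is exactly why 1 can beat 2 when jobs are scarce relative to slaves: you don't want a slow slave sitting on a buffered job that an idle slave could have taken.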