Ignore my last email. I think I figured out why. The lambda reductions for
the trees are carried out during the evaluation process, and infinite
reduction is an inherent pitfall there. Since the reduction happens inside
evaluation, the depth checks on crossover and mutation do not apply. Thanks!
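One possible safeguard would be to cap the number of reduction steps during
evaluation rather than relying on the breeding depth limits. Below is a rough
standalone sketch (plain Java, not ECJ or my LambdaTree code; the Term classes
and method names are just placeholders for illustration) of a bounded
normal-order reducer that simply gives up after a fixed number of steps, so a
non-normalizing term like (\x. x x)(\x. x x) cannot run away:

import java.util.Optional;

// Minimal lambda-term representation (placeholders, not ECJ GPNodes).
interface Term {}
record Var(String name) implements Term {}
record Lam(String param, Term body) implements Term {}
record App(Term fun, Term arg) implements Term {}

public class BoundedReducer {
    static final int MAX_STEPS = 10_000;  // tuning parameter

    // Try one normal-order reduction step; empty if already in normal form.
    static Optional<Term> step(Term t) {
        if (t instanceof App a) {
            if (a.fun() instanceof Lam l)               // redex: (\x. body) arg
                return Optional.of(subst(l.body(), l.param(), a.arg()));
            Optional<Term> f = step(a.fun());
            if (f.isPresent()) return Optional.of(new App(f.get(), a.arg()));
            Optional<Term> g = step(a.arg());
            if (g.isPresent()) return Optional.of(new App(a.fun(), g.get()));
        } else if (t instanceof Lam l) {
            Optional<Term> b = step(l.body());
            if (b.isPresent()) return Optional.of(new Lam(l.param(), b.get()));
        }
        return Optional.empty();
    }

    // Naive substitution (does not avoid variable capture); fine for a sketch.
    static Term subst(Term t, String x, Term v) {
        if (t instanceof Var w) return w.name().equals(x) ? v : w;
        if (t instanceof Lam l)
            return l.param().equals(x) ? l : new Lam(l.param(), subst(l.body(), x, v));
        App a = (App) t;
        return new App(subst(a.fun(), x, v), subst(a.arg(), x, v));
    }

    // Reduce up to MAX_STEPS; if the bound is hit, return the partial result.
    static Term reduceBounded(Term t) {
        for (int i = 0; i < MAX_STEPS; i++) {
            Optional<Term> next = step(t);
            if (next.isEmpty()) return t;  // normal form reached
            t = next.get();
        }
        return t;  // gave up: likely non-normalizing or extremely large
    }

    public static void main(String[] args) {
        Term omega = new App(new Lam("x", new App(new Var("x"), new Var("x"))),
                             new Lam("x", new App(new Var("x"), new Var("x"))));
        System.out.println("Stopped after bounded reduction: " + reduceBounded(omega));
    }
}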



On Tue, Feb 11, 2014 at 3:18 PM, Sean Luke <[log in to unmask]> wrote:

> Xiaomeng, it sounds like you're making truly massive trees, like 100MB
> apiece.  That's so large there's likely a bug in your code.
>
> There's no real gotcha with cloning: it makes a deep copy of the tree.
>
> Sean
>
> On Feb 11, 2014, at 2:00 PM, Xiaomeng Ye <[log in to unmask]> wrote:
>
> > Hello everyone,
> >
> > I am trying to integrate lambda-calculus into the GP package of ECJ.
> Everything works just like any GPIndividual, GPTree, GPNode, except that
> the LambdaTree can go through a process of reduction, which produces a
> minimal form of the LambdaTree.
> > Because I used cloning and the tree could potentially be huge, I keep
> getting this error:
> > java.lang.OutOfMemoryError: Java heap space
> >
> > Even after extending the JVM heap size to 1 GB, I can only avoid it by
> setting an extremely small population size, like 20 (which produces nothing
> interesting, as the population is too small). Otherwise, the program runs
> out of memory at some point.
> >
> > Has anyone played with cloning? Are there any pitfalls? How does Java
> garbage collection treat those nodes? (If they are treated just like any
> other Java object, I think I might be using cloning incorrectly.)
> >
> > Thanks a lot!
> >
> > Ye Xiaomeng
> >
> >
> >
>