Faster and slightly less stupid.


Now, for a slightly different topic, I am going to nerd rage on a common misconception with evolving images: measuring the similarity between two pictures, which is essential to grading progress. How would you do it? If you said "compute the difference of the red/green/blue values for each pixel and sum the squares of the differences", you fail. Pixels (especially RGB ones) are a very inefficient representation of visual data, and any such inefficiency hurts the GA. You want the scoring process to be blunt, not caught up in minutiae. You want a picture that "looks similar" to the target. An RGB scoring algorithm evolves toward a similar binary file, which might coincidentally look similar. A different color space (one which matches the human eye) does much better. For example, the human eye is more sensitive to shades of green than shades of red, so you can discard half the red data without visibly affecting the result.
YCbCr is a fairly standard color space for this kind of work. Note that an entire channel is devoted to luminosity; a good chunk of our retina is devoted just to luminosity. YCbCr eliminates invisible redundancy, which would otherwise waste the GA's time. For this project, the choice of color space is entirely academic, as the image will be greyscale. More important is how the picture is encoded.
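To make the idea concrete, here is a minimal sketch of a perceptual scoring function: convert both images from RGB to YCbCr using the standard ITU-R BT.601 (full-range) coefficients, then weight luma differences more heavily than chroma differences. The `chroma_weight` value of 0.25 is an illustrative assumption, not a tuned constant; a real GA would pick its own weighting.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an (H, W, 3) uint8 RGB array to float YCbCr (ITU-R BT.601, full range)."""
    rgb = img.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Luma: green dominates, matching the eye's sensitivity.
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    # Chroma channels are centered on 128.
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def perceptual_score(candidate, target, chroma_weight=0.25):
    """Lower is better. Luma differences count fully; chroma differences are
    down-weighted, since the eye cares less about them (chroma_weight is an
    illustrative assumption)."""
    a, b = rgb_to_ycbcr(candidate), rgb_to_ycbcr(target)
    luma_err   = np.sum((a[..., 0] - b[..., 0]) ** 2)
    chroma_err = np.sum((a[..., 1:] - b[..., 1:]) ** 2)
    return luma_err + chroma_weight * chroma_err
```

Identical images score zero, and two images that differ only in chroma score lower than two that differ by the same amount in luma, which is exactly the "looks similar" bias you want the GA to chase.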

