
I am basically looking for someone familiar with the Python library moderngl. I am experimenting with different evolutionary algorithms that arrange polygons, with the objective of reproducing a reference image: the Euclidean distance between the reference and the evolved image should be minimized. This probably sounds familiar to someone!

A polygon is encoded as a vector in the format:

x1, y1, x2, y2, ..., xm, ym, r, g, b, a

where m is the number of vertices of the polygon. xi and yi are positive integers encoding pixel positions in the image, and r, g, b, and a are integers between 0 and 255 encoding the red, green, blue, and alpha values of the pixels inside the polygon's convex hull.

An individual is encoded as:

[shape1, shape2, shape3, ..., shapen], where n is the number of shapes in an individual (typically 250).

Furthermore, a population contains approximately 500 individuals.
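To make the encoding concrete, here is roughly how I build these structures in numpy (the values below are illustrative only):

import numpy as np

# One triangle (m = 3): x1, y1, x2, y2, x3, y3, r, g, b, a
# (illustrative values, not real data)
shape = np.array([10, 10, 200, 40, 90, 180, 255, 0, 0, 128], dtype=np.int32)

# An individual is a list of n shapes (typically n = 250)
individual = [shape.copy() for _ in range(250)]

# A population holds roughly 500 such individuals
population = [individual for _ in range(500)]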

Each individual is converted into an image, i.e. an array of r, g, b, and a values, and compared to the reference image by Euclidean distance. This of course requires alpha compositing for overlapping polygons. The conversion to pixel values is always the bottleneck in my code, and I have tried multiple image libraries. OpenGL seems to be the way to go with Python, and moderngl is especially appropriate as I do not require a window. My question is: what is the best approach to take with moderngl for my task with regard to speed? (You will have to forgive me, as I am a complete noob with moderngl.) Is it possible to render my entire population simultaneously, as a texture, and calculate all Euclidean distances within OpenGL? Or do I need to render each image individually?
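For context, my current plan for rendering one individual at a time headlessly looks roughly like this. It's an untested sketch based on the docs; create_standalone_context and simple_framebuffer are the calls I found there, and the shader and buffer layout are placeholders:

import moderngl
import numpy as np

W, H = 256, 256  # render target size, matching the reference image

# No window required: a standalone context renders entirely offscreen
ctx = moderngl.create_standalone_context()
fbo = ctx.simple_framebuffer((W, H), components=4)
fbo.use()

prog = ctx.program(
    vertex_shader='''
        #version 330
        in vec2 vert;       // polygon vertex in normalized device coordinates
        in vec4 vert_color; // r, g, b, a of the polygon
        out vec4 frag_color;
        void main() {
            frag_color = vert_color;
            gl_Position = vec4(vert, 0.0, 1.0);
        }
    ''',
    fragment_shader='''
        #version 330
        in vec4 frag_color;
        out vec4 color;
        void main() { color = frag_color; }
    ''',
)

# ... build a vbo/vao holding the triangulated polygons of one individual ...

fbo.clear(0.0, 0.0, 0.0, 1.0)
ctx.enable(moderngl.BLEND)  # alpha compositing of overlapping polygons
# vao.render(moderngl.TRIANGLES)
# then read the framebuffer back into numpy (see below)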

Here's probably the closest example in moderngl to what I want:

import numpy as np

import moderngl
from ported._example import Example

class AlphaBlending(Example):
    gl_version = (3, 3)
    title = "Alpha Blending"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

        self.prog = self.ctx.program(
            vertex_shader='''
                #version 330

                in vec2 vert;

                in vec4 vert_color;
                out vec4 frag_color;

                uniform vec2 scale;
                uniform float rotation;

                void main() {
                    frag_color = vert_color;
                    float r = rotation * (0.5 + gl_InstanceID * 0.05);
                    mat2 rot = mat2(cos(r), sin(r), -sin(r), cos(r));
                    gl_Position = vec4((rot * vert) * scale, 0.0, 1.0);
                }
            ''',
            fragment_shader='''
                #version 330
                in vec4 frag_color;
                out vec4 color;
                void main() {
                    color = vec4(frag_color);
                }
            ''',
        )

        self.scale = self.prog['scale']
        self.rotation = self.prog['rotation']

        self.scale.value = (0.5, self.aspect_ratio * 0.5)

        # Interleaved per-vertex data: x, y for 'vert', then r, g, b, a for 'vert_color'
        vertices = np.array([
            1.0, 0.0,
            1.0, 0.0, 0.0, 0.5,

            -0.5, 0.86,
            0.0, 1.0, 0.0, 0.5,

            -0.5, -0.86,
            0.0, 0.0, 1.0, 0.5,
        ])

        self.vbo = self.ctx.buffer(vertices.astype('f4').tobytes())
        self.vao = self.ctx.simple_vertex_array(self.prog, self.vbo, 'vert', 'vert_color')

    def render(self, time: float, frame_time: float):
        self.ctx.clear(1.0, 1.0, 1.0)
        self.ctx.enable(moderngl.BLEND)  # enable alpha blending so overlapping shapes composite
        self.rotation.value = time
        self.vao.render(instances=10)  # one instanced draw call renders 10 triangles


if __name__ == '__main__':
    AlphaBlending.run()

Available here: https://github.com/moderngl/moderngl/tree/master/examples

I would obviously remove the rotation stuff.

I know I can read the buffer into a numpy array with np.frombuffer() if I need to calculate distances in numpy. But if possible, outputting the distances from moderngl would probably be faster.
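The numpy fallback would be something like this (a sketch; it assumes the fbo and W, H from the setup above, and that reference is the reference image as an (H, W, 3) float32 array):

# Read back the offscreen framebuffer and score it against the reference.
# Note: OpenGL's origin is bottom-left, so the rows come back flipped
# relative to most image libraries; as long as `reference` is stored the
# same way, the distance is unaffected.
pixels = np.frombuffer(fbo.read(components=3), dtype=np.uint8).reshape(H, W, 3)
distance = np.linalg.norm(pixels.astype(np.float32) - reference)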

Any help in turning this into what I want, or any pointers?

  • Some thoughts... "Is it possible to render my entire population simultaneously, as a texture?" I suppose it is, if you have the GPU memory for it. You could, for example, have a very "tall" texture with all images stacked, and simply shift the Y coordinates of the polygons of the individuals accordingly. Then it should be possible to compute the differences from that texture with a compute shader. I'm not very experienced with that, unfortunately, so I can't give you detailed instructions... – jdehesa Sep 02 '19 at 16:44
  • Yes, that's exactly what I'm trying to do. I'll have a look into compute shaders thanks. – eM7RON Sep 03 '19 at 19:03
