
I'm working with Maya 2012, and what I want to do is render a camera view. I have found that it's possible to do this using the MEL command `render` (calling it through Python). However, as far as I know, this command renders the image, saves it to disk, and returns only the path to the saved file.

Example:

import maya.cmds as cmds
import cv2 as cv

pathToFile = cmds.render()           # renders the current view and writes the image to disk
renderImage = cv.imread(pathToFile)  # then it has to be read back in from disk

Since I'm interested in using the image to run various computer vision algorithms, saving it to disk and then reading it back in creates unnecessary I/O overhead.

Is it possible to render a camera view and store the image in a variable without going through disk? That would allow a much tighter loop between rendering and analysing the rendered image.


In case someone comes upon this question in the future: I tried the RAM-disk approach that was suggested (using Dataram RAMDisk), and unfortunately it did not yield any speed increase.

  • It's not necessarily possible through the Maya API. The code you need depends on which rendering engine you use, as you would have to implement a rendering-engine back end for the data. Note that the renderer actually uses the disk to hold the amount of data it needs; even Maya's internal preview render works from a disk-based solution. PS: there is a trivial workaround, though: make a RAM disk and put the image there. – joojaa Dec 13 '12 at 09:29

1 Answer


If you're looking for efficiency, consider using the OpenMaya packages to access the OpenGL context for the views, render a Viewport 2.0 view to texture, and then access that texture programmatically.
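
Along those lines, there is a lighter-weight route for grabbing the viewport itself without touching disk: `M3dView.readColorBuffer` copies the active view's color buffer into an `MImage` in memory. A minimal sketch, with the caveats that the ctypes read of the pixel pointer is a common community recipe rather than a documented API, and the `rgba_to_rows` helper is my own illustration:

```python
import ctypes

def grab_active_viewport():
    """Copy the active viewport's color buffer into memory (runs only inside Maya)."""
    import maya.OpenMaya as om
    import maya.OpenMayaUI as omui

    view = omui.M3dView.active3dView()
    img = om.MImage()
    view.readColorBuffer(img, True)  # second arg requests RGBA byte ordering

    # API 1.0 returns width/height through MScriptUtil pointers
    wu, hu = om.MScriptUtil(), om.MScriptUtil()
    wu.createFromInt(0)
    hu.createFromInt(0)
    wp, hp = wu.asUintPtr(), hu.asUintPtr()
    img.getSize(wp, hp)
    w = om.MScriptUtil.getUint(wp)
    h = om.MScriptUtil.getUint(hp)

    # img.pixels() points at w*h*4 unsigned chars; reading it via ctypes
    # avoids any disk round trip, but is not an officially documented path
    raw = ctypes.string_at(int(img.pixels()), w * h * 4)
    return raw, w, h

def rgba_to_rows(raw, width, height):
    """Pure-Python helper: RGBA byte buffer -> [row][col] = (r, g, b)."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            i = (y * width + x) * 4
            row.append((raw[i], raw[i + 1], raw[i + 2]))
        rows.append(row)
    return rows
```

If NumPy is available, `numpy.frombuffer(raw, dtype=numpy.uint8).reshape(h, w, 4)` is the fast path for handing the buffer straight to OpenCV.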

Alternatively, you could write a plugin that wraps another rendering plugin, like Mayatomr, or the Hardware 2.0 renderer, and puts the rendered image into some shared memory space.

But these solutions are incredibly involved and touch on many undocumented features. You should probably just set the renderer to Hardware 2.0, save the image as a BMP (which OpenCV reads very quickly anyway), perhaps to a RAM disk as suggested above, and call it a day.
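
That pragmatic route could be sketched roughly like this; the RAM-disk mount point is a hypothetical example, and the `IMAGE_FORMAT` table just names a few of the standard `defaultRenderGlobals.imageFormat` codes:

```python
# defaultRenderGlobals.imageFormat codes for a few common formats
IMAGE_FORMAT = {"bmp": 20, "jpg": 8, "png": 32}

def fast_render(camera="persp", out_dir="/Volumes/ramdisk", fmt="bmp"):
    """Render `camera` with the Hardware 2.0 renderer and read the result
    back with OpenCV. `out_dir` is a hypothetical RAM-disk mount point;
    adjust it for your system. Runs only inside Maya."""
    import maya.cmds as cmds
    import cv2 as cv

    cmds.setAttr("defaultRenderGlobals.currentRenderer",
                 "mayaHardware2", type="string")
    cmds.setAttr("defaultRenderGlobals.imageFormat", IMAGE_FORMAT[fmt])
    cmds.workspace(fileRule=["images", out_dir])  # send renders to the RAM disk

    return cv.imread(cmds.render(camera))
```

The disk round trip is still there, but a BMP on a RAM disk keeps both the encode and the read about as cheap as a file-based workflow gets.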

Amendment

There is an easier way, perhaps. Create a custom node that implements the MPxHardwareShader interface described here:

http://images.autodesk.com/adsk/files/viewport_2_0_api_gold.pdf

In other words, override the Hardware / Viewport 2.0 rendering for some node. Instead of actually drawing something, use your access to the OpenGL context to render the viewport to texture.

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
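
The render-to-texture setup that tutorial walks through boils down to a framebuffer object with a texture attachment. A condensed sketch using PyOpenGL, assuming a current GL context (for instance Maya's own; the function and its parameters are my illustration, not Maya API):

```python
def make_render_target(width, height):
    """Create an FBO with an RGBA8 color texture attached, so subsequent
    draws land in the texture instead of the screen. Requires a live
    OpenGL context, hence the in-function import."""
    from OpenGL import GL  # PyOpenGL

    tex = GL.glGenTextures(1)
    GL.glBindTexture(GL.GL_TEXTURE_2D, tex)
    GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, width, height, 0,
                    GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, None)
    GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)

    fbo = GL.glGenFramebuffers(1)
    GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo)
    GL.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0,
                              GL.GL_TEXTURE_2D, tex, 0)
    if (GL.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER)
            != GL.GL_FRAMEBUFFER_COMPLETE):
        raise RuntimeError("framebuffer incomplete")
    return fbo, tex
```

With the FBO bound, the override's draw pass fills `tex`, and `glReadPixels` or `glGetTexImage` pulls the pixels into memory.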

Then do whatever you want with it. Clever, eh?

DoctorPangloss
  • Thank you very much for your answer. That's what I wanted to know: whether there is an equally simple way to store the image in a variable! Perhaps in the future I will attempt the more involved solution, but for now, as you said, I'll call it a day! :) – Alexandros Gouvatsos Dec 13 '12 at 11:53
  • I have now reached the point where I actually need to optimise this and access the rendering information directly. I went through the documentation and found the renderAccessNode.cpp example (http://tinyurl.com/azjj9x8). I tried creating something similar (basically overriding the renderCallback() function), but even with the plugin loaded, it's as if renderCallback() is never actually called; there is a print statement in it which never prints anything. Would you (or anyone else) know how to resolve this, or another way to go about what I need? – Alexandros Gouvatsos Jan 21 '13 at 18:16
  • I posted a new solution for you. – DoctorPangloss Jan 22 '13 at 17:06
  • Thanks so much! I will attempt this now and report back with my results! So in effect, I will use the MPxShaderOverride function to get the drawContext into a texture in memory. Will this override take effect every time I call the render() function, or when will it take place? Unfortunately my rep isn't high enough to be able to give you some reputation! – Alexandros Gouvatsos Jan 23 '13 at 13:03
  • You don't need to call `render()` in this instance, I think. You move the camera around, and it renders to texture in real time. You could add a switch to the node to tell it when to save and when not to. Or you can `playblast`, which has the same effect as rendering each frame in OpenGL one after another without any intermediate frames. This is probably what you want. – DoctorPangloss Jan 23 '13 at 17:11
  • I have been tinkering with what you said today, and unfortunately I can't seem to make it work with an MPxShaderNode. I had a look at the examples from the Viewport 2.0 API guide, and this http://tinyurl.com/ajzsfgy seems to be the closest I could get. Basically, if I put a print statement in the draw() function, I see it whenever there is an update (moving the camera, moving the objects, etc.). However, I'm not sure it actually gives me access to the context I want – it gives me access to the M3dView. – Alexandros Gouvatsos Jan 23 '13 at 18:28
  • I also think `playblast` is the way to go for faster rendering. And actually, it can work with Viewport 2.0. – Drake Guan Dec 09 '13 at 05:08