
I am going to work on a bigger software project soon. Right now I don't know many details about it. The only thing I know is that the existing code is written in C and that new modules have to integrate seamlessly into it.

Within my research I will work with deep learning. I am considering Torch and TensorFlow for this and have to decide on one of them. Both seem to be great, so the decision depends highly on which one is easier to integrate into a bigger C framework.

What I need (most probably) is the following:

  • From C, a Python/Lua script gets called and is handed some data to process during the inference step of a neural network
  • The inference step generates an output
  • That output needs to be returned to C in order to process it further

There are many articles on the internet that deal specifically with embedding Python/Lua scripts in C applications, so it seems to be possible. Lua seems to use a stack for this, while Python uses special Python objects.
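To make clear what I mean, here is a minimal sketch of the Lua stack mechanism without any Torch involved (the file name double.lua and the function double_it are just placeholders I made up; error handling is kept to a minimum):

// Assumes double.lua defines: function double_it(x) return 2 * x end
#include <stdio.h>
#include "lua.h"
#include "lauxlib.h"
#include "lualib.h"

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    // run the script once so its functions become globals
    if (luaL_dofile(L, "double.lua")) {
        printf("error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    lua_getglobal(L, "double_it");   // push the function onto the stack
    lua_pushnumber(L, 21.0);         // push its argument
    lua_pcall(L, 1, 1, 0);           // call with 1 argument, expect 1 result

    printf("result: %f\n", lua_tonumber(L, -1));  // read the result from the stack top
    lua_pop(L, 1);

    lua_close(L);
    return 0;
}

The Python C API works the same way in spirit, except that arguments and results travel as PyObject pointers instead of stack slots.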

My question specifically is whether the way to embed these scripts changes due to the fact that Torch or TensorFlow is used. Could I run into difficulties? If yes, why? Has somebody maybe already used such an embedding? If so, I would be happy if they'd share their experiences =)

Thank you in advance.

Best regards!

EDIT 15.11.2016: To be a bit more explicit now: I tried to pass a C array to Lua/LuaJIT via the Lua Torch C API (GitHub link to the Lua Torch C API). Unfortunately there is no working example which I could use as a guideline.

But I found these snippets here on Stack Overflow: Original question about the Lua Torch C API. I am not interested in the second part of that question, which deals with both Python and Lua; I just want to get Lua running.

So I took the code from there:

tensor.lua:

require 'torch'
function hi_tensor(t)
   print('Hi from lua')
   torch.setdefaulttensortype('torch.FloatTensor')
   print(t)
   return t*2
end

cluaf.h:

void multiply (float* array, int m, int n, float *result, int m1, int n1);

cluaf.c:

#include <stdio.h>
#include <string.h>
#include "lua.h"
#include "lauxlib.h"
#include "lualib.h"
#include "luaT.h"
#include "TH/TH.h"

void multiply (float* array, int m, int n, float *result, int m1, int n1)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs( L );

    // loading the lua file
    if (luaL_loadfile(L, "tensor.lua") || lua_pcall(L, 0, 0, 0))
    {
        printf("error: %s \n", lua_tostring(L, -1));
    }

    // convert the c array to Torch7 specific structure representing a tensor
    THFloatStorage* storage =  THFloatStorage_newWithData(array, m*n);
    THFloatTensor* tensor = THFloatTensor_newWithStorage2d(storage, 0, m, n, n, 1);
    luaT_newmetatable(L, "torch.FloatTensor", NULL, NULL, NULL, NULL);

    // load the lua function hi_tensor
    lua_getglobal(L, "hi_tensor");
    if(!lua_isfunction(L,-1))
    {
        lua_pop(L,1);
    }

    //this pushes data to the stack to be used as a parameter
    //to the hi_tensor function call
    luaT_pushudata(L, (void *)tensor, "torch.FloatTensor");

    // call the lua function hi_tensor
    if (lua_pcall(L, 1, 1, 0) != 0)
    {
        printf("error running function `hi_tensor': %s \n", lua_tostring(L, -1));
    }

    // get results returned from the lua function hi_tensor
    THFloatTensor* z = luaT_toudata(L, -1, "torch.FloatTensor");
    lua_pop(L, 1);
    THFloatStorage *storage_res =  z->storage;
    result = storage_res->data;

    return ;
}

And here is the main.c, which was not given by the OP:

#include"cluaf.h"

int main(void){
    float a[] = {2, 2, 3, 4};
    float b[] = {1,1,1,1};
    int m = 2;
    int n = 2;
    int m1 = 2;
    int n1 = 2;
    multiply(a,m,n,b,m1,n1);
    printf("c result %f\n", b[0]);
    printf("c result %f\n", b[1]);
    printf("c result %f\n", b[2]);
    printf("c result %f\n", b[3]);
    return 0;
}

I compile everything using the following commands:

luajit -b tensor.lua tensor.o

gcc -w -c -Wall -Wl,-E -fpic cluaf.c -lluajit -lluaT -lTH -lm -ldl -L /home/manuel/torch/install/lib -I /home/manuel/torch/install/include

gcc -shared cluaf.o tensor.o -L/home/manuel/torch/install/lib -I /home/manuel/torch/install/include -lluajit -lluaT -lTH -lm -ldl -Wl,-E -o libcluaf.so

gcc -L. -Wall -o test main.c -lcluaf

(I placed libcluaf.so in /usr/local/lib before running the last command.)

I get the following output:

is function 1
Hi from lua
 2  2
 3  4
[torch.FloatTensor of size 2x2]

 4  4
 6  8
[torch.FloatTensor of size 2x2]

isudata 1
c result 1.000000
c result 1.000000
c result 1.000000
c result 1.000000

This means the first part already works: I can hand Torch a C array and it works with it as a tensor.

Problem: Just like in the other question on Stack Overflow (linked above), I expected the values in my b array to be double the values of my a array. Instead, my array b does not change.

What is going wrong in the code?

Thanks for reading!

Best regards

M. Schmidt

2 Answers


You pose a difficult question. First, I would not favor TensorFlow over Torch on the basis of ease of integration; I would choose based on applicability to the problem domain.

As to "My question specifically is if the way to embed these scripts changes due to the fact that Torch and Tensorflow are used":

Python is called with the script and will call TensorFlow to perform the inference. The result is returned to Python, and Python returns the result to C. So part of the integration is integrating Python with TensorFlow/Torch. The scripts handed to Python will need to be changed to use TensorFlow or Torch.
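Roughly, the C side of such a call could look like the sketch below (the module name infer and the function run are placeholders; error checks are omitted, and the script itself would import tensorflow or torch and do the actual inference):

#include <Python.h>

// Sketch: call a placeholder Python function infer.run(x) and read back a float
double call_inference(double x)
{
    Py_Initialize();

    PyObject *module = PyImport_ImportModule("infer");         // import infer
    PyObject *func   = PyObject_GetAttrString(module, "run");  // get infer.run
    PyObject *args   = Py_BuildValue("(d)", x);                // build the argument tuple (x,)
    PyObject *res    = PyObject_CallObject(func, args);        // res = infer.run(x)

    double y = PyFloat_AsDouble(res);                          // convert the result back to C

    Py_DECREF(res);
    Py_DECREF(args);
    Py_DECREF(func);
    Py_DECREF(module);
    Py_Finalize();
    return y;
}

For real inference data you would not pass a single float but an array, typically wrapped as a NumPy array or a buffer object; that is where most of the integration work lies.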

Paul Ogilvie
  • Downvotes without a comment stating why should be forbidden. This is just an honest attempt to support the OP. – Paul Ogilvie Nov 14 '16 at 19:00
  • Thanks Paul for your answer to my question. So the fact that I use "import tensorflow as tf" or "require 'nn'" makes it more complicated to call the scripts from C, is that right? Regarding Torch vs. TensorFlow: basically both frameworks have all the functionality that I need for my applications, so I am quite free to choose and I like both of them, even though Torch seems more transparent to me because of its imperative nature. Since I have to deal with C soon, I thought that including this fact in my decision seemed a fair approach. – M. Schmidt Nov 14 '16 at 19:13
  • No, it (i.e. TensorFlow or Torch) will not make it more complicated to call _out of C_. That stays the same. But the Python scripts must be adapted, as those contain ("are") the inference functionality. – Paul Ogilvie Nov 14 '16 at 21:58

The last line in your multiply function does nothing:

result = storage_res->data;

This only reassigns multiply's local copy of the result pointer; the caller's b array is never touched.

You need to copy the results explicitly:

for (size_t i = 0; i < m1*n1; ++i)
    result[i] = storage_res->data[i];
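Equivalently, since string.h is already included in cluaf.c, a memcpy does the same thing (assuming the returned tensor is contiguous and m1*n1 matches the number of returned elements):

memcpy(result, storage_res->data, sizeof(float) * m1 * n1);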
Joe Chakra