
I have a number of classes and APIs written in C++ and exposed to Python with the help of Boost.Python.

I am currently investigating the possibility of the following architecture.
In python:

from boostPythonModule import *
AddFunction( boostPythonObject.Method1, args )
AddFunction( boostPythonObject.Method2, args )
AddFunction( boostPythonObject.Method2, args )
RunAll( ) # running is done by C++

In C++:

void AddFunction( boost::python::object method, boost::python::object args )
{
    /// 1. Here I need to extract a real pointer to the function
    /// 2. Perform argument and type checking for the function behind method
    /// 3. Unpack all arguments to native types
    /// 4. Store the function pointer somewhere in local storage
}

void RunAll( )
{
    /// 1. Run all previously stored functions with their saved arguments
}

Basically I am trying to push all the function calls down to the native part of my program. The thing is that I am not sure whether it's possible to extract all the required data from the Boost.Python metainfo to do this in a generic way: at compile time I don't know which functions I'm going to call or what arguments they accept.
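Setting the Boost.Python extraction problem aside, the storage half of this design can be sketched in plain C++ with type-erased callables. This is only an illustration of steps 1 and 4, and it assumes the arguments have already been converted to native types (the class and method names are made up for the sketch):

```cpp
#include <functional>
#include <vector>

// Sketch of the AddFunction/RunAll storage idea, assuming arguments are
// already native C++ values. std::function type-erases the callable, and
// binding the arguments up front means runAll() only touches plain
// C++ functors - no Python machinery at call time.
class CallQueue {
public:
    template <typename F, typename... Args>
    void add(F func, Args... args) {
        // Capture the call with its arguments bound now.
        calls_.push_back([func, args...]() { func(args...); });
    }

    void runAll() {
        for (auto& call : calls_)
            call();
        calls_.clear();
    }

private:
    std::vector<std::function<void()>> calls_;
};
```

A real AddFunction would still need Boost.Python to do the object-to-native extraction; this only shows the native storage side.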

A few questions:
1. Is there any shared Python info table I can access to check some of this stuff?
2. Boost.Python does argument type checking. Can it be reused separately?

Let me know your thoughts.

Thanks

Alex
  • Do you really need to check all the types in `AddFunction()`? Wouldn't it be enough just to store the method and arguments somewhere and call those methods in `RunAll()`? That way you would get any type-related errors when you execute the methods, and Boost.Python would do the checking for you. – Arlaharen Oct 17 '10 at 10:07
  • Well, no. Basically the idea is to minimize the time gap between those function calls, hence all arguments must be checked beforehand. RunAll should know only about C++ function pointers (i.e. functors). Any other advice? – Alex Oct 17 '10 at 20:41
  • In that case I have none. :-) I don't think this is a use case that Boost.Python was designed for, and I think you will have a hard time bending it to do what you want. Is the time spent in Python between the calls to your methods really so significant that you have to do this? But I guess you have done your profiling... :-) – Arlaharen Oct 17 '10 at 20:55
  • Well, I have some advice... I would look at which combinations of methods are called often and turn those combinations into methods of their own. – Arlaharen Oct 17 '10 at 21:05

1 Answer


I would think about caching the functions and their arguments at the Python level: save the arguments using the technique from the Keyword arguments section of the tutorial, and call your C++ functions later, unpacking the saved arguments. Unpacking at the Python level isolates you from any Boost type-safety complications, but all type checking is then done at the RunAll stage, making it slower and less safe.

A speed-optimized approach would be to implement C++ classes with a common interface that accept function calls with given arguments and cache the argument values internally for a later run.

#include <iostream>
#include <string>

struct Runner {
  virtual ~Runner() {}
  virtual int run() = 0;
};

struct ConcreteRunner : public Runner {
  std::string _arg;
  void setArguments(const std::string& arg) { _arg = arg; }
  virtual int run() {
    std::clog << "ConcreteRunner is called with argument " << _arg << std::endl;
    return 0;
  }
};

This approach handles argument parsing outside of the RunAll stage, therefore making RunAll itself as fast as possible.
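With this design, RunAll reduces to a loop over stored base-class pointers. A self-contained sketch along those lines (it repeats a minimal version of the interface so it compiles on its own; the `runAll` helper and the ownership scheme are illustrative assumptions, not part of the original answer):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Minimal repeat of the Runner interface so this sketch is self-contained.
struct Runner {
    virtual ~Runner() {}
    virtual int run() = 0;
};

struct ConcreteRunner : public Runner {
    std::string _arg;
    void setArguments(const std::string& arg) { _arg = arg; }
    virtual int run() {
        std::clog << "ConcreteRunner is called with argument " << _arg << '\n';
        return 0;  // 0 signals success in this sketch
    }
};

// RunAll is just iteration: every argument was parsed and stored earlier,
// in setArguments(), so nothing argument-related happens here.
int runAll(std::vector<std::unique_ptr<Runner>>& runners) {
    int failures = 0;
    for (auto& r : runners)
        if (r->run() != 0)
            ++failures;
    return failures;
}
```

AddFunction would then pick the right concrete runner for the method being cached, call setArguments on it, and append it to the vector.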

Basilevs