
I am developing an application written in Python 3, composed of a Python library/package (which contains the core functionality) and a Python application that provides a CLI shell and handles user commands.

In addition, the functionality contained within the Python package must be exposed to existing GUI applications written in C# (using the Microsoft .NET Framework).

I've done a fair bit of research into how this might be done and have come up with a few potential solutions.

  1. Use Python.NET to embed a Python script within a C# application that imports my Python package and calls the desired methods/attributes. I haven't been able to get this working under MonoDevelop myself yet, but this seems to be a popular option, despite there not being much documentation for my use case.
  2. Embed my Python library as a DLL using CFFI. This option seems like it wouldn't take much work, but it's hard to see how I would maintain my interfaces/control what I am exposing to someone using the DLL from C#. This option also doesn't seem to be supported by much documentation pertaining to my use case.
  3. Create a small Python application that imports my Python package and exposes its functionality via ZeroMQ or gRPC. This seems to be the most flexible option, with ample documentation; however, I am concerned about latency, as ultimately this tool is used for hardware control.
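For context, option 3 can be quite small on the Python side. Below is a minimal sketch of a ZeroMQ REP server wrapping a package (requires pyzmq; the `measure` command and its payload are hypothetical stand-ins for the real package's API):

```python
import json

def handle_request(msg: dict) -> dict:
    """Dispatch a JSON command to the library; stubbed here for illustration."""
    if msg.get("cmd") == "measure":
        return {"ok": True, "value": 42.0}  # stand-in for a real library call
    return {"ok": False, "error": "unknown command"}

def serve(endpoint: str = "tcp://127.0.0.1:5555") -> None:
    """Bind a REP socket and answer one JSON request per message."""
    import zmq  # deferred import: pip install pyzmq
    sock = zmq.Context().socket(zmq.REP)
    sock.bind(endpoint)
    while True:
        reply = handle_request(json.loads(sock.recv()))
        sock.send_string(json.dumps(reply))
```

A C# GUI could then talk to `serve()` over the same endpoint with NetMQ, keeping the dispatch logic (`handle_request`) testable without any socket in play.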

Note that I am not well versed in C# and will be doing the majority of the development on Linux.

I'm really looking for feedback on which option will provide the best balance between a clean interface to my library and low latency/good performance (emphasis on the latter).

Reginald Marr
  • Given *(cit.): "…I am concerned about latency as…this tool is used for hardware control."* - **What latency ceiling [us] has your system's control loop been designed with as its stability threshold?** – user3666197 Jan 18 '20 at 19:53
  • My latency ceiling is about 10ms due to one extreme case. 100ms more typically. – Reginald Marr Jan 19 '20 at 04:59
  • I'm not sure what exactly the application is, but with such a low latency requirement, have you considered not using Python at all? Since hardware is involved, you could probably interact with it in C, and C would be easy to integrate with C#. – Emrah Diril Jan 19 '20 at 05:24
  • Well, this is actually replacing something that was previously implemented in C. The idea is that if the interface layer of the library and the CLI are implemented in Python, it would be easier for users to build off the core functionality for their use case. Some of the more demanding control loops may have to be implemented as a static C library or a Rust library, which we would call into from Python. In any case, the top layer is still implemented in Python, which will have to interface with C#. – Reginald Marr Jan 19 '20 at 14:03
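The "compiled control loop called from Python" layering mentioned in that last comment can be sketched with `ctypes`. Here libm's `cos()` stands in for a real C/Rust control routine compiled into a shared library (the library path assumes Linux, per the question):

```python
import ctypes
import ctypes.util

# Load a shared library; a real project would load its own libcontrol.so here.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

def fast_step(x: float) -> float:
    """Python-side wrapper: the hot path runs in compiled code."""
    return libm.cos(x)
```

The Python layer then stays the user-facing interface, while the latency-critical work happens in compiled code.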

2 Answers


The Target: a latency under ~ 10 [ms] for SuT-stability?

Thanks for the details added about a rather wide range of latency ceilings, ~ 10 .. 100 [ms], plus:

…this is actually replacing something that was previously implemented in C. The idea is that if the interface layer of the library and the CLI are implemented in Python, it would be easier for users to build off the core functionality for their use case. Some of the more demanding control loops may have to be implemented as a static C library or a Rust library, which we would call into from Python. In any case, the top layer is still implemented in Python, which will have to interface with C#.

( = the most important takeaway from here: the need to understand both the Costs of the wished-to-have ease of user extensions & of refactoring the architecture, plus Who pays these Costs )


Before we even start a search for the solution:

For this to be done safely & professionally, you will most probably want not to repeat the common errors of uninformed decisions; the general remarks here come from heaps of first-hand experience with crafting a system with a control loop under ~ 80 [us].

Map your control system: both its internal eco-system ( resources ) & its exo-system ( interactions with the outer world ).


Next comes the Architecture :

Without a due understanding of the toys at hand, no one can decide about The Right-enough Architecture.

Understanding the landscape of devices in a latency-motivated design requires us first to know each element ( to read + test + benchmark it, including its jitter/wander envelope(s) under the (over)-loaded conditions of the System-under-Test ). Not knowing this leads to nothing but a blind, facts-unsupported belief that our SuT will never headbang into the wall of reality; that belief will prove itself wrong, typically at the least pleasant moment.

Irreversibly wrong and bad practice, as all the costs accrued so far will have already been burnt…

Knowing & testing is a core step before sketching the architecture, where details matter ( ref.: how much does one lose in h2d/d2h latencies [us]? Why are these principal costs so weakly reported? Does that mean those costs do not exist? No. They do exist, and your control loops will pay them each and every time… so better to know all such hidden costs, paid in the TimeDOMAIN, well beforehand, before the Architecture gets designed and drafted ).
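Measuring such latency envelopes needs no heavy tooling. A minimal sketch, using a local TCP echo loop as a stand-in for the real transport (swap in ZeroMQ or gRPC to compare candidates), records per-round-trip times and reports the median, 99th-percentile and worst case:

```python
import socket
import statistics
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo everything back until it closes."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

def benchmark(n: int = 1000) -> dict:
    """Measure n request/reply round trips over loopback TCP, in [us]."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        cli.sendall(b"ping")
        cli.recv(64)
        samples.append((time.perf_counter() - t0) * 1e6)
    cli.close()
    samples.sort()
    return {"p50_us": statistics.median(samples),
            "p99_us": samples[int(n * 0.99) - 1],
            "max_us": samples[-1]}
```

The tail figures (p99, max) are the ones to hold against a 10 [ms] control-loop ceiling; the median alone hides exactly the jitter that breaks stability.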


Do not hesitate to go Distributed ( where reasonably supported ) :

Learn from the NASA Apollo mission design:
- it was deeply distributed
- proper engineering helped to reach the Moon
- it saved both the national pride and the lives of those first, and so far the only, extra-terrestrial travellers
( credits to Ms. Margaret HAMILTON's wisdom in defining her design rules and in changing minds about the proper engineering of the many control-loop systems' coordination strategies )

Either ZeroMQ ( zmq, a mature, composable, well-scaling architecture of principally distributed many-to-many behaviours, developed atop a set of a few trivial Scalable Formal Communication Pattern archetypes ) or its Martin SUSTRIK-co-fathered, younger and lighter-weight sister, nanomsg, may help a lot here: either can compose a smart macro-system, in which individual components' strengths ( or monopolies, for which no substitute exists ) get interconnected into a still-within-latency-thresholds, stable, priority-aware macro-system, for which one cannot in principle ( or does not want to, for other reasons: economy of costs, time-to-market and legal constraints being the first ones at hand ) design a monolithic all-in-one system.

While at first look this may sound like complicating the problem, one may soon realise that it serves the very opposite:

  • burning no fuel ( yes, investors' money ) on just another re-invented wheel
  • using industry-proven tools most often improves reliability ( sure, if using them right… )
  • performance scaling may come as a nice side-effect, not as the panic of a too-late-to-refactor nightmare

not to mention the positive benefits of such tools' independent evolution and their further extensions.

My system was in a similar dilemma: C# was not a way for me, not even for a second ( a closed-source app dependency was too expensive, if not fatal, for our success ).

  • CLI: what was called a remote keyboard was the exact example of splitting away a first Python piece, where remote could be read as a trans-Atlantic keyboard
  • ML: was the least latency-controlled element in town, so fusing was needed
  • core-App: was extended, using an industry-standard DLL, into a system without letting it know that ( only the stripped-off core logic remained in place; everything else went distributed, so as to minimise all the control loops' latencies and to handle the different levels of priorities )
  • non-blocking add-ons: were off-loaded from the core-App
  • core-App-(1+N)-Hot-Standby-Shadowing: was introduced into an originally monolithic C/S exo-system

Is there any need to add more reasons for going rather Distributed and independent of the original Vendor Lock-in?

Having chosen, at the price of but sweat, tears and blood, to start with ZeroMQ in the days of its mature v2.x, I regret not a single hour of having done so and cannot imagine meeting all of the above without it.

user3666197
  • This does seem to be one of the most attractive options. After putting some work into creating a DLL prototype, it seems that approach has its own downsides (as compared to going the more distributed route) in the added cost of having to maintain the DLL and its release process. The only question I have is: what is the value in choosing ZeroMQ over the more lightweight nanomsg, or over something with code generation (being language-agnostic) like gRPC (https://grpc.io/docs/tutorials/basic/python/) or Thrift (https://thrift-tutorial.readthedocs.io/en/latest/usage-example.html)? – Reginald Marr Jan 20 '20 at 18:14
  • StackOverflow strongly discourages extending a topic via comments into new directions and/or new questions. Feel free to open another, new question, posting the new context and any new point of view that might have sprinkled out of the previous Q/A-s. That is both fair and a Community-Netiquette-compliant step, isn't it? – user3666197 Jan 20 '20 at 19:52
  • Fair enough; part of the original question is getting at ZeroMQ vs gRPC, but if you think that would be best explored in its own question, I can do that should my research fall short. – Reginald Marr Jan 20 '20 at 20:19

You say your Python application has a CLI, so another potential option is to have your C# application interact with your Python application via the command line.

You would need to expose the Python functionality via command-line arguments (which you might already be doing anyway), and your Python application would need to be able to return results as JSON data, which would probably be the easiest format to consume from C#.

It all depends on how complicated the interaction between your C# GUI and the Python application needs to be, though.
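A minimal sketch of that pattern, assuming a hypothetical `status` subcommand: the CLI prints one JSON document on stdout, and the caller (here `subprocess`; a C# GUI would use `System.Diagnostics.Process`) parses it. The CLI source is written to a temporary file only to keep the example self-contained:

```python
import json
import subprocess
import sys
import tempfile
import textwrap

# Hypothetical CLI: argparse in, one JSON document out on stdout.
CLI_SOURCE = textwrap.dedent("""
    import argparse, json, sys
    parser = argparse.ArgumentParser(prog="mytool")
    parser.add_argument("command", choices=["status"])
    args = parser.parse_args()
    # Stand-in for a real call into the core package:
    json.dump({"command": args.command, "ok": True}, sys.stdout)
""")

def run_cli(command: str) -> dict:
    """Invoke the CLI as a subprocess and parse its JSON output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(CLI_SOURCE)
        path = f.name
    out = subprocess.run([sys.executable, path, command],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)
```

Note that each call pays full interpreter start-up cost, so against a 10 ms latency ceiling this pattern suits occasional GUI commands, not a control loop.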

Emrah Diril