I am looking for a detailed comparison of different Modelica compilers/simulators, including Dymola, MapleSim, Wolfram System Modeler, SimulationX, and OpenModelica. The comparison should cover compatibility with mainstream commercial and open-source libraries, simulation speed, and FMI support. Could anyone tell me where to find this information or existing research on the topic?
Possible duplicate: https://stackoverflow.com/questions/58939793/what-are-the-differences-between-diffferent-modelica-simulation-environments – Priyanka Mar 30 '20 at 14:21
2 Answers
I don't think a comparison like that exists yet, because besides OpenModelica none of the other implementations (which are all commercial) openly publish their library coverage results. The OpenModelica library coverage reports are available at: https://libraries.openmodelica.org/branches/
I agree that it would be interesting to have a comparison like this available, and I think the Modelica Association should work towards making it possible to provide one in the future, as they already do for FMI.
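For a single tool, you can at least do a rough, do-it-yourself coverage check by driving OpenModelica from Python via OMPython. The sketch below is only an illustration of the idea, not the official coverage infrastructure behind the link above; it assumes OpenModelica and the OMPython package are installed, and the package Modelica.Blocks.Examples is just a convenient example target.

```python
# Rough library coverage check using OMPython (illustrative sketch only,
# not the official OpenModelica library testing infrastructure).
from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()
omc.sendExpression("loadModel(Modelica)")  # load the Modelica Standard Library

# Collect fully qualified class names from an example package (illustrative choice).
examples = omc.sendExpression(
    "getClassNames(Modelica.Blocks.Examples, qualified=true)"
)

results = {}
for model in examples:
    # Skip nested packages, functions, etc.; only try actual models.
    if not omc.sendExpression(f"isModel({model})"):
        continue
    res = omc.sendExpression(f"simulate({model}, stopTime=1.0)")
    # An empty resultFile usually means translation or simulation failed.
    results[model] = bool(res and res.get("resultFile"))

passed = sum(results.values())
print(f"{passed}/{len(results)} example models simulated successfully")
```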

It is annoying that different platforms and libraries have compatibility issues; ideally, they should all follow the Modelica Specification rather than each platform's own rules. But sadly, that is our situation. – Jack Mar 30 '20 at 09:50
There have been several published attempts at benchmarking Modelica simulation environments. Older ones, apparently no longer maintained, include:
- Olaf Enge-Rosenblatt et al., Comparisons of Different Modelica-Based Simulators Using Benchmark Tasks, Modelica Conference 2008: A benchmark library is discussed, but it does not appear that serious comparisons were actually conducted, and I am not aware of any continuation of this work.
- Jens Frenkel et al., Towards a Benchmark Suite for Modelica Compilers, Modelica Conference 2011: A benchmark suite called ModeliMark was used to compare different simulation environments. The benchmark focuses on compilation and translation speed and originates from the OpenModelica community. I am not aware whether the benchmark is run regularly to produce updated results, but the benchmark and the associated code and infrastructure should be available.
A more recent and still active benchmark:
- Francesco Casella, Simulation of Large-Scale Models in Modelica: State of the Art and Future Perspectives, Modelica Conference 2015: A Modelica benchmark library containing parameterized models that make it easy to scale up model sizes. The ScalableTestSuite library is available on GitHub, actively maintained and regularly contributed to. I am not aware of regular comparative benchmarking between different simulation environments, but anyone with licenses for several simulation environments could perform basic comparative benchmarking themselves (a minimal timing sketch follows at the end of this answer).
The most recent one, with published runtime performance results:
- Sergio A. Dorado-Rojas et al., Performance Benchmark of Modelica Time-Domain Power System Automated Simulations using Python, American Modelica Conference 2020: Simulation runtime measurements were conducted with power system models.
These are the ones I am aware of, so there may well be more. Let me know about other benchmarking attempts and I will keep this list up to date.
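As a loose illustration of the basic comparative benchmarking mentioned under the ScalableTestSuite item, here is a minimal single-tool timing sketch driving OpenModelica from Python via OMPython (other tools would need their own scripting or command-line interfaces). The model names in the list are placeholders, not real class names; substitute actual scaled experiments from the library.

```python
# Minimal timing sketch via OMPython (assumes OpenModelica, OMPython and
# the ScalableTestSuite library are installed). The model names below are
# placeholders; replace them with real scaled-experiment models.
import time
from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()
omc.sendExpression("loadModel(ScalableTestSuite)")

scaled_models = [
    "ScalableTestSuite.SomePackage.ScaledExperiments.SomeModel_N_10",    # placeholder
    "ScalableTestSuite.SomePackage.ScaledExperiments.SomeModel_N_100",   # placeholder
    "ScalableTestSuite.SomePackage.ScaledExperiments.SomeModel_N_1000",  # placeholder
]

for model in scaled_models:
    start = time.perf_counter()
    res = omc.sendExpression(f"simulate({model})")
    wall = time.perf_counter() - start
    # 'timeTotal' in the simulate result covers translation, compilation and simulation.
    print(f"{model}: wall-clock {wall:.1f} s, omc timeTotal {res.get('timeTotal')} s")
```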
