
This is a fairly general question. I am attempting to validate a Modelica model against experimentally measured data. In the past, I have simply added a CombiTable with the data copied into the component. However, I will be working with at least 15 columns of data that I would like to match up and compare.
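For concreteness, a setup like the one described might look like the following minimal sketch. The table and file names are hypothetical; CombiTimeTable is from the Modelica Standard Library and can read the table from a file (tableOnFile=true), which avoids pasting large data sets into the component:

model ValidationData "Measured data made available for comparison"
  // Hypothetical table/file names. Column 1 is time, columns 2:16 are
  // the 15 measured channels; tableOnFile=true reads the table from a
  // file instead of storing it in the component.
  Modelica.Blocks.Sources.CombiTimeTable measurements(
    tableOnFile=true,
    tableName="data",
    fileName="measurements.txt",
    columns=2:16) "measurements.y[1:15] interpolated over time";
end ValidationData;

The outputs measurements.y[1:15] can then be compared against the corresponding simulated variables.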

My question is: what methods, tips, and tricks do you recommend for comparing measured and simulated data that make it easier to calibrate and validate a Modelica model?

Justin Shultz
  • More options, or at least a collection of what's out there, can be found here: https://stackoverflow.com/questions/36157220/unit-testing-modelica-component-library – Christoph May 29 '17 at 12:40

2 Answers


I've been curious how others do this as well...

For me, I created a model that I put in all my Examples that runs a regression test and spits out pass/fail reports. The "correct" data can be input from a combiTable (with as many dimensions as you'd like) or given directly as the input variable.

The regression test is a function that takes the two arrays along with a tolerance (a sketch of such a function follows the model below).

Of course, you could always take things outside of Modelica to Python, etc., using the .mat result files (e.g., with BuildingsPy), or do both.

Below is representative of what has worked for me so far:

model TestCheck

  parameter Integer n "Length of variable vector";
  parameter Real tolerance = 100;

  input Real[n] x_1 "Values of interest" annotation(Dialog(group="Input Variables:"));
  input Real[n] x_2 "Reference values" annotation(Dialog(group="Input Variables:"));

  Real passedTest "If 0 (false) then expected and actual values do not match within the expected error";

  Real Error_rms "Root mean square error: sqrt(sum(Error_abs.^2)/n)";
  Real[n] Error_abs "Absolute error: x_1 - x_2";
  Real[n] Error_rel "Relative error: (x_1 - x_2)/x_2";  // originally typed SIadd.nonDim[n], a user-defined dimensionless type

  Boolean allPassed(start=true) "Latched to false once any test fails";

equation

  (Error_rms, Error_abs, Error_rel, passedTest) = ErrorTestFunction(x_1, x_2, tolerance);

  // Latch the overall result: once a test fails, allPassed stays false
  when passedTest < 1 then
    allPassed = false;
  end when;

end TestCheck;
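ErrorTestFunction itself isn't shown in the answer; a minimal sketch consistent with the model above might look like the following (comparing the RMS error against the tolerance as the pass criterion is an assumption):

function ErrorTestFunction "Hypothetical sketch of the comparison function"
  input Real[:] x_1 "Values of interest";
  input Real[size(x_1, 1)] x_2 "Reference values";
  input Real tolerance "Pass/fail threshold (assumed here: maximum allowed RMS error)";
  output Real Error_rms "Root mean square error";
  output Real[size(x_1, 1)] Error_abs "Absolute error";
  output Real[size(x_1, 1)] Error_rel "Relative error";
  output Real passedTest "1 if the RMS error is within tolerance, else 0";
algorithm
  Error_abs := x_1 - x_2;
  Error_rel := Error_abs ./ x_2;  // assumes the reference values are nonzero
  Error_rms := sqrt(sum(Error_abs .^ 2)/size(x_1, 1));
  passedTest := if Error_rms <= tolerance then 1 else 0;
end ErrorTestFunction;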
Scott G
  • Hi Scott, very interesting block of code. This is a great check, very useful. You may find the `assert` function in Modelica to work nicely with your `passedTest` and `allPassed` variables (see the sketch after these comments). – Justin Shultz May 26 '17 at 15:38
  • Indeed. However, I also have some additional code that isn't included that sends output to a file, which I then check nightly for version control. But perhaps the assert with the warning state would be useful... I also have a red vs. green color that shows up on the GUI as a visual cue. – Scott G May 26 '17 at 15:41
  • Scott, did you present at the last North America Modelica User Group Meeting? I remember your presentation! Do you have any resources available online for reference? – Justin Shultz May 26 '17 at 16:05
  • @JustinShultz: I sure did. Was it you I talked with about the Sublime Text editor? Anyway, unfortunately I don't yet have my stuff on a public GitHub. I hope to have that in a few months. – Scott G May 26 '17 at 16:44
  • Scott, yes! That was me. Keep me updated when your work goes public. I hope the Sublime Text syntax highlighting was helpful for you. – Justin Shultz May 26 '17 at 17:59
  • I'll be sure to let you know. – Scott G May 26 '17 at 18:02
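For reference, the `assert` suggestion from the comments above could look like this minimal sketch (TestCheckWithAssert is a hypothetical extension of the model in the answer):

model TestCheckWithAssert "Hypothetical variant of TestCheck using assert"
  extends TestCheck;
equation
  // Emit a warning (rather than aborting the simulation) whenever the check fails
  assert(passedTest >= 1,
    "Expected and actual values do not match within the tolerance",
    AssertionLevel.warning);
end TestCheckWithAssert;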

For model calibration and validation, we exported the model as an FMU and did the work in another tool; we successfully used, e.g., MATLAB or Mathematica.

In a past research project at university, we developed a tool to estimate the parameters of complex models within set boundaries. A genetic algorithm borrowed from the MATLAB Optimization Toolbox runs from several tens to several thousands of simulations to get results as close as possible to the measurements. The backend is .NET, the frontend HTML5 and JavaScript + jQuery, communicating via a REST API, with independent computational nodes exposing the FMU via a REST API. A limited demo is at app.physiovalues.org

Tomas Kulhanek