
I'm trying to write code to calculate boundary layer development over an aerofoil.

I start with a set of points, S, that describe the shape of the aerofoil.

set of points, S

I run an inviscid solver on these points and extract the stagnation point, which will be an element of S.

The stagnation point, an element of S.

The set of points S is now divided by the stagnation point into two sets su and sl, which describe the shape of the aerofoil underneath the upper and lower boundary layers.

Sets su and sl describe the shape of the aerofoil underneath the upper and lower boundary layers.

Here is my first problem. Suppose I write a component, BoundaryLayerSolve, that takes a vector s of points in the boundary layer and a vector ue of edge velocities to that boundary layer. Both s and ue will be of the same length. If I wanted to use this one component twice, once for each side of the aerofoil, I would need to know a priori the position of the stagnation point, which cannot be found until the model is already set up and run for the inviscid solver. How could I set it up so that I can handle these unknown input array sizes?

Now that the input s and ue arrays are known for the upper and lower boundary layers, the boundary layers can be computed. I will be using two models, one for the laminar region of the boundary layer and one for the turbulent region. Assume that these two models use totally different computations, and that these computations are complicated enough that in order to find their analytical partial derivatives they must be broken up into subcomponents.

The laminar calculations begin at the stagnation point and march along s. At each step, the value of the momentum thickness is calculated. If the transition threshold is reached, the laminar calculations must stop and turbulent calculations must be used for the rest of s. I should stress that it isn't an option to do laminar for all of s, turbulent for all of s, then simply slice off the back part of the laminar output and the front part of the turbulent output. The turbulent calculation must begin with the values from the laminar calculation at the point of transition.

Boundary layer calculations

In pseudocode, this would be something like:

transition_point = 0
for index, point in enumerate(s):
    transition_point = index
    momentum_thickness = laminar(point)
    if momentum_thickness > transition_threshold:
        break

for point in s[transition_point + 1:]:
    turbulent(point)

Here is my second problem. The transition point is not known until during the laminar calculations, and so the length of the output momentum_thickness array for the laminar calculations is not known a priori. Consequently, the length of the input point array for the turbulent calculations is also not known a priori. How can I work around this?

Here is my third problem. How can I have a component terminate its calculation once a condition is met, and pass on to another component to finish the calculation on the rest of the array?


To collect my questions:

  1. How can I have a component take a slice of initially unknown size from an array of known size?
  2. How can I have a component output an initially unknown size array, and have a second component input a second initially unknown size array when both unknown arrays add up to the size of an initially known array?
  3. How can I have a component terminate calculation when a condition is met, and pass on computation to a second component to complete the remaining calculation?

I appreciate that this is a long question, and could maybe be broken up. Thank you for reading, in advance.

BeeperTeeper

2 Answers


I'm going to answer your questions a little out of order:

How can I have a component output an initially unknown size array, and have a second component input a second initially unknown size array when both unknown arrays add up to the size of an initially known array?

As of OpenMDAO V3.6.0 you can't do this. OpenMDAO requires that you tell it the size of all inputs and outputs at setup time. There is a partial workaround via the shape_by_conn feature, which lets downstream components be sized by upstream ones, but ultimately the start of that sizing chain needs a given size at setup time. So you can't dynamically size outputs at run time. However, I am going to provide a workaround you can use instead.

Your questions 1 and 3 can both be addressed with a trick called over-allocation. Generally, this means allocating a larger array than you actually need and adding an extra input that keeps track of how much of it to use. Exactly how you apply this trick is somewhat problem specific, but here is a generic example that shows the basics:

import numpy as np
import openmdao.api as om


class VectorChain(om.ExplicitComponent): 

    def initialize(self):
        self.options.declare('vec_size', types=int)
        self.options.declare('step_size', types=int)

    def setup(self):
        vec_size = self.options['vec_size']

        # this is the index of the last valid value in the data array
        self.add_input('in_idx', shape=1, val=0) 
        # NOTE: though this could be done as a discrete variable, 
        # that will confuse OpenMDAO's derivative system, so just pass it as a float. 

        # actual data array
        self.add_input('in_vec', shape=vec_size, val=np.zeros(vec_size))

        self.add_output('out_idx', shape=1)
        self.add_output('out_vec', shape=vec_size)

        # NOTE: You do **NOT** declare derivatives w.r.t. the idx variable!
        self.declare_partials('out_vec', 'in_vec', method='cs')

    def compute(self, inputs, outputs): 

        in_vec = inputs['in_vec']
        out_vec = outputs['out_vec']
        
        out_vec[:] = in_vec.copy()

        # in_idx marks the first entry this component should fill in
        # (.real keeps the cast safe under complex-step approximation)
        i_start_idx = int(inputs['in_idx'].real)
        i_end_idx = i_start_idx + self.options['step_size']
        i_size = i_end_idx - i_start_idx

        if i_start_idx == 0: 
            out_vec[0] = 1 # get the counting started
            i_start_idx = 1

        # fill in the next step_size entries, continuing
        # the count from the last valid value
        for i in range(i_start_idx,i_start_idx+self.options['step_size']): 
            out_vec[i] = out_vec[i-1] + 1

        o_end_idx = i_start_idx + i_size

        outputs['out_idx'] = o_end_idx


if __name__ == "__main__": 

    p = om.Problem()

    VEC_SIZE = 12
    STEP_SIZE = 3

    c0 = p.model.add_subsystem('c0', VectorChain(vec_size=VEC_SIZE, step_size=STEP_SIZE))
    c1 = p.model.add_subsystem('c1', VectorChain(vec_size=VEC_SIZE, step_size=STEP_SIZE))
    c2 = p.model.add_subsystem('c2', VectorChain(vec_size=VEC_SIZE, step_size=STEP_SIZE))

    p.model.connect('c0.out_idx', 'c1.in_idx')
    p.model.connect('c0.out_vec', 'c1.in_vec')

    p.model.connect('c1.out_idx', 'c2.in_idx')
    p.model.connect('c1.out_vec', 'c2.in_vec')


    p.setup()

    p.run_model()

    p.model.list_outputs(print_arrays=True)

In this toy example, there is a chain of identical components and each one just keeps counting from where the last one left off. It's obviously a trivial example, but the basics are there. In your case you would have different components for the laminar and turbulent calcs, but the idea is the same: start with the laminar part, loop until you hit the trip condition, stop there, and note the index you stopped at. Then pass that information downstream to the next component.

A few details here are worth noting:

  1. If you wanted to, you could do both upper and lower surfaces together in one chain of components. It's not required, but fewer components is generally faster.
  2. If you had 1000 nodes on the upper surface, you could over-allocate a length-1000 array and pass it through the whole calculation (this is basically what my example does). Alternatively, you might know that the turbulent trip point will always fall between the 20th and 50th points. In that case you could over-allocate a length-50 array for the first (laminar) component and a length-980 array for the second (turbulent) component. For large grids where the potential overlap is much narrower, this would save some memory.

A note about derivatives:

This whole concept involves discrete movement of indices, which is technically non-differentiable. The way you make this work in a continuous-derivative context is to assume that for any linearization (any time you compute derivatives) the index values are fixed, and hence don't participate in the derivatives. In your case, the specific index of the end of the laminar calc will change from one run to the next, but any time you want derivatives you assume it is fixed.

In the strict mathematical sense, if you were using gradient-based optimization, this assumption is not valid: the function really is not differentiable. In practice, however, we can often count on the design converging and the location of the moving index becoming effectively fixed. If that settling down doesn't happen (perhaps you have a very fine discretization and there is enough noise to keep the trip point jumping around by a few nodes no matter what), then you might have trouble with tight convergence.

I don't think it's likely that this discrete-index issue will be a problem in your case, so I encourage you to give it a try! However, if you do find the problem to be too noisy using this approach, then you should consider re-formulating it in a more continuous manner.

A more continuous formulation: using the discrete data along the airfoil, define a continuous interpolation function (you could either use OpenMDAO's MetaModelStructuredComp, or use a Newton solver to converge a least-squares fit to a pre-defined basis function). The input to your interpolant would not be an x, y location but instead a parametric location s along the airfoil surface (i.e. a path length from an arbitrary starting point). There are several ways to handle this, but to keep it simple, say you pick a discrete point on the nose of the airfoil as 0; then each point backwards along the top of the airfoil is positive (s=1, s=2, s=3, s=4, etc.) and each point on the lower surface is negative (s=-1, s=-2, s=-3, s=-4, etc.). Though I've picked integer values, it's important to realize that the interpolant is valid for any real value of s between the positive and negative limits of your grid.

smooth interpolant of surface pressure distribution
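The parametric coordinate described above can be computed from the discrete surface points. Here is a small sketch using raw path length rather than integer indices, with made-up point data:

```python
import numpy as np


def signed_arc_length(x, y, nose_idx):
    """Cumulative path length along the surface points, shifted so the
    chosen nose point is s = 0 (lower surface negative, upper positive)."""
    seg = np.hypot(np.diff(x), np.diff(y))        # individual segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])   # path length from the first point
    return s - s[nose_idx]


# made-up points ordered lower trailing edge -> nose -> upper trailing edge
x = np.array([1.0, 0.5, 0.0, 0.5, 1.0])
y = np.array([-0.05, -0.08, 0.0, 0.08, 0.05])
s = signed_arc_length(x, y, nose_idx=2)
# s[2] == 0 at the nose; s < 0 on the lower surface, s > 0 on the upper
```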

Once you have that interpolation, you can solve for the stagnation point location in the continuous space of the interpolant (i.e. find the value of s that you need). To do this, you would probably need to differentiate the interpolant and then solve for where that derivative goes to zero. That would be a second implicit component, following the first one that outputs the coefficients for your smooth fit.
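As a sketch of that root-solve, with a made-up edge-velocity distribution and a cubic spline standing in for whatever fit you choose: here the stagnation point is found as the zero crossing of ue(s); finding where the derivative of a pressure fit vanishes works the same way via spline.derivative().roots().

```python
import numpy as np
from scipy.interpolate import CubicSpline

# made-up edge-velocity data along the parametric coordinate s;
# this fake distribution stagnates at s = 0.1
s_pts = np.linspace(-2.0, 2.0, 21)
ue_pts = np.tanh(s_pts - 0.1)

spline = CubicSpline(s_pts, ue_pts)
s_stag = spline.roots(extrapolate=False)[0]   # where ue = 0, in continuous s
# for a pressure fit, use spline.derivative().roots() instead
```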

Then you can pass the interpolant coefficients and the location of the stagnation point to a laminar component, and perform your marching calculations backwards from that point on the upper and lower surfaces until you hit the transition point. Though you might choose to march in discrete steps to produce a nice, evenly spaced output grid of data, you still want to keep things continuous with regard to the transition point. Say you march with a fixed step in s until the first point that is beyond the transition condition. At that point, you would start a bisection search between the last two search locations to find the "exact" (within some reasonable tolerance) location of the transition point in s. Just to be clear, the laminar and turbulent calculations would be performed on their own sub-grids, developed using the interpolating function.

laminar and continuous sub grids
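The march-then-bisect step might look like this sketch, where theta_of(s) stands in for evaluating the laminar momentum thickness at a location s via the interpolant:

```python
def find_transition(theta_of, s_start, ds, threshold, tol=1e-6):
    """March in fixed steps of s until the transition criterion is first
    exceeded, then bisect between the last two stations."""
    s_prev, s_curr = s_start, s_start + ds
    while theta_of(s_curr) <= threshold:      # march until criterion exceeded
        s_prev, s_curr = s_curr, s_curr + ds

    lo, hi = s_prev, s_curr                   # bracket containing the transition
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theta_of(mid) > threshold:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)


# toy check: with theta(s) = s**2 and threshold 1, transition is at s = 1
s_tr = find_transition(lambda s: s**2, s_start=0.0, ds=0.3, threshold=1.0)
```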

Then you could pass the conditions at the exact transition position to the next component downstream (along with its s location) and start another marching calculation with the turbulent calculations.

I will note that my suggestion is similar to the answer Rob Falck gave, though my approach does not require an implicit relationship. You could set the problem up my explicit way or his implicit way; there are pluses and minuses to both, which I won't go into here. The key point is that a truly continuous formulation is possible, even with an explicit calculation approach. One nice thing about this approach is that it's a lot easier to keep the arrays a fixed length: you can just allocate n discrete points for each of the laminar and turbulent sub-grids and let the physical spacing vary a little as the stagnation point and the laminar/turbulent transition point move around.

In practice, I don't think you have to start with the continuous approach. It is more mathematically correct to differentiate and would converge more tightly, but while mathematical correctness is nice, it's not always practical. I think you could find an appropriate grid, work in the more discrete format, and be up and running faster. If you found you had noise problems or difficulty converging, you could switch approaches later on.

One thing I can think of that might make you sensitive to the noise of the discrete approach is if you plan to create a feedback loop between the boundary layer solver and the inviscid solver. Some codes take the boundary layer thickness as a new airfoil shape, pass that back to the inviscid calculations, re-compute the surface pressure distribution, and then compute a new boundary layer on the original airfoil surface. If you are going to try something like this, the noise of the discrete approach will likely cause you more trouble. In that case, the more continuous approach is warranted.

flow chart comparing coupled and feed-forward approaches

Justin Gray

For questions 1 and 2: OpenMDAO doesn't currently handle dynamic sizing of inputs and outputs during execution, and there are no plans to change this. We offer shape_by_conn to allow variables to be shaped based on their sources/targets, but I don't think that's what you want here, since both sides are undetermined in your formulation.

Question 3: If we handle the problem implicitly, then we can force the transition between the two calculations to occur at the junction between the laminar and turbulent regions. For instance, in Dymos when we're propagating a trajectory we don't use event triggers as a typical time-stepping simulation might. Instead, we use constraints at the beginning or end of a "phase" of the trajectory to force the transition condition to occur at the junction.

My inclination is to try formulating the problem this way:

  1. Use interpolation to provide the point on the airfoil as a function of some independent variable. S = f(x). Imagine the point circled in red sliding continuously around the airfoil as x is changed.

  2. Assume that the transition point is known a priori, so we have two main calculation groups: laminar and turbulent. Each group evaluates some number of points on the airfoil (N_l and N_t). The value of the parameter x that determines the transition point can "slide" back and forth until the assumed transition value of x matches the actual desired value (by enforcing residuals or constraints at the transition point, with x as an implicit variable or design variable).

  3. Instead of feeding outputs from the laminar portion into the turbulent portion at the transition point, treat those values as independent variables in the turbulent section and then force them to match the values at the end of the laminar section (either by setting up residuals, or as constraints for the optimizer).

Depending on the particular details of the implementation, you might have to bound the assumed values to get valid calculations out of each section. I'm also not sure what you'd use as an objective function here, if you were formulating this using an optimization approach instead of a solver.

Rob Falck
  • Of the two solutions, this seems the easiest to implement. Justin mentioned that both have pros and cons. Could you give some disadvantages of this method? – BeeperTeeper Feb 07 '21 at 11:37
  • Implicit integration is typically less intuitive than an explicit approach. If you use an optimizer to enforce the physical "defects", you can get better performance but you're also subject to potential difficulty in scaling the design variables and constraints. This problem can probably be solved by Dymos, which would handle a lot of the complexity of the implicit integration for you. You'd have two "phases" of integration, as far as Dymos is concerned, with some appropriate linkage constraint at the transition point between them. Just treat `x` as `time` as far as Dymos is concerned. – Rob Falck Feb 07 '21 at 14:23