
`solve_ivp` is an initial value problem solver function from SciPy. In a few words:

`scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, vectorized=False, args=None, **options)`

Solve an initial value problem for a system of ODEs. This function numerically integrates a system of ordinary differential equations given an initial value.

In the `solve_ivp` function documentation (SciPy reference guide 1.4.1, page 695) we have the following:

Parameters fun [callable] Right-hand side of the system. The calling signature is fun(t, y). Here t is a scalar, and there are two options for the ndarray y: It can either have the shape (n,); then fun must return array_like with shape (n,). Alternatively, it can have the shape (n, k); then fun must return an array_like with shape (n, k), i.e. each column corresponds to a single column in y. The choice between the two options is determined by the vectorized argument (see below). The vectorized implementation allows a faster approximation of the Jacobian by finite differences (required for stiff solvers).

Here n stands for the number of components of y. What does k stand for? This might seem a very naive question to those who know the answer. But please, believe me, I really could not find it (at least not in the documentation). The great hpaulj's answer to this question seems to shed some light, but IMHO it is still too dark to move around.
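For concreteness, here is a minimal sketch (my own, not taken from the documentation) of the two calling signatures the quoted passage describes, using a simple harmonic oscillator as the right-hand side:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Non-vectorized form: y has shape (n,), return shape (n,).
    return np.array([y[1], -y[0]])

def rhs_vec(t, y):
    # Vectorized form: y has shape (n, k); each column is one state
    # vector, and the return value must also have shape (n, k).
    return np.vstack((y[1], -y[0]))

sol_a = solve_ivp(rhs, (0, 1), [1.0, 0.0])
sol_b = solve_ivp(rhs_vec, (0, 1), [1.0, 0.0], vectorized=True)
```

Both calls should produce the same trajectory; only the shape of the `y` argument handed to the callback differs.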

uhoh
Tejas Shetty

1 Answer


Well, y can depend on other variables, say x.

y generally stands for n fields. If n = 3, then y = numpy.array([u, v, w]), and this means that u, v and w can all depend on x.

k is the number of values that x can have.
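A quick way to probe what k is in practice (a sketch I put together, not something from the docs) is to record the shape of y on every call while using a stiff method, since the quoted documentation says the vectorized form is used to approximate the Jacobian by finite differences:

```python
import numpy as np
from scipy.integrate import solve_ivp

shapes = []

def rhs(t, y):
    shapes.append(y.shape)  # record how the solver actually calls us
    return -y               # simple linear decay in n = 3 components

solve_ivp(rhs, (0, 1), [1.0, 2.0, 3.0], method='BDF', vectorized=True)

# With vectorized=True every call passes a 2-d y of shape (n, k):
# ordinary evaluations arrive with k = 1, while the finite-difference
# Jacobian bundles several perturbed states into a single call (k > 1).
print(sorted(set(shapes)))
```

So k appears to be the number of state vectors evaluated in one call, which is 1 for normal steps and larger during Jacobian estimation.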

Alex van Houten
  • Sorry, I don't see how this answers the question, or challenges my answer in the link. It needs a working demonstration. – hpaulj Aug 08 '22 at 11:42
  • I agree with you that it remains to be demonstrated. – Alex van Houten Aug 08 '22 at 15:40
  • [Here](https://github.com/scipy/scipy/blob/651a9b717deb68adde9416072c1e1d5aa14a58a1/scipy/integrate/_ivp/base.py#L123) you can find how "vectorized" is implemented. It seems to me that an entire row of y is passed on. – Alex van Houten Aug 11 '22 at 13:48
  • Your link creates two attributes, `fun_single` and `fun_vectorized` - in different ways depending on the parameter. Actual use is by the `method` code. I'm guessing that `fun_single` is used for the objective evaluation, and `fun_vectorized` is used for jacobian evaluations (if the method so desires). Either way the user provided `y0` is supposed to be 1d, and the objective returns a matching size result. The purpose of `vectorized` is to enable more efficient evaluation of the jacobian (objective gradients). – hpaulj Aug 11 '22 at 15:40
  • I am still not sure. I will need to run this through a debugger to get a handle on this. – Alex van Houten Aug 12 '22 at 09:00
  • y0 must be of shape (n,), so 1d, but y can have dimensions (n,k). In both cases (`vectorized=True` or `False`) we have: `self._fun = np.asarray(fun(t, y), dtype=dtype)` . If `vectorized=True` we have `fun_single(t, y) = (np.asarray(fun(t, y[:, None]), dtype=dtype)).ravel(); fun_vectorized = np.asarray(fun(t, y), dtype=dtype)` So it seems to me that `self.fun_single` passes on y (but with shape (n, 1, k) instead of (n, k)) for a given t to `fun`. It returns a flattened (1d) array. – Alex van Houten Aug 12 '22 at 09:41