13

The clamp function is clamp(x, min, max) = min if x < min, max if x > max, else x

I need a function that behaves like the clamp function, but is smooth (i.e. has a continuous derivative).
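For concreteness, numpy's built-in `np.clip` implements exactly this hard clamp:

import numpy as np

np.clip(np.array([-2.0, 0.25, 0.5, 3.0]), 0, 1)
# -> array([0.  , 0.25, 0.5 , 1.  ])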

Roko Mijic
  • Essentially, after rescaling you can read this problem as a [convolution of the Heaviside function](http://mathworld.wolfram.com/HeavisideStepFunction.html). The kernel you choose determines the kind of approximation you perform. If your kernel has an infinite support you'll end up with a kind of sigmoidal approximation. If it has finite support, the function will reach the min and max values. – Quickbeam2k1 Jul 18 '17 at 12:49

3 Answers

13

What you are looking for is something like the Smoothstep function, which has a free parameter N controlling the "smoothness", i.e. how many derivatives should be continuous. It is defined as follows:

$$S_N(x) = \begin{cases} 0, & x \le 0 \\ x^{N+1} \displaystyle\sum_{n=0}^{N} \binom{N+n}{n} \binom{2N+1}{N-n} (-x)^n, & 0 \le x \le 1 \\ 1, & x \ge 1 \end{cases}$$

This is used in several libraries and can be implemented in numpy as

import numpy as np
from scipy.special import comb

def smoothstep(x, x_min=0, x_max=1, N=1):
    # Rescale x to [0, 1] and clamp; outside [x_min, x_max] the result is 0 or 1.
    x = np.clip((x - x_min) / (x_max - x_min), 0, 1)

    # Polynomial whose first N derivatives vanish at both endpoints.
    result = 0
    for n in range(0, N + 1):
        result += comb(N + n, n) * comb(2 * N + 1, N - n) * (-x) ** n

    result *= x ** (N + 1)

    return result
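As a quick sanity check (my addition, not part of the original answer), the polynomial reduces to the expected special cases:

# Assumes numpy as np and smoothstep() as defined above.
x = np.linspace(-0.5, 1.5, 11)
t = np.clip(x, 0, 1)
assert np.allclose(smoothstep(x, N=0), t)                 # N=0: the hard clamp
assert np.allclose(smoothstep(x, N=1), t**2 * (3 - 2*t))  # N=1: the cubic 3t^2 - 2t^3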

It reduces to the regular clamp function for N=0 (zero times differentiable), and becomes increasingly smooth as you increase N. You can visualize it like this:

import matplotlib.pyplot as plt

x = np.linspace(-0.5, 1.5, 1000)

for N in range(0, 5):
    y = smoothstep(x, N=N)
    plt.plot(x, y, label=str(N))

plt.legend()
plt.show()

which gives this result:

[Plot: smoothstep curves for N = 0 through 4; larger N gives a smoother transition from 0 to 1]

Jonas Adler
  • Great answer. For my purposes I prefer my implementation (obviously I am biased!) because it's a one-liner and the n=1 smoothstep does exactly what I want - I don't care about the second derivative being smooth. Thanks! – Roko Mijic Jul 18 '17 at 12:20
  • 1
    Sure, just chiming in with another option. Yours is obviously easier but this works very well for more general applications. – Jonas Adler Jul 18 '17 at 12:21
  • @JonasAdler not sure this implementation really works with different min or max. The other one by RokoMijic scales – Vainmonde De Courtenay May 17 '21 at 15:29
  • Are you sure? I tried it and it works well for me. – Jonas Adler May 19 '21 at 13:19
  • *because it's a one-liner*. In this case, a very long one that is hard to parse for humans. – adr Feb 22 '23 at 13:12
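On the scaling question raised in the comments: a minimal check (my addition, assuming the `smoothstep` above) shows that `x_min`/`x_max` rescale only the input; the output stays in [0, 1], so to get values in [mi, mx] like the smoothclamp in the next answer you rescale the output yourself:

# Assumes numpy as np and smoothstep() from this answer.
x = np.linspace(-3, 6, 19)
t = np.clip((x - (-1)) / (4 - (-1)), 0, 1)  # input rescaled for x_min=-1, x_max=4
assert np.allclose(smoothstep(x, x_min=-1, x_max=4, N=1), t**2 * (3 - 2*t))

# To land in [mi, mx] rather than [0, 1], scale the output:
y = -1 + (4 - (-1)) * smoothstep(x, x_min=-1, x_max=4, N=1)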
10

Normal clamp:

np.clip(x, mi, mx)

Smoothclamp (guaranteed to agree with normal clamp for x < min and x > max):

def smoothclamp(x, mi, mx): return mi + (mx-mi)*(lambda t: np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) )

Sigmoid (approximates clamp; never smaller than min, never larger than max):

def sigmoid(x,mi, mx): return mi + (mx-mi)*(lambda t: (1+200**(-t+0.5))**(-1) )( (x-mi)/(mx-mi) )

For some purposes Sigmoid will be better than Smoothclamp because Sigmoid is an invertible function - no information is lost.
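Since it is invertible, the inverse can be written in closed form. A minimal sketch (the name `sigmoid_inverse` is mine, not from the original answer; it is defined only for mi < y < mx):

import numpy as np

def sigmoid_inverse(y, mi, mx):
    # Solves y = mi + (mx-mi) / (1 + 200**(0.5 - t)) for t, then maps t back to x.
    u = (y - mi) / (mx - mi)
    t = 0.5 - np.log(1.0 / u - 1.0) / np.log(200.0)
    return mi + (mx - mi) * t

For example, `sigmoid_inverse(sigmoid(x, -1, 4), -1, 4)` recovers x up to floating-point error.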

For other purposes, you may need to be certain that f(x) = mx for all x > mx - in that case Smoothclamp is better. Also, as mentioned in the other answer, there is a whole family of such functions (the smoothstep family), though the one given here is adequate for my purposes (no special properties other than a smooth derivative needed).

Plot them:

import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
x = np.linspace(-4,7,1000)
ax.plot(x, np.clip(x, -1, 4),'k-', lw=2, alpha=0.8, label='clamp')
ax.plot(x, smoothclamp(x, -1, 4),'g-', lw=3, alpha=0.5, label='smoothclamp')
ax.plot(x, sigmoid(x, -1, 4),'b-', lw=3, alpha=0.5, label='sigmoid')
plt.legend(loc='upper left')
plt.show()

[Plot: clamp, smoothclamp, and sigmoid with mi = -1, mx = 4 over x in [-4, 7]]

Also of potential use is the arithmetic mean of these two:

def clampoid(x, mi, mx): return mi + (mx-mi)*(lambda t: 0.5*(1+200**(-t+0.5))**(-1) + 0.5*np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) )
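A minimal usage sketch (my addition): the clampoid drops into the same kind of comparison plot:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
x = np.linspace(-4, 7, 1000)
ax.plot(x, np.clip(x, -1, 4), 'k-', lw=2, alpha=0.8, label='clamp')
ax.plot(x, clampoid(x, -1, 4), 'r-', lw=3, alpha=0.5, label='clampoid')
plt.legend(loc='upper left')
plt.show()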
Roko Mijic
  • Built-in functions min and max would not require numpy. – guidot Jul 18 '17 at 11:33
  • But is the area under the curve preserved? – Bathsheba Jul 18 '17 at 11:35
  • @guidot agreed, but then when you use them on an array/pandas series etc. you will get the "truth value of a series is ambiguous" problem. – Roko Mijic Jul 18 '17 at 11:36
  • @guidot, another good reason to use numpy functions - they are vectorized. So you can apply such functions to an entire vector (matrix) without looping, which is usually orders of magnitude faster. – MaxU - stand with Ukraine Jul 18 '17 at 11:42
  • FYI: For the normal clamp, you can use [`numpy.clip`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.clip.html). – Warren Weckesser Jul 18 '17 at 11:45
  • @MaxU: The max function already accepts an iterable as first argument. What additional benefit does *vectorization* provide? – guidot Jul 18 '17 at 12:08
  • @guidot, consider the following example: `import numpy as np; a = np.random.rand(10**6); %timeit max(a); %timeit np.max(a);` - on my PC the vectorized Numpy function took __127 times__ less time ;-) – MaxU - stand with Ukraine Jul 18 '17 at 13:20
  • @MaxU: While the factor was even 137 on my machine, the factor shrank to 45 if a simple list was given to built-in max instead of a numpy array. Interesting to know the numpy way for hard number crunching but the standard solution is not **that** awful. – guidot Jul 24 '17 at 14:20
  • I'm accepting my own answer since it got more upvotes. – Roko Mijic Jul 27 '17 at 07:50
1

As an option, if you want to make sure the smoothed function corresponds to the normal clamp function, you can convolve the normal clamp function with a smooth bell-like kernel such as a Lorentzian or a Gaussian.

This guarantees the correspondence between the normal clamp function and its smoothed version, and the smoothness itself is determined by the kernel you choose for the convolution.
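A minimal sketch of this idea on a sampled grid (my choice of kernel and width; `scipy.ndimage.gaussian_filter1d` performs the Gaussian convolution, and `sigma` sets the smoothness):

import numpy as np
from scipy.ndimage import gaussian_filter1d
import matplotlib.pyplot as plt

x = np.linspace(-4, 7, 1000)
clamped = np.clip(x, -1, 4)

# gaussian_filter1d takes sigma in samples, so convert a width of 0.5 in x-units.
dx = x[1] - x[0]
smoothed = gaussian_filter1d(clamped, sigma=0.5 / dx, mode='nearest')

plt.plot(x, clamped, label='clamp')
plt.plot(x, smoothed, label='Gaussian-smoothed clamp')
plt.legend()
plt.show()

Note that, as pointed out in the comments on the question, the Gaussian has infinite support, so the result only approaches the hard clamp asymptotically away from the corners; a finite-support kernel would match it exactly outside the transition regions.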