Let's say I've got some time series data:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
x = np.linspace(0, 10, num=100)
time_series = np.sin(x) + np.random.random(100)  # a sine wave plus uniform noise
plt.plot(x, time_series)
If I want to 'delay' the time series by some number of samples, I can do this:
delay = 10
x_delayed = x[delay:]
time_series_delayed = time_series[:-delay]
plt.plot(x, time_series, label='original')
plt.plot(x_delayed, time_series_delayed, label='delayed')
plt.legend()
This is all well and good, but I want to keep the code clean while still allowing delay to be zero. As it stands, a delay of zero raises an error, because the slice my_arr[:-0] evaluates to my_arr[:0], which is always the empty slice rather than the full array:
>>> time_series[:-0]
array([], dtype=float64)
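For contrast, a stop of None always means "through the end", so the full array is easy to express; it's specifically -0 that collapses to 0:
>>> time_series[:None].shape
(100,)
>>> time_series[:-0].shape
(0,)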
This means that if I want to encode the idea that a delay of zero is identical to the original array, I have to special-case every single slice I take. That is tedious and error-prone:
# Make 3 plots, for negative, zero, and positive delays
for delay in (0, 5, -5):
    if delay > 0:
        x_delayed = x[delay:]
        time_series_delayed = time_series[:-delay]
    elif delay < 0:
        # Negative delay is the complement of positive delay
        x_delayed = x[:delay]
        time_series_delayed = time_series[-delay:]
    else:
        # Zero delay just copies the array
        x_delayed = x[:]
        time_series_delayed = time_series[:]
    # Add the delayed time series to the plot
    plt.plot(
        x_delayed,
        time_series_delayed,
        label=f'delay={delay}',
        # change the alpha to make things less cluttered
        alpha=1 if delay == 0 else 0.3,
    )
plt.legend()
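The best I've come up with is pushing the branching into a helper that returns slice objects (delay_slices is just a name I made up), which keeps the call sites clean but only hides the special case rather than removing it:
def delay_slices(delay):
    """Return the (x, y) slice pair for a given delay."""
    if delay > 0:
        return slice(delay, None), slice(None, -delay)
    if delay < 0:
        return slice(None, delay), slice(-delay, None)
    # slice(None) is the full-array slice, i.e. arr[:]
    return slice(None), slice(None)

x_slice, y_slice = delay_slices(5)
plt.plot(x[x_slice], time_series[y_slice])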
I've had a look at slice objects and numpy's np.s_, but I can't seem to figure it out.
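As far as I can tell, np.s_ just builds ordinary slice objects, so it runs into the same -0 problem:
>>> np.s_[:-5]
slice(None, -5, None)
>>> np.s_[:-0]
slice(None, 0, None)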
Is there a neat/pythonic way of encoding the idea that a delay of zero is the original array?