You can use a regular torch.nn.Conv1d to do this.
Inputs
In your case you have 1 channel (1D) with 300 timesteps (refer to the documentation; those values are C_in and L_in respectively).
So, for your input it would be (you need the 1 there, it cannot be squeezed!):

import torch

inputs = torch.randn(64, 1, 300)
Convolution
You need torch.nn.Conv1d with kernel_size equal to 5 (as indicated by your kernel elements: [0.2 0.2 0.2 0.2 0.2]) and no bias. I assume your output has to be of the same size (300), so 2 elements have to be padded at the beginning and the end. All of this gives us this module:
module = torch.nn.Conv1d(
    in_channels=1, out_channels=1, kernel_size=5, padding=2, bias=False
)
The weights of this module (the 0.2 values) can be specified like this:

module.weight.data = torch.full_like(module.weight.data, 0.2)
torch.full_like will work for a kernel of any size, in case you want a size other than 5.
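For instance, here is a quick sketch with a different window size (the size 7 is a hypothetical example, not from the question); only kernel_size, padding, and the fill value change, and padding=kernel_size // 2 keeps the output length at 300 for any odd size:

```python
import torch

# Hypothetical alternative window size (any odd size works the same way)
kernel_size = 7
module = torch.nn.Conv1d(
    in_channels=1, out_channels=1, kernel_size=kernel_size,
    padding=kernel_size // 2, bias=False,
)
# Fill every weight with 1/kernel_size so the output is a plain average
module.weight.data = torch.full_like(module.weight.data, 1.0 / kernel_size)

out = module(torch.randn(64, 1, 300))
print(out.shape)  # torch.Size([64, 1, 300])
```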
Finally, run it to average the steps and you're done:
out = module(inputs)
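Putting the pieces together, here is a self-contained sketch that checks the result against a plain 5-element average computed with unfold (that check is my addition, not part of the original setup); away from the zero-padded borders the two agree:

```python
import torch

inputs = torch.randn(64, 1, 300)

# 5-step moving average as a Conv1d with constant 0.2 weights and no bias
module = torch.nn.Conv1d(
    in_channels=1, out_channels=1, kernel_size=5, padding=2, bias=False
)
module.weight.data = torch.full_like(module.weight.data, 0.2)

out = module(inputs)
print(out.shape)  # torch.Size([64, 1, 300])

# Interior elements (away from the zero-padded borders) match a plain
# 5-element average over sliding windows of the input.
manual = inputs.unfold(dimension=2, size=5, step=1).mean(dim=3)
print(torch.allclose(out[..., 2:-2], manual, atol=1e-5))  # True
```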
GPU
If you want to use the GPU, just move your module and inputs there like this:
inputs = inputs.cuda()
module = module.cuda()
See the CUDA documentation for more information.
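As an aside, a device-agnostic sketch (using the standard .to and torch.cuda.is_available API) runs the same code unchanged on the CPU when no GPU is available:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

inputs = torch.randn(64, 1, 300, device=device)
module = torch.nn.Conv1d(
    in_channels=1, out_channels=1, kernel_size=5, padding=2, bias=False
).to(device)

out = module(inputs)
print(out.shape)  # torch.Size([64, 1, 300])
```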