Separable temporal convolutions
Given a multivariate time series \(x \in \mathbb{R}^{B \times D \times T}\) with \(D=3\) channels, \(T=4\) timesteps, and batch size \(B=1\):
import torch
from torch import nn

x = [
    [1, 5, 10, 20],
    [100, 150, 200, 250],
    [1000, 1500, 2000, 2500],
]
xt = torch.FloatTensor(x)   # [in_channels=3, timesteps=4]
xt = xt.unsqueeze(0)        # [batch_size=1, in_channels=3, timesteps=4]
The separable convolution learns a set of filters for each channel independently, with no interactions across channels. Below, I define a 1D convolutional layer with kernel size 2 that learns one filter per channel (num_channels=1).
separable = True
in_channels = xt.shape[1]
num_channels = 1
kernel_size = 2
stride = 1
layer_i = 0
dilation_size = 2 ** layer_i
padding = (kernel_size - 1) * dilation_size
groups = in_channels if separable else 1
out_channels = in_channels * num_channels
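Since separable=True makes groups equal to in_channels, nn.Conv1d will convolve each input channel only with its own filters (a depthwise convolution). As a quick, optional sanity check (not part of the original snippet), the derived values work out to:

print(out_channels, groups, padding, dilation_size)  # 3 3 1 1
assert out_channels % groups == 0  # nn.Conv1d requires out_channels to be divisible by groups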
For illustrative purposes, the weights are initialized to 1 and bias to 0.
conv1 = nn.Conv1d(
    in_channels,
    out_channels,
    kernel_size,
    stride=stride,
    padding=padding,
    dilation=dilation_size,
    groups=groups,
)
torch.nn.init.constant_(conv1.weight, 1)
torch.nn.init.constant_(conv1.bias, 0)
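Because the layer is grouped, its weight has shape (out_channels, in_channels // groups, kernel_size), i.e. one length-2 kernel per channel and no weights connecting different channels:

print(conv1.weight.shape)  # torch.Size([3, 1, 2])
print(conv1.bias.shape)    # torch.Size([3])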
A separable convolution with kernel_size=2 and num_channels=1 is simply a weighted sum along each channel:
tensor([[[   1,    6,   15,   30],
         [ 100,  250,  350,  450],
         [1000, 2500, 3500, 4500]]])
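These values match applying the layer to xt and trimming the last padding timesteps (a causal, TCN-style trim); a minimal sketch, assuming that convention:

out = conv1(xt)[..., :-padding]  # drop the trailing padded step (assumed causal trim)
print(out.detach())
# First channel: [0+1, 1+5, 5+10, 10+20] = [1, 6, 15, 30]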
Increasing num_channels will learn a set of independent filters for each channel. For example, num_channels=3 gives a total of num_channels*in_channels=9 output channels:
tensor([[[   1,    6,   15,   30],
         [   1,    6,   15,   30],
         [   1,    6,   15,   30],
         [ 100,  250,  350,  450],
         [ 100,  250,  350,  450],
         [ 100,  250,  350,  450],
         [1000, 2500, 3500, 4500],
         [1000, 2500, 3500, 4500],
         [1000, 2500, 3500, 4500]]])
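Grouped outputs are ordered group by group, so the three (identical, all-ones) filters for the first input channel come first, then those for the second, and so on, which is why each row appears three times. A minimal sketch of this configuration, using a hypothetical conv3 and the assumed causal trim from above:

conv3 = nn.Conv1d(in_channels, in_channels * 3, kernel_size,
                  stride=stride, padding=padding,
                  dilation=dilation_size, groups=groups)
torch.nn.init.constant_(conv3.weight, 1)
torch.nn.init.constant_(conv3.bias, 0)
print(conv3(xt)[..., :-padding].detach())  # 9 channels: 3 filters per input channel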
When separable=False and num_channels=1, you get mixing between the channels:
tensor([[[1101, 2756, 3865, 4980],
         [1101, 2756, 3865, 4980],
         [1101, 2756, 3865, 4980]]])
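With groups=1 each output channel sums over all input channels, so for example the first value is (0+1) + (0+100) + (0+1000) = 1101 and the second is (1+5) + (100+150) + (1000+1500) = 2756. A minimal sketch of the non-separable layer (conv_mixed is a hypothetical name, and I again assume the causal trim):

conv_mixed = nn.Conv1d(in_channels, in_channels * 1, kernel_size,
                       stride=stride, padding=padding,
                       dilation=dilation_size, groups=1)  # groups=1: filters span all channels
torch.nn.init.constant_(conv_mixed.weight, 1)
torch.nn.init.constant_(conv_mixed.bias, 0)
print(conv_mixed(xt)[..., :-padding].detach())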