MaxPool1d
- class continual.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, temporal_fill='neg_inf')[source]
Applies a Continual 1D max pooling over an input signal.

In the simplest case, the output value of the layer with input size (N, C, L) and output (N, C, L_out) can be precisely described as:

out(N_i, C_j, k) = max_{m=0, …, kernel_size−1} input(N_i, C_j, stride × k + m)
If padding is non-zero, then the input is implicitly padded with negative infinity on both sides for padding number of points. dilation is the stride between the elements within the sliding window. This link has a nice visualization of the pooling parameters.

Note

When stride > 1, forward_step will only produce non-None values every stride steps.

Note

When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
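The implicit negative-infinity padding described above can be illustrated with plain PyTorch (assuming torch is available; this is an illustrative sketch, not the continual implementation):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 10)

# Implicit padding inside max_pool1d uses negative infinity, so
# padded positions can never win the max.
a = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)

# Equivalent: pad explicitly with -inf, then pool without padding.
x_pad = F.pad(x, (1, 1), value=float("-inf"))
b = F.max_pool1d(x_pad, kernel_size=3, stride=1, padding=0)

assert torch.equal(a, b)
```

Padding with negative infinity (rather than zeros) matters for max pooling: zero padding would corrupt results whenever all real values in a window are negative.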
- Parameters:
  - kernel_size (Union[int, Tuple[int]]) – The size of the sliding window, must be > 0.
  - stride (Union[int, Tuple[int]]) – The stride of the sliding window, must be > 0. Default value is kernel_size.
  - padding (Union[int, Tuple[int]]) – Implicit negative infinity padding to be added on both sides, must be >= 0 and <= kernel_size / 2.
  - dilation (Union[int, Tuple[int]]) – The stride between elements within a sliding window, must be > 0.
  - ceil_mode (bool) – If True, will use ceil instead of floor to compute the output shape. This ensures that every element in the input tensor is covered by a sliding window.
  - temporal_fill (PaddingMode) – How temporal states are initialized.
- Shape:
  - Input: (N, C, L_in).
  - Output: (N, C, L_out), where
    L_out = floor((L_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
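The output length follows the standard max-pooling shape formula; a small pure-Python sketch (the helper name is hypothetical, not part of the library):

```python
import math


def maxpool1d_out_len(l_in, kernel_size, stride=None, padding=0,
                      dilation=1, ceil_mode=False):
    # Hypothetical helper: computes L_out from the standard
    # max-pooling output-shape formula. stride defaults to
    # kernel_size, matching the module's default.
    stride = stride or kernel_size
    eff = (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
    return math.ceil(eff) if ceil_mode else math.floor(eff)


# Matches the example below: kernel_size=3, dilation=2, stride
# defaulting to 3, on an input of length 50.
print(maxpool1d_out_len(50, kernel_size=3, dilation=2))  # → 16
```

With ceil_mode=True the same fractional length is rounded up instead of down, e.g. maxpool1d_out_len(50, 3, stride=2, padding=1) gives 25 with floor and 26 with ceil.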
Examples:

    import torch
    import continual as co

    m = co.MaxPool1d(kernel_size=3, dilation=2)
    x = torch.randn(20, 16, 50)
    assert torch.allclose(m.forward(x), m.forward_steps(x))
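To make the stride note above concrete, here is a minimal pure-Python sketch of step-wise max pooling over a scalar stream, emitting None between strides. It is an illustration of the forward_step semantics, not the library's implementation:

```python
from collections import deque


class StepMaxPool1d:
    """Illustrative step-wise max pool over a scalar stream.

    Keeps the last kernel_size values; once the window is full,
    it emits their max every stride steps and None otherwise.
    """

    def __init__(self, kernel_size, stride=None):
        self.kernel_size = kernel_size
        self.stride = stride or kernel_size
        self.window = deque(maxlen=kernel_size)
        self.t = 0  # number of steps received so far

    def forward_step(self, value):
        self.window.append(value)
        self.t += 1
        full = len(self.window) == self.kernel_size
        if full and (self.t - self.kernel_size) % self.stride == 0:
            return max(self.window)
        return None


pool = StepMaxPool1d(kernel_size=3, stride=2)
outs = [pool.forward_step(v) for v in [1, 5, 2, 4, 3, 0]]
print(outs)  # → [None, None, 5, None, 4, None]
```

The non-None outputs correspond exactly to the windows [1, 5, 2] and [2, 4, 3] that a batch forward pass with stride 2 would pool.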