CoModule

class continual.CoModule[source]

Base class for continual modules. Deriving from this class provides base functionality and enforces the implementation of the necessary methods.

Variables:
  • receptive_field (int) – Temporal receptive field of the module.

  • delay (int) – Number of step inputs to observe before the module produces valid outputs.

  • stride (Tuple[int,...]) – (Spatio)-temporal stride.

  • padding (Tuple[int,...]) – (Spatio)-temporal padding.
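
These attributes can be inspected on any continual module. A minimal sketch, assuming the co.Conv3d module from the continual-inference library (import continual as co):

import continual as co

net = co.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3), padding=(1, 1, 1))

print(net.receptive_field)  # temporal receptive field of the module
print(net.delay)            # steps to observe before valid outputs
print(net.stride)           # (spatio)-temporal stride
print(net.padding)          # (spatio)-temporal padding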

static build_from(module, *args, **kwargs)[source]

Copy parameters and weights from a non-continual module and build the corresponding continual version.

Parameters:

module (torch.nn.Module) – Module from which to copy variables and weights.

Returns:

Continual Module with the parameters and weights of the passed module.

Return type:

CoModule
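
For example, a minimal sketch assuming co.Conv3d from the continual-inference library, where build_from copies the weights of a regular torch.nn.Conv3d:

import torch
from torch import nn
import continual as co

# Regular (non-continual) 3D convolution.
regular = nn.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3))

# Build the continual counterpart with copied parameters and weights.
conet = co.Conv3d.build_from(regular)

# On clip input, forward matches the regular module.
clip = torch.randn(1, 2, 8, 16, 16)  # (batch, channels, time, height, width)
assert torch.allclose(regular(clip), conet.forward(clip), atol=1e-6)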

clean_state()[source]

Clean model state, resetting the network memory.

forward(input)[source]

Performs a forward computation over multiple time-steps. This function is identical to that of the corresponding module in _torch.nn_, ensuring cross-compatibility, and it is handy for efficient training on clip-based data.

Illustration:

        O            (O: output)
        ↑
        N            (N: network module)
        ↑
-----------------    (-: aggregation)
P   I   I   I   P    (I: input frame, P: padding)

Parameters:

input (Tensor) – Network input.

Return type:

Tensor
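
Example usage, a minimal sketch assuming co.Conv3d from the continual-inference library:

import torch
import continual as co

net = co.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3), padding=(1, 1, 1))

# forward treats the input as a whole clip, exactly like torch.nn.Conv3d,
# which makes it suitable for clip-based training.
clip = torch.randn(1, 2, 8, 16, 16)  # (batch, channels, time, height, width)
out = net.forward(clip)
print(out.shape)  # torch.Size([1, 4, 8, 16, 16]) with this padding and stride 1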

forward_step(input, update_state=True)[source]

Performs a forward computation for a single frame and (optionally) updates internal states accordingly. This function performs efficient continual inference.

Illustration:

O+S O+S O+S O+S   (O: output, S: updated internal state)
 ↑   ↑   ↑   ↑
 N   N   N   N    (N: network module)
 ↑   ↑   ↑   ↑
 I   I   I   I    (I: input frame)

Parameters:
  • input (Tensor) – Layer input.

  • update_state (bool) – Whether internal state should be updated during this operation.

Returns:

Step output. This will be a placeholder while the module initializes and in (stride - 1) out of every stride steps thereafter.

Return type:

Optional[Tensor]
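
Example usage, a minimal sketch assuming co.Conv3d from the continual-inference library and that the placeholder is surfaced as None, per the Optional[Tensor] return type:

import torch
import continual as co

net = co.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3))

# Step inputs have no time dimension: (batch, channels, height, width).
for t in range(10):
    frame = torch.randn(1, 2, 16, 16)
    out = net.forward_step(frame)
    if out is None:
        continue  # placeholder while the module initializes (t < net.delay)
    print(t, out.shape)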

forward_steps(input, pad_end=False, update_state=True)[source]

Performs a forward computation across multiple time-steps while updating internal states for continual inference (if update_state=True). Start-padding is always accounted for, but end-padding is omitted by default in expectation of a subsequent input step. It can be added by specifying pad_end=True, in which case the output-to-input mapping is the exact same as that of forward.

Illustration:

        O            (O: output)
        ↑
-----------------    (-: aggregation)
O  O+S O+S O+S  O    (O: output, S: updated internal state)
↑   ↑   ↑   ↑   ↑
N   N   N   N   N    (N: network module)
↑   ↑   ↑   ↑   ↑
P   I   I   I   P    (I: input frame, P: padding)

Parameters:
  • input (Tensor) – Layer input.

  • pad_end (bool) – Whether results for temporal padding at sequence end should be included.

  • update_state (bool) – Whether internal state should be updated during this operation.

Returns:

Layer output.

Return type:

Optional[Tensor]
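
Example usage, a minimal sketch assuming co.Conv3d from the continual-inference library:

import torch
import continual as co

net = co.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3), padding=(1, 1, 1))
clip = torch.randn(1, 2, 8, 16, 16)

# With pad_end=True, the output-to-input mapping matches that of forward.
stepped = net.forward_steps(clip, pad_end=True)
assert torch.allclose(stepped, net.forward(clip), atol=1e-6)

# By default (pad_end=False), outputs that depend on end-padding are withheld
# in expectation of subsequent input steps.
net.clean_state()
partial = net.forward_steps(clip)  # shorter along the temporal dimension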

get_state()[source]

Get model state.

Returns:

A State tuple if the model has been initialised and otherwise None.

Return type:

Optional[State]

set_state(state)[source]

Set model state.

Parameters:

state (State) – State tuple to set as the new internal state
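
Example of snapshotting and restoring state, a minimal sketch assuming co.Conv3d from the continual-inference library:

import torch
import continual as co

net = co.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3))

print(net.get_state())  # None: the model has not been initialised yet

# Initialise internal state by observing a few step inputs.
for _ in range(net.delay + 1):
    net.forward_step(torch.randn(1, 2, 16, 16))

snapshot = net.get_state()  # State tuple capturing the network memory

net.forward_step(torch.randn(1, 2, 16, 16))  # advances the state
net.set_state(snapshot)                      # roll back to the snapshot
net.clean_state()                            # or reset the memory entirely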

warm_up(step_shape)[source]

Warms up the model state with a dummy input. The first self.delay steps after warm-up will produce results, but they will be inexact, since they partly depend on the dummy input.

To warm up the model with user-defined data, pass the data to forward_steps:

net.forward_steps(user_data)
Parameters:

step_shape (Sequence[int]) – Input shape with which to warm up the model, including the batch size.
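
Example, a minimal sketch assuming co.Conv3d from the continual-inference library:

import torch
import continual as co

net = co.Conv3d(in_channels=2, out_channels=4, kernel_size=(3, 3, 3))

# Warm up with a dummy input; the step shape includes the batch size.
net.warm_up((1, 2, 16, 16))

# Steps now produce outputs immediately, though the first net.delay outputs
# are inexact, as they partly depend on the dummy warm-up data.
out = net.forward_step(torch.randn(1, 2, 16, 16))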
