vp_suite.models.precipitation_nowcasting.ef_conv_lstm

class EF_ConvLSTM(device, **model_kwargs)

Bases: vp_suite.models.precipitation_nowcasting.ef_blocks.Encoder_Forecaster

This is a reimplementation of the ConvLSTM-based Encoder-Forecaster model introduced in “Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting” by Shi et al. (https://arxiv.org/abs/1506.04214). It follows the PyTorch implementation at https://github.com/Hzzone/Precipitation-Nowcasting, which implements the encoder-forecaster structure from “Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model” by Shi et al. (https://arxiv.org/abs/1706.03458).

The Encoder-Forecaster network stacks multiple convolutional (down- and upsampling) layers and recurrent layers that operate on different spatial scales.

Note

The default hyperparameter configuration is intended for input frames of size (64, 64). For considerably larger or smaller image sizes, you might want to adjust the architecture.
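
As a rough illustration of why the defaults target (64, 64) frames, the following sketch computes the spatial sizes implied by the default encoder and decoder conv settings listed below (enc_conv_k/s/p and dec_conv_k/s/p), assuming standard Conv2d arithmetic and assuming the decoder blocks named 'deconv…' are transposed convolutions, as their names suggest:

    # Sketch only: spatial bookkeeping for the default conv hyperparameters.

    def conv_out(size, k, s, p):
        # output size of a Conv2d layer
        return (size + 2 * p - k) // s + 1

    def deconv_out(size, k, s, p):
        # output size of a ConvTranspose2d layer (no output_padding)
        return (size - 1) * s - 2 * p + k

    size = 64
    for k, s, p in zip([3, 3, 3], [1, 2, 2], [1, 1, 1]):  # enc_conv_k/s/p
        size = conv_out(size, k, s, p)
        print("encoder:", size)  # 64 -> 64, 32, 16

    for k, s, p in zip([4, 4, 3], [2, 2, 1], [1, 1, 1]):  # dec_conv_k/s/p
        size = deconv_out(size, k, s, p)
        print("decoder:", size)  # 16 -> 32, 64, 64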

CODE_REFERENCE = 'https://github.com/Hzzone/Precipitation-Nowcasting'

The code location of the reference implementation.

MATCHES_REFERENCE: str = 'Yes'

A comment indicating whether the implementation in this package matches the reference.

NAME = 'EF-ConvLSTM (Shi et al.)'

The model’s name.

PAPER_REFERENCE = 'https://arxiv.org/abs/1506.04214'

The publication in which this model was first introduced.

__init__(device, **model_kwargs)

Initializes the model by first setting all model hyperparameters, attributes and the like. Then, the model-specific initialization creates the actual model from the given hyperparameters.

Parameters
  • device (str) – The device identifier for the module.

  • **model_kwargs (Any) – Model arguments such as hyperparameters, input shapes etc.
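
A minimal usage sketch. The keyword names in model_kwargs below are hypothetical placeholders that depend on vp_suite's configuration interface and are not taken from this page:

    from vp_suite.models.precipitation_nowcasting.ef_conv_lstm import EF_ConvLSTM

    # hypothetical kwargs; consult the package's model/config documentation
    model_kwargs = dict(img_shape=(1, 64, 64))
    model = EF_ConvLSTM("cuda", **model_kwargs)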

dec_c = [96, 96, 96, 96, 64, 16]

Channel counts for the decoder conv and recurrent blocks; the list length should be 2*num_layers (see the channel-pairing sketch under enc_c below).

dec_conv_k = [4, 4, 3]

Decoder conv block kernel sizes per layer

dec_conv_names = ['deconv1_leaky_1', 'deconv2_leaky_1', 'deconv3_leaky_1']

Decoder conv block layer names (for internal initialization)

dec_conv_p = [1, 1, 1]

Decoder conv block paddings per layer

dec_conv_s = [2, 2, 1]

Decoder conv block strides per layer

dec_rnn_k = [3, 3, 3]

Decoder recurrent block kernel sizes per layer

dec_rnn_p = [1, 1, 1]

Decoder recurrent block paddings per layer

dec_rnn_s = [1, 1, 1]

Decoder recurrent block strides per layer

enc_c = [16, 64, 64, 96, 96, 96]

Channel counts for the encoder conv and recurrent blocks; the list length should be 2*num_layers.
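
One plausible reading of the 2*num_layers convention, based on the layout of the reference implementation (treat the exact pairing as an assumption): consecutive entries pair each stage's conv output channels with its recurrent hidden channels. For the encoder defaults:

    enc_c = [16, 64, 64, 96, 96, 96]
    # (conv block out channels, recurrent block hidden channels) per stage
    stages = list(zip(enc_c[0::2], enc_c[1::2]))
    print(stages)  # [(16, 64), (64, 96), (96, 96)] for num_layers = 3

dec_c follows the same length convention on the forecaster side.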

enc_conv_k = [3, 3, 3]

Encoder conv block kernel sizes per layer

enc_conv_names = ['conv1_leaky_1', 'conv2_leaky_1', 'conv3_leaky_1']

Encoder conv block layer names (for internal initialization)

enc_conv_p = [1, 1, 1]

Encoder conv block paddings per layer

enc_conv_s = [1, 2, 2]

Encoder conv block strides per layer

enc_rnn_k = [3, 3, 3]

Encoder recurrent block kernel sizes per layer

enc_rnn_p = [1, 1, 1]

Encoder recurrent block paddings per layer

enc_rnn_s = [1, 1, 1]

Encoder recurrent block strides per layer

final_conv_1_c = 16

Final conv block 1 out channels

final_conv_1_k = 3

Final conv block 1 kernel size

final_conv_1_name = 'identity'

Final conv block 1 name

final_conv_1_p = 1

Final conv block 1 padding

final_conv_1_s = 1

Final conv block 1 stride

final_conv_2_k = 1

Final conv block 2 kernel size

final_conv_2_name = 'conv3_3'

Final conv block 2 name

final_conv_2_p = 0

Final conv block 2 padding

final_conv_2_s = 1

Final conv block 2 stride
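
Taken together, the final_conv_* attributes presumably describe an output head consisting of a 3x3 "same" convolution keeping 16 channels, followed by a 1x1 convolution that projects to the output image channels. A hedged sketch; the input channel counts and out_c are assumptions, not values listed on this page:

    import torch
    import torch.nn as nn

    out_c = 1  # assumed number of output image channels
    final_conv_1 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1)   # final_conv_1_{c,k,s,p}
    final_conv_2 = nn.Conv2d(16, out_c, kernel_size=1, stride=1, padding=0)  # final_conv_2_{k,s,p}

    x = torch.randn(2, 16, 64, 64)      # forecaster output features (shape assumed)
    y = final_conv_2(final_conv_1(x))   # -> (2, out_c, 64, 64)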

num_layers = 3

Number of recurrent cell layers

training: bool

Whether the module is in training mode (inherited from torch.nn.Module).