vp_suite.models.precipitation_nowcasting.ef_traj_gru
- class EF_TrajGRU(device, **model_kwargs)
Bases:
vp_suite.models.precipitation_nowcasting.ef_blocks.Encoder_Forecaster
This is a reimplementation of the Encoder-Forecaster model based on TrajGRUs, as introduced in “Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model” by Shi et al. (https://arxiv.org/abs/1706.03458). This implementation is based on the PyTorch implementation at https://github.com/Hzzone/Precipitation-Nowcasting.
The Encoder-Forecaster Network stacks multiple convolutional, up-/downsampling, and recurrent layers that operate on different spatial scales.
Note
The default hyperparameter configuration is intended for input frames of size (64, 64). For considerably larger or smaller image sizes, you might want to adjust the architecture.
- CODE_REFERENCE = 'https://github.com/Hzzone/Precipitation-Nowcasting'
The code location of the reference implementation.
- MATCHES_REFERENCE: str = 'Yes'
A comment indicating whether the implementation in this package matches the reference.
- NAME = 'EF-TrajGRU (Shi et al.)'
The model’s name.
- PAPER_REFERENCE = 'https://arxiv.org/abs/1706.03458'
The paper in which this model was first introduced.
- __init__(device, **model_kwargs)
Initializes the model by first setting all model hyperparameters, attributes, and the like; the model-specific init then actually builds the model from the given hyperparameters.
- Parameters
device (str) – The device identifier for the module.
**model_kwargs (Any) – Model arguments such as hyperparameters, input shapes etc.
- activation = <vp_suite.model_blocks.traj_gru.Activation object>
Activation layer
- dec_c = [96, 96, 96, 96, 64, 16]
Decoder channel counts for the conv and recurrent blocks; the length should be 2 * num_layers
- dec_conv_k = [4, 4, 3]
Decoder conv block kernel sizes per layer
- dec_conv_names = ['deconv1_leaky_1', 'deconv2_leaky_1', 'deconv3_leaky_1']
Decoder conv block layer names (for internal initialization)
- dec_conv_p = [1, 1, 1]
Decoder conv block paddings per layer
- dec_conv_s = [2, 2, 1]
Decoder conv block strides per layer
- dec_rnn_L = [13, 13, 13]
Decoder recurrent block L parameter (number of links in the TrajGRU flow field) per layer
- dec_rnn_h2h_d = [(1, 1), (1, 1), (1, 1)]
Decoder recurrent block hidden-to-hidden (h2h) dilation per layer
- dec_rnn_h2h_k = [(3, 3), (5, 5), (5, 5)]
Decoder recurrent block h2h kernel size
- dec_rnn_i2h_k = [(3, 3), (3, 3), (3, 3)]
Decoder recurrent block input-to-hidden (i2h) kernel size per layer
- dec_rnn_i2h_p = [(1, 1), (1, 1), (1, 1)]
Decoder recurrent block i2h padding
- dec_rnn_i2h_s = [(1, 1), (1, 1), (1, 1)]
Decoder recurrent block i2h stride
- dec_rnn_z = [0.0, 0.0, 0.0]
Decoder recurrent block zoneout probability per layer
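With the default decoder kernel sizes, strides, and paddings listed above, the transposed convolutions upsample the deepest feature maps back to the input resolution. A minimal arithmetic sketch using the standard transposed-convolution output formula (assuming output_padding of 0; starting size 16 assumes the default (64, 64) input):

```python
def deconv_out(size, k, s, p):
    # Transposed-conv output size: out = (in - 1) * stride - 2 * padding + kernel
    return (size - 1) * s - 2 * p + k

dec_conv_k = [4, 4, 3]
dec_conv_s = [2, 2, 1]
dec_conv_p = [1, 1, 1]

size = 16  # spatial size at the deepest scale for 64x64 inputs
sizes = []
for k, s, p in zip(dec_conv_k, dec_conv_s, dec_conv_p):
    size = deconv_out(size, k, s, p)
    sizes.append(size)
print(sizes)  # [32, 64, 64]: the decoder restores the 64x64 resolution
```

This is why the Note above suggests adjusting the architecture for considerably different image sizes: the stride schedule is tuned to this 16 → 32 → 64 progression.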
- enc_c = [16, 64, 64, 96, 96, 96]
Encoder channel counts for the conv and recurrent blocks; the length should be 2 * num_layers
- enc_conv_k = [3, 3, 3]
Encoder conv block kernel sizes per layer
- enc_conv_names = ['conv1_leaky_1', 'conv2_leaky_1', 'conv3_leaky_1']
Encoder conv block layer names (for internal initialization)
- enc_conv_p = [1, 1, 1]
Encoder conv block paddings per layer
- enc_conv_s = [1, 2, 2]
Encoder conv block strides per layer
- enc_rnn_L = [13, 13, 13]
Encoder recurrent block L parameter (number of links in the TrajGRU flow field) per layer
- enc_rnn_h2h_d = [(1, 1), (1, 1), (1, 1)]
Encoder recurrent block hidden-to-hidden (h2h) dilation per layer
- enc_rnn_h2h_k = [(5, 5), (5, 5), (3, 3)]
Encoder recurrent block h2h kernel size
- enc_rnn_i2h_k = [(3, 3), (3, 3), (3, 3)]
Encoder recurrent block input-to-hidden (i2h) kernel size per layer
- enc_rnn_i2h_p = [(1, 1), (1, 1), (1, 1)]
Encoder recurrent block i2h padding
- enc_rnn_i2h_s = [(1, 1), (1, 1), (1, 1)]
Encoder recurrent block i2h stride
- enc_rnn_z = [0.0, 0.0, 0.0]
Encoder recurrent block zoneout probability per layer
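The Note above states that the defaults target (64, 64) input frames. With the default encoder strides [1, 2, 2] this can be checked with the standard convolution output formula, a pure-arithmetic sketch:

```python
def conv_out(size, k, s, p):
    # Standard conv output size: out = floor((in + 2 * padding - kernel) / stride) + 1
    return (size + 2 * p - k) // s + 1

enc_conv_k = [3, 3, 3]
enc_conv_s = [1, 2, 2]
enc_conv_p = [1, 1, 1]

size = 64  # default input frame size
sizes = []
for k, s, p in zip(enc_conv_k, enc_conv_s, enc_conv_p):
    size = conv_out(size, k, s, p)
    sizes.append(size)
print(sizes)  # [64, 32, 16]: the three spatial scales the recurrent layers operate on
```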
- final_conv_1_c = 16
Final conv block 1 out channels
- final_conv_1_k = 3
Final conv block 1 kernel size
- final_conv_1_name = 'identity'
Final conv block 1 name
- final_conv_1_p = 1
Final conv block 1 padding
- final_conv_1_s = 1
Final conv block 1 stride
- final_conv_2_k = 1
Final conv block 2 kernel size
- final_conv_2_name = 'conv3_3'
Final conv block 2 name
- final_conv_2_p = 0
Final conv block 2 padding
- final_conv_2_s = 1
Final conv block 2 stride
- num_layers = 3
Number of recurrent cell layers
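As documented for enc_c and dec_c, each channel list must contain 2 * num_layers entries. The sketch below checks that constraint; the pairing of consecutive entries into per-layer (conv, rnn) channel counts is an assumption about the list layout, not part of the documented API:

```python
num_layers = 3
enc_c = [16, 64, 64, 96, 96, 96]
dec_c = [96, 96, 96, 96, 64, 16]

# Documented constraint: length of each channel list is 2 * num_layers.
for c in (enc_c, dec_c):
    assert len(c) == 2 * num_layers, "channel list must have 2 * num_layers entries"

# Assumed layout: entries alternate conv / rnn channel counts per layer.
enc_pairs = list(zip(enc_c[0::2], enc_c[1::2]))
print(enc_pairs)  # [(16, 64), (64, 96), (96, 96)]
```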