vp_suite.utils.models

class ScaleToModel(model_value_range, test_value_range)

Bases: torch.nn.modules.module.Module

This class acts as an adapter module that scales pixel values from the test run domain to the model domain.

__init__(model_value_range, test_value_range)

Initializes the scaler module by setting the model-domain and test-domain value ranges.

Parameters
  • model_value_range (List[float]) – The model’s value range.

  • test_value_range (List[float]) – The test run’s value range.

forward(img)

Scales the input image from the test run domain to the model domain.

Parameters

img (torch.Tensor) – The image to scale.

Returns: The scaled image.

training: bool
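
A minimal usage sketch (the concrete value ranges here are illustrative, and it is assumed that the ranges are passed as [min, max] lists and mapped linearly):

    import torch
    from vp_suite.utils.models import ScaleToModel

    # adapter: test-run pixel values in [0, 255], model expects [-1, 1]
    scaler = ScaleToModel(model_value_range=[-1.0, 1.0], test_value_range=[0.0, 255.0])

    frames = torch.randint(0, 256, (1, 3, 64, 64)).float()  # test-domain frames
    model_input = scaler(frames)  # values rescaled to the model's range
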
class ScaleToTest(model_value_range, test_value_range)

Bases: torch.nn.modules.module.Module

This class acts as an adapter module that scales pixel values from the model domain to the test run domain.

__init__(model_value_range, test_value_range)

Initializes the scaler module by setting the model-domain and test-domain value ranges.

Parameters
  • model_value_range (List[float]) – The model’s value range.

  • test_value_range (List[float]) – The test run’s value range.

forward(img)

Scales the input image from the model domain to the test run domain.

Parameters

img (torch.Tensor) – The image to scale.

Returns: The scaled image.

training: bool
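
Analogously, a sketch for scaling a model prediction back to the test-run range (same assumptions as above; all names and ranges are illustrative):

    import torch
    from vp_suite.utils.models import ScaleToTest

    # model produces values in [-1, 1]; the test run evaluates in [0, 255]
    to_test = ScaleToTest(model_value_range=[-1.0, 1.0], test_value_range=[0.0, 255.0])

    prediction = torch.rand(1, 3, 64, 64) * 2 - 1   # fake model output in [-1, 1]
    eval_frames = to_test(prediction)               # rescaled to the test-run range
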
conv_output_shape(h_w, kernel_size=1, stride=1, pad=0, dilation=1)

Utility function for computing the output size of a convolution, given the input size and the conv layer parameters. Source: https://discuss.pytorch.org/t/utility-function-for-calculating-the-shape-of-a-conv-output/11173/6

Parameters
  • h_w (Union[int, Tuple[int]]) – The input height and width, either as a single integer or as a (height, width) tuple.

  • kernel_size (int) – The layer’s kernel size.

  • stride (int) – The layer’s stride.

  • pad (int) – The layer’s padding.

  • dilation (int) – The layer’s dilation.

Returns: A tuple (height, width) with the resulting height and width after layer application.
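
A short usage sketch; the expected shape assumes PyTorch's standard convolution arithmetic, floor((h + 2*pad - dilation*(kernel_size - 1) - 1) / stride + 1) per dimension:

    import torch
    import torch.nn as nn
    from vp_suite.utils.models import conv_output_shape

    h_w = (64, 64)
    out_h, out_w = conv_output_shape(h_w, kernel_size=3, stride=2, pad=1)  # expected: (32, 32)

    # cross-check against an actual conv layer
    conv = nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)
    out = conv(torch.randn(1, 3, *h_w))
    assert out.shape[-2:] == (out_h, out_w)
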

convtransp_output_shape(h_w, kernel_size=1, stride=1, pad=0, dilation=1)

Utility function for computing the output size of a transposed convolution, given the input size and the transposed-conv layer parameters. Source: https://discuss.pytorch.org/t/utility-function-for-calculating-the-shape-of-a-conv-output/11173/6

Parameters
  • h_w (Union[int, Tuple[int]]) – The input height and width, either as a single integer or as a (height, width) tuple.

  • kernel_size (int) – The layer’s kernel size.

  • stride (int) – The layer’s stride.

  • pad (int) – The layer’s padding.

  • dilation (int) – The layer’s dilation.

Returns: A tuple (height, width) with the resulting height and width after layer application.
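
The transposed-convolution counterpart; the expected shape assumes the usual formula (h - 1)*stride - 2*pad + dilation*(kernel_size - 1) + 1 per dimension, with no output padding:

    import torch
    import torch.nn as nn
    from vp_suite.utils.models import convtransp_output_shape

    out_h, out_w = convtransp_output_shape((32, 32), kernel_size=3, stride=2, pad=1)  # expected: (63, 63)

    # cross-check against an actual transposed conv layer
    convT = nn.ConvTranspose2d(8, 3, kernel_size=3, stride=2, padding=1)
    out = convT(torch.randn(1, 8, 32, 32))
    assert out.shape[-2:] == (out_h, out_w)
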

state_dicts_equal(model1, model2, check_values=False, verbose=False)

Checks whether two models are equal with respect to their state dicts. Modified from: https://gist.github.com/rohan-varma/a0a75e9a0fbe9ccc7420b04bff4a7212

Parameters
  • model1 (nn.Module) – Model 1.

  • model2 (nn.Module) – Model 2.

  • check_values (bool) – If True, also compares the values of the state dicts. By default, only the keys and dimensionalities are checked.

  • verbose (bool) – If True, prints all state dict components to the console.

Returns: True if both state dicts pass the performed checks (keys and dimensionalities, plus values if check_values is set), False (with debug prints) otherwise.
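
A minimal usage sketch; with the default arguments only keys and parameter shapes need to match, so two freshly initialized models of the same architecture should compare equal, while check_values=True additionally requires identical parameter values:

    import torch.nn as nn
    from vp_suite.utils.models import state_dicts_equal

    model_a = nn.Linear(16, 4)
    model_b = nn.Linear(16, 4)

    state_dicts_equal(model_a, model_b)                      # True: same keys and shapes
    state_dicts_equal(model_a, model_b, check_values=True)   # False: random inits differ
    state_dicts_equal(model_a, model_a, check_values=True)   # True: identical values
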