vp_suite.utils.visualization
- COLORS = {'black': [0, 0, 0], 'green': [0, 200, 0], 'red': [150, 0, 0], 'white': [255, 255, 255], 'yellow': [100, 100, 0]}
Pre-defined border colors for video visualizations (r, g, b).
- add_border_around_vid(vid, c_and_l, b_width=10)
Adds a colored border around the given video array.
- Parameters
vid (np.ndarray) – The video as a numpy array of concatenated image frames.
c_and_l (List[Tuple[str, int]]) – A list of tuples, each specifying the color and frame count of a colored segment. This permits, e.g., adding borders of different colors for different frame segments.
b_width (int) – Border width in pixels.
Returns: The video with a colored border added.
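The padding behavior of add_border_around_vid can be sketched as follows. This is an illustrative reimplementation based on the signature and description above, not the library source; the assumption is that vid has shape [T, H, W, 3] and that the segment lengths in c_and_l sum to T.

```python
import numpy as np

# Pre-defined border colors (r, g, b), as in vp_suite.utils.visualization.COLORS.
COLORS = {'black': [0, 0, 0], 'green': [0, 200, 0], 'red': [150, 0, 0],
          'white': [255, 255, 255], 'yellow': [100, 100, 0]}

def add_border_around_vid(vid, c_and_l, b_width=10):
    """Pad each frame of vid ([T, H, W, 3]) with a colored border.

    c_and_l is a list of (color, length) tuples; each segment of
    `length` consecutive frames gets a border of the named color.
    """
    T, H, W, C = vid.shape
    out = np.zeros((T, H + 2 * b_width, W + 2 * b_width, C), dtype=vid.dtype)
    t = 0
    for color, length in c_and_l:
        # fill the whole padded frame with the border color for this segment
        out[t:t + length] = np.array(COLORS[color], dtype=vid.dtype)
        t += length
    # paste the original frames into the center, leaving only the border colored
    out[:, b_width:b_width + H, b_width:b_width + W] = vid
    return out
```

Filling the whole padded frame first and then pasting the original frames is simpler than padding each edge separately and gives the same result.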
- add_borders(trajs, context_frames)
Adds borders around the provided videos that reflect given (= green border) and predicted (= red border) frames.
- get_color_array(color)
- Parameters
color (str) – A string specifying the border color.
Returns: A list consisting of r, g and b values for the desired color, taken from the COLORS dict.
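Given the COLORS dict above, get_color_array amounts to a dictionary lookup. In this sketch the fallback to white for unknown color names is an assumption, not documented behavior:

```python
COLORS = {'black': [0, 0, 0], 'green': [0, 200, 0], 'red': [150, 0, 0],
          'white': [255, 255, 255], 'yellow': [100, 100, 0]}

def get_color_array(color):
    """Look up the [r, g, b] list for a color name in COLORS.

    Falling back to white for unknown names is an assumption made
    here for robustness; the actual fallback may differ.
    """
    return COLORS.get(color, COLORS['white'])
```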
- get_vis_from_model(dataset, data, model, data_unpack_config, pred_frames)
Given a data point containing a sequence, uses the given prediction model to obtain a prediction, then postprocesses the input sequence and the prediction to obtain visualizable representations of both sequences.
- Parameters
dataset (VPDataset) – The dataset the data is taken from (needed for postprocessing of the input and predicted sequences)
data (VPData) – The data point containing the input sequence.
model (VPModel) – The prediction model.
data_unpack_config (dict) – Configuration needed for unpacking the sequence from the data point
pred_frames (int) – Number of frames to predict.
Returns: The postprocessed input sequence and prediction.
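The unpack → predict → postprocess flow can be sketched as below. The helper names `unpack_data`, `pred_n` and `postprocess` are assumptions standing in for whatever the real VPDataset/VPModel interfaces provide; the dummy classes exist only to make the sketch runnable.

```python
import numpy as np

def get_vis_from_model(dataset, data, model, data_unpack_config, pred_frames):
    """Sketch of the predict-then-postprocess flow described above.

    The dataset/model method names used here are assumptions, not the
    actual vp_suite API.
    """
    input_seq = dataset.unpack_data(data, **data_unpack_config)  # assumed helper
    pred_seq, _ = model.pred_n(input_seq, pred_frames)           # assumed API
    # postprocessing turns model-space tensors into displayable uint8 frames
    return dataset.postprocess(input_seq), dataset.postprocess(pred_seq)

class DummyDataset:
    """Hypothetical stand-in for a VPDataset."""
    def unpack_data(self, data, **cfg):
        return data["frames"]
    def postprocess(self, seq):
        # map [0, 1] floats to uint8 images
        return (np.clip(seq, 0.0, 1.0) * 255).astype(np.uint8)

class DummyModel:
    """Hypothetical stand-in for a VPModel: repeats the last input frame."""
    def pred_n(self, x, pred_frames):
        return np.repeat(x[-1:], pred_frames, axis=0), None
```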
- save_arr_hist(diff, diff_id)
Given a numpy array containing values, creates and saves a histogram figure showing the distribution of values within the array, along with their minimum, maximum and mean.
- Parameters
diff (np.ndarray) – Input array containing the values to visualize.
diff_id (int) – An id used in the save name.
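A minimal sketch of such a histogram export with matplotlib is shown below. The output filename pattern and the bin count are assumptions; only the overall shape (histogram plus min/max/mean annotation) follows the description above.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so saving works without a display
import matplotlib.pyplot as plt

def save_arr_hist(diff, diff_id):
    """Save a histogram of the values in `diff`, annotated with min/max/mean.

    The filename pattern "diff_<id>.png" is an assumption for this sketch.
    """
    fig, ax = plt.subplots()
    ax.hist(diff.flatten(), bins=50)
    ax.set_title(f"min={diff.min():.3f}, max={diff.max():.3f}, "
                 f"mean={diff.mean():.3f}")
    out_fn = f"diff_{diff_id}.png"
    fig.savefig(out_fn)
    plt.close(fig)  # free the figure to avoid leaking memory in loops
    return out_fn
```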
- save_frame_compare_img(out_filename, context_frames, ground_truth_vis, preds_vis, vis_context_frame_idx)
Given a ground truth frame sequence and one or more prediction sequences, creates and saves a large image file that displays the ground truth sequence in the first row and each prediction in a row below it (for the predictions, only the predicted frames are put into the graphic). Only the input frames selected by vis_context_frame_idx are put onto the visualization image.
- Parameters
out_filename (str) – Output filename.
context_frames (int) – Number of input/context frames.
ground_truth_vis (np.ndarray) – The ground truth frame sequence.
preds_vis (List[np.ndarray]) – The predicted frame sequences.
vis_context_frame_idx (Iterable[int]) – A list of indices for the context frames. For the ground truth row, only these context frames will be displayed to unclutter the visualization.
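The grid layout described above can be sketched with plain numpy concatenation. This is an illustrative helper, not the library function: the name frame_compare_grid, the blank cells under the context columns, and the assumption that each prediction sequence is full-length (so predicted frames are taken from index context_frames onward) are all choices made for this sketch.

```python
import numpy as np

def frame_compare_grid(context_frames, ground_truth_vis, preds_vis,
                       vis_context_frame_idx):
    """Assemble a comparison image of shape [rows*H, cols*W, 3].

    Row 0: the selected ground-truth context frames, then all ground-truth
    predicted frames. Each following row: blank cells under the context
    columns, then that prediction's predicted frames.
    """
    H, W, C = ground_truth_vis.shape[1:]
    blank = np.zeros((H, W, C), dtype=ground_truth_vis.dtype)
    gt_row = ([ground_truth_vis[i] for i in vis_context_frame_idx]
              + list(ground_truth_vis[context_frames:]))
    rows = [np.concatenate(gt_row, axis=1)]
    n_ctx_cols = len(vis_context_frame_idx)
    for pred in preds_vis:
        row = [blank] * n_ctx_cols + list(pred[context_frames:])
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)
```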
- save_vid_vis(out_fp, context_frames, mode='gif', **trajs)
Assembles a video file that displays all given visualizations side-by-side, with a border around each visualization that denotes whether an input or a predicted frame is displayed. Depending on the specified save mode, the resulting visualization file may differ.
- Parameters
out_fp (str) – Where to save the visualization.
context_frames (int) – Number of context frames (needed to add suitable borders around the videos)
mode (str) – A string specifying the save mode: mp4 (uses openCV) vs. gif (uses matplotlib, adds titles to the video visualizations)
**trajs (Any) – Any number of videos (keyword becomes video vis title in the coloring/visualization process)
- visualize_sequences(dataset, context_frames, pred_frames, models, device, out_path, vis_idx, vis_context_frame_idx, vis_vid_mode)
Extracts certain data points from the given dataset, uses each given model to obtain predictions for these sequences, and saves visualizations of the ground truth/predicted sequences. Also creates a large graphic comparing the visualizations of the different models against the ground truth sequence using save_frame_compare_img().
- Parameters
dataset (VPDataset) – The dataset the data is taken from.
context_frames (int) – Number of input/context frames.
pred_frames (int) – Number of frames to predict.
models (Iterable[VPModel]) – The prediction models.
device (str) – The device that should be used for visualization creation (GPU vs. CPU).
out_path (Path) – A path object containing the directory where the visualizations should be saved.
vis_idx (Iterable[int]) – An iterable of dataset indices which should be used to obtain the data points that should be used for vis.
vis_context_frame_idx (Iterable[int]) – A list of indices for the context frames. For the ground truth row, only these context frames will be displayed to unclutter the visualization.
vis_vid_mode (str) – A string specifying the save mode: mp4 (uses openCV) vs. gif (uses matplotlib, adds titles to the video visualizations)
- visualize_vid(dataset, context_frames, pred_frames, model, device, out_path, vis_idx, vis_mode)
Extracts certain data points from the given dataset, uses the given model to obtain predictions for these sequences, and saves visualizations of the ground truth/predicted sequences.
- Parameters
dataset (VPDataset) – The dataset the data is taken from.
context_frames (int) – Number of input/context frames.
pred_frames (int) – Number of frames to predict.
model (VPModel) – The prediction model.
device (str) – The device that should be used for visualization creation (GPU vs. CPU).
out_path (Path) – A path object containing the directory where the visualizations should be saved.
vis_idx (Iterable[int]) – An iterable of dataset indices which should be used to obtain the data points that should be used for vis.
vis_mode (str) – A string specifying the save mode: mp4 (uses openCV) vs. gif (uses matplotlib, adds titles to the video visualizations)