flowvision.utils

Useful utilities for deep learning tasks

class flowvision.utils.AverageMeter[source]

Computes and stores the average and current value
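
A minimal usage sketch, assuming the conventional timm-style interface this meter follows (an update(val, n=1) method plus .val and .avg attributes, which are not documented above):

    from flowvision.utils import AverageMeter

    losses = AverageMeter()
    for batch_loss in [0.9, 0.7, 0.6]:
        # record the current value; n is the number of samples it covers
        losses.update(batch_loss, n=32)
    print(losses.val)  # most recent value recorded
    print(losses.avg)  # running average over all updates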

class flowvision.utils.ModelEmaV2(model, decay=0.9999, device=None)[source]

Model Exponential Moving Average V2 borrowed from: https://github.com/rwightman/pytorch-image-models/blob/master/timm/utils/model_ema.py

Keep a moving average of everything in the model state_dict (parameters and buffers). V2 of this module is simpler; it does not match params/buffers based on name but simply iterates in order.

This is intended to allow functionality like https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage

A smoothed version of the weights is necessary for some training schemes to perform well. E.g. Google’s hyper-params for training MNASNet, MobileNet-V3, EfficientNet, etc. that use RMSprop with a short 2.4-3 epoch decay period and slow LR decay rate of .96-.99 require EMA smoothing of weights to match results. Pay attention to the decay constant you are using relative to your update count per epoch.

To keep EMA from using GPU resources, set device=‘cpu’. This will save a bit of memory but disable validation of the EMA weights. Validation will have to be done manually in a separate process, or after the training stops converging.

This class is sensitive to where it is initialized in the sequence of model init, GPU assignment, and distributed training wrappers.
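
A training-loop sketch, assuming the interface of the timm implementation this class is borrowed from (an update(model) method, with the averaged copy exposed as .module):

    import oneflow as flow
    from flowvision.utils import ModelEmaV2

    model = flow.nn.Linear(10, 2)
    optimizer = flow.optim.SGD(model.parameters(), lr=0.1)
    # create the EMA wrapper after the model is on its final device,
    # but before any distributed training wrappers (see the note above)
    model_ema = ModelEmaV2(model, decay=0.9999)

    for step in range(100):
        x = flow.randn(8, 10)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        model_ema.update(model)  # refresh the smoothed weights after each step

    # validate with the smoothed copy rather than the raw model
    ema_model = model_ema.module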

flowvision.utils.accuracy(output, target, topk=(1,))[source]

Computes the accuracy over the k top predictions for the specified values of k
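
A short sketch, assuming this mirrors the timm helper of the same name and returns one accuracy value per requested k:

    import oneflow as flow
    from flowvision.utils import accuracy

    output = flow.randn(16, 1000)          # logits for 16 samples, 1000 classes
    target = flow.randint(0, 1000, (16,))  # ground-truth class indices
    acc1, acc5 = accuracy(output, target, topk=(1, 5))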

flowvision.utils.dispatch_clip_grad(parameters, value: float, mode: str = 'norm', norm_type: float = 2.0)[source]

Dispatch to the gradient clipping method selected by mode

Parameters
  • parameters (Iterable) – model parameters to clip

  • value (float) – clipping value/factor/norm, mode dependent

  • mode (str) – clipping mode, one of ‘norm’, ‘value’, ‘agc’

  • norm_type (float) – p-norm, default 2.0
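
A minimal sketch of where the call sits in a training step (after backward, before the optimizer step):

    import oneflow as flow
    from flowvision.utils import dispatch_clip_grad

    model = flow.nn.Linear(10, 2)
    loss = model(flow.randn(4, 10)).sum()
    loss.backward()
    # clip the global 2-norm of all gradients to at most 1.0
    dispatch_clip_grad(model.parameters(), value=1.0, mode='norm')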

flowvision.utils.make_grid(tensor: Union[oneflow.Tensor, List[oneflow.Tensor]], nrow: int = 8, padding: int = 2, normalize: bool = False, range: Optional[Tuple[int, int]] = None, scale_each: bool = False, pad_value: int = 0) → oneflow.Tensor[source]

Make a grid of images.

Parameters
  • tensor (Tensor or list) – 4D mini-batch Tensor of shape (B x C x H x W) or a list of images all of the same size.

  • nrow (int, optional) – Number of images displayed in each row of the grid. The final grid size is (B / nrow, nrow). Default: 8.

  • padding (int, optional) – amount of padding. Default: 2.

  • normalize (bool, optional) – If True, shift the image to the range (0, 1) using the min and max values specified by range. Default: False.

  • range (tuple, optional) – tuple (min, max), where min and max are numbers; these are used to normalize the image. By default, min and max are computed from the tensor.

  • scale_each (bool, optional) – If True, scale each image in the batch of images separately rather than the (min, max) over all images. Default: False.

  • pad_value (float, optional) – Value for the padded pixels. Default: 0.

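Example: a minimal sketch using a random float batch:

    import oneflow as flow
    from flowvision.utils import make_grid

    images = flow.rand(16, 3, 32, 32)  # a mini-batch of 16 RGB images
    # tile the batch into a 2-row grid (16 images with nrow=8), scaled to (0, 1)
    grid = make_grid(images, nrow=8, padding=2, normalize=True)
    print(grid.shape)  # a single (C, H, W) tensor containing the grid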

flowvision.utils.save_image(tensor: Union[oneflow.Tensor, List[oneflow.Tensor]], fp: Union[str, pathlib.Path, BinaryIO], nrow: int = 8, padding: int = 2, normalize: bool = False, range: Optional[Tuple[int, int]] = None, scale_each: bool = False, pad_value: int = 0, format: Optional[str] = None) → None[source]

Save a given Tensor into an image file.

Parameters
  • tensor (Tensor or list) – Image to be saved. If given a mini-batch tensor, saves the tensor as a grid of images by calling make_grid.

  • fp (string or file object) – A filename or a file object

  • format (Optional) – If omitted, the format to use is determined from the filename extension. If a file object was used instead of a filename, this parameter should always be used.

  • Other parameters – The grid-layout arguments (nrow, padding, normalize, range, scale_each, pad_value) are documented in make_grid.
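
Example: a minimal sketch; the output format is inferred from the .png extension:

    import oneflow as flow
    from flowvision.utils import save_image

    images = flow.rand(16, 3, 32, 32)  # a mini-batch of 16 RGB images
    # the batch is tiled into a grid via make_grid, then written to disk
    save_image(images, 'samples.png', nrow=8, normalize=True)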