v0.4.0

@dfalbel released this 17 Apr 09:56

Breaking changes

  • drop_last=TRUE is now the default for training dataloaders created by luz (e.g., when you pass a list or a torch dataset as the data input); see the sketch after this list for restoring the previous behaviour. (#117)
  • The default profile callback no longer tracks intra-step timings, as doing so added non-negligible overhead. (#125)
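
Because drop_last=TRUE now discards a final incomplete training batch, the old behaviour can be restored by building the dataloader yourself and passing it to fit(). A minimal sketch, assuming a simple regression setup (the module and data below are placeholders, not part of the release):

```r
library(torch)
library(luz)

# Placeholder module: a single linear layer.
net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) {
    self$fc(x)
  }
)

# Placeholder dataset with 100 observations.
ds <- tensor_dataset(x = torch_randn(100, 10), y = torch_randn(100, 1))

# drop_last = FALSE keeps the final, smaller batch; passing a ready-made
# dataloader means luz does not apply its new default.
dl <- dataloader(ds, batch_size = 32, shuffle = TRUE, drop_last = FALSE)

fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(dl, epochs = 1)
```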

New features

  • Added support for ARM Macs and the MPS device. (#104)
  • Refactored checkpointing in luz: we now also serialize the optimizer and callback states. (#107)
  • Added luz_callback_autoresume(), which makes it easy to resume training runs that might have crashed (see the sketch after this list). (#107)
  • Added luz_callback_resume_from_checkpoint(), allowing one to resume a training run from a checkpoint file. (#107)
  • Users can now choose whether metrics should be computed on both training and validation,
    only training, or only validation. See luz_metric_set() and the sketch after this list. (#112)
  • Improved how errors raised in user code (e.g. while calling metrics or callbacks)
    are reported. This helps a lot when debugging errors in callbacks and metrics. (#112)
  • loss_fn is now a field of the context, so callbacks can override it when needed. (#112)
  • luz_callback_mixup now supports the run_valid and auto_loss arguments. (#112)
  • ctx now aliases the default opt and opt_name when a single optimizer is specified (i.e., most cases). (#114)
  • Added a tfevents callback for logging the loss and recording weight histograms. (#118)
  • You can now specify metrics to be evaluated during evaluate() (see the sketch after this list). (#123)
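
A rough sketch of per-split metric selection with luz_metric_set(); the module and dataloaders are the same kind of placeholders as above, and the exact argument names should be checked against ?luz_metric_set:

```r
library(luz)

# MAE is tracked on both training and validation; RMSE only on validation.
metrics <- luz_metric_set(
  metrics = list(luz_metric_mae()),
  valid_metrics = list(luz_metric_rmse())
)

fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam, metrics = metrics) %>%
  fit(train_dl, epochs = 5, valid_data = valid_dl)
```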
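For the new resuming callbacks, a hedged sketch: the path arguments and file names below are illustrative, not canonical defaults.

```r
library(luz)

# luz_callback_autoresume() keeps its own state; if the R process dies and the
# same script is re-run, training continues from the last completed epoch
# instead of starting over.
fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(
    train_dl,
    epochs = 50,
    callbacks = list(luz_callback_autoresume(path = "./training-state/"))
  )

# luz_callback_resume_from_checkpoint() instead starts a run from an explicit
# checkpoint file, e.g. one written by luz_callback_model_checkpoint().
fitted2 <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(
    train_dl,
    epochs = 10,
    callbacks = list(luz_callback_resume_from_checkpoint("checkpoints/epoch-50.pt"))
  )
```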
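And the evaluate() side of the metrics change, with test_dl a placeholder test dataloader:

```r
# Metrics passed here are computed on the evaluation data in addition to the
# ones defined in setup().
evaluation <- fitted %>%
  evaluate(data = test_dl, metrics = list(luz_metric_mae()))

get_metrics(evaluation)
```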

Bug fixes

  • The accelerator's cpu argument is now always respected. (#119)
  • Handled rlang and ggplot2 deprecations. (#120)
  • Better handling of metrics environments.
  • Faster garbage collection of dataloader iterators, so we use less memory. (#122)
  • Much faster loss averaging at every step. This can have a large impact on training time when there are many iterations per epoch. (#124)