
Releases: tum-pbs/PhiML

1.11.0

19 Dec 17:16

This release adds official support for custom dataclasses in the phiml.dataclasses module.

Highlights

  • Dataclass decorators @sliceable, @data_eq
  • Replacing some fields of a dataclass instance keeps all unaffected caches from @cached_property
  • Dataclasses no longer need to handle None values in comparisons when used in JIT or other tracing.
  • Shorthand functions *max, *min, *prod, *pack for specific dim types
  • New syntax for renaming item names while slicing, e.g. tensor['y,x->a,b'] selects the slices y and x and renames them to a and b.
  • Add squeeze to remove singleton dims
  • Improve formatting of sparse matrices
  • clip() now accepts Shape for the upper limit
  • Dim packing functions can now also be used for unpacking when passing one input and multiple output dims.

1.10.0

26 Nov 22:28

This release adds a lot of new functionality and pushes the new paradigm of letting users define custom types as @dataclass with derived values as @cached_property.

  • Expanded support for non-uniform tensors and sparse matrices
  • Add tensor transpose shorthands Tensor.Ti, .Tc, .Ts and Tensor.dim.T. The old transpose is now called swap_axes.
  • Add save, load to store any (nested) tensor data using NumPy
  • Add ncat and tcat variants (ccat, icat, scat, dcat)
  • Add random_permutation, pick_random
  • Functions such as slice, max, min now accept range to return the gather indices
  • Add softmax, nan_to_0, counter_intersections
  • Multi-Tensor unpacking in unpack
  • Add Tensor.print
  • Add *sum and *mean functions for the various dimension types
  • Add map_d2b and map_c2d
  • range is now also available as arange
  • Deprecate rotation functions and move to PhiFlow
  • Improved dataclass attribute detection, add experimental dataclass_getitem

Plus tons of bug fixes!

pip install phiml==1.10.0

1.9.3

18 Oct 15:39

Bug fixes, plus:

  • Performance optimization for stack()
  • experimental save/load
  • at_min, at_max now support sparse matrices

1.9.2

07 Oct 13:19

Various smaller bug fixes; map now exposes expand_results.

1.9.0

27 Sep 15:40

Highlights

  • convolve now behaves like matrix multiplication, reducing dual dims of the kernel
  • Tensor @ Tensor can now be used to reduce channel dims in the absence of dual dims
  • Improved support for shape spec strings, concat now supports packing using the syntax t->name:t
  • Multi-dimensional cumulative_sum
  • Improved support for non-uniform and sparse tensors
  • New functions d2s, contains, count_occurrences, Tensor.map(), ravel_index and aliases rotate, cross
  • Shape concatenation via Shape + Shape

1.8.0

07 Sep 10:41

Highlights

  • NumPy 2 compatibility
  • Tensor.numpy() and .native() now support dim packing
  • wrap() and tensor() now support shape spec strings, e.g. 'example:b,(x,y,z)'
  • Compact sparse tensors can now be created using sparse_tensor (experimental)
  • Support for SVD and eigenvalues
  • Shorthand notation dim in Tensor
  • Various improvements for sparse tensors
  • Support save/load on Stax nets
  • Added tensor.T to transpose a tensor. This switches primal/dual dims.
  • Added functions ravel_index, d2i and aliases length, rand, randn.
  • Shapes can now be stacked using stack
  • unpack_dim can now be used with non-uniform targets

1.7.4

14 Aug 17:58

NumPy 2.0 fixes, improved support for sparse tensors.

pip install phiml==1.7.4

1.7.2

11 Aug 21:37

Fixes NumPy compatibility and other bug fixes, adds convenience features.

1.7.1

03 Aug 18:15

Bug fixes, improved CompactSparseTensor support.

1.7.0

13 Jul 15:36

New features

  • Sparse SciPy and ML tensors can now be wrapped like regular tensors.
  • Shorthand shape & dual to add corresponding dual dims
  • Added experimental compact sparse tensor
  • Removing dims from a Shape can now be done using the - operator.
  • Generic type conversion via Shape.as_type().

Improvements

  • scatter() is now more flexible with treat_as_batch argument.
  • Improvements to linear tracing. Improved rank deficiency detection. Linear solves will only use matrix_offset if confirmed by user.
  • minimum and maximum can now be used with None values.
  • Stacked trees may now include None values.
  • reshaped_native() and reshaped_numpy() now support () / None for singleton dims