Releases: tum-pbs/PhiML
1.11.0
This release adds official support for custom dataclasses in the `phiml.dataclasses` module.
Highlights
- Dataclass decorators `@sliceable`, `@data_eq`
- Replacing some fields of a dataclass instance keeps all unaffected caches from `@cached_property`
- Dataclasses no longer need to handle `None` values in comparisons when used in JIT or other tracing.
- Shorthand functions `*max`, `*min`, `*prod`, `*pack` for specific dim types
- New syntax for item name renaming while slicing, e.g. `tensor['y,x->a,b']` selects the slices x and y and renames them to a and b.
- Add `squeeze` to remove singleton dims
- Improve formatting of sparse matrices
- `clip()` now accepts `Shape` for the upper limit
- Dim packing functions can now also be used for unpacking when passing one input and multiple output dims.
1.10.0
This release adds a lot of new functionality and pushes the new paradigm of letting users define custom types as `@dataclass` with `@cached_property` derived values.
- Expanded support for non-uniform tensors and sparse matrices
- Add tensor transpose shorthands `Tensor.Ti`, `.Tc`, `.Ts` and `Tensor.dim.T`. The old `transpose` is now called `swap_axes`.
- Add `save`, `load` to store any (nested) tensor data using NumPy
- Add `ncat` and `tcat` variants (`ccat`, `icat`, `scat`, `dcat`)
- Add `random_permutation`, `pick_random`
- Functions such as `slice`, `max`, `min` now accept `range` to return the gather indices
- Add `softmax`, `nan_to_0`, `counter_intersections`
- Multi-Tensor unpacking in `unpack`
- Add `Tensor.print`
- Add `*sum` and `*mean` functions for the various dimension types
- Add `map_d2b` and `map_c2d`
- `range` is now also available as `arange`
- Deprecate rotation functions and move them to PhiFlow
- Improved dataclass attribute detection, add experimental `dataclass_getitem`
Plus tons of bug fixes!
pip install phiml==1.10.0
1.9.3
1.9.2
1.9.0
Highlights
- `convolve` now behaves like matrix multiplication, reducing dual dims of the kernel
- `Tensor @ Tensor` can now be used to reduce channel dims in the absence of dual dims
- Improved support for shape spec strings, `concat` now supports packing using the syntax `t->name:t`
- Multi-dimensional `cumulative_sum`
- Improved support for non-uniform and sparse tensors
- New functions `d2s`, `contains`, `count_occurrences`, `Tensor.map()`, `ravel_index` and aliases `rotate`, `cross`
- `Shape` concatenation via `Shape + Shape`
1.8.0
Highlights
- NumPy 2 compatibility
- `Tensor.numpy()` and `.native()` now support dim packing
- `wrap()` and `tensor()` now support shape spec strings, e.g. `'example:b,(x,y,z)'`
- Compact sparse tensors can now be created using `sparse_tensor` (experimental)
- Support for SVD and eigenvalues
- Shorthand notation `dim in Tensor`
- Various improvements for sparse tensors
- Support save/load on Stax nets
- Added `Tensor.T` to transpose a tensor. This switches primal/dual dims.
- Added functions `ravel_index`, `d2i` and aliases `length`, `rand`, `randn`.
- Shapes can now be stacked using `stack`
- `unpack_dim` can now be used with non-uniform targets
1.7.4
1.7.2
1.7.1
1.7.0
New features
- Sparse SciPy and ML tensors can now be wrapped like regular tensors.
- Shorthand `shape & dual` to add corresponding dual dims
- Added experimental compact sparse tensors
- Removing dims from a `Shape` can now be done using the `-` operator.
- Generic type conversion via `Shape.as_type()`.
Improvements
- `scatter()` is now more flexible with the `treat_as_batch` argument.
- Improvements to linear tracing: improved rank deficiency detection; linear solves only use `matrix_offset` if confirmed by the user.
- `minimum` and `maximum` can now be used with `None` values.
- Stacked trees may now include `None` values.
- `reshaped_native()` and `reshaped_numpy()` now support `()`/`None` for singleton dims