How to Install and Uninstall the python38-opt-einsum Package on openSUSE Tumbleweed
Last updated: December 24, 2024
Deprecated! Installation of this package may no longer be supported.
1. Install "python38-opt-einsum" package
In this section, we explain the steps required to install the python38-opt-einsum package on openSUSE Tumbleweed.
$ sudo zypper refresh
$ sudo zypper install python38-opt-einsum
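Once the installation finishes, you can check that the module is importable. This is an optional verification step, not part of the zypper workflow; it assumes the interpreter is named python3.8 on your system, so adjust the command if yours differs.

```shell
# Hypothetical check: print the installed opt_einsum version.
# Assumes the "python3.8" interpreter name; adjust if needed.
python3.8 -c 'import opt_einsum; print(opt_einsum.__version__)'
```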
2. Uninstall "python38-opt-einsum" package
This section covers the steps necessary to uninstall python38-opt-einsum on openSUSE Tumbleweed:
$ sudo zypper remove python38-opt-einsum
3. Information about the python38-opt-einsum package on openSUSE Tumbleweed
Information for package python38-opt-einsum:
--------------------------------------------
Repository : openSUSE-Tumbleweed-Oss
Name : python38-opt-einsum
Version : 3.3.0-2.2
Arch : noarch
Vendor : openSUSE
Installed Size : 586.3 KiB
Installed : No
Status : not installed
Source package : python-opt-einsum-3.3.0-2.2.src
Summary : Optimizing numpys einsum function
Description :
Optimized einsum can significantly reduce the overall execution time of einsum-like expressions (e.g.,
`np.einsum`, `dask.array.einsum`, `pytorch.einsum`, `tensorflow.einsum`)
by optimizing the expression's contraction order and dispatching many
operations to canonical BLAS, cuBLAS, or other specialized routines. Optimized
einsum is agnostic to the backend and can handle NumPy, Dask, PyTorch,
Tensorflow, CuPy, Sparse, Theano, JAX, and Autograd arrays as well as potentially
any library which conforms to a standard API. See the
[**documentation**](http://optimized-einsum.readthedocs.io) for more
information.
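To illustrate what the package description means by optimizing contraction order: NumPy's own np.einsum exposes the same kind of path optimization through its optimize flag (available since NumPy 1.12), so the idea can be sketched without importing opt_einsum itself. The array shapes below are arbitrary examples.

```python
# Sketch of contraction-order optimization, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((8, 32))
b = rng.random((32, 64))
c = rng.random((64, 8))

# Naive left-to-right evaluation versus an optimized contraction order.
naive = np.einsum("ij,jk,kl->il", a, b, c)
fast = np.einsum("ij,jk,kl->il", a, b, c, optimize=True)

# Both give the same result; only the order of intermediate
# contractions (and hence the cost) differs.
print(np.allclose(naive, fast))  # True
```

With opt-einsum installed, the equivalent call is `opt_einsum.contract("ij,jk,kl->il", a, b, c)`, which additionally dispatches to the appropriate backend (NumPy, Dask, PyTorch, and so on) based on the array types.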