How to Install and Uninstall the python310-dask-diagnostics Package on openSUSE Tumbleweed
Last updated: November 23, 2024
1. Install "python310-dask-diagnostics" package
Please follow the instructions below to install python310-dask-diagnostics on openSUSE Tumbleweed:
$ sudo zypper refresh
$ sudo zypper install python310-dask-diagnostics
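To confirm the installation succeeded, you can query zypper for the package status or try importing the module directly (a quick sanity check; the exact output format may vary between zypper versions):

```shell
# Check the package status; "Installed : Yes" indicates success
zypper info python310-dask-diagnostics

# Confirm the module is importable under Python 3.10
python3.10 -c "import dask.diagnostics; print('ok')"
```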
2. Uninstall "python310-dask-diagnostics" package
Please follow the step-by-step instructions below to uninstall python310-dask-diagnostics on openSUSE Tumbleweed:
$ sudo zypper remove python310-dask-diagnostics
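If installing the package pulled in dependencies that nothing else on the system needs, zypper can remove those at the same time with the `--clean-deps` option:

```shell
# Remove the package together with its now-unneeded automatic dependencies
sudo zypper remove --clean-deps python310-dask-diagnostics
```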
3. Information about the python310-dask-diagnostics package on openSUSE Tumbleweed
Information for package python310-dask-diagnostics:
---------------------------------------------------
Repository : openSUSE-Tumbleweed-Oss
Name : python310-dask-diagnostics
Version : 2024.2.1-1.1
Arch : noarch
Vendor : openSUSE
Installed Size : 68.0 KiB
Installed : No
Status : not installed
Source package : python-dask-2024.2.1-1.1.src
Upstream URL : https://dask.org
Summary : Diagnostics for dask
Description :
A flexible library for parallel computing in Python.
Dask is composed of two parts:
- Dynamic task scheduling optimized for computation. This is similar to
Airflow, Luigi, Celery, or Make, but optimized for interactive
computational workloads.
- “Big Data” collections like parallel arrays, dataframes, and lists that
extend common interfaces like NumPy, Pandas, or Python iterators to
larger-than-memory or distributed environments. These parallel collections
run on top of dynamic task schedulers.
This package contains the dask.diagnostics module
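As a quick illustration of what the package provides: the dask.diagnostics module includes, among other tools, a ProgressBar callback that reports progress of local computations. A minimal sketch, assuming dask and this package are installed:

```python
import dask
from dask.diagnostics import ProgressBar  # provided by this package

@dask.delayed
def double(x):
    # A trivial task so the graph has something to schedule
    return x * 2

# Build a small lazy task graph: sum of 0*2, 1*2, ..., 4*2
total = dask.delayed(sum)([double(i) for i in range(5)])

# The ProgressBar prints a live progress line while the graph runs
with ProgressBar():
    result = total.compute()

print(result)  # prints 20
```

The same module also offers Profiler and ResourceProfiler context managers for more detailed per-task timing and memory statistics.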