How to Install and Uninstall the python39-dask Package on openSUSE Tumbleweed
Last updated: November 26, 2024
1. Install "python39-dask" package
Follow the step-by-step instructions below to install python39-dask on openSUSE Tumbleweed.
$ sudo zypper refresh
$ sudo zypper install python39-dask
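After the installation completes, you can check that the module is importable under Python 3.9. The one-liner below is a quick sanity check and assumes the interpreter is installed as python3.9:
$ python3.9 -c "import dask; print(dask.__version__)"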
2. Uninstall "python39-dask" package
This section shows how to uninstall python39-dask from openSUSE Tumbleweed:
$ sudo zypper remove python39-dask
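By default, zypper removes only the named package. If you also want to drop dependencies that were installed solely for python39-dask and are no longer needed, the remove command accepts the --clean-deps option:
$ sudo zypper remove --clean-deps python39-dask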
3. Information about the python39-dask package on openSUSE Tumbleweed
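The details below can be reproduced at any time with zypper's info command:
$ zypper info python39-dask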
Information for package python39-dask:
--------------------------------------
Repository : openSUSE-Tumbleweed-Oss
Name : python39-dask
Version : 2024.2.1-1.1
Arch : noarch
Vendor : openSUSE
Installed Size : 1.6 MiB
Installed : No
Status : not installed
Source package : python-dask-2024.2.1-1.1.src
Upstream URL : https://dask.org
Summary : Minimal task scheduling abstraction
Description :
A flexible library for parallel computing in Python.
Dask is composed of two parts:
- Dynamic task scheduling optimized for computation. This is similar to
Airflow, Luigi, Celery, or Make, but optimized for interactive
computational workloads.
- “Big Data” collections like parallel arrays, dataframes, and lists that
extend common interfaces like NumPy, Pandas, or Python iterators to
larger-than-memory or distributed environments. These parallel collections
run on top of dynamic task schedulers.
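The task scheduling part of the description can be illustrated with a few lines of Python. The sketch below uses dask.delayed, dask's lazy task scheduling interface; depending on how the distribution splits dask into subpackages, it may require an additional dask subpackage beyond the base python39-dask.

from dask import delayed

@delayed
def increment(x):
    # An ordinary Python function, wrapped as a lazy task.
    return x + 1

@delayed
def add(a, b):
    return a + b

# Calling the wrapped functions only records a task graph; nothing runs yet.
total = add(increment(1), increment(2))

# compute() hands the graph to a dask scheduler, which executes the tasks.
print(total.compute())  # prints 5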