How to Install and Uninstall the python38-datashader Package on openSUSE Tumbleweed

Last updated: April 28, 2024

1. Install "python38-datashader" package

This section covers the steps needed to install python38-datashader on openSUSE Tumbleweed:

$ sudo zypper refresh
$ sudo zypper install python38-datashader
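
To confirm the installation, a quick check from the Python 3.8 interpreter is usually enough. The snippet below assumes the package installs the importable module "datashader" for that interpreter (run it with python3.8):

import datashader
print(datashader.__version__)  # expected to match the package version listed below (0.12.1)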

2. Uninstall "python38-datashader" package

This section shows how to uninstall python38-datashader on openSUSE Tumbleweed:

$ sudo zypper remove python38-datashader

3. Information about the python38-datashader package on openSUSE Tumbleweed

Information for package python38-datashader:
--------------------------------------------
Repository : openSUSE-Tumbleweed-Oss
Name : python38-datashader
Version : 0.12.1-1.4
Arch : noarch
Vendor : openSUSE
Installed Size : 22.6 MiB
Installed : No
Status : not installed
Source package : python-datashader-0.12.1-1.4.src
Summary : Data visualization toolchain based on aggregating into a grid
Description :
Traditional visualization systems treat plotting as a unitary process
transforming incoming data into an onscreen or printed image, with
parameters that can be specified beforehand that affect the final
result. While this approach works for small collections of data that
can be viewed in their entirety, for large datasets the visualization
is often the only way to understand what the data consists of, and
there is no objective way to set the parameters in advance to reveal it.
The datashader library breaks up the rendering pipeline into a series
of stages where user-defined computations can be performed, allowing
the visualization to adapt to and reveal the underlying properties of
the dataset, i.e. the datashader pipeline allows computation *on
the visualization*, not just on the dataset, allowing it to do
automatic ranging and scaling that takes the current visualization
constraints into account. For instance, where a traditional system
would use a transparency/opacity parameter to show the density of
overlapping points in a scatterplot, datashader can automatically
calculate how many datapoints are mapped to each pixel, scaling the
representation to accurately convey the data at every location, with no
saturation, overplotting, or underplotting issues.
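
For context, here is a minimal sketch of that pipeline in Python. It assumes datashader 0.12.1 and pandas are installed; the dataset is synthetic and purely illustrative, and the file name is arbitrary.

import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

# Build a toy dataset of one million scattered points (illustrative only).
n = 1_000_000
df = pd.DataFrame({
    "x": np.random.standard_normal(n),
    "y": np.random.standard_normal(n),
})

# Aggregate the points into a fixed-size grid: each cell holds the
# count of datapoints that fall into the corresponding pixel.
canvas = ds.Canvas(plot_width=400, plot_height=400)
agg = canvas.points(df, "x", "y")

# Shade the aggregate into an image; the color scaling is derived from
# the per-pixel counts, so dense regions do not saturate.
img = tf.shade(agg, how="log")
img.to_pil().save("points.png")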