How to Install and Uninstall the python310-w3lib Package on openSUSE Tumbleweed
Last updated: December 25, 2024
1. Install "python310-w3lib" package
Here is a brief guide to show you how to install python310-w3lib on openSUSE Tumbleweed:

First refresh the repository metadata, then install the package:

$ sudo zypper refresh
$ sudo zypper install python310-w3lib
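As an optional sanity check (not part of zypper itself), you can confirm from Python that the module is now on the import path. This assumes you run it with the interpreter the package targets, e.g. the distribution's python3.10:

```python
# Optional post-install check: see whether the w3lib module can be located
# on the import path, without fully importing it.
import importlib.util

spec = importlib.util.find_spec("w3lib")
print("w3lib is importable" if spec else "w3lib is not importable")
```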
2. Uninstall "python310-w3lib" package
Here is a brief guide to show you how to uninstall python310-w3lib on openSUSE Tumbleweed:
$ sudo zypper remove python310-w3lib
3. Information about the python310-w3lib package on openSUSE Tumbleweed
Information for package python310-w3lib:
----------------------------------------
Repository : openSUSE-Tumbleweed-Oss
Name : python310-w3lib
Version : 2.1.2-1.4
Arch : noarch
Vendor : openSUSE
Installed Size : 99.7 KiB
Installed : No
Status : not installed
Source package : python-w3lib-2.1.2-1.4.src
Upstream URL : https://github.com/scrapy/w3lib
Summary : Library of Web-Related Functions
Description :
This is a Python library of web-related functions, such as:
* remove comments, or tags from HTML snippets
* extract base url from HTML snippets
* translate entities on HTML strings
* encoding multipart/form-data
* convert raw HTTP headers to dicts and vice-versa
* construct HTTP auth header
* converting HTML pages to unicode
* RFC-compliant url joining
* sanitize urls (like browsers do)
* extract arguments from urls
----------------------------------------
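To illustrate two of the functions listed in the description ("convert raw HTTP headers to dicts" and "construct HTTP auth header"), here is a minimal stdlib-only sketch of what such helpers do. It mirrors the general behavior of w3lib's `headers_raw_to_dict` and `basic_auth_header`, but it is an illustrative approximation, not the library's own code:

```python
# Stdlib-only sketch of two helpers in the spirit of w3lib.http;
# illustrative only, not w3lib's actual implementation.
from base64 import b64encode

def raw_headers_to_dict(headers_raw: bytes) -> dict:
    """Parse raw HTTP header lines into a dict of name -> list of values."""
    headers = {}
    for line in headers_raw.splitlines():
        if b":" not in line:
            continue
        name, _, value = line.partition(b":")
        headers.setdefault(name.strip(), []).append(value.strip())
    return headers

def make_basic_auth_header(username: str, password: str) -> bytes:
    """Build an HTTP Basic authentication header value."""
    credentials = f"{username}:{password}".encode("utf-8")
    return b"Basic " + b64encode(credentials)

raw = b"Content-Type: text/html\r\nContent-Length: 42\r\n"
print(raw_headers_to_dict(raw))
# {b'Content-Type': [b'text/html'], b'Content-Length': [b'42']}
print(make_basic_auth_header("user", "pass"))
# b'Basic dXNlcjpwYXNz'
```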