How to Install and Uninstall python3-w3lib Package on openSUSE Leap
Last updated: November 26, 2024
1. Install "python3-w3lib" package
This is a short guide on how to install python3-w3lib on openSUSE Leap. First refresh the repository metadata, then install the package:

$ sudo zypper refresh
$ sudo zypper install python3-w3lib
2. Uninstall "python3-w3lib" package
To uninstall python3-w3lib from openSUSE Leap, run:

$ sudo zypper remove python3-w3lib
3. Information about the python3-w3lib package on openSUSE Leap
Information for package python3-w3lib:
--------------------------------------
Repository : Main Repository
Name : python3-w3lib
Version : 1.22.0-bp155.2.12
Arch : noarch
Vendor : openSUSE
Installed Size : 139.1 KiB
Installed : No
Status : not installed
Source package : python-w3lib-1.22.0-bp155.2.12.src
Upstream URL : https://github.com/scrapy/w3lib
Summary : Library of Web-Related Functions
Description :
This is a Python library of web-related functions, such as:
* remove comments or tags from HTML snippets
* extract the base url from HTML snippets
* translate entities in HTML strings
* encode multipart/form-data
* convert raw HTTP headers to dicts and vice versa
* construct HTTP auth headers
* convert HTML pages to unicode
* RFC-compliant url joining
* sanitize urls (like browsers do)
* extract arguments from urls