How to Install and Uninstall python2-w3lib Package on openSUSE Leap
Last updated: November 23, 2024
Deprecated! This package targets Python 2, which has reached end of life, so installation may no longer be supported on current openSUSE Leap releases.
1. Install "python2-w3lib" package
Please follow the step-by-step instructions below to install python2-w3lib on openSUSE Leap:
$ sudo zypper refresh
$ sudo zypper install python2-w3lib
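After installation you can optionally verify that the library is importable. The snippet below is a minimal check run with the system Python 2 interpreter; it is not part of the zypper workflow, and the __version__ attribute is read defensively in case it is absent.

# Optional post-install check: confirm that w3lib is importable by Python 2
# and report where it was loaded from. Save as e.g. check_w3lib.py and run
# with `python2 check_w3lib.py`.
from __future__ import print_function

import w3lib

print("w3lib imported from:", w3lib.__file__)
print("w3lib version:", getattr(w3lib, "__version__", "unknown"))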
2. Uninstall "python2-w3lib" package
Please follow the instructions below to uninstall python2-w3lib on openSUSE Leap:
$ sudo zypper remove python2-w3lib
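If you want to confirm that the library really is gone (an optional check, not part of the original steps), the import below should now fail for the system Python 2 interpreter, unless a copy was also installed another way (for example with pip).

# Optional post-removal check (hypothetical, not part of the zypper workflow):
# after `sudo zypper remove python2-w3lib`, importing w3lib should raise ImportError.
try:
    import w3lib  # noqa: F401
    print("w3lib is still importable - perhaps installed another way (e.g. pip)?")
except ImportError:
    print("w3lib is no longer available to this interpreter")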
3. Information about the python2-w3lib package on openSUSE Leap
Information for package python2-w3lib:
--------------------------------------
Repository : Main Repository
Name : python2-w3lib
Version : 1.22.0-bp153.1.1
Arch : noarch
Vendor : openSUSE
Installed Size : 103.3 KiB
Installed : No
Status : not installed
Source package : python-w3lib-1.22.0-bp153.1.1.src
Summary : Library of Web-Related Functions
Description :
This is a Python library of web-related functions, such as:
* remove comments or tags from HTML snippets
* extract the base URL from HTML snippets
* translate entities in HTML strings
* encode multipart/form-data
* convert raw HTTP headers to dicts and vice versa
* construct HTTP auth headers
* convert HTML pages to Unicode
* RFC-compliant URL joining
* sanitize URLs (like browsers do)
* extract arguments from URLs
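To give a feel for a few of the functions listed above, here is a minimal usage sketch. It assumes the package is installed and uses a handful of w3lib helpers (remove_tags, replace_entities, basic_auth_header, safe_url_string and url_query_parameter); treat it as an illustration, not a complete tour of the API.

# Minimal sketch of a few w3lib helpers mentioned in the package description.
# Single-argument print() calls keep this runnable on Python 2 as well as Python 3.
from w3lib.html import remove_tags, replace_entities
from w3lib.http import basic_auth_header
from w3lib.url import safe_url_string, url_query_parameter

# Remove tags from an HTML snippet and translate entities in an HTML string.
print(remove_tags(u"<p>Hello <b>world</b></p>"))       # Hello world
print(replace_entities(u"Fish &amp; chips"))           # Fish & chips

# Construct an HTTP Basic auth header.
print(basic_auth_header("user", "secret"))             # Basic dXNlcjpzZWNyZXQ= (byte string)

# Sanitize a URL the way browsers do and extract an argument from a URL.
print(safe_url_string(u"http://example.com/a b?q=caf\xe9"))
print(url_query_parameter("http://example.com/?id=42&x=1", "id"))  # 42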