How to Install and Uninstall libstring-tokenizer-perl Package on Ubuntu 16.04 LTS (Xenial Xerus)
Last updated: December 23, 2024
1. Install the "libstring-tokenizer-perl" package
Here is a brief guide showing how to install libstring-tokenizer-perl on Ubuntu 16.04 LTS (Xenial Xerus):
$ sudo apt update
$ sudo apt install libstring-tokenizer-perl
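To verify the installation, you can check the package status with dpkg and confirm that the Perl module loads. These checks are optional and the exact output depends on your system:
$ dpkg -s libstring-tokenizer-perl
$ perl -MString::Tokenizer -e 'print "String::Tokenizer loaded OK\n"'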
2. Uninstall the "libstring-tokenizer-perl" package
This guide shows you how to uninstall libstring-tokenizer-perl on Ubuntu 16.04 LTS (Xenial Xerus):
$ sudo apt remove libstring-tokenizer-perl
$ sudo apt autoclean && sudo apt autoremove
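To confirm the removal, you can query the package status again; once the package has been removed, it should no longer be listed with the installed status (ii):
$ dpkg -l libstring-tokenizer-perl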
3. Information about the libstring-tokenizer-perl package on Ubuntu 16.04 LTS (Xenial Xerus)
Package: libstring-tokenizer-perl
Priority: optional
Section: universe/perl
Installed-Size: 65
Maintainer: Ubuntu Developers
Original-Maintainer: Debian Perl Group
Architecture: all
Version: 0.05-1
Depends: perl
Filename: pool/universe/libs/libstring-tokenizer-perl/libstring-tokenizer-perl_0.05-1_all.deb
Size: 14820
MD5sum: 662eee2773cde4eb7f860678d6baafae
SHA1: 0f26d32d0590cea03d152b132b34a59dff37cae3
SHA256: 2706a596cb3c64eeb547631baf37a3c880a50fea19fa0b4d4efed808efc57e4e
Description-en: simple string tokenizer
String::Tokenizer is a simple string tokenizer which takes a string and splits
it on whitespace. It also optionally takes a string of characters to use as
delimiters, and returns them with the token set as well. This allows for
splitting the string in many different ways.
.
This is a very basic tokenizer, so more complex needs should be either
addressed with a custom-written tokenizer or post-processing of the output
generated by this module. Basically, this will not fill everyone's needs, but
it spans a gap between a simple split / /, $string and the other options that
involve much larger and more complex modules.
.
Also note that this is not a lexical analyzer. Many people confuse
tokenization with lexical analysis. A tokenizer merely splits its input into
specific chunks; a lexical analyzer classifies those chunks. Sometimes these
two steps are combined, but not here.
Description-md5: f418e575fe22d78a7cce08a624b4abb0
Homepage: http://search.cpan.org/dist/String-Tokenizer/
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
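Based on the description above, here is a minimal usage sketch run from the shell. It assumes the new() constructor and getTokens() method documented for String::Tokenizer on CPAN; the input string and the delimiter string "()+*" are only illustrative choices:
$ perl -MString::Tokenizer -e 'my $t = String::Tokenizer->new("(5 + 5) * 10", "()+*"); print join("|", $t->getTokens()), "\n";'
This should print the tokens separated by | characters, with the delimiter characters returned alongside the plain tokens.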