How to Install and Uninstall the perl-Benchmark-Timer Package on openSUSE Tumbleweed
Last updated: November 23, 2024
1. Install "perl-Benchmark-Timer" package
This guide shows you how to install perl-Benchmark-Timer on openSUSE Tumbleweed. Refresh the repository metadata first, then install the package:
$ sudo zypper refresh
$ sudo zypper install perl-Benchmark-Timer
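To verify the installation, you can query the RPM database or check that Perl can load the module. Both checks are optional; the one-liner assumes the system Perl:
$ rpm -q perl-Benchmark-Timer
$ perl -MBenchmark::Timer -e 'print "Benchmark::Timer $Benchmark::Timer::VERSION\n"'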
2. Uninstall "perl-Benchmark-Timer" package
Here is how to uninstall perl-Benchmark-Timer on openSUSE Tumbleweed:
$ sudo zypper remove perl-Benchmark-Timer
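If you also want zypper to remove any dependencies that were pulled in only for this package, you can add the --clean-deps option (an optional variant of the command above):
$ sudo zypper remove --clean-deps perl-Benchmark-Timer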
3. Information about the perl-Benchmark-Timer package on openSUSE Tumbleweed
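The details below are the output of zypper's package query, which you can reproduce with:
$ zypper info perl-Benchmark-Timer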
Information for package perl-Benchmark-Timer:
---------------------------------------------
Repository : openSUSE-Tumbleweed-Oss
Name : perl-Benchmark-Timer
Version : 0.7112-1.23
Arch : noarch
Vendor : openSUSE
Installed Size : 49.1 KiB
Installed : No
Status : not installed
Source package : perl-Benchmark-Timer-0.7112-1.23.src
Upstream URL : http://search.cpan.org/dist/Benchmark-Timer/
Summary : Benchmarking with statistical confidence
Description :
The Benchmark::Timer class allows you to time portions of code
conveniently, as well as benchmark code by allowing timings of repeated
trials. It is perfect for when you need more precise information about the
running time of portions of your code than the Benchmark module will give
you, but don't want to go all out and profile your code.
The methodology is simple; create a Benchmark::Timer object, and wrap
portions of code that you want to benchmark with 'start()' and 'stop()'
method calls. You can supply a tag to those methods if you plan to time
multiple portions of code. If you provide error and confidence values, you
can also use 'need_more_samples()' to determine, statistically, whether you
need to collect more data.
After you have run your code, you can obtain information about the running
time by calling the 'results()' method, or get a descriptive benchmark
report by calling 'report()'. If you run your code over multiple trials,
the average time is reported. This is wonderful for benchmarking
time-critical portions of code in a rigorous way. You can also optionally
choose to skip any number of initial trials to cut down on initial case
irregularities.
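Based on the description above, here is a minimal Perl sketch of how the module is typically used. The sorting workload, the 'sort' tag, and the confidence/error values are illustrative placeholders, not part of the package documentation:

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark::Timer;

# Skip the first trial, then keep sampling until the timing is within
# 5% error at 95% confidence.
my $t = Benchmark::Timer->new(skip => 1, confidence => 95, error => 5);

while ($t->need_more_samples('sort')) {
    $t->start('sort');
    my @sorted = sort { $a <=> $b } map { rand } 1 .. 10_000;  # code under test
    $t->stop('sort');
}

print $t->report;            # descriptive report for every tag
my %results = $t->results;   # tag/result pairs with the timing results

Because timings are grouped by tag, the same object can time several portions of code at once; just pass a different tag name to each start()/stop() pair.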