How to Install and Uninstall libcutlass-dev Package on Kali Linux

Last updated: May 18, 2024

1. Install "libcutlass-dev" package

To install libcutlass-dev on Kali Linux, first refresh the package index, then install the package:

$ sudo apt update
$ sudo apt install libcutlass-dev

2. Uninstall "libcutlass-dev" package

To uninstall libcutlass-dev from Kali Linux, remove the package and then clean up the package cache and any dependencies that are no longer needed:

$ sudo apt remove libcutlass-dev
$ sudo apt autoclean && sudo apt autoremove

3. Information about the libcutlass-dev package on Kali Linux

Package: libcutlass-dev
Source: nvidia-cutlass
Version: 3.1.0+ds-2
Installed-Size: 9599
Maintainer: Debian NVIDIA Maintainers
Architecture: all
Size: 519104
SHA256: 81465202856b539253c0ecad3e7474642c3aa8ef0520720bf03296d4f0912272
SHA1: c8c635ea3f2c700dfa528f8de9f64c0a1ddfb8a7
MD5sum: 5aaaadf2068d8afedbc1cc44ed367b47
Description: CUDA Templates for Linear Algebra Subroutines
CUTLASS is a collection of CUDA C++ template abstractions for implementing
high-performance matrix-matrix multiplication (GEMM) and related computations
at all levels and scales within CUDA. It incorporates strategies for
hierarchical decomposition and data movement similar to those used to implement
cuBLAS and cuDNN. CUTLASS decomposes these "moving parts" into reusable,
modular software components abstracted by C++ template classes. Primitives for
different levels of a conceptual parallelization hierarchy can be specialized
and tuned via custom tiling sizes, data types, and other algorithmic policies.
The resulting flexibility simplifies their use as building blocks within custom
kernels and applications.
.
To support a wide variety of applications, CUTLASS provides extensive support
for mixed-precision computations, providing specialized data-movement and
multiply-accumulate abstractions for half-precision floating point (FP16),
BFloat16 (BF16), Tensor Float 32 (TF32), single-precision floating point
(FP32), FP32 emulation via tensor core instruction, double-precision
floating point (FP64) types, integer data types (4b and 8b), and binary
data types (1b). CUTLASS demonstrates warp-synchronous matrix multiply
operations targeting the programmable, high-throughput Tensor Cores
implemented by NVIDIA's Volta, Turing, Ampere, and Hopper architectures.
.
This is a header-only library.
Description-md5:
Homepage: https://github.com/NVIDIA/cutlass
Tag: devel::library, role::devel-lib
Section: contrib/libdevel
Priority: optional
Filename: pool/contrib/n/nvidia-cutlass/libcutlass-dev_3.1.0+ds-2_all.deb