llama-cpp-cuda on AUR (Arch User Repository)
Last updated: November 24, 2024
1. Install "llama-cpp-cuda" from the AUR with an AUR helper (YAY)
a. Install YAY (https://github.com/Jguer/yay)
$ sudo pacman -S --needed git base-devel && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si
b. Install llama-cpp-cuda on Arch using YAY (verification commands follow at the end of this section)
$ yay -S llama-cpp-cuda
* (Optional) Uninstall llama-cpp-cuda on Arch using YAY
$ yay -Rns llama-cpp-cuda
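After step 1b, you can sanity-check the install with standard query commands (the exact files installed depend on the package's PKGBUILD):
$ yay --version                            # confirm the AUR helper itself is available
$ pacman -Qi llama-cpp-cuda                # show the installed package's metadata
$ pacman -Ql llama-cpp-cuda | grep bin/    # list the binaries the package installed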
2. Manually install "llama-cpp-cuda" via the AUR
a. Ensure you have the base development tools and git installed with:
$ sudo pacman -S --needed base-devel git
b. Clone the llama-cpp-cuda AUR repository locally
$ git clone https://aur.archlinux.org/llama-cpp-cuda.git ~/llama-cpp-cuda
c. Go to the ~/llama-cpp-cuda folder and build and install the package (an update sketch follows after these steps)
$ cd ~/llama-cpp-cuda
$ makepkg -si
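The manual clone can also be reused to update the package later. A minimal sketch, assuming the clone from step 2b is still at ~/llama-cpp-cuda (it is good practice to review the PKGBUILD before each rebuild):
$ cd ~/llama-cpp-cuda
$ git pull          # fetch the latest PKGBUILD from the AUR
$ less PKGBUILD     # review the build script before running it
$ makepkg -si       # rebuild and reinstall the updated package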
3. Information about the llama-cpp-cuda package on the Arch User Repository (AUR) (see the RPC query sketch after this list)
ID: 1308458
Name: llama-cpp-cuda
PackageBaseID: 195852
PackageBase: llama-cpp
Version: c3e53b4-1
Description: Port of Facebook's LLaMA model in C/C++ (with CUDA)
URL: https://github.com/ggerganov/llama.cpp
NumVotes: 1
Popularity: 0.490555
OutOfDate: 1692967398
Maintainer: Freed
Submitter: Freed
FirstSubmitted: 1689667143
LastModified: 1692877217
URLPath: /cgit/aur.git/snapshot/llama-cpp-cuda.tar.gz
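The fields above mirror what the AUR RPC interface returns for this package; OutOfDate, FirstSubmitted, and LastModified are Unix timestamps. A minimal query sketch (assumes curl is available; jq is optional and only used here to pretty-print the JSON):
$ curl -s 'https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=llama-cpp-cuda' | jq .
$ curl -LO 'https://aur.archlinux.org/cgit/aur.git/snapshot/llama-cpp-cuda.tar.gz'    # download the snapshot tarball referenced by URLPath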