
Llama Terminal Completion

Ever wish you could look up Linux commands or ask questions and receive responses right from the terminal? You would expect to need a paid service, an API key with paid usage, or at least an internet connection, right? Not with Llama Terminal Completion. Instead, we'll run a large language model (think ChatGPT) locally on your personal machine and generate responses from there.


This Python script interacts with the llama.cpp library to provide virtual assistant capabilities through the command line. It allows you to ask questions and receive intelligent responses, as well as generate Linux commands based on your prompts.
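
Under the hood, a wrapper like this typically shells out to the compiled llama.cpp binary and captures its output. The sketch below is a hypothetical illustration of that pattern, not this project's actual code: the binary name (main), the model path argument, and the flags are assumptions based on llama.cpp's common command-line interface.

import os
import subprocess

# Hypothetical sketch: the real ask_llama.py may build its prompt and flags differently.
def ask_model(prompt: str, model_path: str) -> str:
    """Run the compiled llama.cpp binary on a prompt and return the generated text."""
    llama_cpp_dir = os.environ["LLAMA_CPP_DIR"]  # set in your shell configuration (see below)
    result = subprocess.run(
        [
            os.path.join(llama_cpp_dir, "main"),  # assumed name of the compiled binary
            "-m", model_path,                     # path to a locally downloaded model file
            "-p", prompt,                         # the user's prompt
            "-n", "128",                          # limit the number of generated tokens
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

A design like this keeps the Python side thin: all model loading and inference happens inside llama.cpp itself.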

Installation

Llama.cpp installation

  1. Clone the llama.cpp repository to your local machine:
git clone https://github.com/ggerganov/llama.cpp.git
  2. Build the llama.cpp library by following the instructions in the llama.cpp repository. A good tutorial for this can be found at How to Run LLMs Locally.

Llama Terminal Completion installation

  1. Clone the llama-terminal-completion repository to your local machine:
git clone https://github.com/adammpkins/llama-terminal-completion.git
  2. Set up the environment variables (see below)

Environment Variables

Before using this script, you need to set up the LLAMA_COMPLETION_DIR and LLAMA_CPP_DIR environment variables. These variables point to the directories where the llama-terminal-completion and llama.cpp files are located, respectively. You can set these variables in your shell configuration file (e.g., .bashrc or .zshrc) like this:

export LLAMA_COMPLETION_DIR="/path/to/llama-terminal-completion/"
export LLAMA_CPP_DIR="/path/to/llama.cpp/"

Replace /path/to/llama-terminal-completion/ and /path/to/llama.cpp/ with the actual paths to the respective directories on your system.
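
For illustration, here is a minimal sketch of how a script might read and validate these variables at startup. The require_env helper is a hypothetical name, not part of the project; the actual script may handle missing variables differently.

import os
import sys

def require_env(name: str) -> str:
    """Return a required environment variable, or exit with a pointer to the setup step."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"Error: {name} is not set. Export it in your .bashrc or .zshrc.")
    return value

llama_completion_dir = require_env("LLAMA_COMPLETION_DIR")
llama_cpp_dir = require_env("LLAMA_CPP_DIR")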

Usage

Open a terminal window.

Navigate to the directory where the ask_llama.py script is located.

Run the script with the desired options. Here are some examples:

  • To generate a Linux command based on a prompt:

    python3 ask_llama.py "list the contents of the current directory"
  • To ask a question to the virtual assistant:

    python3 ask_llama.py -q "How does photosynthesis work?"
  • To clear the history of commands:

    python3 ask_llama.py -ch

For more options, you can run:

python3 ask_llama.py --help

Its output is as follows:

Usage: python3 ask_llama.py [prompt]
Example: python3 ask_llama.py 'list all files in the current directory'
Options:
-q                ask a question to the virtual assistant
-ch               clear the history of commands
-cqh              clear the history of questions
-h                show the history of commands
-qh               show the history of questions
-v                show the version of llama-terminal-completion
--help            show this help message and exit
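
For illustration only, option handling like the above could be implemented with a simple dispatch on sys.argv. The sketch below uses stubbed-out helpers (ask_question, generate_command, and so on are hypothetical names), not the project's actual source.

import sys

# Placeholder helpers; the real script would wire these to llama.cpp and history files.
def ask_question(q: str) -> None: print(f"[stub] question: {q}")
def generate_command(p: str) -> None: print(f"[stub] command prompt: {p}")
def clear_history(kind: str) -> None: print(f"[stub] cleared {kind} history")
def show_history(kind: str) -> None: print(f"[stub] showing {kind} history")

def main() -> None:
    args = sys.argv[1:]
    if not args or args[0] == "--help":
        print("Usage: python3 ask_llama.py [prompt]")
    elif args[0] == "-q":
        ask_question(" ".join(args[1:]))
    elif args[0] == "-ch":
        clear_history("command")
    elif args[0] == "-cqh":
        clear_history("question")
    elif args[0] == "-h":
        show_history("command")
    elif args[0] == "-qh":
        show_history("question")
    elif args[0] == "-v":
        print("llama-terminal-completion (version number omitted in this sketch)")
    else:
        generate_command(" ".join(args))  # default: treat the arguments as a command prompt

if __name__ == "__main__":
    main()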

Alias

You can create an alias for the script in your shell configuration file (e.g., .bashrc or .zshrc) like this:

alias ask="python3 /path/to/llama-terminal-completion/ask_llama.py"

Then you can run the script like this:

ask "list the contents of the current directory"

Contributing

Contributions to this project are welcome! Feel free to fork the repository, make changes, and submit pull requests.

License

This project is licensed under the MIT License.

