From: ELPA Syncer
Subject: [nongnu] elpa/gptel 85bd47cb4c 164/273: README: Add support for llama.cpp
Date: Wed, 1 May 2024 10:02:19 -0400 (EDT)

branch: elpa/gptel
commit 85bd47cb4c47a23983011d836d61d14251c8ca69
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>

    README: Add support for llama.cpp
    
    * README.org: The llama.cpp server supports OpenAI's API, so we
    can reuse it.  Closes #121.
---
 README.org | 43 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 34 insertions(+), 9 deletions(-)

diff --git a/README.org b/README.org
index b6458dbb14..c68c6638b4 100644
--- a/README.org
+++ b/README.org
@@ -4,15 +4,15 @@
 
 GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models/backends.

-| LLM Backend | Supports | Requires                |
-|-------------+----------+-------------------------|
-| ChatGPT     | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                 |
-| Azure       | ✓        | Deployment and API key  |
-| Ollama      | ✓        | [[https://ollama.ai/][Ollama running locally]]  |
-| GPT4All     | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]] |
-| Gemini      | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                 |
-| PrivateGPT  | Planned  | -                       |
-| Llama.cpp   | Planned  | -                       |
+| LLM Backend | Supports | Requires                  |
+|-------------+----------+---------------------------|
+| ChatGPT     | ✓       | [[https://platform.openai.com/account/api-keys][API key]]                   |
+| Azure       | ✓       | Deployment and API key    |
+| Ollama      | ✓       | [[https://ollama.ai/][Ollama running locally]]    |
+| GPT4All     | ✓       | [[https://gpt4all.io/index.html][GPT4All running locally]]   |
+| Gemini      | ✓       | [[https://makersuite.google.com/app/apikey][API key]]                   |
+| Llama.cpp   | ✓       | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| PrivateGPT  | Planned  | -                         |
 
 *General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
 
@@ -46,6 +46,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
       - [[#gpt4all][GPT4All]]
       - [[#ollama][Ollama]]
       - [[#gemini][Gemini]]
+      - [[#llamacpp][Llama.cpp]]
   - [[#usage][Usage]]
     - [[#in-any-buffer][In any buffer:]]
     - [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -221,6 +222,30 @@ You can pick this backend from the transient menu when using gptel (see Usage),
 
 #+html: </details>
 
+#+html: <details>
+#+html: <summary>
+**** Llama.cpp
+#+html: </summary>
+
+Register a backend with
+#+begin_src emacs-lisp
+(gptel-make-openai                    ;Not a typo, same API as OpenAI
+ "llama-cpp"                          ;Any name
+ :stream t                            ;Stream responses
+ :protocol "http"
+ :host "localhost:8000"               ;Llama.cpp server location
+ :models '("test"))                   ;List of available models
+#+end_src
+These are the required parameters; refer to the documentation of =gptel-make-openai= for more.
+
+You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+#+begin_src emacs-lisp
+(setq-default gptel-backend (gptel-make-openai "llama-cpp" ...)
+              gptel-model   "test")
+#+end_src
+
+#+html: </details>
+
 ** Usage
 
 (This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)
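
For context when trying the patch: a minimal sketch of exercising the new backend from Lisp, assuming a llama.cpp server is listening on localhost:8000 and reusing the placeholder model name "test" from the snippet above. The prompt and callback here are illustrative, not part of the patch.

#+begin_src emacs-lisp
;; Sketch only: register the backend as in the README snippet, then send
;; a one-off request through it.  Assumes a llama.cpp server is running
;; at localhost:8000; "test" is a placeholder model name.
(require 'gptel)

(let ((gptel-backend (gptel-make-openai "llama-cpp"
                       :stream t
                       :protocol "http"
                       :host "localhost:8000"
                       :models '("test")))
      (gptel-model "test"))
  ;; gptel-request sends the prompt and calls CALLBACK with the response.
  (gptel-request "Say hello in one sentence."
    :callback (lambda (response _info)
                (message "llama.cpp replied: %s" response))))
#+end_src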


