[nongnu] elpa/gptel 1752f1d589 180/273: gptel-kagi: Add support for the Kagi summarizer
From: ELPA Syncer
Subject: [nongnu] elpa/gptel 1752f1d589 180/273: gptel-kagi: Add support for the Kagi summarizer
Date: Wed, 1 May 2024 10:02:20 -0400 (EDT)
branch: elpa/gptel
commit 1752f1d5891007c9abc367aae04969e45a27b002
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
gptel-kagi: Add support for the Kagi summarizer
* gptel-kagi.el (gptel--request-data, gptel--parse-buffer,
gptel-make-kagi): Add support for the Kagi summarizer. If there
is a url at point (or at the end of the provided prompt), it is
used as the summarizer input. Otherwise the behavior is
unchanged.
* README (Kagi): Mention summarizer support.
* gptel.el: Mention summarizer support.
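
[Editorial sketch, not part of the commit: the per-model payload dispatch this change introduces can be mirrored outside Emacs. The field names (:query, :web_search, :cache, :text, :engine) and the `(substring model 10)` engine extraction are taken from the diff; the function name and Python rendering are hypothetical.]

```python
def kagi_request_data(model: str, prompt: str) -> dict:
    """Hypothetical Python mirror of gptel--request-data for the Kagi backend.

    FastGPT takes a query with web search and caching enabled; the
    summarizer takes the text plus an engine derived from the model
    name, e.g. "summarize:cecil" -> "cecil" (len("summarize:") == 10).
    """
    if model == "fastgpt":
        return {"query": prompt, "web_search": True, "cache": True}
    if model.startswith("summarize"):
        # When a URL is at point, the elisp code sends {"url": ...} instead.
        return {"text": prompt, "engine": model[len("summarize:"):]}
    raise ValueError(f"unsupported Kagi model: {model}")
```
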
---
README.org | 44 ++++++++++++++++++++---------------
gptel-kagi.el | 75 +++++++++++++++++++++++++++++++++++++++--------------------
gptel.el | 2 +-
3 files changed, 76 insertions(+), 45 deletions(-)
diff --git a/README.org b/README.org
index 67755213e6..92e6e86508 100644
--- a/README.org
+++ b/README.org
@@ -4,17 +4,18 @@
GPTel is a simple Large Language Model chat client for Emacs, with support for
multiple models and backends.
-| LLM Backend  | Supports | Requires                  |
-|--------------+----------+---------------------------|
-| ChatGPT      | ✓        | [[https://platform.openai.com/account/api-keys][API key]] |
-| Azure        | ✓        | Deployment and API key    |
-| Ollama       | ✓        | [[https://ollama.ai/][Ollama running locally]] |
-| GPT4All      | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]] |
-| Gemini       | ✓        | [[https://makersuite.google.com/app/apikey][API key]] |
-| Llama.cpp    | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
-| Llamafile    | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
-| Kagi FastGPT | ✓        | [[https://kagi.com/settings?p=api][API key]] |
-| PrivateGPT   | Planned  | -                         |
+| LLM Backend     | Supports | Requires                  |
+|-----------------+----------+---------------------------|
+| ChatGPT         | ✓        | [[https://platform.openai.com/account/api-keys][API key]] |
+| Azure           | ✓        | Deployment and API key    |
+| Ollama          | ✓        | [[https://ollama.ai/][Ollama running locally]] |
+| GPT4All         | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]] |
+| Gemini          | ✓        | [[https://makersuite.google.com/app/apikey][API key]] |
+| Llama.cpp       | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| Llamafile       | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
+| Kagi FastGPT    | ✓        | [[https://kagi.com/settings?p=api][API key]] |
+| Kagi Summarizer | ✓        | [[https://kagi.com/settings?p=api][API key]] |
+| PrivateGPT      | Planned  | -                         |
*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -49,7 +50,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#ollama][Ollama]]
- [[#gemini][Gemini]]
- [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
- - [[#kagi-fastgpt][Kagi FastGPT]]
+ - [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -252,28 +253,33 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
#+html: </details>
#+html: <details><summary>
-**** Kagi FastGPT
+**** Kagi (FastGPT & Summarizer)
#+html: </summary>
-*NOTE*: Kagi's FastGPT model does not support multi-turn conversations, interactions are "one-shot". It also does not support streaming responses.
+Kagi's FastGPT model and the Universal Summarizer are both supported. A couple of notes:
+
+1. Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
+
+2. Kagi models do not support multi-turn conversations; interactions are "one-shot". They also do not support streaming responses.
Register a backend with
#+begin_src emacs-lisp
-;; :key can be a function that returns the API key
(gptel-make-kagi
- "Kagi" ;Name of your choice
- :key "YOUR_KAGI_API_KEY")
+ "Kagi" ;any name
+ :key "YOUR_KAGI_API_KEY") ;:key can be a function
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-kagi= for more.
-You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel. Alternatively you can set this as the default value of =gptel-backend=:
#+begin_src emacs-lisp
;; OPTIONAL configuration
-(setq-default gptel-model "fastgpt" ;only supported Kagi model
+(setq-default gptel-model "fastgpt"
gptel-backend (gptel-make-kagi "Kagi" :key ...))
#+end_src
+The alternatives to =fastgpt= include =summarize:cecil=, =summarize:agnes=, =summarize:daphne= and =summarize:muriel=. The differences between the summarizer engines are [[https://help.kagi.com/kagi/api/summarizer.html#summarization-engines][documented here]].
+
#+html: </details>
** Usage
diff --git a/gptel-kagi.el b/gptel-kagi.el
index 70d8189be2..5298f3b595 100644
--- a/gptel-kagi.el
+++ b/gptel-kagi.el
@@ -69,42 +69,65 @@
(cl-defmethod gptel--request-data ((_backend gptel-kagi) prompts)
"JSON encode PROMPTS for sending to ChatGPT."
- `(,@prompts :web_search t :cache t))
+ (pcase-exhaustive gptel-model
+ ("fastgpt"
+ `(,@prompts :web_search t :cache t))
+ ((and model (guard (string-prefix-p "summarize" model)))
+ `(,@prompts :engine ,(substring model 10)))))
(cl-defmethod gptel--parse-buffer ((_backend gptel-kagi) &optional _max-entries)
- (let ((prompts)
+ (let ((url (or (thing-at-point 'url)
+ (get-text-property (point) 'shr-url)
+ (get-text-property (point) 'image-url)))
+         ;; (filename (thing-at-point 'existing-filename)) ;no file upload support yet
(prop (text-property-search-backward
'gptel 'response
(when (get-char-property (max (point-min) (1- (point)))
'gptel)
t))))
-    (if (and (prop-match-p prop)
-             (prop-match-value prop))
-        (user-error "No user prompt found!")
-      (setq prompts (list
-                     :query
-                     (if (prop-match-p prop)
-                         (concat
-                          ;; Fake a system message by including it in the prompt
-                          gptel--system-message "\n\n"
-                          (string-trim
-                           (buffer-substring-no-properties (prop-match-beginning prop)
-                                                           (prop-match-end prop))
-                           (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
-                                   (regexp-quote (gptel-prompt-prefix-string)))
-                           (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
-                                   (regexp-quote (gptel-response-prefix-string)))))
-                       "")))
-      prompts)))
+    (if (and url (string-prefix-p "summarize" gptel-model))
+        (list :url url)
+      (if (and (prop-match-p prop)
+               (prop-match-value prop))
+          (user-error "No user prompt found!")
+        (let ((prompts
+               (string-trim
+                (buffer-substring-no-properties (prop-match-beginning prop)
+                                                (prop-match-end prop))
+                (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
+                        (regexp-quote (gptel-prompt-prefix-string)))
+                (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
+                        (regexp-quote (gptel-response-prefix-string))))))
+          (pcase-exhaustive gptel-model
+            ("fastgpt"
+             (setq prompts (list
+                            :query
+                            (if (prop-match-p prop)
+                                (concat
+                                 ;; Fake a system message by including it in the prompt
+                                 gptel--system-message "\n\n" prompts)
+                              ""))))
+            ((and model (guard (string-prefix-p "summarize" model)))
+             ;; If the entire contents of the prompt looks like a url, send the url
+             ;; Else send the text of the region
+             (setq prompts
+                   (if-let (((prop-match-p prop))
+                            (engine (substring model 10)))
+                       ;; It's a region of text
+                       (list :text prompts)
+                     ""))))
+          prompts)))))
;;;###autoload
(cl-defun gptel-make-kagi
(name &key stream key
(host "kagi.com")
      (header (lambda () `(("Authorization" . ,(concat "Bot " (gptel--get-api-key))))))
- (models '("fastgpt"))
+ (models '("fastgpt"
+ "summarize:cecil" "summarize:agnes"
+ "summarize:daphne" "summarize:muriel"))
(protocol "https")
- (endpoint "/api/v0/fastgpt"))
+ (endpoint "/api/v0/"))
"Register a Kagi FastGPT backend for gptel with NAME.
Keyword arguments:
@@ -142,9 +165,11 @@ Example:
:models models
:protocol protocol
:endpoint endpoint
- :url (if protocol
- (concat protocol "://" host endpoint)
- (concat host endpoint)))))
+ :url
+ (lambda ()
+ (concat protocol "://" host endpoint
+ (if (equal gptel-model "fastgpt")
+ "fastgpt" "summarize"))))))
(prog1 backend
(setf (alist-get name gptel--known-backends
nil nil #'equal)
diff --git a/gptel.el b/gptel.el
index 837616f4cd..d34cdc6bd1 100644
--- a/gptel.el
+++ b/gptel.el
@@ -30,7 +30,7 @@
;; gptel is a simple Large Language Model chat client, with support for multiple models/backends.
;;
;; gptel supports
-;; - The services ChatGPT, Azure, Gemini, and Kagi (FastGPT)
+;; - The services ChatGPT, Azure, Gemini, and Kagi (FastGPT & Summarizer)
;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
;;
;; Additionally, any LLM service (local or remote) that provides an
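
[Editorial sketch, not part of the commit: the endpoint construction added to =gptel-make-kagi= (a shared =/api/v0/= prefix with a per-model final path component, chosen at request time by the new :url lambda) can be mirrored as a hypothetical Python helper; the function name and defaults are illustrative.]

```python
def kagi_url(model: str, host: str = "kagi.com", protocol: str = "https") -> str:
    """Hypothetical mirror of the :url lambda in gptel-make-kagi:
    the endpoint is the shared "/api/v0/" prefix, and the trailing
    component is "fastgpt" for the FastGPT model, "summarize" otherwise."""
    endpoint = "/api/v0/"
    suffix = "fastgpt" if model == "fastgpt" else "summarize"
    return f"{protocol}://{host}{endpoint}{suffix}"
```
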