From: ELPA Syncer
Subject: [nongnu] elpa/gptel 1752f1d589 180/273: gptel-kagi: Add support for the Kagi summarizer
Date: Wed, 1 May 2024 10:02:20 -0400 (EDT)

branch: elpa/gptel
commit 1752f1d5891007c9abc367aae04969e45a27b002
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>

    gptel-kagi: Add support for the Kagi summarizer
    
    * gptel-kagi.el (gptel--request-data, gptel--parse-buffer,
    gptel-make-kagi): Add support for the Kagi summarizer.  If there
    is a url at point (or at the end of the provided prompt), it is
    used as the summarizer input.  Otherwise the behavior is
    unchanged.
    
    * README (Kagi): Mention summarizer support.
    
    * gptel.el: Mention summarizer support.
---
 README.org    | 44 ++++++++++++++++++++---------------
 gptel-kagi.el | 75 +++++++++++++++++++++++++++++++++++++++--------------------
 gptel.el      |  2 +-
 3 files changed, 76 insertions(+), 45 deletions(-)

diff --git a/README.org b/README.org
index 67755213e6..92e6e86508 100644
--- a/README.org
+++ b/README.org
@@ -4,17 +4,18 @@
 
 GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.
 
-| LLM Backend  | Supports | Requires                  |
-|--------------+----------+---------------------------|
-| ChatGPT      | ✓      | [[https://platform.openai.com/account/api-keys][API key]]                   |
-| Azure        | ✓      | Deployment and API key    |
-| Ollama       | ✓      | [[https://ollama.ai/][Ollama running locally]]    |
-| GPT4All      | ✓      | [[https://gpt4all.io/index.html][GPT4All running locally]]   |
-| Gemini       | ✓      | [[https://makersuite.google.com/app/apikey][API key]]                   |
-| Llama.cpp    | ✓      | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
-| Llamafile    | ✓      | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]    |
-| Kagi FastGPT | ✓      | [[https://kagi.com/settings?p=api][API key]]                   |
-| PrivateGPT   | Planned  | -                         |
+| LLM Backend     | Supports | Requires                  |
+|-----------------+----------+---------------------------|
+| ChatGPT         | ✓      | [[https://platform.openai.com/account/api-keys][API key]]                   |
+| Azure           | ✓      | Deployment and API key    |
+| Ollama          | ✓      | [[https://ollama.ai/][Ollama running locally]]    |
+| GPT4All         | ✓      | [[https://gpt4all.io/index.html][GPT4All running locally]]   |
+| Gemini          | ✓      | [[https://makersuite.google.com/app/apikey][API key]]                   |
+| Llama.cpp       | ✓      | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| Llamafile       | ✓      | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]    |
+| Kagi FastGPT    | ✓      | [[https://kagi.com/settings?p=api][API key]]                   |
+| Kagi Summarizer | ✓      | [[https://kagi.com/settings?p=api][API key]]                   |
+| PrivateGPT      | Planned  | -                         |
 
 *General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
 
@@ -49,7 +50,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
       - [[#ollama][Ollama]]
       - [[#gemini][Gemini]]
       - [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
-      - [[#kagi-fastgpt][Kagi FastGPT]]
+      - [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
   - [[#usage][Usage]]
     - [[#in-any-buffer][In any buffer:]]
     - [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -252,28 +253,33 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 
 #+html: </details>
 #+html: <details><summary>
-**** Kagi FastGPT
+**** Kagi (FastGPT & Summarizer)
 #+html: </summary>
 
-*NOTE*: Kagi's FastGPT model does not support multi-turn conversations, interactions are "one-shot".  It also does not support streaming responses.
+Kagi's FastGPT model and the Universal Summarizer are both supported.  A couple of notes:
+
+1. Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL.  Otherwise the context sent to the model is the same as always: the buffer text upto point, or the contents of the region if the region is active.
+
+2. Kagi models do not support multi-turn conversations, interactions are "one-shot".  They also do not support streaming responses.
 
 Register a backend with
 #+begin_src emacs-lisp
-;; :key can be a function that returns the API key
 (gptel-make-kagi
- "Kagi"                                 ;Name of your choice
- :key "YOUR_KAGI_API_KEY")
+ "Kagi" ;any name
+ :key "YOUR_KAGI_API_KEY") ;:key can be a function
 #+end_src
 These are the required parameters, refer to the documentation of =gptel-make-kagi= for more.
 
-You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.  Alternatively you can set this as the default value of =gptel-backend=:
 
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
-(setq-default gptel-model "fastgpt" ;only supported Kagi model
+(setq-default gptel-model "fastgpt"
               gptel-backend (gptel-make-kagi "Kagi" :key ...))
 #+end_src
 
+The alternatives to =fastgpt= include =summarize:cecil=, =summarize:agnes=, =summarize:daphne= and =summarize:muriel=.  The difference between the summarizer engines is [[https://help.kagi.com/kagi/api/summarizer.html#summarization-engines][documented here]].
+
 #+html: </details>
 
 ** Usage
diff --git a/gptel-kagi.el b/gptel-kagi.el
index 70d8189be2..5298f3b595 100644
--- a/gptel-kagi.el
+++ b/gptel-kagi.el
@@ -69,42 +69,65 @@
 
 (cl-defmethod gptel--request-data ((_backend gptel-kagi) prompts)
   "JSON encode PROMPTS for sending to ChatGPT."
-  `(,@prompts :web_search t :cache t))
+  (pcase-exhaustive gptel-model
+    ("fastgpt"
+     `(,@prompts :web_search t :cache t))
+    ((and model (guard (string-prefix-p "summarize" model)))
+     `(,@prompts :engine ,(substring model 10)))))
 
 (cl-defmethod gptel--parse-buffer ((_backend gptel-kagi) &optional _max-entries)
-  (let ((prompts)
+  (let ((url (or (thing-at-point 'url)
+                 (get-text-property (point) 'shr-url)
+                 (get-text-property (point) 'image-url)))
+        ;; (filename (thing-at-point 'existing-filename)) ;no file upload support yet
         (prop (text-property-search-backward
                'gptel 'response
                (when (get-char-property (max (point-min) (1- (point)))
                                         'gptel)
                  t))))
-    (if (and (prop-match-p prop)
-             (prop-match-value prop))
-        (user-error "No user prompt found!")
-      (setq prompts (list
-                     :query
-                     (if (prop-match-p prop)
-                         (concat
-                          ;; Fake a system message by including it in the prompt
-                          gptel--system-message "\n\n"
-                          (string-trim
-                           (buffer-substring-no-properties (prop-match-beginning prop)
-                                                           (prop-match-end prop))
-                           (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
-                                   (regexp-quote (gptel-prompt-prefix-string)))
-                           (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
-                                   (regexp-quote (gptel-response-prefix-string)))))
-                       "")))
-      prompts)))
+    (if (and url (string-prefix-p "summarize" gptel-model))
+        (list :url url)
+      (if (and (prop-match-p prop)
+               (prop-match-value prop))
+          (user-error "No user prompt found!")
+        (let ((prompts
+               (string-trim
+                (buffer-substring-no-properties (prop-match-beginning prop)
+                                                (prop-match-end prop))
+                (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
+                        (regexp-quote (gptel-prompt-prefix-string)))
+                (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
+                        (regexp-quote (gptel-response-prefix-string))))))
+          (pcase-exhaustive gptel-model
+            ("fastgpt"
+             (setq prompts (list
+                            :query
+                            (if (prop-match-p prop)
+                                (concat
+                                 ;; Fake a system message by including it in the prompt
+                                 gptel--system-message "\n\n" prompts)
+                              ""))))
+            ((and model (guard (string-prefix-p "summarize" model)))
+             ;; If the entire contents of the prompt looks like a url, send the url
+             ;; Else send the text of the region
+             (setq prompts
+                   (if-let (((prop-match-p prop))
+                            (engine (substring model 10)))
+                       ;; It's a region of text
+                       (list :text prompts)
+                     ""))))
+          prompts)))))
 
 ;;;###autoload
 (cl-defun gptel-make-kagi
     (name &key stream key
           (host "kagi.com")
           (header (lambda () `(("Authorization" . ,(concat "Bot " (gptel--get-api-key))))))
-          (models '("fastgpt"))
+          (models '("fastgpt"
+                    "summarize:cecil" "summarize:agnes"
+                    "summarize:daphne" "summarize:muriel"))
           (protocol "https")
-          (endpoint "/api/v0/fastgpt"))
+          (endpoint "/api/v0/"))
   "Register a Kagi FastGPT backend for gptel with NAME.
 
 Keyword arguments:
@@ -142,9 +165,11 @@ Example:
                   :models models
                   :protocol protocol
                   :endpoint endpoint
-                  :url (if protocol
-                           (concat protocol "://" host endpoint)
-                         (concat host endpoint)))))
+                  :url
+                  (lambda ()
+                    (concat protocol "://" host endpoint
+                            (if (equal gptel-model "fastgpt")
+                                "fastgpt" "summarize"))))))
     (prog1 backend
       (setf (alist-get name gptel--known-backends
                        nil nil #'equal)
diff --git a/gptel.el b/gptel.el
index 837616f4cd..d34cdc6bd1 100644
--- a/gptel.el
+++ b/gptel.el
@@ -30,7 +30,7 @@
 ;; gptel is a simple Large Language Model chat client, with support for multiple models/backends.
 ;;
 ;; gptel supports
-;; - The services ChatGPT, Azure, Gemini, and Kagi (FastGPT)
+;; - The services ChatGPT, Azure, Gemini, and Kagi (FastGPT & Summarizer)
 ;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
 ;;
 ;;  Additionally, any LLM service (local or remote) that provides an
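For readers following the patch, the README hunk above boils down to a configuration like the following. This is a sketch based on the commit, not part of the patch itself; the backend name and API key are placeholders you would substitute with your own:

```emacs-lisp
;; Register a Kagi backend exposing FastGPT and the summarizer engines
;; added by this commit.  "Kagi" is an arbitrary name; the key may also
;; be a function that returns the API key.
(gptel-make-kagi
 "Kagi"
 :key "YOUR_KAGI_API_KEY")

;; Optionally make it the default backend.  Per the commit, the model
;; can be "fastgpt" or one of the summarizer engines:
;; summarize:cecil, summarize:agnes, summarize:daphne, summarize:muriel.
(setq-default gptel-model "summarize:cecil"
              gptel-backend (gptel-make-kagi "Kagi" :key "YOUR_KAGI_API_KEY"))
```

With a summarizer model selected, a URL at point is sent as the summarizer input; otherwise the buffer text up to point (or the active region) is sent, as described in the commit message.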


