[GNU ELPA] Llm version 0.6.0
From: ELPA update
Subject: [GNU ELPA] Llm version 0.6.0
Date: Sat, 09 Dec 2023 05:03:07 -0500
Version 0.6.0 of package Llm has just been released in GNU ELPA.
You can now find it in M-x list-packages RET.
Llm describes itself as:
===================================
Interface to pluggable llm backends
===================================
More at https://elpa.gnu.org/packages/llm.html
## Summary:
━━━━━━━━━━━━━━━━━━━━━━━
LLM PACKAGE FOR EMACS
━━━━━━━━━━━━━━━━━━━━━━━
1 Introduction
══════════════
This is a library for interfacing with Large Language Models. It
allows Elisp code to use LLMs, while giving the end-user the option to
choose which LLM they would prefer. This is especially useful for
LLMs, since there are various high-quality ones whose API access costs
money, as well as locally installed ones that are free but of medium
quality. Applications using LLMs can use this library to make sure
they work regardless of whether the user has a local LLM or is paying
for API access.
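As a sketch of how an application might stay provider-agnostic (the
provider constructors come from the package's provider modules; the
key, model name, and the `my-app-use-paid-api' setting are
hypothetical placeholders):

```elisp
;; Each provider lives in its own module; load only what you use.
(require 'llm)
(require 'llm-openai)
(require 'llm-ollama)

;; The application exposes a single provider variable; the end-user
;; decides which backend fills it.
(defvar my-app-llm-provider
  (if my-app-use-paid-api                       ; hypothetical user option
      (make-llm-openai :key "OPENAI-API-KEY")   ; paid, high quality
    (make-llm-ollama :chat-model "mistral"))    ; local, free
  "The LLM provider this application talks to.")

;; Application code is written once, against the generic interface.
(llm-chat my-app-llm-provider
          (llm-make-simple-chat-prompt "Hello, world!"))
```

Because every provider implements the same generic functions, swapping
the value of the provider variable is the only change needed to move
between backends.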
## Recent NEWS:
1 Version 0.6
═════════════
• Add provider `llm-llamacpp'.
• Fix issue with Google Cloud Vertex not responding to messages with a
system interaction.
• Fix use of `(pos-eol)' which is not compatible with Emacs 28.1.
2 Version 0.5.2
═══════════════
• Fix incompatibility with older Emacs introduced in Version 0.5.1.
• Add support for Google Cloud Vertex model `text-bison' and variants.
• `llm-ollama' can now be configured with a scheme (http vs https).
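For example, the scheme can presumably be set when the provider is
created, alongside the host settings added in 0.5 (the slot names
shown are assumptions based on the provider's constructor):

```elisp
(require 'llm-ollama)

;; Talk to a remote Ollama server over https instead of the
;; default local http endpoint.
(make-llm-ollama :scheme "https"
                 :host "ollama.example.com"
                 :chat-model "llama2")
```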
3 Version 0.5.1
═══════════════
• Implement token counting for Google Cloud Vertex via their API.
• Fix issue with Google Cloud Vertex erroring on multibyte strings.
• Fix issue with small bits of missing text in OpenAI and Ollama
  streaming chat.
4 Version 0.5
═════════════
• Fixes for conversation context storage, requiring clients to handle
ongoing conversations slightly differently.
• Fix HTTP error code handling for synchronous requests.
• `llm-ollama' can now be configured with a different hostname.
• Callbacks now always attempt to run in the client's original buffer.
• Add provider `llm-gpt4all'.
5 Version 0.4
═════════════
• Add helper function `llm-chat-streaming-to-point'.
• Add provider `llm-ollama'.
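A sketch of using the new helper to stream a response into the current
buffer (the argument order and the zero-argument finish callback are
assumptions; the model name is a placeholder):

```elisp
(require 'llm)
(require 'llm-ollama)  ; any provider works here

(let ((provider (make-llm-ollama :chat-model "mistral")))
  ;; Stream the reply into the current buffer at point, calling the
  ;; callback once the full response has arrived.
  (llm-chat-streaming-to-point
   provider
   (llm-make-simple-chat-prompt "Write a haiku about Emacs.")
   (current-buffer)
   (point)
   (lambda () (message "LLM response complete"))))
```

This saves clients from wiring up the partial-response callbacks of
`llm-chat-streaming' themselves for the common insert-at-point case.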
6 Version 0.3
═════════════
• Streaming support in the API, and for the OpenAI and Vertex models.
• Properly encode and decode in utf-8 so double-width or other
character sizes don't cause problems.
7 Version 0.2.1
═══════════════
• Changes in how we make and listen to requests, in preparation for
streaming functionality.
• Fix overzealous change hook creation when using async llm requests.
8 Version 0.2
═════════════
• Remove the dependency on non-GNU request library.