[GNU ELPA] Llm version 0.12.0
From: ELPA update
Subject: [GNU ELPA] Llm version 0.12.0
Date: Sun, 17 Mar 2024 05:03:35 -0400
Version 0.12.0 of package Llm has just been released in GNU ELPA.
You can now find it in M-x list-packages RET.
Llm describes itself as:
===================================
Interface to pluggable llm backends
===================================
More at https://elpa.gnu.org/packages/llm.html
## Summary:
━━━━━━━━━━━━━━━━━━━━━━━
LLM PACKAGE FOR EMACS
━━━━━━━━━━━━━━━━━━━━━━━
1 Introduction
══════════════
This library provides an interface for interacting with Large Language
Models (LLMs). It allows elisp code to use LLMs while also giving
end-users the choice of their preferred LLM. This is particularly
useful because many high-quality models exist: some require paid API
access, while others are locally installed and free but offer lower
quality. Applications built on this library work regardless of whether
the user has a local LLM or is paying for API access.
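As a sketch of how an application might use this provider-agnostic interface (the Ollama provider and the model name below are illustrative; any supported provider could be substituted):

```elisp
;; A minimal sketch of provider-agnostic usage. The provider and model
;; here are illustrative -- an end-user could configure any backend.
(require 'llm)
(require 'llm-ollama)

(defvar my-llm-provider (make-llm-ollama :chat-model "mistral")
  "The provider the end-user has selected; could equally be Open AI, etc.")

;; Application code only talks to the generic `llm-chat' API.
(llm-chat my-llm-provider
          (llm-make-simple-chat-prompt "Hello, what can you do?"))
```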
## Recent NEWS:
1 Version 0.12.0
════════════════
• Add provider `llm-claude', for Anthropic's Claude.
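Setting up the new Claude provider might look like the following (the key string is a placeholder for a real Anthropic API key):

```elisp
;; Hedged sketch: configure the new `llm-claude' provider.
;; The :key value below is a placeholder, not a real credential.
(require 'llm-claude)

(defvar my-claude-provider
  (make-llm-claude :key "ANTHROPIC-API-KEY")
  "A Claude provider; pass this to `llm-chat' and friends.")
```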
2 Version 0.11.0
════════════════
• Introduce function calling, currently available only for the Open AI
  and Gemini providers.
• Introduce `llm-capabilities', which returns a list of extra
capabilities for each backend.
• Fix issue where logging happened even when it was not enabled.
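Client code can use `llm-capabilities' to check for a feature before relying on it. A sketch, assuming `provider' is any configured llm provider object (the capability symbol here is illustrative of what a backend might report):

```elisp
;; Hedged sketch: branch on a provider's reported capabilities.
;; `provider' is assumed to be a previously created provider object.
(require 'llm)

(defun my-maybe-use-function-calls (provider)
  "Use function calling with PROVIDER only if it is supported."
  (if (member 'function-calls (llm-capabilities provider))
      (message "Provider supports function calling")
    (message "Falling back to plain chat")))
```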
3 Version 0.10.0
════════════════
• Introduce llm logging (to help with developing against `llm'); set
  `llm-log' to non-nil to enable logging of all interactions with the
  `llm' package.
• Change the default interaction with ollama to one more suited for
  conversations (thanks to Thomas Allen).
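Enabling the new logging is a one-line configuration:

```elisp
;; Log all interactions with the llm package while developing.
(setq llm-log t)
```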
4 Version 0.9.1
═══════════════
• Default to the new "text-embedding-3-small" model for Open AI.
*Important*: Anyone who has stored embeddings should either
regenerate embeddings (recommended) or hard-code the old embedding
model ("text-embedding-ada-002").
• Fix response breaking when prompts run afoul of Gemini / Vertex's
safety checks.
• Change Gemini streaming to use the correct URL. This doesn't seem to
  have an effect on behavior.
5 Version 0.9
═════════════
• Add `llm-chat-token-limit' to find the token limit based on the
model.
• Add request timeout customization.
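Querying the token limit is a single call; `provider' below is assumed to be any configured llm provider object:

```elisp
;; Hedged sketch: find the context-window size for whatever model
;; the user's provider is configured with.
(require 'llm)

(llm-chat-token-limit provider)  ; returns an integer token limit
```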
6 Version 0.8
═════════════
• Allow users to change the Open AI URL, to allow for proxies and
other services that re-use the API.
• Add `llm-name' and `llm-cancel-request' to the API.
• Standardize handling of how context, examples and history are folded
into `llm-chat-prompt-interactions'.
7 Version 0.7
═════════════
• Upgrade Google Cloud Vertex to Gemini - previous models are no
longer available.
• Add `gemini' provider, which is an alternate endpoint with
  alternate (and easier) authentication and setup compared to Cloud
  Vertex.
• Provide default for `llm-chat-async' to fall back to streaming if
not defined for a provider.
8 Version 0.6
═════════════
• Add provider `llm-llamacpp'.
• Fix issue with Google Cloud Vertex not responding to messages with a
system interaction.
• Fix use of `(pos-eol)' which is not compatible with Emacs 28.1.
9 Version 0.5.2
═══════════════
• Fix incompatibility with older Emacs introduced in Version 0.5.1.
• Add support for Google Cloud Vertex model `text-bison' and variants.
• `llm-ollama' can now be configured with a scheme (http vs https).
10 Version 0.5.1
════════════════
• Implement token counting for Google Cloud Vertex via their API.
• Fix issue with Google Cloud Vertex erroring on multibyte strings.
• Fix issue with small bits of missing text in Open AI and Ollama
streaming chat.
11 Version 0.5
══════════════
• Fixes for conversation context storage, requiring clients to handle
ongoing conversations slightly differently.
• Fixes for proper handling of HTTP error codes in sync requests.
• `llm-ollama' can now be configured with a different hostname.
• Callbacks now always attempt to run in the client's original buffer.
• Add provider `llm-gpt4all'.
12 Version 0.4
══════════════
• Add helper function `llm-chat-streaming-to-point'.
• Add provider `llm-ollama'.
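The `llm-chat-streaming-to-point' helper streams a response directly into a buffer. A sketch, assuming `provider' is any configured llm provider object:

```elisp
;; Hedged sketch: stream a model's response into the current buffer
;; at point, with a callback when the response is complete.
(require 'llm)

(llm-chat-streaming-to-point
 provider
 (llm-make-simple-chat-prompt "Summarize this buffer.")
 (current-buffer)
 (point)
 (lambda () (message "Response finished")))
```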
13 Version 0.3
══════════════
• Streaming support in the API, and for the Open AI and Vertex models.
• Properly encode and decode in utf-8 so double-width or other
character sizes don't cause problems.
14 Version 0.2.1
════════════════
• Changes in how we make and listen to requests, in preparation for
streaming functionality.
• Fix overzealous change hook creation when using async llm requests.
15 Version 0.2
══════════════
• Remove the dependency on the non-GNU `request' library.