
#+title: gptel: A simple LLM client for Emacs

[[https://elpa.nongnu.org/nongnu/gptel.html][file:https://elpa.nongnu.org/nongnu/gptel.svg]] [[https://stable.melpa.org/#/gptel][file:https://stable.melpa.org/packages/gptel-badge.svg]] [[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]

gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.

#+html: <div align="center">

| LLM Backend        | Supports | Requires                   |
|--------------------+----------+----------------------------|
| ChatGPT            | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                    |
| Azure              | ✓        | Deployment and API key     |
| Ollama             | ✓        | [[https://ollama.ai/][Ollama running locally]]     |
| GPT4All            | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]    |
| Gemini             | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                    |
| Llama.cpp          | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]]  |
| Llamafile          | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]     |
| Kagi FastGPT       | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
| Kagi Summarizer    | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
| together.ai        | ✓        | [[https://api.together.xyz/settings/api-keys][API key]]                    |
| Anyscale           | ✓        | [[https://docs.endpoints.anyscale.com/][API key]]                    |
| Perplexity         | ✓        | [[https://docs.perplexity.ai/docs/getting-started][API key]]                    |
| Anthropic (Claude) | ✓        | [[https://www.anthropic.com/api][API key]]                    |
| Groq               | ✓        | [[https://console.groq.com/keys][API key]]                    |
| OpenRouter         | ✓        | [[https://openrouter.ai/keys][API key]]                    |
| PrivateGPT         | ✓        | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |
| DeepSeek           | ✓        | [[https://platform.deepseek.com/api_keys][API key]]                    |

#+html: </div>

General usage: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])

https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4

https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4

Multi-LLM support demo:

https://github-production-user-asset-6210df.s3.amazonaws.com/8607532/278854024-ae1336c4-5b87-41f2-83e9-e415349d6a43.mp4

  • It's async and fast, streams responses.
  • Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever)
  • LLM responses are in Markdown or Org markup.
  • Supports conversations and multiple independent sessions.
  • Save chats as regular Markdown/Org/Text files and resume them later.
  • You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
  • Don't like gptel's workflow? Use it to create your own for any supported model/backend with a [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][simple API]] (see the sketch below).
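
For instance, here is a minimal sketch of a custom command built on gptel's =gptel-request= function; the command name and system prompt are illustrative choices, not part of gptel:

#+begin_src emacs-lisp
(defun my/gptel-explain-region (beg end)
  "Ask the current gptel backend to explain the region between BEG and END."
  (interactive "r")
  (gptel-request
   (buffer-substring-no-properties beg end) ;the prompt text
   :system "Explain the following text briefly."
   :callback (lambda (response info)
               ;; RESPONSE is the reply string, or nil on failure.
               (if response
                   (message "gptel: %s" response)
                 (message "gptel request failed: %s"
                          (plist-get info :status))))))
#+end_src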

gptel uses Curl if available, but falls back to the built-in =url-retrieve= to work without external dependencies.
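
If you'd rather avoid Curl even when it's installed, gptel exposes a =gptel-use-curl= option; a minimal sketch:

#+begin_src emacs-lisp
;; Optional: always use the built-in `url-retrieve' instead of Curl.
(setq gptel-use-curl nil)
#+end_src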

** Contents :toc:

  • [[#installation][Installation]]
    • [[#straight][Straight]]
    • [[#manual][Manual]]
    • [[#doom-emacs][Doom Emacs]]
    • [[#spacemacs][Spacemacs]]
  • [[#setup][Setup]]
    • [[#chatgpt][ChatGPT]]
    • [[#other-llm-backends][Other LLM backends]]
      • [[#azure][Azure]]
      • [[#gpt4all][GPT4All]]
      • [[#ollama][Ollama]]
      • [[#gemini][Gemini]]
      • [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
      • [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
      • [[#togetherai][together.ai]]
      • [[#anyscale][Anyscale]]
      • [[#perplexity][Perplexity]]
      • [[#anthropic-claude][Anthropic (Claude)]]
      • [[#groq][Groq]]
      • [[#openrouter][OpenRouter]]
      • [[#privategpt][PrivateGPT]]
      • [[#deepseek][DeepSeek]]
  • [[#usage][Usage]]
    • [[#in-any-buffer][In any buffer:]]
    • [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
      • [[#save-and-restore-your-chat-sessions][Save and restore your chat sessions]]
    • [[#include-more-context-with-requests][Include more context with requests]]
    • [[#extra-org-mode-conveniences][Extra Org mode conveniences]]
  • [[#faq][FAQ]]
    • [[#i-want-the-window-to-scroll-automatically-as-the-response-is-inserted][I want the window to scroll automatically as the response is inserted]]
    • [[#i-want-the-cursor-to-move-to-the-next-prompt-after-the-response-is-inserted][I want the cursor to move to the next prompt after the response is inserted]]
    • [[#i-want-to-change-the-formatting-of-the-prompt-and-llm-response][I want to change the formatting of the prompt and LLM response]]
    • [[#i-want-the-transient-menu-options-to-be-saved-so-i-only-need-to-set-them-once][I want the transient menu options to be saved so I only need to set them once]]
    • [[#i-want-to-use-gptel-in-a-way-thats-not-supported-by-gptel-send-or-the-options-menu][I want to use gptel in a way that's not supported by =gptel-send= or the options menu]]
    • [[#doom-emacs-sending-a-query-from-the-gptel-menu-fails-because-of-a-key-conflict-with-org-mode][(Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode]]
    • [[#chatgpt-i-get-the-error-http2-429-you-exceeded-your-current-quota][(ChatGPT) I get the error "(HTTP/2 429) You exceeded your current quota"]]
    • [[#why-another-llm-client][Why another LLM client?]]
  • [[#additional-configuration][Additional Configuration]]
  • [[#alternatives][Alternatives]]
    • [[#extensions-using-gptel][Extensions using gptel]]
  • [[#acknowledgments][Acknowledgments]]

** Installation

gptel can be installed in Emacs out of the box with =M-x package-install= ⏎ =gptel=. This installs the latest release.

If you want the development version instead, add MELPA (or NonGNU-devel ELPA) to your list of package sources, then install it with =M-x package-install= ⏎ =gptel=.
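
For example, a minimal sketch of adding MELPA to your package sources first (standard Emacs packaging, not specific to gptel):

#+begin_src emacs-lisp
;; Make MELPA available to `package-install'.
(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-refresh-contents)
#+end_src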

(Optional: Install =markdown-mode=.)

#+html: <details><summary>
**** Straight
#+html: </summary>
#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src

Installing the =markdown-mode= package is optional.
#+html: </details>

#+html: <details><summary>
**** Manual
#+html: </summary>
Clone or download this repository and run =M-x package-install-file= ⏎ on the repository directory.

Installing the =markdown-mode= package is optional.
#+html: </details>

#+html: <details><summary>
**** Doom Emacs
#+html: </summary>
In =packages.el=
#+begin_src emacs-lisp
(package! gptel)
#+end_src

In =config.el=
#+begin_src emacs-lisp
(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
#+end_src
"your key" can be the API key itself, or (safer) a function that returns the key. Setting =gptel-api-key= is optional; you will be asked for a key if it's not found.
#+html: </details>

#+html: <details><summary>
**** Spacemacs
#+html: </summary>
After installation with =M-x package-install= ⏎ =gptel=:

  • Add =gptel= to =dotspacemacs-additional-packages=
  • Add =(require 'gptel)= to =dotspacemacs/user-config=
#+html: </details>

** Setup
*** ChatGPT

Procure an [[https://platform.openai.com/account/api-keys][OpenAI API key]].

Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more secure method such as:

  • Storing in =~/.authinfo=. By default, "api.openai.com" is used as HOST and "apikey" as USER.
    #+begin_src authinfo
    machine api.openai.com login apikey password TOKEN
    #+end_src
  • Setting it to a function that returns the key.
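
For example, here is a minimal sketch of the function approach, reading the key stored in =~/.authinfo= with Emacs's built-in auth-source library:

#+begin_src emacs-lisp
;; Return the OpenAI key from ~/.authinfo (matching the entry above).
(setq gptel-api-key
      (lambda ()
        (auth-source-pick-first-password
         :host "api.openai.com" :user "apikey")))
#+end_src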

*** Other LLM backends
#+html: <details><summary>
**** Azure
#+html: </summary>

Register a backend with
#+begin_src emacs-lisp
(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '("gpt-3.5-turbo" "gpt-4"))
#+end_src
Refer to the documentation of =gptel-make-azure= to set more parameters.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gpt-3.5-turbo"
      gptel-backend
      (gptel-make-azure "Azure-1"
        :protocol "https"
        :host "YOUR_RESOURCE_NAME.openai.azure.com"
        :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
        :stream t
        :key #'gptel-api-key
        :models '("gpt-3.5-turbo" "gpt-4")))
#+end_src
#+html: </details>

#+html: <details><summary>
**** GPT4All
#+html: </summary>

Register a backend with
#+begin_src emacs-lisp
(gptel-make-gpt4all "GPT4All"                 ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                      ;Where it's running
  :models '("mistral-7b-openorca.Q4_0.gguf")) ;Available models
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-gpt4all= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above. Additionally, you may want to increase the response token count, since GPT4All returns very short (often truncated) responses by default.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-max-tokens 500
      gptel-model "mistral-7b-openorca.Q4_0.gguf"
      gptel-backend (gptel-make-gpt4all "GPT4All"
                      :protocol "http"
                      :host "localhost:4891"
                      :models '("mistral-7b-openorca.Q4_0.gguf")))
#+end_src

#+html: </details>

#+html: <details><summary>
**** Ollama
#+html: </summary>

Register a backend with
#+begin_src emacs-lisp
(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '("mistral:latest"))          ;List of models
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-ollama= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mistral:latest"
      gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"
                      :stream t
                      :models '("mistral:latest")))
#+end_src

#+html: </details>

#+html: <details><summary>
**** Gemini
#+html: </summary>

Register a backend with
#+begin_src emacs-lisp
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini"
  :key "YOUR_GEMINI_API_KEY"
  :stream t)
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-gemini= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gemini-pro"
      gptel-backend (gptel-make-gemini "Gemini"
                      :key "YOUR_GEMINI_API_KEY"
                      :stream t))
#+end_src

#+html: </details>

#+html: <details><summary>
**** Llama.cpp or Llamafile
#+html: </summary>

(If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] instead of a "command-line llamafile", and a model that supports text generation.)

Register a backend with
#+begin_src emacs-lisp
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '("test"))                    ;Any names, doesn't matter for Llama
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-openai= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "test"
      gptel-backend (gptel-make-openai "llama-cpp"
                      :stream t
                      :protocol "http"
                      :host "localhost:8000"
                      :models '("test")))
#+end_src

#+html: </details>
#+html: <details><summary>
**** Kagi (FastGPT & Summarizer)
#+html: </summary>

Kagi's FastGPT model and the Universal Summarizer are both supported. A couple of notes:

  1. Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.

  2. Kagi models do not support multi-turn conversations; interactions are "one-shot". They also do not support streaming responses.

Register a backend with
#+begin_src emacs-lisp
(gptel-make-kagi "Kagi"                 ;any name
  :key "YOUR_KAGI_API_KEY")             ;API key, or a function that returns it
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-kagi= for more.
#+html: </details>
