#+title: gptel: A simple LLM client for Emacs
[[https://elpa.nongnu.org/nongnu/gptel.html][file:https://elpa.nongnu.org/nongnu/gptel.svg]] [[https://stable.melpa.org/#/gptel][file:https://stable.melpa.org/packages/gptel-badge.svg]] [[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.
#+html: <div align="center">
| LLM Backend        | Supports | Requires                   |
|--------------------+----------+----------------------------|
| ChatGPT            | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                    |
| Azure              | ✓        | Deployment and API key     |
| Ollama             | ✓        | [[https://ollama.ai/][Ollama running locally]]     |
| GPT4All            | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]    |
| Gemini             | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                    |
| Llama.cpp          | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]]  |
| Llamafile          | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]     |
| Kagi FastGPT       | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
| Kagi Summarizer    | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
| together.ai        | ✓        | [[https://api.together.xyz/settings/api-keys][API key]]                    |
| Anyscale           | ✓        | [[https://docs.endpoints.anyscale.com/][API key]]                    |
| Perplexity         | ✓        | [[https://docs.perplexity.ai/docs/getting-started][API key]]                    |
| Anthropic (Claude) | ✓        | [[https://www.anthropic.com/api][API key]]                    |
| Groq               | ✓        | [[https://console.groq.com/keys][API key]]                    |
| OpenRouter         | ✓        | [[https://openrouter.ai/keys][API key]]                    |
| PrivateGPT         | ✓        | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |
| DeepSeek           | ✓        | [[https://platform.deepseek.com/api_keys][API key]]                    |
#+html: </div>
General usage: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4
https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4
Multi-LLM support demo:
gptel uses Curl if available, but falls back to the built-in =url-retrieve= to work without external dependencies.
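Whether Curl is used is controlled by the =gptel-use-curl= option. A minimal sketch, if you want to force the =url-retrieve= fallback even when Curl is installed:
#+begin_src emacs-lisp
;; Optional: don't use Curl, rely on Emacs' built-in url-retrieve instead
(setq gptel-use-curl nil)
#+end_src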
** Contents :toc:
** Installation
gptel can be installed in Emacs out of the box with =M-x package-install= ⏎ =gptel=. This installs the latest release.
If you want the development version instead, add MELPA (or NonGNU-devel ELPA) to your list of sources, then install it with =M-x package-install⏎= =gptel=.
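If MELPA is not yet in your package sources, a minimal sketch for adding it (this is the standard MELPA recipe, not specific to gptel):
#+begin_src emacs-lisp
;; Add MELPA so the development version of gptel is visible to package-install
(require 'package)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
#+end_src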
(Optional: Install =markdown-mode=.)
#+html: <details><summary>
**** Straight
#+html: </summary>
#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src
Installing the =markdown-mode= package is optional.
#+html: </details>
#+html: <details><summary>
**** Manual
#+html: </summary>
Clone or download this repository and run =M-x package-install-file⏎= on the repository directory.
Installing the =markdown-mode= package is optional.
#+html: </details>
#+html: <details><summary>
**** Doom Emacs
#+html: </summary>
In =packages.el=
#+begin_src emacs-lisp
(package! gptel)
#+end_src
In =config.el=
#+begin_src emacs-lisp
(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
#+end_src
"your key" can be the API key itself, or (safer) a function that returns the key. Setting =gptel-api-key= is optional; you will be asked for a key if it's not found.
#+html: </details>
#+html: <details><summary>
**** Spacemacs
#+html: </summary>
After installation with =M-x package-install⏎= =gptel=
Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more secure method, such as storing the key in =~/.authinfo= or setting =gptel-api-key= to a function that returns the key.
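As a sketch of the function-based approach, assuming the key is stored in =~/.authinfo= with =api.openai.com= as the host and =apikey= as the user (both are illustrative choices; adjust them to your entry):
#+begin_src emacs-lisp
;; Illustrative example: look the key up in ~/.authinfo via auth-source
(setq gptel-api-key
      (lambda ()
        (auth-source-pick-first-password :host "api.openai.com" :user "apikey")))
#+end_src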
#+html: </details>
*** Other LLM backends
#+html: <details><summary>
**** Azure
#+html: </summary>
Register a backend with
#+begin_src emacs-lisp
(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '("gpt-3.5-turbo" "gpt-4"))
#+end_src
Refer to the documentation of =gptel-make-azure= to set more parameters.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gpt-3.5-turbo"
      gptel-backend
      (gptel-make-azure "Azure-1"
        :protocol "https"
        :host "YOUR_RESOURCE_NAME.openai.azure.com"
        :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
        :stream t
        :key #'gptel-api-key
        :models '("gpt-3.5-turbo" "gpt-4")))
#+end_src
#+html: </details>
#+html: <details><summary>
**** GPT4All
#+html: </summary>
Register a backend with
#+begin_src emacs-lisp
(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                ;Where it's running
  :models '("mistral-7b-openorca.Q4_0.gguf")) ;Available models
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-gpt4all= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-max-tokens 500
      gptel-model "mistral-7b-openorca.Q4_0.gguf"
      gptel-backend
      (gptel-make-gpt4all "GPT4All"
        :protocol "http"
        :host "localhost:4891"
        :models '("mistral-7b-openorca.Q4_0.gguf")))
#+end_src
#+html: </details>
#+html: <details><summary>
**** Ollama
#+html: </summary>
Register a backend with
#+begin_src emacs-lisp
(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '("mistral:latest"))          ;List of models
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-ollama= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mistral:latest"
      gptel-backend
      (gptel-make-ollama "Ollama"
        :host "localhost:11434"
        :stream t
        :models '("mistral:latest")))
#+end_src
#+html: </details>
#+html: <details><summary>
**** Gemini
#+html: </summary>
Register a backend with
#+begin_src emacs-lisp
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini"
  :key "YOUR_GEMINI_API_KEY"
  :stream t)
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-gemini= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gemini-pro"
      gptel-backend
      (gptel-make-gemini "Gemini"
        :key "YOUR_GEMINI_API_KEY"
        :stream t))
#+end_src
#+html: </details>
#+html: <details><summary>
**** Llama.cpp or Llamafile
#+html: </summary>
(If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] instead of a "command-line llamafile", and a model that supports text generation.)
Register a backend with
#+begin_src emacs-lisp
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '("test"))                    ;Any names, doesn't matter for Llama
#+end_src
These are the required parameters; refer to the documentation of =gptel-make-openai= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "test"
      gptel-backend
      (gptel-make-openai "llama-cpp"
        :stream t
        :protocol "http"
        :host "localhost:8000"
        :models '("test")))
#+end_src
#+html: </details>
#+html: <details><summary>
**** Kagi (FastGPT & Summarizer)
#+html: </summary>
Kagi's FastGPT model and the Universal Summarizer are both supported. A couple of notes:
Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
Kagi models do not support multi-turn conversations; interactions are "one-shot". They also do not support streaming responses.
Register a backend with
#+begin_src emacs-lisp
(gptel-make-kagi "Kagi"                 ;any name
  :key "YOUR_KAGI_API_KEY")             ;:key can be a function that returns the key
#+end_src
Refer to the documentation of =gptel-make-kagi= for more parameters.
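As with the other backends, you can optionally make Kagi the default by setting =gptel-backend= and =gptel-model=. A minimal sketch, assuming =fastgpt= is the model name registered by =gptel-make-kagi=:
#+begin_src emacs-lisp
;; OPTIONAL configuration -- make Kagi the default backend
(setq gptel-model "fastgpt"             ;assumed model name
      gptel-backend (gptel-make-kagi "Kagi" :key "YOUR_KAGI_API_KEY"))
#+end_src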
#+html: </details>