https://github.com/fleet-ai/context/assets/44193474/80381b25-551e-4602-8987-071e92354f6f
<br><br><br>
Install the package and run `context` to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI key to start a session.

```shell
pip install fleet-context
context
```
If you'd like to run the CLI tool locally, you can clone this repository, cd into it, then run:
```shell
pip install -e .
context
```
If you have an existing package that already uses the keyword `context`, you can also activate Fleet Context by running:

```shell
fleet-context
```
<br><br><br>
You can download any library's embeddings and load it up into a dataframe by running:
```python
from context import download_embeddings

df = download_embeddings("langchain")
```
```
100%|███████████████████████████████████████████████████████████████████████| 901k/901k [00:00<00:00, 2.64MiB/s]
                                     id                                   dense_embeddings                                           metadata                                      sparse_values
0  91cd9f22-b3b6-49e1-8672-e1e42a1cf766  [-0.014795871, -0.013938751, 0.02374646, -0.02...  {'id': '91cd9f22-b3b6-49e1-8672-e1e42a1cf766',...  {'indices': [4279915734, 3106554626, 771291085...
1  80cd620e-7408-4649-aaa7-3fe3c719b4ed  [-0.0027519625, 0.013772411, 0.0019546314, -0....  {'id': '80cd620e-7408-4649-aaa7-3fe3c719b4ed',...  {'indices': [1497795724, 573857107, 2203090375...
2  87a406ad-e413-42fc-8813-6fa042f80f6a  [-0.022883521, -0.0036436971, 0.0026068306, 0....  {'id': '87a406ad-e413-42fc-8813-6fa042f80f6a',...  {'indices': [1558403699, 640376310, 358389376,...
3  8bdd8dae-8384-414d-87d2-4390ca29d857  [-0.024882555, -0.0041470923, -0.011419726, -0...  {'id': '8bdd8dae-8384-414d-87d2-4390ca29d857',...  {'indices': [1558403699, 3778951566, 274301652...
4  8cc5eb61-317a-4196-8099-51c47ef70406  [-0.036361936, 0.0027855083, -0.013214805, -0....  {'id': '8cc5eb61-317a-4196-8099-51c47ef70406',...  {'indices': [3586802366, 1110127215, 161253108...
```
You can see a full list of supported libraries & search through them on our website at the bottom of the page.
<br>If you'd like to directly query from our hosted vector database, you can run:
```python
from context import query

results = query("How do I set up Langchain?")

# Print each chunk's text followed by its source url
for result in results:
    print(f"{result['metadata']['text']}\n{result['metadata']['url']}")
```
```python
[
    {
        'id': '859e8dff-f9ec-497d-aa07-344e48b2f67b',
        'score': 0.848275101,
        'values': [],
        'metadata': {
            'library_id': '4506492b-70de-49f1-ba2e-d65bd7048a28',
            'page_id': '732e264c-c077-4978-bc93-380d7dc28983',
            'parent': '3be9bbcc-b5d6-4a91-9f72-a570c2db33e5',
            'section_id': '',
            'section_index': 0.0,
            'text': "Quickstart ## Installation\u200b To install LangChain run: - Pip - Conda pip install langchain conda install langchain -c conda-forge For more details, see our Installation guide. ## Environment setup\u200b Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs. First we'll need to install their Python package: pip install openai Accessing the API requires an API key, which you can get by creating an account and heading here.",
            'title': 'Quickstart | 🦜️🔗 Langchain',
            'type': '',
            'url': 'https://python.langchain.com/docs/get_started/quickstart'
        }
    },
    # ...and 9 more
]
```
You can also set a custom k value and filter by any metadata field we support (listed below), plus `library_name`:

```python
results = query("How do I set up Langchain?", k=15, filters={"library_name": "langchain"})
```
One of the biggest advantages of using Fleet Context's embeddings is the amount of information preserved throughout the chunking and embeddings process. You can take advantage of the metadata to improve the quality of your retrievals significantly.
Here's a full list of the metadata we support:

**IDs:**

- `library_id`: the uuid of the library referenced
- `page_id`: the uuid of the page the chunk was retrieved from
- `parent`: the uuid of the section the chunk was retrieved from (not to be confused with `section_id`)

**Page/section information:**

- `url`: the url of the section or page the chunk was retrieved from, formatted as `f"{page_url}#{section_id}"`
- `section_id`: the section's `id` field from the html
- `section_index`: the ordering of the chunk within the section. If there are 2 chunks that have the same parent, this will tell you which one was presented first.

**Chunk information:**

- `title`: the title of the section, or of the page if a section title does not exist
- `text`: the text, formatted in markdown. Note that markdown is removed from the embeddings for better retrieval results.
- `type`: the type of the chunk. Can be `None` (most common) or a defined value like `class`, `function`, `attribute`, `data`, `exception`, and more.
, and more.section_index
Re-ranking is commonly known to improve results pretty dramatically. You can take that a step further here by exploiting the fact that the ordering within each section/page is preserved: feeding content to the model in the order it is presented to the reader will likely produce the best results. Use `section_index` to do a smart re-ranking of your chunks, as sketched below.
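A minimal sketch of what that re-ranking could look like, assuming results shaped like the `query()` output above. The grouping strategy is our own illustration, not a Fleet Context API:

```python
from collections import defaultdict

from context import query

results = query("How do I set up Langchain?", k=15)

# Keep sections in their original relevance order...
by_parent = defaultdict(list)
section_order = []
for result in results:
    parent = result["metadata"]["parent"]
    if parent not in by_parent:
        section_order.append(parent)
    by_parent[parent].append(result)

# ...then restore reading order within each section via section_index.
reranked = [
    chunk
    for parent in section_order
    for chunk in sorted(by_parent[parent], key=lambda r: r["metadata"]["section_index"])
]
```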
### `parent`
If you notice 2 or more chunks with the same `parent` field that sit relatively close together on the page (via `section_index`), you can go up one level: query all chunks with the same `parent` uuid and pass in the entire section, as in the sketch below.
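A hedged sketch of that pattern. It assumes `filters` accepts the `parent` metadata field; the examples above only show `library_name`, so treat that filter key as an assumption:

```python
from collections import Counter

from context import query

question = "How do I set up Langchain?"
results = query(question, k=15)

# Find the section that contributed the most chunks.
parents = Counter(r["metadata"]["parent"] for r in results)
top_parent, hits = parents.most_common(1)[0]

if hits >= 2:
    # Assumed filter key: fetch every chunk in that section, then
    # reassemble the section in reading order.
    section = query(question, k=50, filters={"parent": top_parent})
    section_text = "\n\n".join(
        r["metadata"]["text"]
        for r in sorted(section, key=lambda r: r["metadata"]["section_index"])
    )
```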
### `type`
On retrieval, you can map user intent and filter via `type`. If the user intends to generate code, you can pre-filter your retrieval to just `class` or `function` chunks. You can use this in creative ways; we've found that pairing it with OpenAI's function calling works really well. A sketch follows this paragraph.
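A sketch of intent-based pre-filtering, assuming `filters` accepts the `type` metadata field with a single value (whether it accepts a list of values is not documented here):

```python
from context import query

# Hypothetical intent flag; in practice this could come from a classifier
# or from OpenAI function calling.
user_wants_code = True

filters = {"type": "function"} if user_wants_code else None
results = query("How do I create a custom Langchain tool?", k=15, filters=filters)
```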
Also, `type` allows you to construct your prompt with more clarity and to display richer information to the user. For example, prefixing each chunk with its type produces better results, because it allows the language model to understand what the chunk is trying to say.

Note that `type` is not guaranteed to be present and defined for all libraries; it is only set for the ones whose documentation was generated by Sphinx/readthedocs.
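Continuing the sketch above, one hedged way to fold `type` into the prompt (the `[type]` labeling scheme is our own illustration):

```python
# Label each chunk with its `type` (falling back to "text" when undefined)
# so the model knows what kind of object it is reading.
prompt_context = "\n\n".join(
    f"[{r['metadata']['type'] or 'text'}] {r['metadata']['title']}\n{r['metadata']['text']}"
    for r in results
)
```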
### `text`

Our `text` field preserves all information from the HTML elements by converting them to Markdown. This has two big advantages: the model sees code blocks, tables, and headers intact, and you can render rich formatting when displaying chunks to the user. (Markdown is stripped from the embeddings themselves for better retrieval results, as noted above.)
### `url` and `section_id`

You can link the user to the exact section of the docs with `url`; where supported, it already includes the section anchor within the page.
<br><br><br>
You can use the `-l` or `--libraries` flag followed by a list of libraries to limit your session to those libraries. Defaults to all. View a list of all supported libraries on our website.
```shell
context -l langchain pydantic openai
```
You can select a different OpenAI model by using `-m` or `--model`. Defaults to `gpt-4`. You can set your model to `gpt-4-1106-preview` (gpt-4-turbo), `gpt-3.5-turbo`, or `gpt-3.5-turbo-16k`.
```shell
context -m gpt-4-1106-preview
```
You can use Claude, CodeLlama, Mistral, and many other models by setting `OPENROUTER_API_KEY` as an environment variable:

```shell
context -m phind/phind-codellama-34b
```
OpenAI models work this way as well; just use e.g. `openai/gpt-4-32k`. Other model options are available here.
Optionally, you can attribute your inference token usage to your app or website by setting `OPENROUTER_APP_URL` and `OPENROUTER_APP_TITLE`. Your app will show on the homepage of https://openrouter.ai if ranked.
Local model support is powered by LM Studio. To use local models, you can use `--local` or `-n`:

```shell
context --local
```
You need to download your local model through LM Studio and serve it from LM Studio's local inference server before running the command above.
The context window defaults to 3000. You can change this by using `--context_window` or `-w`:

```shell
context --local --context_window 4096
```
You can control the number of retrieved chunks by using `-k` or `--k_value` (defaults to 15), and you can toggle whether the model cites its sources by using `-c` or `--cite_sources` (defaults to true).

```shell
context -k 25 -c false
```
<br><br><br>
We saw a 37-point improvement in `gpt-4` generation scores and a 34-point improvement in `gpt-4-turbo` generation scores across a randomly sampled set of 50 libraries.

We attribute the `gpt-4` gain to its lack of knowledge of the most up-to-date versions of libraries, and the `gpt-4-turbo` gain to the combination of relevant, up-to-date information to generate with and the relevance of the retrieved information.
<br><br><br>
Check out our visualized data here.
You can download all embeddings here.