https://github.com/fleet-ai/context/assets/44193474/80381b25-551e-4602-8987-071e92354f6f
<br><br><br>
Install the package and run `context` to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI key to start a session.
```shell
pip install fleet-context
context
```
If you'd like to run the CLI tool locally, you can clone this repository, cd into it, then run:
```shell
pip install -e .
context
```
If you have an existing package that already uses the keyword `context`, you can also activate Fleet Context by running:
```shell
fleet-context
```
<br><br><br>
You can download any library's embeddings and load them into a dataframe by running:
```python
from context import download_embeddings

df = download_embeddings("langchain")
```
```
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 901k/901k [00:00<00:00, 2.64MiB/s]
                                     id                                   dense_embeddings                                           metadata                                      sparse_values
0  91cd9f22-b3b6-49e1-8672-e1e42a1cf766  [-0.014795871, -0.013938751, 0.02374646, -0.02...  {'id': '91cd9f22-b3b6-49e1-8672-e1e42a1cf766',...  {'indices': [4279915734, 3106554626, 771291085...
1  80cd620e-7408-4649-aaa7-3fe3c719b4ed  [-0.0027519625, 0.013772411, 0.0019546314, -0....  {'id': '80cd620e-7408-4649-aaa7-3fe3c719b4ed',...  {'indices': [1497795724, 573857107, 2203090375...
2  87a406ad-e413-42fc-8813-6fa042f80f6a  [-0.022883521, -0.0036436971, 0.0026068306, 0....  {'id': '87a406ad-e413-42fc-8813-6fa042f80f6a',...  {'indices': [1558403699, 640376310, 358389376,...
3  8bdd8dae-8384-414d-87d2-4390ca29d857  [-0.024882555, -0.0041470923, -0.011419726, -0...  {'id': '8bdd8dae-8384-414d-87d2-4390ca29d857',...  {'indices': [1558403699, 3778951566, 274301652...
4  8cc5eb61-317a-4196-8099-51c47ef70406  [-0.036361936, 0.0027855083, -0.013214805, -0....  {'id': '8cc5eb61-317a-4196-8099-51c47ef70406',...  {'indices': [3586802366, 1110127215, 161253108...
```
You can see a full list of supported libraries & search through them on our website at the bottom of the page.
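If you'd rather search the downloaded vectors locally instead of calling our hosted database, here's a minimal sketch. It assumes the dense vectors were produced with OpenAI's `text-embedding-ada-002` model (an assumption on our part; embed your query with whatever model actually matches the vectors) and uses `numpy` for cosine similarity. The `local_query` helper is hypothetical, not part of the package:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def local_query(df, question, k=5):
    # Embed the query with the same model that (we assume) produced the dense vectors.
    resp = client.embeddings.create(model="text-embedding-ada-002", input=question)
    q = np.array(resp.data[0].embedding)
    dense = np.array(df["dense_embeddings"].tolist())
    # Cosine similarity between the query and every chunk.
    scores = dense @ q / (np.linalg.norm(dense, axis=1) * np.linalg.norm(q))
    top = np.argsort(scores)[::-1][:k]
    return [(float(scores[i]), df.iloc[i]["metadata"]["text"]) for i in top]
```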
If you'd like to query our hosted vector database directly, you can run:
```python
from context import query

results = query("How do I set up Langchain?")
for result in results:
    print(f"{result['metadata']['title']}\n{result['metadata']['text']}")
```
```python
[
    {
        'id': '859e8dff-f9ec-497d-aa07-344e48b2f67b',
        'score': 0.848275101,
        'values': [],
        'metadata': {
            'library_id': '4506492b-70de-49f1-ba2e-d65bd7048a28',
            'page_id': '732e264c-c077-4978-bc93-380d7dc28983',
            'parent': '3be9bbcc-b5d6-4a91-9f72-a570c2db33e5',
            'section_id': '',
            'section_index': 0.0,
            'text': "Quickstart ## Installation\u200b To install LangChain run: - Pip - Conda pip install langchain conda install langchain -c conda-forge For more details, see our Installation guide. ## Environment setup\u200b Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs. First we'll need to install their Python package: pip install openai Accessing the API requires an API key, which you can get by creating an account and heading here.",
            'title': 'Quickstart | 🦜️🔗 Langchain',
            'type': '',
            'url': 'https://python.langchain.com/docs/get_started/quickstart'
        }
    },
    # ...and 9 more
]
```
You can also set a custom `k` value and filter by any metadata field we support (listed below), plus `library_name`:
```python
results = query("How do I set up Langchain?", k=15, filters={"library_name": "langchain"})
```
One of the biggest advantages of using Fleet Context's embeddings is the amount of information preserved throughout the chunking and embedding process. You can take advantage of the metadata to improve the quality of your retrievals significantly.
Here's a full list of metadata that we support.
IDs:

- `library_id`: the UUID of the library referenced
- `page_id`: the UUID of the page the chunk was retrieved from
- `parent`: the UUID of the section the chunk was retrieved from (not to be confused with `section_id`)

Page/section information:

- `url`: the URL of the section or page the chunk was retrieved from, formatted as `f"{page_url}#{section_id}"`
- `section_id`: the section's `id` field from the HTML
- `section_index`: the ordering of the chunk within the section. If two chunks have the same `parent`, this tells you which one was presented first.

Chunk information:

- `title`: the title of the section, or of the page if the section has no title
- `text`: the chunk text, formatted in Markdown. Note that Markdown is removed from the embeddings for better retrieval results.
- `type`: the type of the chunk. Can be `None` (most common) or a defined value like `class`, `function`, `attribute`, `data`, `exception`, and more.

**`section_index`**

Re-ranking is commonly known to improve results pretty dramatically. We can take that a step further and take advantage of the fact that the ordering within each section/page is preserved: presenting retrieved content to the model in the order it was presented to the reader will likely produce the best results.
Use `section_index` to do a smart re-ranking of your chunks.
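For example, a minimal re-ranking sketch (the `rerank_by_section` helper is ours, not part of the package; it assumes results shaped like the `query()` output above):

```python
def rerank_by_section(results):
    # Track the best retrieval score seen for each parent section.
    best_score = {}
    for r in results:
        parent = r["metadata"]["parent"]
        best_score[parent] = max(best_score.get(parent, 0.0), r["score"])
    # Order sections by their best score, then restore reading order
    # (section_index) for chunks within the same section.
    return sorted(
        results,
        key=lambda r: (
            -best_score[r["metadata"]["parent"]],
            r["metadata"]["parent"],
            r["metadata"]["section_index"],
        ),
    )
```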
**`parent`**

If you notice two or more chunks with the same `parent` field that sit relatively close together on the page (per `section_index`), you can go up one level, query all chunks with the same `parent` UUID, and pass in the entire section, as in the sketch below.
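A sketch of that "go up one level" step, assuming you also have the library's dataframe from `download_embeddings` on hand (`expand_to_parent` is a hypothetical helper):

```python
def expand_to_parent(result, df):
    # Collect every chunk that shares this chunk's parent section,
    # then stitch them back together in reading order.
    parent = result["metadata"]["parent"]
    siblings = [m for m in df["metadata"] if m["parent"] == parent]
    siblings.sort(key=lambda m: m["section_index"])
    return "\n\n".join(m["text"] for m in siblings)
```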
**`type`**

On retrieval, you can map user intent and filter via `type`. If the user intends to generate code, you can pre-filter your retrieval to just `class` or `function` chunks. You can use this in creative ways; we've found that pairing it with OpenAI's function calling works really well.

`type` also lets you construct your prompt with more clarity and display richer information to the user. For example, prefixing each chunk with its type in the prompt produces better results, because it helps the language model understand what the chunk is trying to say.

Note that `type` is not guaranteed to be present and defined for all libraries; it is only available for libraries whose documentation was generated with Sphinx/readthedocs.
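A sketch of that intent routing (we post-filter client-side here rather than assuming `type` is accepted as a server-side filter; the query string is just an example):

```python
from context import query

# When the user asks for code, keep only class/function chunks and
# label each chunk with its type so the model knows what it's reading.
results = query("How do I write a custom Pydantic validator?", k=15,
                filters={"library_name": "pydantic"})
code_chunks = [r for r in results
               if r["metadata"]["type"] in ("class", "function")]
prompt_context = "\n\n".join(
    f"[{r['metadata']['type']}] {r['metadata']['text']}" for r in code_chunks
)
```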
**`text`**

Our `text` field preserves all information from the HTML elements by converting them to Markdown. This allows for two big advantages:
**`url`** and **`section_id`**

You can link the user to the exact section with `url` (where supported, it is already pre-loaded with the section anchor within the page).
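For example, a short sketch that turns each retrieved chunk into a clickable citation, falling back to the URL when the title is empty:

```python
for r in results:
    meta = r["metadata"]
    label = meta["title"] or meta["url"]
    print(f"Source: [{label}]({meta['url']})")
```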
<br><br><br>
You can use `-l` or `--libraries` followed by a list of libraries to limit your session to a specific set of libraries. Defaults to all. View a list of all supported libraries on our website.
```shell
context -l langchain pydantic openai
```
You can select a different OpenAI model by using `-m` or `--model`. Defaults to `gpt-4`. You can set your model to `gpt-4-1106-preview` (gpt-4-turbo), `gpt-3.5-turbo`, or `gpt-3.5-turbo-16k`.
```shell
context -m gpt-4-1106-preview
```
You can use Claude, CodeLlama, Mistral, and many other models by setting `OPENROUTER_API_KEY` as an environment variable and passing an OpenRouter model id:

```shell
context -m phind/phind-codellama-34b
```
OpenAI models work this way as well; just use e.g. `openai/gpt-4-32k`. Other model options are available here.
Optionally, you can attribute your inference token usage to your app or website by setting `OPENROUTER_APP_URL` and `OPENROUTER_APP_TITLE` as environment variables. Your app will show on the homepage of https://openrouter.ai if ranked.
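For example (the exported values below are placeholders; use a model id as listed on openrouter.ai):

```shell
export OPENROUTER_API_KEY="sk-or-..."
export OPENROUTER_APP_URL="https://yourapp.example"
export OPENROUTER_APP_TITLE="Your App"
context -m anthropic/claude-2
```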
Local model support is powered by LM Studio. To use local models, use `--local` or `-n`:
```shell
context --local
```
You need to download your local model through LM Studio before starting a session.
The context window defaults to 3000 tokens. You can change this by using `--context_window` or `-w`:
```shell
context --local --context_window 4096
```
You can control the number of retrieved chunks with `-k` or `--k_value` (defaults to 15), and toggle whether the model cites its sources with `-c` or `--cite_sources` (defaults to true).
```shell
context -k 25 -c false
```
<br><br><br>
We saw a 37-point improvement in generation scores for gpt-4 and a 34-point improvement for gpt-4-turbo across a randomly sampled set of 50 libraries.
We attribute the gpt-4 improvement to its lack of knowledge of the most up-to-date versions of libraries, and the gpt-4-turbo improvement to the combination of having relevant, up-to-date information to generate with and the relevance of the retrieved information.
<br><br><br>
Check out our visualized data here.
You can download all embeddings here.
<img width="100%" alt="Screenshot 2023-11-06 at 10 01 22 PM"


免费创建高清无水印Sora视频
Vora是一个免费创建高清无水印Sora视频的AI工具


最适合小白的AI自动化工作流平台
无需编码,轻松生成可复用、可变现的AI自动化工作流

大模型驱动的Excel数据处理工具
基于大模型交互的表格处理系统,允许用户通过对话方式完成数据整理和可视化分析。系统采用机器学习算法解析用户指令,自动执行排序、公式计算和数据透视等操作,支持多种文件格式导入导出。数据处理响应速度保持在0.8秒以内,支持超过100万行数据的即时分析。


AI辅助编程,代码自动修复
Trae是一种自适应的集成开发环境(IDE),通过自动化和多元协作改变开发流程。利用Trae,团队能够更快速、精确地编写和部署代码,从而提高编程效率和项目交付速度。Trae具备上下文感知和代码自动完成功能,是提升开发效率的理想工具。


AI论文写作指导平台
AIWritePaper论文写作是一站式AI论文写作辅助工具,简化了选题、文献检索至论文撰写的整个过程。通过简单设定,平台可快速生成高质量论文大纲和全文,配合图表、参考文献等一应俱全,同时提供开题报告和答辩PPT等增值服务,保障数据安全,有效提升写作效率和论文质量。


AI一键生成PPT,就用博思AIPPT!
博思AIPPT,新一代的AI生成PPT平台,支持智能生成PPT、AI美化PPT、文本&链接生成PPT、导入Word/PDF/Markdown文档生成PPT等,内置 海量精美PPT模板,涵盖商务、教育、科技等不同风格,同时针对每个页面提供多种版式,一键自适应切换,完美适配各种办公场景。


AI赋能电商视觉革命,一站式智能商拍平台
潮际好麦深耕服装行业,是国内AI试衣效果最好的软件。使用先进AIGC能力为电商卖家批量提供优质的、低成本的商拍图。合作品牌有Shein、Lazada、安踏、百丽等65个国内外头部品牌,以及国内10万+淘宝、天猫、京东等主流平台的品牌商家,为卖家节省将近85%的出图成本,提升约3倍出图效率,让品牌能够快速上架。


企业专属的AI法律顾问
iTerms是法大大集团旗下法律子品牌,基于最先进的大语言模型(LLM)、专业的法律知识库和强大的智能体架构,帮助企业扫清合规障碍,筑牢风控防线,成为您企业专属的AI法律顾问。


稳定高效的流量提升解决方案,助力品牌曝光
稳定高效的流量提升解决方案,助力品牌曝光


最新版Sora2模型免费使用,一键生成无水印视频
最新版Sora2模型免费使用,一键生成无水印视频
最新AI工具、AI资讯
独家AI资源、AI项目落地

微信扫一扫关注公众号