DeepPavlov Dream is a platform for creating multi-skill generative AI assistants.
To learn more about the platform and how to build AI assistants with it, please visit Dream. If you want to learn more about the DeepPavlov Agent that powers Dream, visit the DeepPavlov Agent documentation.
We've already included six distributions: four of them are based on the lightweight Deepy socialbot, one is a full-sized Dream chatbot in English (based on the Alexa Prize Challenge version), and one is a Dream chatbot in Russian.
Base version of the Lunar assistant. Deepy Base contains the Spelling Preprocessing annotator, the template-based Harvesters Maintenance Skill, and the AIML-based open-domain Program-y Skill (based on Dialog Flow Framework).
Advanced version of the Lunar assistant. Deepy Advanced contains the Spelling Preprocessing, Sentence Segmentation, Entity Linking, and Intent Catcher annotators, the Harvesters Maintenance GoBot Skill for goal-oriented responses, and the AIML-based open-domain Program-y Skill (based on Dialog Flow Framework).
FAQ version of the Lunar assistant. Deepy FAQ contains the Spelling Preprocessing annotator, the template-based Frequently Asked Questions Skill, and the AIML-based open-domain Program-y Skill (based on Dialog Flow Framework).
Goal-oriented version of the Lunar assistant. Deepy GoBot Base contains the Spelling Preprocessing annotator, the Harvesters Maintenance GoBot Skill for goal-oriented responses, and the AIML-based open-domain Program-y Skill (based on Dialog Flow Framework).
Full version of DeepPavlov Dream Socialbot.
This is almost the same version of the DREAM socialbot as at the end of Alexa Prize Challenge 4. Some API services are replaced with trainable models. Some services (e.g., News Annotator, Game Skill, Weather Skill) require private keys for the underlying APIs; most of them can be obtained for free. If you want to use these services in local deployments, add your keys to the environment variables (e.g., in ./.env, ./.env_ru).
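For illustration, a minimal sketch of such entries (the variable names below are hypothetical; check the comments in ./.env for the exact names each service expects):

# hypothetical key names for the News Annotator and Weather Skill
GNEWS_API_KEY=your_gnews_api_key
OPENWEATHERMAP_API_KEY=your_openweathermap_api_key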
This version of Dream Socialbot consumes a lot of resources
because of its modular architecture and original goals (participation in Alexa Prize Challenge).
We provide a demo of Dream Socialbot on our website.
Mini version of DeepPavlov Dream Socialbot. This is a generative-based socialbot that uses the English DialoGPT model to generate most of the responses. It also contains intent catcher and responder components to cover special user requests. Link to the distribution.
Russian version of DeepPavlov Dream Socialbot. This is a generative-based socialbot that uses the Russian DialoGPT model by DeepPavlov to generate most of the responses. It also contains intent catcher and responder components to cover special user requests. Link to the distribution.
Mini version of DeepPavlov Dream Socialbot that uses prompt-based generative models. This is a generative-based socialbot that uses large language models to generate most of the responses. You can upload your own prompts (JSON files) to common/prompts, add the prompt names to PROMPTS_TO_CONSIDER (comma-separated), and the provided information will be used in LLM-powered reply generation as a prompt.
Link to the distribution.
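As an illustration of the steps above, a minimal sketch of adding a custom prompt (the file name and prompt text are invented; the "prompt"/"goals" keys are assumed to mirror the existing files in common/prompts, and PROMPTS_TO_CONSIDER is assumed to be set in the distribution's docker-compose config):

# create a hypothetical prompt file next to the existing ones
cat > common/prompts/space_travel.json << 'EOF'
{
  "prompt": "You are a friendly space travel guide. Answer questions about spaceflight.",
  "goals": "Tell the user about space travel."
}
EOF
# then list the prompt name (without .json), comma-separated, in PROMPTS_TO_CONSIDER:
PROMPTS_TO_CONSIDER=dream_persona,space_travel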
Requirements: docker (version 20 and above), docker-compose (v1.29.2).

Clone the repository:

git clone https://github.com/deeppavlov/dream.git
If you get a "Permission denied" error running docker-compose, make sure to configure your docker user correctly.
Deepy Base:
docker-compose -f docker-compose.yml -f assistant_dists/deepy_base/docker-compose.override.yml up --build

Deepy Advanced:
docker-compose -f docker-compose.yml -f assistant_dists/deepy_adv/docker-compose.override.yml up --build

Deepy FAQ:
docker-compose -f docker-compose.yml -f assistant_dists/deepy_faq/docker-compose.override.yml up --build

Deepy GoBot Base:
docker-compose -f docker-compose.yml -f assistant_dists/deepy_gobot_base/docker-compose.override.yml up --build
The easiest way to try out Dream is to deploy it via proxy. All the requests will be redirected to DeepPavlov API, so you don't have to use any local resources. See proxy usage for details.
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml -f assistant_dists/dream/proxy.yml up --build
Please note that DeepPavlov Dream components require a lot of resources. Refer to the components section to see estimated requirements.
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml up --build
We've also included a config with GPU allocations for multi-GPU environments:
AGENT_PORT=4242 docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml -f assistant_dists/dream/test.yml up
When you need to restart a particular docker container without re-building (make sure the mapping in assistant_dists/dream/dev.yml is correct):
AGENT_PORT=4242 docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml restart container-name
docker-compose -f docker-compose.yml -f assistant_dists/dream_persona_prompted/docker-compose.override.yml -f assistant_dists/dream_persona_prompted/dev.yml -f assistant_dists/dream_persona_prompted/proxy.yml up --build
We've also included a config with GPU allocations for multi-GPU environments.
DeepPavlov Agent provides several options for interaction: a command line interface, an HTTP API, and a Telegram bot.
In a separate terminal tab run:
docker-compose exec agent python -m deeppavlov_agent.run agent.channel=cmd agent.pipeline_config=assistant_dists/dream/pipeline_conf.json
Enter your username and have a chat with Dream!
Once you've started the bot, the DeepPavlov Agent API will run on http://localhost:4242.
You can learn about the API from the DeepPavlov Agent Docs.
A basic chat interface will be available at http://localhost:4242/chat.
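For a quick smoke test, a minimal sketch of a request to the running agent (the payload format follows the DeepPavlov Agent docs; user_id is an arbitrary string identifying the dialog):

# send one utterance and read back the bot's reply as JSON
curl -X POST http://localhost:4242 \
  -H "Content-Type: application/json" \
  -d '{"user_id": "test_user", "payload": "Hello!"}'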
Currently, a Telegram bot is deployed instead of the HTTP API.
Edit the agent command definition inside the docker-compose.override.yml config:
agent:
  command: sh -c 'bin/wait && python -m deeppavlov_agent.run agent.channel=telegram agent.telegram_token=<TELEGRAM_BOT_TOKEN> agent.pipeline_config=assistant_dists/dream/pipeline_conf.json'
NOTE: treat your Telegram token as a secret and do not commit it to public repositories!
Dream uses several docker-compose configuration files:
./docker-compose.yml is the core config which includes containers for DeepPavlov Agent and the mongo database;
./assistant_dists/*/docker-compose.override.yml lists all components for the distribution;
./assistant_dists/dream/dev.yml includes volume bindings for easier Dream debugging;
./assistant_dists/dream/proxy.yml is a list of proxied containers.
If your deployment resources are limited, you can replace containers with their proxied copies hosted by DeepPavlov.
To do this, override those container definitions inside proxy.yml, e.g.:
convers-evaluator-annotator:
  command: ["nginx", "-g", "daemon off;"]
  build:
    context: dp/proxy/
    dockerfile: Dockerfile
  environment:
    - PROXY_PASS=proxy.deeppavlov.ai:8004
    - SERVICE_PORT=8004
and include this config in your deployment command:
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml -f assistant_dists/dream/proxy.yml up --build
By default, proxy.yml contains all available proxy definitions.
Dream Architecture is presented in the following image:
Name | Requirements | Description |
---|---|---|
Rule Based Selector | | Algorithm that selects a list of skills to generate candidate responses to the current context, based on topics, entities, emotions, toxicity, dialogue acts, and dialogue history
Response Selector | 50 MB RAM | Algorithm that selects the final response from the given list of candidate responses
Name | Requirements | Description |
---|---|---|
ASR | 40 MB RAM | calculates overall ASR confidence for a given utterance and grades it as either very low, low, medium, or high (for Amazon markup) |
Badlisted Words | 150 MB RAM | detects words and phrases from the badlist |
Combined Classification | 1.5 GB RAM, 3.5 GB GPU | BERT-based model covering topic classification, dialog acts classification, sentiment, toxicity, emotion, and factoid classification
Combined Classification lightweight | 1.6 GB RAM | The same model as Combined Classification, but takes 42% less time thanks to the lighter backbone |
COMeT Atomic | 2 GB RAM, 1.1 GB GPU | commonsense prediction model COMeT Atomic
COMeT ConceptNet | 2 GB RAM, 1.1 GB GPU | commonsense prediction model COMeT ConceptNet
Convers Evaluator Annotator | 1 GB RAM, 4.5 GB GPU | is trained on the Alexa Prize data from the previous competitions and predicts whether the candidate response is interesting, comprehensible, on-topic, engaging, or erroneous |
Emotion Classification | 2.5 GB RAM | emotion classification annotator |
Entity Detection | 1.5 GB RAM, 3.2 GB GPU | extracts entities and their types from utterances |
Entity Linking | 2.5 GB RAM, 1.3 GB GPU | finds Wikidata entity ids for the entities detected with Entity Detection |
Entity Storer | 220 MB RAM | a rule-based component that stores entities from the user's and socialbot's utterances if an opinion expression is detected with patterns or the MIDAS Classifier, and saves them along with the detected attitude to the dialogue state
Fact Random | 50 MB RAM | returns random facts for the given entity (for entities from user utterance) |
Fact Retrieval | 7.4 GB RAM, 1.2 GB GPU | extracts facts from Wikipedia and wikiHow |
Intent Catcher | 1.7 GB RAM, 2.4 GB GPU | classifies user utterances into a number of predefined intents which are trained on a set of phrases and regexps |
KBQA | 2 GB RAM, 1.4 GB GPU | answers user's factoid questions based on Wikidata KB |
MIDAS Classification | 1.1 GB RAM, 4.5 GB GPU | BERT-based model trained on a semantic classes subset of MIDAS dataset |
MIDAS Predictor | 30 MB RAM | BERT-based model trained on a semantic classes subset of MIDAS dataset |
NER | 2.2 GB RAM, 5 GB GPU | extracts person names, locations, and organizations from uncased text
News API Annotator | 80 MB RAM | extracts the latest news about entities or topics using the GNews API. DeepPavlov Dream deployments utilize our own API key. |
Personality Catcher | 30 MB RAM | a skill that changes the system's personality description via the chat interface; it works as a system command, and the response is a system-like message
Prompt Selector | 50 MB RAM | annotator utilizing Sentence Ranker to rank prompts and select the N_SENTENCES_TO_RETURN most relevant prompts (based on questions provided in the prompts)
Property Extraction | 6.3 GB RAM | extracts user attributes from utterances
Rake Keywords | 40 MB RAM | extracts keywords from utterances with the help of RAKE algorithm |
Relative Persona Extractor | 50 MB RAM | annotator utilizing Sentence Ranker to rank persona sentences and select the N_SENTENCES_TO_RETURN most relevant sentences
Sentrewrite | 200 MB RAM | rewrites user's utterances by replacing pronouns with specific names that provide more useful information to downstream components |
Sentseg | 1 GB RAM | allows us to handle long and complex user utterances by splitting them into sentences and recovering punctuation
Spacy Nounphrases | 180 MB RAM | extracts nounphrases using Spacy and filters out generic ones |
Speech Function Classifier | 1.1 GB RAM, 4.5 GB GPU | a hierarchical algorithm based on several linear models and a rule-based approach for the prediction of speech functions described by Eggins and Slade |
Speech Function Predictor | 1.1 GB RAM, 4.5 GB GPU | yields probabilities of speech functions that can follow a speech function predicted by Speech Function Classifier |
Spelling Preprocessing | 50 MB RAM | pattern-based component to rewrite different colloquial expressions to a more formal style of conversation |
Topic Recommendation | 40 MB RAM | offers a topic for further conversation using information about the discussed topics and the user's preferences. The current version is based on Reddit personalities (see the Dream Report for Alexa Prize 4).
Toxic Classification | 3.5 GB RAM, 3 GB GPU | Toxic classification model from Transformers specified as PRETRAINED_MODEL_NAME_OR_PATH |
User Persona Extractor | 40 MB RAM | determines which age category the user belongs to based on some keywords
Wiki Parser | 100 MB RAM | extracts Wikidata triplets for the entities detected with Entity Linking |
Wiki Facts | 1.7 GB RAM | model that extracts related facts from Wikipedia and WikiHow pages |
Name | Requirements | Description |
---|---|---|
DialoGPT | 1.2 GB RAM, 2.1 GB GPU | generative service based on a Transformers generative model; the model is set in the docker-compose argument PRETRAINED_MODEL_NAME_OR_PATH (for example, microsoft/DialoGPT-small with 0.2-0.5 sec on GPU) |
DialoGPT Persona-based | 1.2 GB RAM, 2.1 GB GPU | generative service based on a Transformers generative model; the model was pre-trained on the PersonaChat dataset to generate a response conditioned on several sentences of the socialbot's persona |
Image Captioning | 4 GB RAM, 5.4 GB GPU | creates text representation of a received image |
Infilling | 1 GB RAM, 1.2 GB GPU | (turned off, but the code is available) generative service based on the Infilling model; for the given utterance, returns an utterance where _ tokens from the original text are replaced with generated tokens |
Knowledge Grounding | 2 GB RAM, 2.1 GB GPU | generative service based on BlenderBot architecture providing a response to the context taking into account an additional text paragraph |
Masked LM | 1.1 GB RAM, 1 GB GPU | (turned off but the code is available) |
Seq2seq Persona-based | 1.5 GB RAM, 1.5 GB GPU | generative service based on a Transformers seq2seq model; the model was pre-trained on the PersonaChat dataset to generate a response conditioned on several sentences of the socialbot's persona |
Sentence Ranker | 1.2 GB RAM, 2.1 GB GPU | ranking model given as PRETRAINED_MODEL_NAME_OR_PATH; for a pair of sentences, returns a float score of their correspondence |