# Reinforcement Learning Papers

PRs Welcome

Related papers for Reinforcement Learning (we mainly focus on single-agent).

Since there are tens of thousands of new papers on reinforcement learning at each conference every year, we are only able to list those we have read and consider insightful.

We have added some ICLR22, ICML22, NeurIPS22, ICLR23, ICML23, NeurIPS23, ICLR24, ICML24 papers on RL.

<!-- NeurIPS23 page 71 ICML24 page21-->

## Contents

<a id='Model-Free-Online'></a>

## Model Free (Online) RL

<!-- ## <span id='Model-Free-Online'>Model Free (Online) RL</span> ### <span id='classic'>Classic Methods</span> -->

<a id='model-free-classic'></a>

### Classic Methods

| Title | Method | Conference | on/off policy | Action Space | Policy | Description |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Human-level control through deep reinforcement learning, [other link] | DQN | Nature15 | off | Discrete | based on value function | use a deep neural network to train Q-learning and reach human-level performance in the Atari games; two main tricks: a replay buffer to improve sample efficiency, and decoupling the target network from the behavior network (sketched below the table) |
| Deep reinforcement learning with double q-learning | Double DQN | AAAI16 | off | Discrete | based on value function | find that the Q function in DQN may overestimate; decouple calculating the Q function and choosing the action with two neural networks |
| Dueling network architectures for deep reinforcement learning | Dueling DQN | ICML16 | off | Discrete | based on value function | use the same neural network to approximate the Q function and the value function for calculating the advantage function |
| Prioritized Experience Replay | Priority Sampling | ICLR16 | off | Discrete | based on value function | give different weights to the samples in the replay buffer (e.g. by TD error) |
| Rainbow: Combining Improvements in Deep Reinforcement Learning | Rainbow | AAAI18 | off | Discrete | based on value function | combine different improvements to DQN: Double DQN, Dueling DQN, Priority Sampling, Multi-step learning, Distributional RL, Noisy Nets |
| Policy Gradient Methods for Reinforcement Learning with Function Approximation | PG | NeurIPS99 | on/off | Continuous or Discrete | function approximation | propose the Policy Gradient Theorem: how to calculate the gradient of the expected cumulative return with respect to the policy |
| ---- | AC/A2C | ---- | on/off | Continuous or Discrete | parameterized neural network | AC: replace the return in PG with a Q-function approximator to reduce variance; A2C: replace the Q function in AC with the advantage function to further reduce variance |
| Asynchronous Methods for Deep Reinforcement Learning | A3C | ICML16 | on/off | Continuous or Discrete | parameterized neural network | propose three tricks to improve performance: (i) use different agents to interact with the environment in parallel; (ii) share network parameters between the value function and the policy; (iii) modify the loss function (MSE of the value function + PG loss + policy entropy) |
| Trust Region Policy Optimization | TRPO | ICML15 | on | Continuous or Discrete | parameterized neural network | introduce a trust region into policy optimization for guaranteed monotonic improvement |
| Proximal Policy Optimization Algorithms | PPO | arxiv17 | on | Continuous or Discrete | parameterized neural network | replace the hard constraint of TRPO with a penalty obtained by clipping the probability ratio (sketched below the table) |
| Deterministic Policy Gradient Algorithms | DPG | ICML14 | off | Continuous | function approximation | consider deterministic policies for continuous action spaces and prove the Deterministic Policy Gradient Theorem; use a stochastic behaviour policy to encourage exploration |
| Continuous Control with Deep Reinforcement Learning | DDPG | ICLR16 | off | Continuous | parameterized neural network | adapt the ideas of DQN to DPG: (i) deep neural network function approximators, (ii) replay buffer, (iii) fix the target Q function at each epoch |
| Addressing Function Approximation Error in Actor-Critic Methods | TD3 | ICML18 | off | Continuous | parameterized neural network | adapt the ideas of Double DQN to DDPG: take the minimum value between a pair of critics to limit overestimation |
| Reinforcement Learning with Deep Energy-Based Policies | SQL | ICML17 | off | mainly Continuous | parameterized neural network | consider max-entropy RL and propose soft Q iteration as well as soft Q-learning |
| Soft Actor-Critic Algorithms and Applications, Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, [appendix] | SAC | ICML18 | off | mainly Continuous | parameterized neural network | build on the theoretical analysis of SQL and extend soft Q iteration (soft policy evaluation + soft policy improvement); reparameterize the policy and use two parameterized value functions; propose SAC |
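
The DQN and Double DQN rows above revolve around how the TD target is computed. The snippet below is a minimal PyTorch sketch, not code from any of the listed papers; names such as `q_net`, `target_net`, and the `batch` keys are illustrative assumptions. It shows the target-network trick and the Double DQN variant that decouples action selection from evaluation.

```python
import torch
import torch.nn.functional as F

def dqn_targets(batch, q_net, target_net, gamma=0.99, double=False):
    """batch holds tensors: obs, action (long, (B,)), reward (B,), next_obs, done (float, (B,))."""
    with torch.no_grad():
        next_q = target_net(batch["next_obs"])  # (B, num_actions) from the frozen target network
        if double:
            # Double DQN: the behavior network selects the action, the target network evaluates it
            next_a = q_net(batch["next_obs"]).argmax(dim=1, keepdim=True)
            next_v = next_q.gather(1, next_a).squeeze(1)
        else:
            # vanilla DQN: max over the target network's own estimates
            next_v = next_q.max(dim=1).values
        return batch["reward"] + gamma * (1.0 - batch["done"]) * next_v

def dqn_loss(batch, q_net, target_net, **kwargs):
    # Q-values of the actions actually taken, regressed towards the TD targets
    q = q_net(batch["obs"]).gather(1, batch["action"].unsqueeze(1)).squeeze(1)
    return F.mse_loss(q, dqn_targets(batch, q_net, target_net, **kwargs))
```

In this sketch `target_net` would be a periodically synced copy of `q_net`, which is what the DQN row calls decoupling the target network from the behavior network.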

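Likewise, a rough sketch of the clipped surrogate loss referenced in the PPO row, again with illustrative tensor names rather than code from the paper:

```python
import torch

def ppo_clip_loss(log_prob, old_log_prob, advantage, clip_eps=0.2):
    # probability ratio pi_theta(a|s) / pi_theta_old(a|s), computed from log-probabilities
    ratio = torch.exp(log_prob - old_log_prob)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # maximize the pessimistic (clipped) surrogate, i.e. minimize its negation
    return -torch.min(unclipped, clipped).mean()
```

Clipping the ratio to [1 - eps, 1 + eps] removes the incentive to move the policy far from the data-collecting policy, which is how PPO replaces TRPO's explicit trust-region constraint.
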
<a id='exploration'></a>

### Exploration

| Title | Method | Conference | Description |
| ---- | ---- | ---- | ---- |
| Curiosity-driven Exploration by Self-supervised Prediction | ICM | ICML17 | propose that curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills when rewards are sparse; formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model (sketched after this table) |
| Curiosity-Driven Exploration via Latent Bayesian Surprise | LBS | AAAI22 | apply Bayesian surprise in a latent space representing the agent's current understanding of the dynamics of the system |
| Automatic Intrinsic Reward Shaping for Exploration in Deep Reinforcement Learning | AIRS | ICML23 | select a shaping function from a predefined set based on the estimated task return in real time, providing reliable exploration incentives and alleviating the biased objective problem; develop a toolkit that provides high-quality implementations of various intrinsic reward modules based on PyTorch |
| Curiosity in Hindsight: Intrinsic Exploration in Stochastic Environments | Curiosity in Hindsight | ICML23 | consider exploration in stochastic environments; learn representations of the future that capture precisely the unpredictable aspects of each outcome, which are used as additional input for predictions, such that intrinsic rewards only reflect the predictable aspects of world dynamics |
| Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration | ---- | NeurIPS23 spotlight | |
| MIMEx: Intrinsic Rewards from Masked Input Modeling | MIMEx | NeurIPS23 | propose that the mask distribution can be flexibly tuned to control the difficulty of the underlying conditional prediction task |
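
As a concrete illustration of the curiosity bonus referenced in the ICM row, the following PyTorch sketch assumes a feature encoder `phi` and a `forward_model` trained elsewhere (e.g. jointly with an inverse dynamics model, as ICM does); the names and the scaling factor `eta` are illustrative assumptions, not the paper's code.

```python
import torch

def curiosity_reward(phi, forward_model, obs, action, next_obs, eta=0.01):
    """Intrinsic reward = scaled prediction error of the forward model in feature space."""
    with torch.no_grad():
        feat, next_feat = phi(obs), phi(next_obs)
        pred_next_feat = forward_model(feat, action)
        # larger prediction error -> more "surprising" transition -> larger bonus
        return eta * 0.5 * (pred_next_feat - next_feat).pow(2).sum(dim=-1)
```

This bonus is simply added to the (possibly sparse) extrinsic reward when training the policy.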
<!--<a id='off-policy-evaluation'></a> ### Off-Policy Evaluation | Title | Method | Conference | Description | | ---- | ---- | ---- | ---- | | [Weighted importance sampling for off-policy learning with linear function approximation](https://proceedings.neurips.cc/paper/2014/file/be53ee61104935234b174e62a07e53cf-Paper.pdf) | WIS-LSTD | NeurIPS14 | | | [Importance Sampling Policy Evaluation with an Estimated Behavior Policy](https://arxiv.org/pdf/1806.01347.pdf) | RIS | ICML19 | | | [On the Reuse Bias in Off-Policy Reinforcement Learning](https://arxiv.org/pdf/2209.07074.pdf) | BIRIS | IJCAI23 | discuss the bias of off-policy evaluation due to reusing the replay buffer; derive a high-probability bound of the Reuse Bias; introduce the concept of stability for off-policy algorithms and provide an upper bound for stable off-policy algorithms | <a id='soft-rl'></a> ### Soft RL | Title | Method | Conference | Description | | ---- | ---- | ---- | ---- | | [A Max-Min Entropy Framework for Reinforcement Learning](https://arxiv.org/pdf/2106.10517.pdf) | MME | NeurIPS21 | find that SAC may fail in explore states with low entropy (arrive states with high entropy and increase their entropies); propose a max-min entropy framework to address this issue | | [Maximum Entropy RL (Provably) Solves Some Robust RL Problems ](https://arxiv.org/pdf/2103.06257.pdf) | ---- | ICLR22 | theoretically prove that standard maximum entropy RL is robust to some disturbances in the dynamics and the reward function | <a id='data-augmentation'></a> ### Data Augmentation | Title | Method | Conference | Description | | ---- | ---- | ---- | ---- | | [Reinforcement Learning with Augmented Data](https://arxiv.org/pdf/2004.14990.pdf) | RAD | NeurIPS20 | propose first extensive study of general data augmentations for RL on both pixel-based and state-based inputs | | [Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels](https://arxiv.org/pdf/2004.13649.pdf) | DrQ | ICLR21 Spotlight | propose to regularize the value function when applying data augmentation with model-free methods and reach state-of-the-art performance in image-pixels tasks | --> <!-- | [Equivalence notions and model minimization in Markov decision processes](https://www.ics.uci.edu/~dechter/courses/ics-295/winter-2018/papers/givan-dean-greig.pdf) | | Artificial Intelligence, 2003 | | | [Metrics for Finite Markov Decision Processes](https://arxiv.org/ftp/arxiv/papers/1207/1207.4114.pdf) || UAI04 || | [Bisimulation metrics for continuous Markov decision processes](https://www.normferns.com/assets/documents/sicomp2011.pdf) || SIAM Journal on Computing, 2011 || | [Scalable methods for computing state similarity in deterministic Markov Decision Processes](https://arxiv.org/pdf/1911.09291.pdf) || AAAI20 || | [Learning Invariant Representations for Reinforcement Learning without Reconstruction](https://arxiv.org/pdf/2006.10742.pdf) | DBC | ICLR21 || -->

<a id='Representation-RL'></a>

## Representation Learning

Note: representation learning with MBRL is covered in the World Models part.

| Title | Method | Conference | Description |
| ---- | ---- | ---- | ---- |
| CURL: Contrastive Unsupervised Representations for Reinforcement Learning | CURL | ICML20 | extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features (a contrastive loss is sketched at the end of this section) |
| Learning Invariant Representations for Reinforcement Learning without Reconstruction | DBC | ICLR21 | propose using bisimulation to learn robust latent representations which encode only the task-relevant information from observations |
| Reinforcement Learning with Prototypical Representations | Proto-RL | ICML21 | pre-train task-agnostic representations and prototypes on environments without downstream task information |
| Understanding the World Through Action | ---- | CoRL21 | discuss how self-supervised reinforcement learning combined with offline RL can enable scalable representation learning |
| Flow-based Recurrent Belief State Learning for POMDPs | FORBES | ICML22 | incorporate normalizing flows into variational inference to learn general continuous belief states for POMDPs |
| Contrastive Learning as Goal-Conditioned Reinforcement Learning | Contrastive RL | NeurIPS22 | show that (contrastive) representation learning methods can be cast as RL algorithms in their own right |
| Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels? | ---- | NeurIPS22 | conduct an extensive comparison of various self-supervised losses under the existing joint learning framework for pixel-based reinforcement learning in many environments from different benchmarks, including one real-world environment |
| Reinforcement Learning with Automated Auxiliary Loss Search | A2LS | NeurIPS22 | propose to automatically search top-performing auxiliary loss functions for learning better representations in RL; define a general auxiliary loss space of size 7.5 × 10^20 based on the collected trajectory data and explore the space with an efficient evolutionary search strategy |
| Mask-based Latent Reconstruction for Reinforcement Learning | MLR | NeurIPS22 | propose an effective self-supervised method to predict complete state representations in the latent space from observations with spatially and temporally masked pixels |
| Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training | VIP | ICLR23 Spotlight | cast representation learning from human videos as an offline goal-conditioned reinforcement learning problem; derive a self-supervised dual goal-conditioned value-function objective that does not depend on actions, enabling pre-training on unlabeled human videos |
| Latent Variable Representation for Reinforcement Learning | ---- | ICLR23 | provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle in the face of uncertainty for exploration |
| Spectral Decomposition Representation for Reinforcement Learning | | ICLR23 | |
| Become a Proficient Player with Limited Data through Watching Pure Videos | FICC | ICLR23 | consider the setting where the pre-training data are action-free videos; introduce a two-phase training pipeline; pre-training phase: implicitly extract the hidden action embedding from videos and pre-train the visual representation and the environment dynamics network based on vector quantization; downstream tasks: fine-tune with a small amount of task data based on the learned models |
| Bootstrapped Representations in Reinforcement Learning | ---- | ICML23 | provide a theoretical characterization of the state representation learnt by temporal difference learning; find that this representation differs from the features learned by Monte Carlo and residual gradient algorithms for most transition structures of the environment in the policy evaluation setting |
[Representation-Driven Reinforcement

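For the contrastive entries above (e.g. CURL, Contrastive RL), the representation objective is typically an InfoNCE-style classification problem in which each anchor must pick out its own augmented view among the other samples in the batch. Below is a rough PyTorch sketch under that assumption; the names (`encoder`, bilinear matrix `W`) are illustrative and not taken from the papers' code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(encoder, W, obs_anchor, obs_positive):
    """InfoNCE-style loss: positives are the matching augmented views on the diagonal."""
    z_a = encoder(obs_anchor)               # (B, D) query features
    z_pos = encoder(obs_positive).detach()  # (B, D) key features (gradients stopped on the key branch)
    logits = z_a @ W @ z_pos.t()            # (B, B) bilinear similarity scores
    logits = logits - logits.max(dim=1, keepdim=True).values  # for numerical stability
    labels = torch.arange(z_a.shape[0], device=z_a.device)    # i-th anchor matches i-th key
    return F.cross_entropy(logits, labels)
```

CURL additionally maintains a momentum-updated key encoder; the simple `detach` on the key branch here is a simplification of that design.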