From 732196ed0395f815d95f0edacd86ca00435522e3 Mon Sep 17 00:00:00 2001
From: KAIXIANG LIN
Date: Tue, 26 Nov 2019 20:17:34 -0500
Subject: [PATCH] Update README.md

---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index dd2a03a..c798717 100644
--- a/README.md
+++ b/README.md
@@ -2,9 +2,8 @@
 Ranking Policy Gradient (RPG) is a sample-efficient off-policy policy gradient method that learns optimal ranking of actions to maximize the return. RPG has the following practical advantages:
-- It is currently the most sample-efficient model-free algorithm for learning deterministic policies.
+- It is a sample-efficient model-free algorithm for learning deterministic policies.
 - It is effortless to incorporate any exploration algorithm to improve the sample-efficiency of RPG further.
-- It is possible to learn a single RPG agent (parameterized by one neural network) that adapts to dynamic action space.
 
 This codebase contains the implementation of RPG using the [dopamine](https://github.com/google/dopamine) framework.