Reinforcement Learning Reading Group – Non-delusional Q-learning and value-iteration
April 1 @ 6:00 pm - 8:00 pm
This paper analyses the interaction between function approximation and the Bellman backup. It reveals a fundamental problem in the algorithmic machinery that causes instability and divergence when training agents with this backup, then provides a new value-backup mechanism and resulting algorithms that guarantee convergence. The session will be led by Arjun.
Notes/doubts should be jotted down beforehand by everyone attending, over here:
Non-delusional Q-learning and value-iteration, Tyler Lu, Dale Schuurmans, Craig Boutilier, NeurIPS 2018
“We identify a fundamental source of error in Q-learning and other forms of dynamic programming with function approximation. Delusional bias arises when the approximation architecture limits the class of expressible greedy policies. Since standard Q-updates make globally uncoordinated action choices with respect to the expressible policy class, inconsistent or even conflicting Q-value estimates can result, leading to pathological behaviour such as over/under-estimation, instability and even divergence. To solve this problem, we introduce a new notion of policy consistency and define a local backup process that ensures global consistency through the use of information sets—sets that record constraints on policies consistent with backed-up Q-values. We prove that both the model-based and model-free algorithms using this backup remove delusional bias, yielding the first known algorithms that guarantee optimal results under general conditions. These algorithms furthermore only require polynomially many information sets (from a potentially exponential support). Finally, we suggest other practical heuristics for value-iteration and Q-learning that attempt to reduce delusional bias.”
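To make the source of delusional bias concrete, here is a minimal sketch of a standard Q-learning update with linear function approximation. Everything in it (the toy feature table, state/action counts, step sizes) is illustrative and not from the paper; the point is the commented line where `max` picks a maximizing action independently at each successor state. The paper's observation is that these per-state choices may not be jointly realizable by any greedy policy expressible under the approximator, which is what the information-set backup is designed to prevent.

```python
import numpy as np

# Toy setup (assumptions): 3 states, 2 actions, fixed random features,
# linear Q(s, a) = w . phi(s, a).
rng = np.random.default_rng(0)
n_states, n_actions, n_features = 3, 2, 4
features = {(s, a): rng.standard_normal(n_features)
            for s in range(n_states) for a in range(n_actions)}
w = np.zeros(n_features)

def q(s, a):
    """Linear Q-value under the current weights."""
    return float(w @ features[(s, a)])

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One standard (delusion-prone) Q-learning step.

    The max over a' below is taken independently at s_next; across many
    backups, these uncoordinated action choices can be inconsistent with
    every greedy policy the linear architecture can actually express.
    """
    global w
    target = r + gamma * max(q(s_next, ap) for ap in range(n_actions))
    w = w + alpha * (target - q(s, a)) * features[(s, a)]

# Example: a single transition (s=0, a=0, r=1, s'=1) updates the weights.
q_update(0, 0, 1.0, 1)
```

The non-delusional algorithms in the paper replace this pointwise max with backups over information sets that track which joint action choices remain consistent with some expressible policy.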