Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series)
- Author: Richard S. Sutton, Andrew G. Barto
- Publisher: A Bradford Book
- Release date: 2018/11/13
- Media: Hardcover
Chapter 7
Exercise 7.2
The "TD error sum" algorithm is worse than TD(n) because it doesn't get the benefit of newer value estimation achieved in the later steps.
Chapter 8
Exercise 8.1
Dyna-Q has an advantage over the n-step methods because the learned model can be reused for planning in later episodes, whereas an n-step method uses each experienced trajectory for only a single set of updates. Dyna-Q can also use what it learns on later paths to update the values of states on previously visited paths through its planning updates.
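A minimal tabular Dyna-Q sketch to make the reuse concrete (my own illustration, not the book's pseudocode verbatim; the env_step interface, the deterministic model, and the hyper-parameters are assumptions):

```python
import random

def dyna_q_episode(env_step, start_state, actions, Q, model,
                   n_planning=10, alpha=0.1, gamma=0.95, epsilon=0.1):
    """One episode of a tabular Dyna-Q-style agent (sketch).

    env_step(s, a) -> (reward, next_state, done) is an assumed interface.
    Q (e.g. collections.defaultdict(float)) and model (a dict) are shared
    across episodes, which is why the model keeps paying off later on.
    """
    s, done = start_state, False
    while not done:
        # epsilon-greedy action selection from the current Q estimates
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])

        r, s_next, done = env_step(s, a)

        # direct RL: one-step Q-learning update from the real transition
        target = r + (0.0 if done else gamma * max(Q[(s_next, x)] for x in actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])

        # model learning: remember the last observed outcome (deterministic model)
        model[(s, a)] = (r, s_next, done)

        # planning: replay simulated transitions drawn from the model, which can
        # also refresh states on paths visited in earlier episodes
        for _ in range(n_planning):
            (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
            ptarget = pr + (0.0 if pdone else gamma * max(Q[(ps2, x)] for x in actions))
            Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])

        s = s_next
    return Q, model
```

The same Q and model dictionaries are passed in on every episode; an n-step method has no analogue of the planning loop, so its experience is consumed once and discarded.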
Exercise 8.2
Because Dyna-Q+ is more exploratory: the bonus added to the modeled reward keeps driving the agent toward long-untried state-action pairs, so it maps out the environment (and any change in it) faster. How large the advantage actually is depends on the hyper-parameters (e.g. κ) and on the environment.
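For reference, Dyna-Q+ only needs to change the planning step of the sketch above: the simulated reward gets a bonus κ√τ, where τ is how long ago the pair was last tried in the real environment (the last_tried bookkeeping and the default κ are assumptions):

```python
import math

def planning_reward(pr, ps, pa, t, last_tried, kappa=1e-3):
    """Dyna-Q+ style bonus on the modeled reward (sketch).

    tau is the number of real time steps since (ps, pa) was last tried;
    last_tried is an assumed dict updated on every real transition.
    """
    tau = t - last_tried.get((ps, pa), 0)
    return pr + kappa * math.sqrt(tau)
```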
Exercise 8.3
Because Dyna-Q+ keeps exploring: once the optimal path has been found, Dyna-Q follows it with little fluctuation, while Dyna-Q+ still occasionally leaves it to re-try long-untried actions, so the gap between the two narrows.
Exercise 8.4
In terms of performance, the alternate approach (using the κ√τ bonus only in action selection) is worse than Dyna-Q+, because the bonus never enters the value estimates, so the "this has not been visited for a while" information is not propagated to other cells through planning.
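A sketch of that alternate rule, under the same assumptions as above: the bonus appears only when choosing the greedy action, and all updates use the plain rewards, so nothing about "staleness" ever reaches Q at other states.

```python
import math

def select_action_with_bonus(Q, s, actions, t, last_tried, kappa=1e-3):
    """Greedy action selection using Q(s, a) + kappa * sqrt(tau) (sketch).

    Q is updated with ordinary rewards only, so the bonus stays local to the
    current state and is never propagated backward by planning.
    """
    def score(a):
        tau = t - last_tried.get((s, a), 0)
        return Q[(s, a)] + kappa * math.sqrt(tau)
    return max(actions, key=score)
```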
Exercise 8.6
It strengthens the case for sample updates. With a highly skewed next-state distribution, most of the probability mass falls on a few successor states, so a small number of samples already covers the likely outcomes and gives a good estimate, while an expected update still has to sum over all b successors, most of which contribute almost nothing.
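A toy sketch one could run to probe this intuition (the branching factor b, the skewed distribution, and the successor values are all made up for illustration):

```python
import random

random.seed(0)
b = 1000
values = [random.gauss(0.0, 1.0) for _ in range(b)]   # value of each successor state

# Highly skewed next-state distribution: a handful of states carry most of the mass.
probs = [0.3, 0.2, 0.15, 0.1, 0.05] + [0.2 / (b - 5)] * (b - 5)

# Expected update: sums over all b successors.
expected = sum(p * v for p, v in zip(probs, values))

# Sample update estimate: far fewer samples than b.
n_samples = 20
samples = random.choices(range(b), weights=probs, k=n_samples)
sample_estimate = sum(values[i] for i in samples) / n_samples

print(f"expected update    : {expected:.3f}")
print(f"{n_samples}-sample estimate : {sample_estimate:.3f}")
```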