
In the maze example presented in the previous section, the changes in the model were relatively modest. The model started out empty and was then filled only with exactly correct information. In general, we cannot expect to be so fortunate. Models may be incorrect because the environment is stochastic and only a limited number of samples have been observed, because the model was learned using function approximation that has generalized imperfectly, or simply because the environment has changed and its new behavior has not yet been observed. When the model is incorrect, the planning process will compute a suboptimal policy.

In some cases, the suboptimal policy computed by planning quickly leads to the discovery and correction of the modeling error. This tends to happen when the model is optimistic in the sense of predicting greater reward or better state transitions than are actually possible. The planned policy attempts to exploit these opportunities and in so doing discovers that they do not exist.

Figure 9.7: Average performance of Dyna agents on a blocking task. The left environment was used for the first 1000 steps, the right environment for the rest. Dyna-Q+ is Dyna-Q with an exploration bonus that encourages exploration. Dyna-AC is a Dyna agent that uses an actor-critic learning method instead of Q-learning.
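The exploration bonus in Dyna-Q+ adds kappa * sqrt(tau) to the modeled reward during planning, where tau is the number of time steps since the state-action pair was last tried in the real environment, so long-untried pairs look increasingly attractive and get re-tested. The sketch below extends the Dyna-Q sketch above (reusing its Q, model, alpha, gamma, and ACTIONS); kappa's value and the bookkeeping names are illustrative choices, not prescribed by the text.

```python
import math
from collections import defaultdict

kappa = 1e-3                   # small weight on the exploration bonus
last_tried = defaultdict(int)  # (state, action) -> time of last real visit
t = 0                          # real time-step counter, advanced once per real step

def plus_planning_update(ps, pa):
    # One Dyna-Q+ planning backup: the modeled reward is augmented
    # with kappa * sqrt(tau), rewarding pairs not tried for a long time.
    pr, ps_next = model[(ps, pa)]
    bonus = kappa * math.sqrt(t - last_tried[(ps, pa)])
    best = max(Q[(ps_next, b)] for b in ACTIONS)
    Q[(ps, pa)] += alpha * (pr + bonus + gamma * best - Q[(ps, pa)])
```

On the blocking task in Figure 9.7, this is the mechanism that lets Dyna-Q+ recover after the environment changes at step 1000: the bonus eventually drives the agent back to transitions its model has not verified recently.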
