Deep Q Networks (DQN)
Loss function of DQN
Loss = targetQ − currentQ = R + γ Q'(s', a') − Q(s, a)
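The loss above can be sketched in code. This is a minimal NumPy sketch, not the paper's implementation: `q_net` and `q_target` stand for the online and target networks (any callables mapping a batch of states to per-action Q-values), and the target uses the standard max over next actions; in practice the TD difference is squared to give a mean-squared-error loss.

```python
import numpy as np

def dqn_loss(q_net, q_target, states, actions, rewards,
             next_states, dones, gamma=0.99):
    """TD loss for DQN on a batch of transitions (illustrative sketch)."""
    # currentQ: Q(s, a) from the online network, for the actions taken
    current_q = q_net(states)[np.arange(len(actions)), actions]
    # targetQ: R + gamma * max_a' Q'(s', a'), with bootstrapping
    # cut off at terminal states via (1 - done)
    next_q = q_target(next_states).max(axis=1)
    target_q = rewards + gamma * next_q * (1.0 - dones)
    # Mean squared TD error (the equation above writes the raw difference;
    # training minimizes its square)
    return np.mean((target_q - current_q) ** 2)

# Toy usage: a "network" that always outputs Q-values [1.0, 2.0]
toy_net = lambda s: np.array([[1.0, 2.0]])
loss = dqn_loss(toy_net, toy_net,
                states=None, actions=np.array([1]),
                rewards=np.array([1.0]), next_states=None,
                dones=np.array([0.0]), gamma=0.9)
# target = 1.0 + 0.9 * 2.0 = 2.8, current = 2.0, loss = 0.8**2
```

Only `target_q` is treated as a fixed label here; with a real network you would stop gradients through the target (the target network in DQN exists for exactly this reason).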
References
DQN paper