Performance of TRPO with reward shaping (research objective) (Using deep reinforcement learning for personalizing review sessions on e-learning platforms with spaced repetition)
To measure the performance of TRPO with reward shaping, the number of episodes per run was set to 40, all other parameters were left unchanged, and the EFC environment was used. The LSTMs were trained in the following ways:
- Using data from a random sample
- Using data from a random-policy tutor
- Using data from a SuperMemo tutor
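The setup above can be sketched as a small experimental grid: one TRPO-with-reward-shaping run on the EFC environment per LSTM data source, with the episode count held fixed. This is a minimal illustrative sketch, not the authors' code; all names (`make_run_config`, the data-source strings, the config keys) are assumptions.

```python
# Hypothetical sketch of the experimental grid: TRPO with reward shaping
# evaluated on the EFC environment, where the student-model LSTM is trained
# on data from three different collection policies.

EPISODES_PER_RUN = 40  # fixed across all runs; other parameters unchanged

LSTM_DATA_SOURCES = [
    "random_sample",        # data drawn from a random sample
    "random_policy_tutor",  # data generated by a random-policy tutor
    "supermemo_tutor",      # data generated by a SuperMemo tutor
]

def make_run_config(data_source: str) -> dict:
    """Build one run configuration for TRPO with reward shaping on EFC."""
    if data_source not in LSTM_DATA_SOURCES:
        raise ValueError(f"unknown data source: {data_source}")
    return {
        "algorithm": "TRPO",
        "reward_shaping": True,
        "environment": "EFC",
        "episodes_per_run": EPISODES_PER_RUN,
        "lstm_data_source": data_source,
    }

# One run per LSTM training-data source, everything else identical.
configs = [make_run_config(s) for s in LSTM_DATA_SOURCES]
```

Holding every parameter except the LSTM's training data fixed isolates the effect of the data-collection policy on the tutor's performance.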