tetris/qlearning-results/a0.1-g0.5-e0.1-approximateqlearning

2020-04-20 19:51:03,272 INFO [tetris::actors::qlearning] Training an actor with learning_rate = 0.1, discount_rate = 0.5, exploration_rate = 0.1
202.91550000000058
201.90000000000043
204.79149999999984
203.40600000000023
204.6635
202.65850000000034
204.23150000000106
207.42250000000072
203.14750000000012
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:24,270 INFO [tetris] Final score: 192
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 19:51:24,494 INFO [tetris] Final score: 232
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:24,686 INFO [tetris] Final score: 220
Lost due to: BlockOut(Position { x: 3, y: 20 })
2020-04-20 19:51:24,862 INFO [tetris] Final score: 210
Lost due to: BlockOut(Position { x: 3, y: 20 })
2020-04-20 19:51:25,039 INFO [tetris] Final score: 210
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:25,231 INFO [tetris] Final score: 218
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 19:51:25,423 INFO [tetris] Final score: 197
Lost due to: LockOut
2020-04-20 19:51:25,615 INFO [tetris] Final score: 210
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:25,823 INFO [tetris] Final score: 220
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:26,015 INFO [tetris] Final score: 210
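
For context on what the logged hyperparameters control, below is a minimal sketch (in Rust, but not the tetris crate's actual implementation) of the standard approximate Q-learning weight update that a learning_rate of 0.1 and discount_rate of 0.5 would drive; the feature vector and function names are illustrative assumptions. The exploration_rate of 0.1 would correspond to an epsilon-greedy policy, i.e. roughly one move in ten chosen at random during training.

// Minimal sketch of a linear approximate Q-learning update, assuming the
// hyperparameters from the log above. Not the tetris crate's actual code.

const LEARNING_RATE: f64 = 0.1; // alpha = 0.1 in the log
const DISCOUNT_RATE: f64 = 0.5; // gamma = 0.5 in the log

/// Q(s, a) approximated as a linear combination of state-action features.
fn q_value(weights: &[f64], features: &[f64]) -> f64 {
    weights.iter().zip(features).map(|(w, f)| w * f).sum()
}

/// One update after observing `reward` and the best Q-value reachable from
/// the successor state, `max_next_q`.
fn update_weights(weights: &mut [f64], features: &[f64], reward: f64, max_next_q: f64) {
    // Temporal-difference error: r + gamma * max_a' Q(s', a') - Q(s, a)
    let td_error = reward + DISCOUNT_RATE * max_next_q - q_value(weights, features);
    for (w, f) in weights.iter_mut().zip(features) {
        *w += LEARNING_RATE * td_error * f;
    }
}

fn main() {
    // Hypothetical board features (e.g. aggregate height, holes, bumpiness, lines cleared).
    let features = [0.5, 1.0, 0.25, 0.0];
    let mut weights = vec![0.0; features.len()];
    update_weights(&mut weights, &features, 1.0, 0.0);
    println!("weights after one update: {:?}", weights);
}

Note that with discount_rate = 0.5 the contribution of future rewards is halved at every step, so the agent is effectively optimizing over only the next few piece placements.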