tetris/qlearning-results/a0.1-g0.9-e0.1-approximateqlearning

2020-04-20 19:48:51,558 INFO [tetris::actors::qlearning] Training an actor with learning_rate = 0.1, discount_rate = 0.9, exploration_rate = 0.1
202.6465000000004
202.39400000000015
202.71350000000058
203.02800000000005
203.03899999999987
203.0555000000006
202.93950000000052
202.5405000000001
202.92950000000036
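
For reference, the learning_rate, discount_rate, and exploration_rate reported in the log line above are the usual alpha, gamma, and epsilon of epsilon-greedy approximate Q-learning. The sketch below shows the linear-weight update those parameters feed into; it is a minimal illustration assuming a linear, feature-based value function, not the crate's actual tetris::actors::qlearning implementation, and every name in it (q_value, update_weights, and so on) is an assumption.

```rust
// Illustrative linear approximate Q-learning update:
//   Q(s, a) = w . f(s, a)
//   w_i <- w_i + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)) * f_i(s, a)
// exploration_rate (epsilon) would govern how often a random action is
// taken instead of argmax_a Q(s, a); it does not appear in this update.

/// Q(s, a) as a dot product of weights and state-action features.
fn q_value(weights: &[f64], features: &[f64]) -> f64 {
    weights.iter().zip(features).map(|(w, f)| w * f).sum()
}

/// One weight update after observing (s, a, r, s').
/// `features` is f(s, a); `next_best_q` is max over a' of Q(s', a').
fn update_weights(
    weights: &mut [f64],
    features: &[f64],
    reward: f64,
    next_best_q: f64,
    learning_rate: f64, // alpha = 0.1 in this run
    discount_rate: f64, // gamma = 0.9 in this run
) {
    let td_error = reward + discount_rate * next_best_q - q_value(weights, features);
    for (w, f) in weights.iter_mut().zip(features) {
        *w += learning_rate * td_error * f;
    }
}
```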
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:04,774 INFO [tetris] Final score: 214
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:04,950 INFO [tetris] Final score: 206
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:05,191 INFO [tetris] Final score: 214
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:05,398 INFO [tetris] Final score: 208
Lost due to: LockOut
2020-04-20 19:49:05,590 INFO [tetris] Final score: 188
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:05,767 INFO [tetris] Final score: 200
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:05,942 INFO [tetris] Final score: 202
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:06,118 INFO [tetris] Final score: 200
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:06,310 INFO [tetris] Final score: 208
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:49:06,503 INFO [tetris] Final score: 200
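
The "Lost due to:" lines are Debug output of the game-over condition: BlockOut is the standard Tetris term for a new piece spawning onto already-occupied cells (here at the spawn position x = 4, y = 20), while LockOut means a piece locked entirely above the visible playfield. The sketch below shows a type shape that would print this way; it is a guess reconstructed from the log output, and the tetris crate's real definitions may differ.

```rust
// Illustrative only: a shape matching the log's Debug formatting,
// not necessarily the tetris crate's actual types.
#[derive(Debug)]
struct Position {
    x: i32,
    y: i32,
}

#[derive(Debug)]
enum LossReason {
    /// A new piece spawned onto cells that were already filled.
    BlockOut(Position),
    /// A piece locked entirely above the visible playfield.
    LockOut,
}

fn main() {
    // Prints: Lost due to: BlockOut(Position { x: 4, y: 20 })
    println!(
        "Lost due to: {:?}",
        LossReason::BlockOut(Position { x: 4, y: 20 })
    );
}
```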