tetris/qlearning-results/a0.7-g0.1-e0.9-approximateq...


2020-04-20 21:34:11,321 INFO [tetris::actors::qlearning] Training an actor with learning_rate = 0.7, discount_rate = 0.1, exploration_rate = 0.9
208.7134999999998
209.66350000000003
208.55499999999992
208.98049999999984
209.2109999999997
210.35750000000024
208.9709999999997
210.14850000000013
209.65449999999998
Lost due to: LockOut
2020-04-20 21:34:53,086 INFO [tetris] Final score: 211
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 21:34:54,175 INFO [tetris] Final score: 200
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 21:34:55,295 INFO [tetris] Final score: 224
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 21:34:56,110 INFO [tetris] Final score: 173
Lost due to: BlockOut(Position { x: 4, y: 19 })
2020-04-20 21:34:57,598 INFO [tetris] Final score: 289
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 21:34:58,879 INFO [tetris] Final score: 253
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 21:34:59,310 INFO [tetris] Final score: 182
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 21:35:00,270 INFO [tetris] Final score: 202
Lost due to: BlockOut(Position { x: 3, y: 20 })
2020-04-20 21:35:01,215 INFO [tetris] Final score: 247
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 21:35:02,959 INFO [tetris] Final score: 280
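
For reference, a minimal sketch (in Rust) of the approximate Q-learning update that the logged hyperparameters plug into, assuming a linear feature representation of a board/placement pair. None of the names below (QWeights, FEATURES, phi, etc.) come from the tetris crate; they are illustrative only.

// Sketch: approximate (linear) Q-learning update.
// The constants mirror the logged run; the struct and field names are made up
// for illustration and do not come from tetris::actors::qlearning.

const LEARNING_RATE: f64 = 0.7;    // alpha: step size of each weight update
const DISCOUNT_RATE: f64 = 0.1;    // gamma: weight given to the next state's value
const EXPLORATION_RATE: f64 = 0.9; // epsilon: probability of a random placement

const FEATURES: usize = 4; // e.g. holes, aggregate height, bumpiness, lines cleared

/// Linear value function: Q(s, a) = w . phi(s, a).
struct QWeights([f64; FEATURES]);

impl QWeights {
    fn q_value(&self, phi: &[f64; FEATURES]) -> f64 {
        self.0.iter().zip(phi).map(|(w, f)| w * f).sum()
    }

    /// One temporal-difference update after observing (s, a, r, s').
    /// `max_next_q` is the maximum of Q(s', a') over the legal placements in s'.
    fn update(&mut self, phi: &[f64; FEATURES], reward: f64, max_next_q: f64) {
        let td_error = reward + DISCOUNT_RATE * max_next_q - self.q_value(phi);
        for (w, f) in self.0.iter_mut().zip(phi) {
            *w += LEARNING_RATE * td_error * f;
        }
    }
}

fn main() {
    println!(
        "learning_rate = {}, discount_rate = {}, exploration_rate = {}",
        LEARNING_RATE, DISCOUNT_RATE, EXPLORATION_RATE
    );

    // Toy example: a single update after a placement that cleared a line.
    let mut weights = QWeights([0.0; FEATURES]);
    let phi = [2.0, 5.0, 3.0, 1.0]; // hypothetical feature values for (s, a)
    weights.update(&phi, 100.0, 0.0);
    println!("weights after one update: {:?}", weights.0);
}

Reading the logged parameters through this sketch: a discount rate of 0.1 makes the actor weight almost entirely the immediate reward of a placement, and an exploration rate of 0.9 means the large majority of training moves are random, unless the rate is decayed elsewhere in the training loop.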