tetris/qlearning-results/a0.1-g0.9-e0.9-approximateq...

2020-04-20 19:50:17,433 INFO [tetris::actors::qlearning] Training an actor with learning_rate = 0.1, discount_rate = 0.9, exploration_rate = 0.9
208.73600000000016
209.4225000000004
209.70750000000032
207.65349999999984
207.76400000000055
208.6115000000001
207.88149999999996
207.8700000000006
209.03950000000006
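
Note: for reference, the hyperparameters logged above (learning_rate = 0.1, discount_rate = 0.9, exploration_rate = 0.9) enter the approximate Q-learning update roughly as in the minimal Rust sketch below. This is an illustrative sketch only, not the project's actual code; the linear feature representation, the function names, and the toy values in main() are assumptions.

// Minimal sketch of approximate (linear) Q-learning with the logged
// hyperparameters. Not the project's implementation; feature extraction
// and action selection are deliberately stubbed out.
const LEARNING_RATE: f64 = 0.1;    // alpha in the log line above
const DISCOUNT_RATE: f64 = 0.9;    // gamma
const EXPLORATION_RATE: f64 = 0.9; // epsilon: probability of exploring

/// Q(s, a) under linear function approximation: dot product of
/// (state, action) features with the learned weights.
fn q_value(weights: &[f64], features: &[f64]) -> f64 {
    weights.iter().zip(features).map(|(w, f)| w * f).sum()
}

/// One temporal-difference update:
/// w_i <- w_i + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)) * f_i(s, a)
fn update_weights(
    weights: &mut [f64],
    features: &[f64], // f(s, a) for the action just taken (assumed representation)
    reward: f64,      // reward observed after taking the action
    best_next_q: f64, // max over a' of Q(s', a'); use 0.0 for a terminal state
) {
    let td_error = reward + DISCOUNT_RATE * best_next_q - q_value(weights, features);
    for (w, f) in weights.iter_mut().zip(features) {
        *w += LEARNING_RATE * td_error * f;
    }
}

fn main() {
    // Toy numbers purely to show the update moving the weights.
    let mut weights = vec![0.0_f64; 3];
    let features = vec![1.0, 0.5, -0.25];
    update_weights(&mut weights, &features, 1.0, 2.0);
    println!("weights after one update: {:?}", weights);
    // EXPLORATION_RATE would govern epsilon-greedy action choice during training.
    println!("exploration probability: {}", EXPLORATION_RATE);
}
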
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:50:54,363 INFO [tetris] Final score: 171
Lost due to: BlockOut(Position { x: 5, y: 19 })
2020-04-20 19:50:55,243 INFO [tetris] Final score: 167
Lost due to: BlockOut(Position { x: 3, y: 20 })
2020-04-20 19:50:56,059 INFO [tetris] Final score: 236
Lost due to: LockOut
2020-04-20 19:50:57,324 INFO [tetris] Final score: 206
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 19:50:58,987 INFO [tetris] Final score: 251
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:50:59,980 INFO [tetris] Final score: 215
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:00,763 INFO [tetris] Final score: 168
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:01,403 INFO [tetris] Final score: 169
Lost due to: BlockOut(Position { x: 4, y: 20 })
2020-04-20 19:51:02,603 INFO [tetris] Final score: 236
Lost due to: BlockOut(Position { x: 5, y: 20 })
2020-04-20 19:51:03,259 INFO [tetris] Final score: 236
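
Note: the "Lost due to:" lines are Debug-printed game-over reasons (Block Out: a new piece spawns on an occupied cell; Lock Out: a piece locks entirely above the visible playfield). The sketch below reconstructs plausible type definitions from that output alone; the enum name LossReason, the field types, and everything else not shown in the log are assumptions, not the project's actual code.

// Reconstructed from the Debug output in this log, for illustration only.
#[derive(Debug)]
struct Position {
    x: i32, // column; i32 is an assumption, the log only shows small integers
    y: i32, // row
}

#[derive(Debug)]
enum LossReason {
    BlockOut(Position), // piece spawned overlapping an occupied cell
    LockOut,            // piece locked entirely above the visible playfield
}

fn main() {
    // Reproduces the log's formatting, e.g.
    // "Lost due to: BlockOut(Position { x: 4, y: 20 })"
    let reason = LossReason::BlockOut(Position { x: 4, y: 20 });
    println!("Lost due to: {:?}", reason);
    let reason = LossReason::LockOut;
    println!("Lost due to: {:?}", reason);
}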