Remove optimizer_step() override
Fixes an issue (#44) where training failed due to a breaking change in pytorch-lightning 0.8.4.
The override existed only for testing; removing it is necessary regardless to support the upcoming native AMP in PyTorch 1.6.
After this change, the loss decreases much faster for unknown reasons; the learning-rate scheduler may no longer be taking effect and should be investigated.