In https://github.com/facebookresearch/fastMRI/blob/master/models/unet/train_unet.py, the resume logic is:
```python
checkpoint, model, optimizer = load_model(args.checkpoint)
args = checkpoint['args']
best_dev_loss = checkpoint['best_dev_loss']
start_epoch = checkpoint['epoch']
```
However, I think `start_epoch = checkpoint['epoch'] + 1` is correct: since `checkpoint['epoch']` is the epoch that was already completed when the checkpoint was saved, the current code repeats that epoch after resuming. In particular, when `checkpoint['epoch'] == args.lr_step_size`, the scheduler decays the learning rate twice for that step.
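A minimal, library-free sketch of the off-by-one (the `train` helper and epoch numbers are hypothetical, just to illustrate which epochs the loop would run):

```python
def train(start_epoch, num_epochs):
    """Return the epoch indices a `for epoch in range(start_epoch, num_epochs)`
    loop would execute."""
    return list(range(start_epoch, num_epochs))

# Suppose training stopped after finishing epoch 4 of 10, so the saved
# checkpoint has checkpoint['epoch'] == 4.
ckpt_epoch = 4

# Current code: start_epoch = checkpoint['epoch'] -> epoch 4 runs again.
print(train(ckpt_epoch, 10)[0])      # 4 (repeated epoch)

# Proposed fix: start_epoch = checkpoint['epoch'] + 1 -> resume at epoch 5.
print(train(ckpt_epoch + 1, 10)[0])  # 5 (correct continuation)
```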
I would appreciate it if you would check this.