In short
- model.eval() notifies all your layers that you are in eval mode; that way, batchnorm and dropout layers will work in eval mode instead of training mode (dropout is disabled and batchnorm uses its running statistics).
- torch.no_grad() impacts the autograd engine and deactivates it. It will reduce memory usage and speed up computations, but you won't be able to backprop (which you don't want in an eval script).
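
To see both effects, here is a minimal sketch (the toy model and tensor sizes are made up for illustration):

import torch
import torch.nn as nn

# Hypothetical toy model: a linear layer followed by dropout.
model = nn.Sequential(nn.Linear(16, 16), nn.Dropout(p=0.5))
x = torch.randn(1, 16)

model.train()
print(model(x).requires_grad)            # True: autograd tracks the forward pass
print(torch.equal(model(x), model(x)))   # almost certainly False: dropout is active

model.eval()                             # dropout becomes a no-op; batchnorm would use running stats
with torch.no_grad():                    # autograd is off: no graph is built
    print(model(x).requires_grad)        # False: nothing to backprop through
    print(torch.equal(model(x), model(x)))  # True: the forward pass is now deterministic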
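For example, combining the two in a validation method: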
def val(self, data_loader):
    self.model.eval()  # switch dropout/batchnorm layers to eval behavior
    self.data_loader = data_loader
    for i, data_batch in enumerate(self.data_loader):
        with torch.no_grad():  # disable autograd: less memory, faster forward pass
            outputs = self.model(data_batch)
            ...
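
Note that you could just as well wrap the whole for loop in a single with torch.no_grad(): block instead of re-entering it on every iteration; the effect is the same.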