When switching the code from GPU to CPU, I ran into the following error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
Changing the load call to:
torch.load("0.9472_0048.weights", map_location='cpu')
solves the problem.
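For reference, a minimal sketch of the CPU-only loading path, assuming 0.9472_0048.weights was produced with torch.save(model.state_dict(), ...) on a GPU machine:

import torch

# map_location='cpu' remaps every CUDA storage in the checkpoint to the CPU,
# so torch.load succeeds even when torch.cuda.is_available() is False.
checkpoint = torch.load("0.9472_0048.weights", map_location='cpu')

# If the file stores a state_dict, the result is an ordered dict of tensors,
# all of which now live on the CPU.
for name, tensor in checkpoint.items():
    print(name, tuple(tensor.shape), tensor.device)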
For quick reference, here is a summary of the map_location options.
Suppose we saved only the model's parameters (model.state_dict()) to a file named modelparameters.pth, and the model is constructed with model = Net(); a runnable sketch tying these cases together follows the list.
1. cpu -> cpu or gpu -> gpu:
checkpoint = torch.load('modelparameters.pth')
model.load_state_dict(checkpoint)
2. cpu -> gpu 1
torch.load('modelparameters.pth', map_location=lambda storage, loc: storage.cuda(1))
3. gpu 1 -> gpu 0
torch.load('modelparameters.pth', map_location={'cuda:1':'cuda:0'})
4. gpu -> cpu
torch.load('modelparameters.pth', map_location=lambda storage, loc: storage)
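Putting the cases together, here is a minimal sketch of saving parameters and restoring them on whatever device is available. The Net class is a hypothetical placeholder for the real network, and passing a torch.device as map_location is an alternative to the lambda/dict forms shown above:

import torch
import torch.nn as nn

# Hypothetical stand-in for the real network.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

# Save only the parameters, as assumed in the summary above.
model = Net()
torch.save(model.state_dict(), 'modelparameters.pth')

# Restore on GPU 0 if available, otherwise on the CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
checkpoint = torch.load('modelparameters.pth', map_location=device)
model = Net()
model.load_state_dict(checkpoint)
model.to(device)  # move the parameters onto the target device before inference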
Original post (in Chinese): https://blog.csdn.net/bc521bc/article/details/85623515