1. Loss Functions
What a loss function does:
1. A loss function measures the difference between the network's actual output and the target value.
2. That difference is then backpropagated to compute gradients, which are used to update the network's parameters.
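The role described above can be sketched as a minimal training step. The one-layer nn.Linear model, the learning rate, and the input/target values below are illustrative assumptions, not from this post:

```python
import torch
from torch import nn

# Hypothetical one-layer model, just to show where the loss fits in
model = nn.Linear(3, 1, bias=False)
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

input = torch.tensor([[1.0, 2.0, 3.0]])
target = torch.tensor([[10.0]])

output = model(input)           # forward pass: the actual output
loss = loss_fn(output, target)  # difference between output and target
loss.backward()                 # backpropagate: gradients of the loss w.r.t. the weights
optimizer.step()                # update the parameters using those gradients
```

After `loss.backward()`, `model.weight.grad` holds the gradient that `optimizer.step()` uses for the update.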
The official Loss Functions API
The docs list many loss functions; each has its own use cases and formula.
2. L1Loss
torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')
As the parameter descriptions in the docs show, size_average and reduce are both deprecated,
while reduction has three modes: none, mean, and sum.
import torch
from torch.nn import L1Loss

input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([4, 5, 6], dtype=torch.float32)
input = torch.reshape(input, (1, 1, 1, 3))
target = torch.reshape(target, (1, 1, 1, 3))

loss_1 = L1Loss(reduction='sum')
result_1 = loss_1(input, target)
print(result_1)  # tensor(9.)  (4-1) + (5-2) + (6-3) = 9

loss_2 = L1Loss(reduction='mean')
result_2 = loss_2(input, target)
print(result_2)  # tensor(3.)  ((4-1) + (5-2) + (6-3)) / 3 = 3

loss_3 = L1Loss(reduction='none')
result_3 = loss_3(input, target)
print(result_3)  # tensor([[[[3., 3., 3.]]]])
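As a sanity check on the arithmetic above, L1Loss with reduction='mean' can be reproduced by hand with torch.abs. This comparison is an added sketch, not part of the original example:

```python
import torch
from torch.nn import L1Loss

input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([4, 5, 6], dtype=torch.float32)

# Mean of the elementwise absolute differences: (|4-1| + |5-2| + |6-3|) / 3
manual_mean = torch.abs(input - target).mean()
module_mean = L1Loss(reduction='mean')(input, target)
print(manual_mean, module_mean)  # tensor(3.) tensor(3.)
```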
3. MSELoss (mean squared error)
torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')
import torch

input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([4, 5, 6], dtype=torch.float32)
input = torch.reshape(input, (1, 1, 1, 3))
target = torch.reshape(target, (1, 1, 1, 3))

loss_mse = torch.nn.MSELoss()
result = loss_mse(input, target)
print(result)  # tensor(9.)  ((4-1)^2 + (5-2)^2 + (6-3)^2) / 3 = 9
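The same hand computation works for MSELoss: square the elementwise differences, then take the mean. The comparison below is an added sketch:

```python
import torch

input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([4, 5, 6], dtype=torch.float32)

# ((4-1)^2 + (5-2)^2 + (6-3)^2) / 3 = 27 / 3 = 9
manual = ((input - target) ** 2).mean()
module = torch.nn.MSELoss()(input, target)
print(manual, module)  # tensor(9.) tensor(9.)
```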
4. CrossEntropyLoss
torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)
Suppose x covers three classes: dog (0), cat (1), pig (2), with corresponding output scores (1.0, 2.0, 3.0).
The true target is pig, which corresponds to class 2.
import torch

x = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([2])
x = torch.reshape(x, (1, 3))  # reshape x to (N, C) form: batch of 1, 3 classes
print(x.shape)  # torch.Size([1, 3])

loss_cross = torch.nn.CrossEntropyLoss()
result = loss_cross(x, target)
print(result)  # tensor(0.4076)
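The value 0.4076 is not arbitrary: CrossEntropyLoss applies log_softmax to the scores and then takes the negative log-probability of the true class. A hand computation (an added sketch, not from the original post) confirms it:

```python
import torch

x = torch.tensor([[1.0, 2.0, 3.0]])
target = torch.tensor([2])

# CrossEntropyLoss = negative log of the softmax probability of the true class
log_probs = torch.log_softmax(x, dim=1)
manual = -log_probs[0, target[0]]
module = torch.nn.CrossEntropyLoss()(x, target)
print(manual, module)  # both about 0.4076
```

Because class 2 already has the highest score, its softmax probability (about 0.665) is large and the loss is correspondingly small.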