Computing accuracy and loss in PyTorch

While studying other people's code recently, I found that when I tried to implement things myself I didn't really understand how the loss and accuracy of the samples are computed, and reading others' code involved a lot of guessing. So I went through the official documentation, ran a small experiment of my own, and searched other blogs, and finally worked out what is going on. I'm writing it down here as a record (purely a personal note, not meant as a reference).

Computing accuracy

How accuracy is computed: acc = number correct / total number of samples. We know that the model's final output comes out of a softmax, which means the output gives the model's predicted probability for each class (and all the probabilities sum to 1); the class with the largest probability is the class the model predicts. (An aside: we may run into the situation where several classes have predicted probabilities that are quite close, and the final answer is just the one that is slightly larger than the rest. That actually means the model generalizes poorly, which is why loss is the better quantity for evaluating a network.) So the first thing we need to do is extract the label (class) with the highest predicted probability. Here is an example:

a = torch.tensor([[0.03,0.12,0.85], [0.01,0.9,0.09], [0.95,0.01,0.04], [0.09, 0.9, 0.01]])
print(a)
print(a.dtype)

Suppose a is the output produced by a model; now we need to extract the most probable class.

If we print the result of torch.max(), we can see that it returns both the maximum probability per sample and the corresponding index (i.e. the class). But we only want the index, so we take the second element: predicted = torch.max(output.data, 1)[1]. To explain this code: the first 1 means we take the maximum along dim=1, i.e. across each row, because each row holds one sample's output; the trailing [1] keeps only the indices. There is another way: predicted = torch.argmax(output, 1), where torch.argmax() returns the index of the maximum value directly.
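To make this concrete, here is a small runnable sketch using the example tensor a from above; the expected values follow directly from the tensor itself:

```python
import torch

# the example output batch from above: 4 samples, 3 classes
a = torch.tensor([[0.03, 0.12, 0.85],
                  [0.01, 0.90, 0.09],
                  [0.95, 0.01, 0.04],
                  [0.09, 0.90, 0.01]])

# torch.max along dim=1 returns a (values, indices) pair, one entry per row
values, indices = torch.max(a, 1)
print(values)   # tensor([0.8500, 0.9000, 0.9500, 0.9000])
print(indices)  # tensor([2, 1, 0, 1])

# torch.argmax returns only the indices, skipping the values
predicted = torch.argmax(a, 1)
print(predicted)  # tensor([2, 1, 0, 1])
```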
Next we need to count the correct predictions: correct += (predicted == labels).sum().item(). (Note that we compare predicted with labels, not the raw output.) The expression predicted == labels marks the correctly classified samples, producing something like [1, 0, 1, 0], where 1 means the prediction was right and 0 means it was wrong. Then .sum() adds up all the correct predictions; at this point the result is still a tensor, e.g. tensor(2) when two predictions are right. Finally, .item() converts that tensor into an ordinary Python int (or float).

Finally, total += labels.size(0) accumulates the total number of samples, so acc = correct / total gives the accuracy.
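Putting the pieces together, here is a minimal sketch of the accuracy bookkeeping on the example tensor; the labels tensor is hypothetical, made up purely for illustration:

```python
import torch

# example model outputs: 4 samples, 3 classes
outputs = torch.tensor([[0.03, 0.12, 0.85],
                        [0.01, 0.90, 0.09],
                        [0.95, 0.01, 0.04],
                        [0.09, 0.90, 0.01]])
labels = torch.tensor([2, 1, 0, 0])  # hypothetical ground-truth labels

correct = 0
total = 0
predicted = torch.argmax(outputs, 1)           # tensor([2, 1, 0, 1])
correct += (predicted == labels).sum().item()  # 3 of the 4 predictions match
total += labels.size(0)                        # 4 samples in this batch
acc = correct / total
print(acc)  # 0.75
```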

Computing loss

Most frameworks nowadays use minibatch gradient descent, and the loss is usually computed with a cross-entropy criterion such as CrossEntropyLoss(). In fact, with its default reduction this function returns the average loss over each minibatch, so after adding up the loss across all minibatches we need to divide by the number of steps, i.e. the number of batches. Since the data is split into batches by torch.utils.data.DataLoader(), the number of steps is simply len(loader): dividing the accumulated loss by len(loader) gives the loss for the epoch. That is the whole loss computation.
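Here is a small sketch of that averaging behavior, assuming CrossEntropyLoss's default reduction='mean' (each call returns the per-sample mean over the minibatch) and a toy dataset whose batches are all the same size:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

loss_function = nn.CrossEntropyLoss()  # default reduction='mean'

# toy data standing in for model outputs: 8 samples, 3 classes,
# split into 4 equally sized minibatches of 2
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loader = DataLoader(TensorDataset(logits, labels), batch_size=2)

total_loss = 0.0
for batch_logits, batch_labels in loader:
    # each call already returns the *mean* loss over this minibatch
    total_loss += loss_function(batch_logits, batch_labels).item()

# divide the accumulated loss by the number of batches
epoch_loss = total_loss / len(loader)
print(epoch_loss)
```

Because every batch here has the same size, this mean of minibatch means equals the mean loss over all 8 samples computed in one go.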
My full code is given below (talk is cheap, and the prose alone is not that easy to follow):

from torchvision import transforms
from torchvision.datasets import ImageFolder
import torchvision
import torch.nn as nn
import torch
from tqdm import tqdm
import sys
def train(lr, weight_decay, num_epochs):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("using {} device.\n".format(device))
    train_transform = transforms.Compose([
        transforms.Resize(224),
        transforms.RandomResizedCrop(224, scale=(0.64, 1.0), ratio=(0.75, 1.33)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=(0.7, 1.3), contrast=(0.8, 1.2), saturation=(0.9, 1.1), hue=0),
        transforms.RandomRotation(degrees=(-20, 20)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

    valid_transform = transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

    train_dataset = ImageFolder('../input/fruits/fruits-360_dataset/fruits-360/Training', transform=train_transform)
    valid_dataset = ImageFolder('../input/fruits/fruits-360_dataset/fruits-360/Test', transform=valid_transform)
    n_classes = len(train_dataset.classes)
    train_loader = torch.utils.data.DataLoader(
                dataset=train_dataset,
                batch_size=32,
                shuffle=True,
                num_workers=2
            )
    valid_loader = torch.utils.data.DataLoader(
        dataset=valid_dataset,
        batch_size=16,
        shuffle=False,
        num_workers=2
    )
    loss_function = nn.CrossEntropyLoss()
    net = torchvision.models.resnet18(pretrained=True)
    in_channel = net.fc.in_features
    net.fc = nn.Sequential(nn.Linear(in_channel, n_classes))
    net.to(device)
    optimizer = torch.optim.AdamW(net.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.6, verbose=True)
    best_acc = 0
    for epoch in range(num_epochs):
        net.train()
        train_total = 0
        train_correct = 0
        train_loss = 0
        for batch in tqdm(train_loader):
            imgs, labels = batch
            imgs, labels = imgs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = net(imgs)
            loss = loss_function(outputs, labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
            predicted = torch.argmax(outputs, 1)
            train_correct += (predicted == labels).sum().item()
            train_total += labels.size(0)
            del imgs, labels
            torch.cuda.empty_cache()
        scheduler.step()
        train_loss = train_loss / len(train_loader)
        train_accuracy = train_correct / train_total
        print(f"[ Train | {epoch + 1:03d}/{num_epochs:03d} ] loss = {train_loss:.5f}, acc = {train_accuracy:.5f}")

        net.eval()
        valid_correct, valid_total, valid_loss = 0, 0, 0
        for batch in tqdm(valid_loader):
            imgs, labels = batch
            imgs, labels = imgs.to(device), labels.to(device)
            with torch.no_grad():
                outputs = net(imgs)
            loss = loss_function(outputs, labels)
            # predicted = torch.argmax(outputs, 1)  # equivalent alternative
            predicted = torch.max(outputs.data, 1)[1]
            valid_correct += (predicted == labels).sum().item()
            valid_loss += loss.item()
            valid_total += labels.size(0)
            del imgs, labels
            torch.cuda.empty_cache()
        valid_accuracy = valid_correct / valid_total
        valid_loss = valid_loss / len(valid_loader)
        print(f"[ Valid | {epoch + 1:03d}/{num_epochs:03d} ] loss = {valid_loss:.5f}, acc = {valid_accuracy:.5f}\n")
        if valid_accuracy > best_acc:
            best_acc = valid_accuracy
            print(f"best acc [{valid_accuracy:.5f}] in epoch {epoch + 1}\n")
    print(f"last, best acc [{best_acc:.5f}]")


