def accuracy(y_hat, y):
    return (y_hat.argmax(dim=1) == y).float().mean().item()

def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
        n += y.shape[0]
    return acc_sum / n
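As a quick sanity check, the accuracy computation can be exercised on a hand-built batch; the tensors below are illustrative values, not Fashion-MNIST data:

```python
import torch

def accuracy(y_hat, y):
    # fraction of rows whose highest-scoring class matches the label
    return (y_hat.argmax(dim=1) == y).float().mean().item()

# two samples, two classes: the first prediction is right, the second wrong
y_hat = torch.tensor([[0.1, 0.9], [0.8, 0.2]])
y = torch.tensor([1, 1])
print(accuracy(y_hat, y))  # 0.5
```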
3.6.7. Training the Model
import d2lzh as d2l
num_epochs, lr = 5, 0.1

# This function is saved in the d2lzh package for later use
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
              params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()

            # reset gradients
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()

            l.backward()
            if optimizer is None:
                d2l.sgd(params, lr, batch_size)
            else:
                # used in the "Concise Implementation of Softmax Regression" section
                optimizer.step()

            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size,
          [W, b], lr)
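When no optimizer is passed, `train_ch3` falls back to `d2l.sgd`. A minimal sketch of that helper, consistent with the minibatch SGD introduced earlier in the book (the toy parameter and gradient values below are made up for illustration):

```python
import torch

def sgd(params, lr, batch_size):
    # minibatch SGD: the loss was summed over the batch,
    # so dividing the gradient by batch_size averages it
    for param in params:
        param.data -= lr * param.grad / batch_size

# toy parameter with a manually set gradient
w = torch.tensor([1.0], requires_grad=True)
w.grad = torch.tensor([2.0])
sgd([w], lr=0.5, batch_size=2)
print(w.item())  # 1.0 - 0.5 * 2.0 / 2 = 0.5
```

Updating through `param.data` keeps the step itself out of the autograd graph, which is why no `torch.no_grad()` block is needed here.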
3.6.8. Prediction
X, y = next(iter(test_iter))
true_labels = d2l.get_fashion_mnist_labels(y.numpy())
pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(dim=1).numpy())
titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]
d2l.show_fashion_mnist(X[0:9], titles[0:9])
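`d2l.get_fashion_mnist_labels` simply maps the ten numeric Fashion-MNIST class indices to text names; a minimal version, matching the helper defined earlier in the book, looks like this:

```python
def get_fashion_mnist_labels(labels):
    # the ten Fashion-MNIST categories, indexed 0-9
    text_labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat',
                   'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
    return [text_labels[int(i)] for i in labels]

print(get_fashion_mnist_labels([0, 2, 9]))  # ['t-shirt', 'pullover', 'ankle boot']
```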