14. Deep Learning Exercise: Face Recognition for the Happy House

This article is excerpted from a programming assignment in Andrew Ng's Deep Learning Specialization, with thanks to the original authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace.

Face recognition problems commonly fall into two categories:

  • Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
  • Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.

FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.

In this assignment, you will:

  • Implement the triplet loss function
  • Use a pretrained model to map face images into 128-dimensional encodings
  • Use these encodings to perform face verification and face recognition

In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
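To make the difference concrete, here is a minimal sketch (not part of the original assignment) of converting a batch between the two conventions with NumPy:

import numpy as np

# A batch of 32 RGB images in the "channels last" convention: (m, n_H, n_W, n_C)
batch_last = np.zeros((32, 96, 96, 3))

# Reorder the axes to the "channels first" convention: (m, n_C, n_H, n_W)
batch_first = np.transpose(batch_last, (0, 3, 1, 2))
assert batch_first.shape == (32, 3, 96, 96)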

Let's load the required packages.

Contents

0 - Naive Face Verification

1 - Encoding face images into a 128-dimensional vector

1.1 - Using a ConvNet to compute encodings

1.2 - The Triplet Loss

2 - Loading the trained model

3 - Applying the model

3.1 - Face Verification

3.2 - Face Recognition

References:


from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks import *

%matplotlib inline
%load_ext autoreload
%autoreload 2

np.set_printoptions(threshold=10000000)

0 - Naive Face Verification

In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person!
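As a minimal sketch of this naive approach (the threshold here is an arbitrary illustrative value, not one from the assignment):

import numpy as np

def naive_verify(img1, img2, threshold=100.0):
    """Naive pixel-level verification: compare the raw images directly.

    img1, img2 -- numpy arrays of identical shape (raw pixel values)
    threshold  -- distance below which the images are called a "match"
    """
    dist = np.linalg.norm(img1.astype(np.float64) - img2.astype(np.float64))
    return dist < threshold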

Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.

You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding give more accurate judgements as to whether two pictures are of the same person.


1 - Encoding face images into a 128-dimensional vector

1.1 - Using a ConvNet to compute encodings

The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).

The key things you need to know are:

  • This network uses 96x96 dimensional RGB images as its input. Specifically, it inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
  • It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector

By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
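(The original notebook shows a figure here.) Conceptually, the comparison is just an L2 distance between the two 128-dimensional vectors; a minimal sketch with random stand-in encodings (the 0.7 threshold is the one used later in this assignment):

import numpy as np

# Stand-ins for two 128-dimensional encodings f(x1) and f(x2)
enc_1 = np.random.randn(128)
enc_2 = np.random.randn(128)

# A small distance between encodings suggests the two pictures show the same person
dist = np.linalg.norm(enc_1 - enc_2)
same_person = dist < 0.7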

So, an encoding is a good one if:

  • The encodings of two images of the same person are quite similar to each other
  • The encodings of two images of different persons are very different

The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.


1.2 - The Triplet Loss

For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.

Training will use triplets of images $(A, P, N)$:

  • A is an "Anchor" image--a picture of a person.
  • P is a "Positive" image--a picture of the same person as the Anchor image.
  • N is a "Negative" image--a picture of a different person than the Anchor image.

These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.

You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive image $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:

                                           $$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$

You would thus like to minimize the following "triplet cost":

                                        $$\mathcal{J} = \sum^{N}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$

Here, we are using the notation "$[z]_+$" to denote $\max(z, 0)$.

Notes:

  • The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
  • The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it makes sense to have a minus sign preceding it.
  • $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.

Most implementations also normalize the encoding vectors to have norm equal to one (i.e., $\mid \mid f(img) \mid \mid_2 = 1$); you won't have to worry about that here.

Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:

  • Compute the distance between the encodings of "anchor" and "positive" :  $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
  • Compute the distance between the encodings of "anchor" and "negative":  $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
  • Compute the formula per training example:  $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
  • Compute the full formula by taking the max with zero and summing over the training examples:

                                $$\mathcal{J} = \sum^{N}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$

Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.reduce_mean()`, `tf.maximum()`.

def triplet_loss(y_true, y_pred, alpha = 0.2):
    """
    Implementation of the triplet loss as defined by formula (3)

    Arguments:
    y_true -- true labels, required when you define a loss in Keras; you don't need it in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)

    Returns:
    loss -- real number, value of the loss
    """
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]

    # Step 1: Compute the (encoding) distance between the anchor and the positive
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # Step 3: Subtract the two previous distances and add alpha.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))

    return loss
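You can sanity-check the function on random tensors, along the lines of the original notebook's test cell (TF 1.x session style; the seeds below follow that cell):

with tf.Session() as test:
    tf.set_random_seed(1)
    y_true = (None, None, None)  # not used by triplet_loss
    y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed=1),
              tf.random_normal([3, 128], mean=1, stddev=1, seed=1),
              tf.random_normal([3, 128], mean=3, stddev=4, seed=1))
    loss = triplet_loss(y_true, y_pred)
    print("loss = " + str(loss.eval()))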

2 - Loading the trained model

FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.

FRmodel = faceRecoModel(input_shape=(3, 96, 96))  # created earlier in the notebook; defined in inception_blocks.py
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)

Here are some examples of distances between the encodings of three individuals:


3 - Applying the model

Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.

However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.

So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be.

3.1 - Face Verification

Let's build a database containing one encoding vector for each person allowed to enter the Happy House. To generate the encoding we use img_to_encoding(image_path, model), which basically runs the forward propagation of the model on the specified image.
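In the original notebook, the database is built along these lines (the names and image paths follow its images/ folder; treat the exact paths as assumptions, and the full notebook registers more residents the same way):

database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)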

Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.

Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:

  1. Compute the encoding of the image from image_path
  2. Compute the distance between this encoding and the encoding of the identity image stored in the database
  3. Open the door if the distance is less than 0.7, else do not open.

As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)

def verify(image_path, identity, database, model):
    """
    Function that verifies if the person on the "image_path" image is "identity".

    Arguments:
    image_path -- path to an image
    identity -- string, name of the person whose identity you'd like to verify. Has to be a resident of the Happy House.
    database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors).
    model -- your Inception model instance in Keras

    Returns:
    dist -- distance between the image_path encoding and the encoding of "identity" in the database.
    door_open -- True, if the door should open. False otherwise.
    """
    # Step 1: Compute the encoding for the image. Use img_to_encoding() -- see above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding - database[identity])
    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False

    return dist, door_open
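To try it out, the original notebook runs the function on pictures taken by the front-door camera; the image paths below follow that notebook:

verify("images/camera_0.jpg", "younes", database, FRmodel)  # Younes at the door with his own ID card
verify("images/camera_2.jpg", "kian", database, FRmodel)    # someone else trying to use Kian's ID card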

3.2 - Face Recognition

Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!

To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!

You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.

Exercise: Implement who_is_it(). You will have to go through the following steps:

  1. Compute the target encoding of the image from image_path
  2. Find the encoding from the database that has the smallest distance to the target encoding.
    • Initialize the min_dist variable to a large enough number (100). It will help you keep track of the closest encoding to the input's encoding.
    • Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().
      • Compute L2 distance between the target "encoding" and the current "encoding" from the database.
      • If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
# GRADED FUNCTION: who_is_it

def who_is_it(image_path, database, model):
    """
    Implements face recognition for the Happy House by finding who is the person on the image_path image.

    Arguments:
    image_path -- path to an image
    database -- database containing image encodings along with the name of the person on the image
    model -- your Inception model instance in Keras

    Returns:
    min_dist -- the minimum distance between the image_path encoding and the encodings from the database
    identity -- string, the name prediction for the person on image_path
    """
    ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() -- see above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)

    ## Step 2: Find the closest encoding ##
    # Initialize "min_dist" to a large value, say 100 (≈ 1 line)
    min_dist = 100
    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        # Compute L2 distance between the target "encoding" and the current "db_enc" from the database. (≈ 1 line)
        dist = np.linalg.norm(encoding - db_enc)
        # If this distance is less than min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
        if dist < min_dist:
            min_dist = dist
            identity = name

    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("it's " + str(identity) + ", the distance is " + str(min_dist))

    return min_dist, identity
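A quick usage example, again with a front-door camera image path in the style of the original notebook; no name is supplied this time:

who_is_it("images/camera_0.jpg", database, FRmodel)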

Your Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore!

You've now seen how a state-of-the-art face recognition system works.

Although we won't implement it here, here're some ways to further improve the algorithm:

  • Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy; a sketch of this idea follows this list.
  • Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
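As a hedged sketch of the first improvement, suppose the database stored a list of encodings per person (a hypothetical multi_database, not part of the assignment); verification can then take the best match among the stored photos:

def verify_multi(image_path, identity, multi_database, model, threshold=0.7):
    """Sketch: verify against several stored encodings of the same person.

    multi_database -- hypothetical dict mapping each name to a list of
                      encodings, one per stored photo of that person.
    """
    encoding = img_to_encoding(image_path, model)
    # Use the distance to the closest stored photo, so a single
    # unrepresentative reference photo no longer dominates the decision.
    min_dist = min(np.linalg.norm(encoding - db_enc)
                   for db_enc in multi_database[identity])
    return min_dist, min_dist < threshold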

**What you should remember**:

  • Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
  • The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
  • The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.

Congrats on finishing this assignment!

References:

  • Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
  • Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). DeepFace: Closing the gap to human-level performance in face verification
  • The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
  • Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
