Artificial Intelligence and Machine Learning --- Smile Detection and Mask Detection

2022-08-09

Table of Contents

  • I. Methods for facial image feature extraction
    • 1. HOG
    • 2. Dlib
    • 3. Convolutional neural networks
  • II. Smile detection (genki4k dataset)
    • 1. Preparation
      • (1) Install TensorFlow
      • (2) Install Keras
      • (3) Download the smile dataset (genki4k)
    • 2. Procedure
      • (1) Split into training, validation, and test sets
      • (2) Build the model
      • (3) Normalize the images
      • (4) Train the model
      • (5) Data augmentation
      • (6) Build the network
      • (7) Retrain the model
  • III. Real-time smile classification from a webcam with Python 3 + Dlib + OpenCV
    • 1. Preparation
      • (1) Install the Dlib library
      • (2) Install the OpenCV library
    • 2. Procedure
      • (1) Classifying a photo: smile or non-smile
      • (2) Face capture and classification from the webcam
  • IV. Face mask detection
    • 1. Download the mask dataset
    • 2. Split into training, validation, and test sets
    • 3. Build the model
    • 4. Normalize the images
    • 5. Data augmentation
    • 6. Build the network
    • 7. Train the model
    • 8. Real-time mask/no-mask classification from a webcam
      • (1) Classifying a photo: mask or no mask
      • (2) Face capture and classification from the webcam

I. Methods for facial image feature extraction

1. HOG

The Histogram of Oriented Gradients (HOG) is a feature descriptor used for object detection in computer vision and image processing. It builds its feature vector by computing and accumulating histograms of gradient orientations over local regions of an image.

The HOG feature-extraction pipeline is shown in the figure below:
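The core of the pipeline can be sketched directly in NumPy. This is a simplified illustration only (no block normalization or gradient interpolation, which the full descriptor includes; `skimage.feature.hog` provides a complete implementation):

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Simplified HOG core: image gradients -> per-cell orientation histograms.
    A sketch only; the full descriptor also normalizes over blocks of cells."""
    img = img.astype(float)
    # Centered gradients in x and y (borders left at zero)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            # Vote each pixel's gradient magnitude into its orientation bin
            idx = (a / (180 / bins)).astype(int) % bins
            for b in range(bins):
                hists[i, j, b] = m[idx == b].sum()
    return hists

img = np.tile(np.arange(16, dtype=float), (16, 1))  # horizontal intensity ramp
H = hog_cell_histograms(img, cell=8, bins=9)
print(H.shape)  # (2, 2, 9)
```

For the horizontal ramp, all gradient energy lands in the 0-degree bin, which is exactly the kind of local edge-direction statistic HOG summarizes.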

2. Dlib

Dlib is a C++ toolkit that includes machine learning algorithms, image processing, networking, and various utility libraries. It is widely used in both industry and academia. In this article I will use the dlib library for face detection.

3. Convolutional neural networks

Neural networks are a branch of artificial intelligence research. The most popular variant today is the deep convolutional neural network (CNN); shallow convolutional architectures exist but are rarely used because of their weaker accuracy and expressive power. CNNs have achieved great success in many fields, such as speech recognition, image recognition, image segmentation, and natural language processing.
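The basic building block of a CNN is the 2-D convolution (implemented as cross-correlation in deep learning frameworks, including the Conv2D layers used below). A minimal NumPy sketch of a single-channel "valid" convolution:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D cross-correlation, the operation inside a Conv2D layer (sketch)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with each sliding window
            out[i, j] = (x[i:i+kh, j:j+kw] * k).sum()
    return out

# A vertical-edge kernel responds only where intensity changes left-to-right
x = np.zeros((5, 5))
x[:, 2:] = 1.0
k = np.array([[-1.0, 1.0]])
y = conv2d(x, k)
print(y.shape)  # (5, 4)
```

A trained Conv2D layer learns many such kernels, stacking their responses into feature maps.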

II. Smile detection (genki4k dataset)

1. Preparation

(1) Install TensorFlow

(2) Install Keras

Environment: Windows 10 + Python 3.7.4 + TensorFlow 2.2.0 + Keras 2.4.3

(3) Download the smile dataset (genki4k)

2. Procedure

(1) Split into training, validation, and test sets

(1.1) Import the required libraries

import tensorflow
import keras
keras.__version__
import os,shutil

(1.2) Running the following creates the train, test, and validation folders, each containing a smile and an unsmile subfolder

# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = 'genki4k'

# The directory where we will
# store our smaller dataset
base_dir = '笑脸数据'
os.mkdir(base_dir)

# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

# Directory with our training smile pictures
train_smile_dir = os.path.join(train_dir, 'smile')
os.mkdir(train_smile_dir)

# Directory with our training unsmile pictures
train_unsmile_dir = os.path.join(train_dir, 'unsmile')
os.mkdir(train_unsmile_dir)

# Directory with our validation smile pictures
validation_smile_dir = os.path.join(validation_dir, 'smile')
os.mkdir(validation_smile_dir)

# Directory with our validation unsmile pictures
validation_unsmile_dir = os.path.join(validation_dir, 'unsmile')
os.mkdir(validation_unsmile_dir)

# Directory with our test smile pictures
test_smile_dir = os.path.join(test_dir, 'smile')
os.mkdir(test_smile_dir)

# Directory with our test unsmile pictures
test_unsmile_dir = os.path.join(test_dir, 'unsmile')
os.mkdir(test_unsmile_dir)

Output:

(1.3) Copy the smile and non-smile images into the corresponding folders

To sort the files manually, first bind these folder paths in Jupyter:

import keras
import os, shutil
train_smile_dir="笑脸数据/train/smile/"
train_unsmile_dir="笑脸数据/train/unsmile/"
test_smile_dir="笑脸数据/test/smile/"
test_unsmile_dir="笑脸数据/test/unsmile/"
validation_smile_dir="笑脸数据/validation/smile/"
validation_unsmile_dir="笑脸数据/validation/unsmile/"
train_dir="笑脸数据/train/"
test_dir="笑脸数据/test/"
validation_dir="笑脸数据/validation/"

(1.4) Print the number of images in each folder

print('total training smile images:', len(os.listdir(train_smile_dir)))
print('total training unsmile images:', len(os.listdir(train_unsmile_dir)))
print('total testing smile images:', len(os.listdir(test_smile_dir)))
print('total testing unsmile images:', len(os.listdir(test_unsmile_dir)))
print('total validation smile images:', len(os.listdir(validation_smile_dir)))
print('total validation unsmile images:', len(os.listdir(validation_unsmile_dir)))

Output:

There are 2000 training images, 1000 validation images, and 1000 test images, with the same number of samples in each class. This is a balanced binary classification problem, so classification accuracy is an appropriate metric.
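The copying above is done by hand; a helper along these lines (a hypothetical sketch, not part of the original workflow) could produce the same 2000/1000/1000 split from a shuffled file list:

```python
import random

def split_files(filenames, train=0.5, val=0.25, seed=0):
    """Shuffle a list of filenames and split it into train/validation/test."""
    files = list(filenames)
    random.Random(seed).shuffle(files)  # fixed seed for reproducibility
    n = len(files)
    n_train = int(n * train)
    n_val = int(n * val)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

# genki4k contains 4000 images; hypothetical filenames for illustration
names = [f'file{i:04d}.jpg' for i in range(4000)]
tr, va, te = split_files(names)
print(len(tr), len(va), len(te))  # 2000 1000 1000
```

The same helper would be run once per class (smile and unsmile) so each split stays balanced.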

(2) Build the model

(2.1) Define the model

from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

(2.2) Inspect the model

model.summary()

Output:

(3) Normalize the images

from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
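The binary_crossentropy loss compiled here is simple enough to verify by hand; a quick NumPy check of the formula it minimizes:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)] over the batch.
    Predictions are clipped away from 0 and 1 to keep log() finite."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([0., 1., 1., 0.])
y_pred = np.array([0.1, 0.9, 0.8, 0.3])
print(round(binary_crossentropy(y_true, y_pred), 4))  # 0.1976
```

Confident predictions on the correct side drive the loss toward zero; confident mistakes are penalized heavily, which is why the sigmoid output pairs naturally with this loss.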

from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen=ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
test_generator = test_datagen.flow_from_directory(test_dir,
                                                   target_size=(150, 150),
                                                   batch_size=20,
                                                   class_mode='binary')

Output:

for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break

Output:

train_generator.class_indices

Output:
0 stands for smile, 1 for unsmile.
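Since the sigmoid output is the predicted probability of class 1, the mapping from class_indices to a label is a simple threshold; a small sketch (the 0.5 threshold matches the classification code later in this article):

```python
def to_label(p, class_indices={'smile': 0, 'unsmile': 1}, threshold=0.5):
    """Map a sigmoid probability to a class name via Keras-style class_indices."""
    inv = {v: k for k, v in class_indices.items()}  # index -> name
    return inv[1] if p >= threshold else inv[0]

print(to_label(0.12))  # smile
print(to_label(0.87))  # unsmile
```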

(4) Train the model

(4.1) Training
You can adjust the value of epochs yourself: a larger value takes longer to train but usually gives higher training accuracy (with a growing risk of overfitting).

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=10,
      validation_data=validation_generator,
      validation_steps=50)
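The steps_per_epoch and validation_steps values are chosen so each epoch sees the whole split exactly once: 100 steps of batch_size=20 covers the 2000 training images, and 50 steps covers the 1000 validation images. The arithmetic:

```python
import math

def steps_for(n_samples, batch_size):
    """Number of batches needed to see every sample once per epoch."""
    return math.ceil(n_samples / batch_size)

print(steps_for(2000, 20))  # 100
print(steps_for(1000, 20))  # 50
```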

Output:

(4.2) Save the trained model

model.save('smileAndUnsmile_1.h5')

Output:

(4.3) Plot the training and validation accuracy and loss

import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

Output:
Without data augmentation, the trained model overfits quite severely.

(5) Data augmentation

(5.1) Configure the augmentation

datagen = ImageDataGenerator(
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')
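Each of these parameters is a random geometric transform applied on the fly. To show the idea, here is a NumPy sketch of just two of them, horizontal_flip and width_shift_range with 'nearest' edge fill (ImageDataGenerator does this with proper interpolation; this is only an illustration):

```python
import numpy as np

def random_flip_shift(img, width_shift_range=0.2, rng=None):
    """Flip horizontally with p=0.5, then shift horizontally by up to
    width_shift_range * width, repeating edge pixels ('nearest' fill)."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    w = img.shape[1]
    shift = int(rng.uniform(-width_shift_range, width_shift_range) * w)
    out = np.roll(out, shift, axis=1)
    if shift > 0:
        out[:, :shift] = out[:, shift:shift+1]    # fill exposed left edge
    elif shift < 0:
        out[:, shift:] = out[:, shift-1:shift]    # fill exposed right edge
    return out

img = np.arange(150 * 150 * 3, dtype=float).reshape(150, 150, 3)
aug = random_flip_shift(img)
print(aug.shape)  # (150, 150, 3)
```

Because the transforms are sampled fresh every epoch, the network effectively never sees the exact same image twice, which is what curbs the overfitting observed above.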

(5.2) Preview how augmentation changes an image

import matplotlib.pyplot as plt
# Module with image-preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_smile_dir, fname) for fname in os.listdir(train_smile_dir)]
img_path = fnames[8]
img = image.load_img(img_path, target_size=(150, 150))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()

Output:

(6) Build the network

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
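The only change from the first network is the layers.Dropout(0.5) before the dense layers: during training it randomly zeroes half the activations and rescales the survivors so the expected sum is unchanged, and at inference it does nothing. An inverted-dropout sketch in NumPy:

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Inverted dropout: zero each activation with probability `rate`
    during training, scale survivors by 1/(1-rate); identity at inference."""
    if not training:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 512))   # a batch of activations, same width as the Dense layer
y = dropout(x, rate=0.5)
print(y.shape)  # (4, 512)
```

Zeroing random units prevents co-adaptation between neurons, which is the second lever (alongside augmentation) used here against overfitting.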

(7) Retrain the model

(7.1) Training

# Rescaling plus augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)

# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,  
      validation_data=validation_generator,
      validation_steps=50)

Output:

It took a full five hours to finish training. Painfully slow; I'm tearing my hair out...

(7.2) Save the trained model

model.save('smileAndUnsmile_2.h5')

Output:

(7.3) Plot the training and validation accuracy and loss after data augmentation

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

Output:

Compared with the run without data augmentation, there is now essentially no overfitting, and the accuracy is higher.

III. Real-time smile classification from a webcam with Python 3 + Dlib + OpenCV

1. Preparation

(1) Install the Dlib library

(2) Install the OpenCV library

Enter the following command in the Anaconda Prompt:

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-python

Check that the installation succeeded:

Success!

2. Procedure

(1) Classifying a photo: smile or non-smile

Classify a photo:

import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('smileAndUnsmile_1.h5')

img_path='笑脸数据/test/smile/file1533.jpg'

# Load the image at the model's input size and scale to [0, 1]
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)/255.0
img_tensor = np.expand_dims(img_tensor, axis=0)

# Sigmoid output: < 0.5 means class 0 (smile), otherwise class 1 (unsmile)
prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] < 0.5:
    result = 'smile'
else:
    result = 'unsmile'
print(result)

Output:

Repeated runs classify correctly!

(2) Face capture and classification from the webcam

import cv2
from keras.models import load_model
import numpy as np
import dlib

model = load_model('smileAndUnsmile_1.h5')
detector = dlib.get_frontal_face_detector()
video = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX

def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # The detector returns a (possibly empty) list of face rectangles
    dets = detector(gray, 1)
    for face in dets:
        left, top = face.left(), face.top()
        right, bottom = face.right(), face.bottom()
        cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
        # Crop the face, resize to the model input size, scale to [0, 1]
        img1 = cv2.resize(img[top:bottom, left:right], dsize=(150, 150))
        img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
        img1 = np.array(img1) / 255.
        img_tensor = img1.reshape(-1, 150, 150, 3)
        prediction = model.predict(img_tensor)
        print(prediction)
        if prediction[0][0] > 0.5:
            result = 'unsmile'
        else:
            result = 'smile'
        cv2.putText(img, result, (left, top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)
while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
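The per-face preprocessing inside rec() (crop, resize to 150x150, scale to [0, 1], add a batch axis) can be isolated and checked on its own; a NumPy sketch that substitutes nearest-neighbour index resizing for cv2.resize (an illustrative simplification, not the original code):

```python
import numpy as np

def preprocess_face(frame, top, bottom, left, right, size=150):
    """Crop a face box, nearest-neighbour resize, scale, add a batch dim."""
    face = frame[top:bottom, left:right].astype(float) / 255.0
    h, w = face.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = face[rows][:, cols]
    return resized.reshape(1, size, size, 3)

# A synthetic BGR frame in place of a webcam capture
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess_face(frame, 100, 300, 200, 400)
print(batch.shape)  # (1, 150, 150, 3)
```

The (1, 150, 150, 3) shape is exactly what model.predict expects, matching the input_shape=(150, 150, 3) the network was built with.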

Output:

IV. Face mask detection

1. Download the mask dataset

2. Split into training, validation, and test sets

(1) Import the required libraries

import tensorflow
import keras
keras.__version__
import os,shutil

(2) Running the following creates the train, test, and validation folders, each containing a have_mask and a no_mask subfolder

# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '人脸口罩数据集,正样本加负样本'

# The directory where we will
# store our smaller dataset
base_dir = '口罩数据'
os.mkdir(base_dir)

# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

# Directory with our training have_mask pictures
train_havemask_dir = os.path.join(train_dir, 'have_mask')
os.mkdir(train_havemask_dir)

# Directory with our training no_mask pictures
train_nomask_dir = os.path.join(train_dir, 'no_mask')
os.mkdir(train_nomask_dir)

# Directory with our validation have_mask pictures
validation_havemask_dir = os.path.join(validation_dir, 'have_mask')
os.mkdir(validation_havemask_dir)

# Directory with our validation no_mask pictures
validation_nomask_dir = os.path.join(validation_dir, 'no_mask')
os.mkdir(validation_nomask_dir)

# Directory with our test have_mask pictures
test_havemask_dir = os.path.join(test_dir, 'have_mask')
os.mkdir(test_havemask_dir)

# Directory with our test no_mask pictures
test_nomask_dir = os.path.join(test_dir, 'no_mask')
os.mkdir(test_nomask_dir)

Output:

(3) Copy the masked and unmasked images into the corresponding folders

To sort the files manually, first bind these folder paths in Jupyter:

import keras
import os, shutil
train_havemask_dir="口罩数据/train/have_mask/"
train_nomask_dir="口罩数据/train/no_mask/"
test_havemask_dir="口罩数据/test/have_mask/"
test_nomask_dir="口罩数据/test/no_mask/"
validation_havemask_dir="口罩数据/validation/have_mask/"
validation_nomask_dir="口罩数据/validation/no_mask/"
train_dir="口罩数据/train/"
test_dir="口罩数据/test/"
validation_dir="口罩数据/validation/"

(4) Print the number of images in each folder

print('total training havemask images:', len(os.listdir(train_havemask_dir)))
print('total training nomask images:', len(os.listdir(train_nomask_dir)))
print('total testing havemask images:', len(os.listdir(test_havemask_dir)))
print('total testing nomask images:', len(os.listdir(test_nomask_dir)))
print('total validation havemask images:', len(os.listdir(validation_havemask_dir)))
print('total validation nomask images:', len(os.listdir(validation_nomask_dir)))

Output:

There are 600 training images, 300 validation images, and 300 test images, with the same number of samples in each class. This is a balanced binary classification problem, so classification accuracy is an appropriate metric.

3. Build the model

from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

Inspect the model:

model.summary()

Output:

4. Normalize the images

from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen=ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
test_generator = test_datagen.flow_from_directory(test_dir,
                                                   target_size=(150, 150),
                                                   batch_size=20,
                                                   class_mode='binary')

Output:

for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break

Output:

train_generator.class_indices

Output:

"0" means wearing a mask, "1" means not wearing a mask.

5. Data augmentation

datagen = ImageDataGenerator(
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

import matplotlib.pyplot as plt
from keras.preprocessing import image
fnames = [os.path.join(train_havemask_dir, fname) for fname in os.listdir(train_havemask_dir)]
img_path = fnames[5]
img = image.load_img(img_path, target_size=(150, 150))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()

Output:

6. Build the network

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

7. Train the model

(1) Training

# Rescaling plus augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)

# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=600,  
      validation_data=validation_generator,
      validation_steps=50)

Output:

Fairly time-consuming.

(2) Save the model

model.save('maskAndUnmask_1.h5')

Output:

(3) Plot the training and validation accuracy and loss after data augmentation

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

Output:

8. Real-time mask/no-mask classification from a webcam

(1) Classifying a photo: mask or no mask

0 means wearing a mask and 1 means not wearing one. With 0.5 as the decision boundary, a prediction above 0.5 is classified as no mask, and below 0.5 as wearing a mask.

import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

model = load_model('maskAndUnmask_1.h5')

img_path='口罩数据/test/no_mask/334.jpg'

# Load the image at the model's input size and scale to [0, 1]
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)/255.0
img_tensor = np.expand_dims(img_tensor, axis=0)
prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'no mask'
else:
    result = 'mask'
print(result)

Photo to classify:

Output:

Photo to classify:

Output:

Repeated classifications are still correct!

(2) Face capture and classification from the webcam

import cv2
from keras.models import load_model
import numpy as np
import dlib

model = load_model('maskAndUnmask_1.h5')
detector = dlib.get_frontal_face_detector()
# video=cv2.VideoCapture('media/video.mp4')
# video=cv2.VideoCapture('data/face_recognition.mp4')
video = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX

def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # The detector returns a (possibly empty) list of face rectangles
    dets = detector(gray, 1)
    for face in dets:
        left, top = face.left(), face.top()
        right, bottom = face.right(), face.bottom()
        cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)

def mask(img):
    # Classify the whole frame: resize, convert to RGB, scale to [0, 1]
    img1 = cv2.resize(img, dsize=(150, 150))
    img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
    img1 = np.array(img1) / 255.
    img_tensor = img1.reshape(-1, 150, 150, 3)
    prediction = model.predict(img_tensor)
    if prediction[0][0] > 0.5:
        result = 'no-mask'
    else:
        result = 'have-mask'
    cv2.putText(img, result, (100, 200), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)
while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    # Pass each frame to both functions: one draws the face box,
    # the other decides whether a mask is worn
    rec(img_rd)
    mask(img_rd)
    # Press q to close the window
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()

Output:

And that wraps up this experiment!

Original post: https://blog.csdn.net/weixin_46492125/article/details/107100732
