• An OpenGL application: implementing face detection with sticker overlays


    Face detection with stickers

    The whole process breaks down into three steps:
    1. Use AVFoundation to drive the camera and capture the video stream as image data
    2. Use Core Image to check whether each captured frame contains a face
    3. Render the result to the screen with OpenGL

    I. Capturing video with the camera

    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];

    // Pick the back camera
    AVCaptureDevice *captureDevice = nil;
    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in captureDevices) {
        if (device.position == AVCaptureDevicePositionBack) {
            captureDevice = device;
            break;
        }
    }
    self.captureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:nil];

    if ([self.captureSession canAddInput:self.captureDeviceInput]) {
        [self.captureSession addInput:self.captureDeviceInput];
    }

    self.captureDeviceOutput = [[AVCaptureVideoDataOutput alloc] init];
    [self.captureDeviceOutput setAlwaysDiscardsLateVideoFrames:YES];

    // AVCaptureVideoDataOutput requires a *serial* queue for its delegate so frames
    // are delivered in order; a concurrent global queue does not guarantee this
    processQueue = dispatch_queue_create("videoProcessQueue", DISPATCH_QUEUE_SERIAL);
    // Request bi-planar YUV (NV12) frames: a full-resolution luma (Y) plane plus a half-resolution interleaved chroma (CbCr) plane
    [self.captureDeviceOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    [self.captureDeviceOutput setSampleBufferDelegate:delegate queue:processQueue];
    if ([self.captureSession canAddOutput:self.captureDeviceOutput]) {
        [self.captureSession addOutput:self.captureDeviceOutput];
    }

    AVCaptureConnection *captureConnection = [self.captureDeviceOutput connectionWithMediaType:AVMediaTypeVideo];
    [captureConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    [self.captureSession startRunning];
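
    Note that the post does not show camera authorization: access must be granted by the user, and since iOS 10 the app's Info.plist must contain an NSCameraUsageDescription entry. A minimal sketch:

    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        if (granted) {
            // Only configure and start the capture session after access is granted
        }
    }];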

    Receiving video frames:

    // AVCaptureVideoDataOutputSampleBufferDelegate callback, invoked on processQueue for every captured frame
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        [self.faceDetectionView displayPixelBuffer:pixelBuffer];
    }
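
    Note that the CVPixelBufferRef returned by CMSampleBufferGetImageBuffer is only valid while the sample buffer is alive, which is why displayPixelBuffer: is called synchronously here; handing the buffer to another queue would require retaining it with CVPixelBufferRetain and releasing it later.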

    II. Detecting faces in the image

    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
    // CIDetectorAccuracyLow trades accuracy for speed, which suits real-time video
    NSString *accuracy = CIDetectorAccuracyLow;
    NSDictionary *options = [NSDictionary dictionaryWithObject:accuracy forKey:CIDetectorAccuracy];
    // Note: CIDetector is expensive to create; in practice build it once and reuse it across frames
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
    NSArray *featuresArray = [detector featuresInImage:ciImage options:nil];

    The featuresArray we get back is the detection result: an array of CIFaceFeature objects, which we can use to decide whether the image contains a face.
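
    A minimal sketch of inspecting the result (the property names come from the CIFaceFeature API; the logging is only illustrative):

    for (CIFaceFeature *face in featuresArray) {
        // bounds is in image coordinates, with the origin at the bottom-left
        NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
        if (face.hasLeftEyePosition) {
            NSLog(@"left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
        }
        if (face.hasMouthPosition) {
            NSLog(@"mouth: %@", NSStringFromCGPoint(face.mouthPosition));
        }
    }

    Passing CIDetectorImageOrientation in the options of featuresInImage:options: also helps when frames arrive rotated relative to the orientation the detector expects.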

    III. Rendering the raw video frame and the face sticker with OpenGL

    1. Convert the sticker image we want to use into texture data, for blending once a face is detected

    CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image %@", fileName);
        exit(1);
    }

    size_t width = CGImageGetWidth(spriteImage);
    size_t height = CGImageGetHeight(spriteImage);

    // RGBA buffer, one byte per channel
    GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));

    // Draw the image into the buffer, flipped vertically (Core Graphics and OpenGL use different vertical origins)
    CGContextRef context = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), spriteImage);

    CGContextRelease(context);

    // Upload to texture unit 2; units 0 and 1 are reserved for the video's Y and UV planes
    GLuint texture;
    glActiveTexture(GL_TEXTURE2);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int32_t)width, (int32_t)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
    free(spriteData);
    return texture;
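
    The snippet above reads like the body of a texture-loading helper. Assuming it is wrapped in a method (the name below is hypothetical, not taken from the original project), it would be called once during setup:

    // Hypothetical wrapper; the original post does not show the method signature or the asset name
    _myTexture = [self setupTextureWithFileName:@"sticker.png"];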

    2. Render the video frame while checking for a face; if one is found, compute the face position, convert the coordinates, and render the sticker on top.

    - (void)displayPixelBuffer:(CVPixelBufferRef)pixelBuffer {
        if (pixelBuffer != NULL) {
            
            int width = (int)CVPixelBufferGetWidth(pixelBuffer);
            int height = (int)CVPixelBufferGetHeight(pixelBuffer);
            
            if (!_videoTextureCache) {
                NSLog(@"NO Video Texture Cache");
                return;
            }
            if ([EAGLContext currentContext] != _context) {
                [EAGLContext setCurrentContext:_context];
            }
            
            [self cleanUpTextures];
            
            // Y-plane: wrap the pixel buffer's luma plane as a single-channel texture on unit 0
            glActiveTexture(GL_TEXTURE0);
            
            CVReturn err;
            err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               _videoTextureCache,
                                                               pixelBuffer,
                                                               NULL,
                                                               GL_TEXTURE_2D,
                                                               GL_RED_EXT,
                                                               width,
                                                               height,
                                                               GL_RED_EXT,
                                                               GL_UNSIGNED_BYTE,
                                                               0,
                                                               &_lumaTexture);
            
            if (err) {
                NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }
            
            glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            
            // UV-plane.
            glActiveTexture(GL_TEXTURE1);
            err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               _videoTextureCache,
                                                               pixelBuffer,
                                                               NULL,
                                                               GL_TEXTURE_2D,
                                                               GL_RG_EXT,
                                                               width / 2,
                                                               height / 2,
                                                               GL_RG_EXT,
                                                               GL_UNSIGNED_BYTE,
                                                               1,
                                                               &_chromaTexture);
            if (err) {
                NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }
            
            glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            
            glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
            
            glViewport(0, 0, _backingWidth, _backingHeight);
            
        }
        
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);
        glClearColor(0, 0, 0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        
        glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
        [self.shaderManager useProgram];
        glUniform1i(glViewUniforms[UNIFORM_Y], 0);
        glUniform1i(glViewUniforms[UNIFORM_UV], 1);
        
        // Rotate 180° about the X axis so the frame is flipped vertically into GL's coordinate system
        glUniformMatrix4fv(glViewUniforms[UNIFORM_ROTATE_MATRIX], 1, GL_FALSE, GLKMatrix4MakeXRotation(M_PI).m);
        
        GLfloat quadVertexData[] = {
            -1, -1,
            1, -1 ,
            -1, 1,
            1, 1,
        };
        
        // Update vertex data
        glVertexAttribPointer(glViewAttributes[ATTRIB_VERTEX], 2, GL_FLOAT, GL_FALSE, 0, quadVertexData);
        glEnableVertexAttribArray(glViewAttributes[ATTRIB_VERTEX]);
        
        GLfloat quadTextureData[] = { // default texture coordinates
            0, 0,
            1, 0,
            0, 1,
            1, 1
        };
        
        glVertexAttribPointer(glViewAttributes[ATTRIB_TEXCOORD], 2, GL_FLOAT, GL_FALSE, 0, quadTextureData);
        glEnableVertexAttribArray(glViewAttributes[ATTRIB_TEXCOORD]);
        
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        
        // LYFaceDetector is presumably the author's wrapper around the Core Image face detection shown in section II
        [LYFaceDetector detectCVPixelBuffer:pixelBuffer completionHandler:^(CIFaceFeature *result, CIImage *ciImage) {
            if (result) {
                [self renderTempTexture:result ciImage:ciImage];
            }
        }];
        
        glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);
        
        if ([EAGLContext currentContext] == _context) {
            [_context presentRenderbuffer:GL_RENDERBUFFER];
        }
    }
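
    The shader pair wired to UNIFORM_Y and UNIFORM_UV is not shown in the post. Below is a minimal sketch of the kind of fragment shader this setup implies, assuming a full-range BT.601 YUV-to-RGB conversion to match kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (the varying and sampler names are illustrative, not taken from the original project):

    varying highp vec2 texCoordVarying;
    uniform sampler2D SamplerY;   // texture unit 0: luma plane
    uniform sampler2D SamplerUV;  // texture unit 1: interleaved CbCr plane

    void main()
    {
        mediump vec3 yuv;
        lowp vec3 rgb;

        // Sample Y at full resolution and CbCr at half resolution; center chroma around zero
        yuv.x  = texture2D(SamplerY, texCoordVarying).r;
        yuv.yz = texture2D(SamplerUV, texCoordVarying).rg - vec2(0.5, 0.5);

        // Full-range BT.601 conversion (mat3 columns: Y, Cb, Cr contributions)
        rgb = mat3(1.0,  1.0,    1.0,
                   0.0, -0.343,  1.765,
                   1.4, -0.711,  0.0) * yuv;

        gl_FragColor = vec4(rgb, 1.0);
    }

    (cleanUpTextures presumably releases _lumaTexture and _chromaTexture and flushes the CVOpenGLESTextureCache before the next frame is wrapped.)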

    3. Convert the face coordinates and compute the coordinates where the sticker should be rendered

    - (void)renderTempTexture:(CIFaceFeature *)faceFeature ciImage:(CIImage *)ciImage {
        dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
        // Get the image size
        CGSize ciImageSize = [ciImage extent].size;
        // Core Image uses a bottom-left origin; flip vertically to match UIKit's top-left origin
        CGAffineTransform transform = CGAffineTransformScale(CGAffineTransformIdentity, 1, -1);
        transform = CGAffineTransformTranslate(transform, 0, -ciImageSize.height);
        // Convert from image coordinates to view coordinates (aspect-fit scale plus centering offsets)
        CGSize viewSize = self.layer.bounds.size;
        CGFloat scale = MIN(viewSize.width / ciImageSize.width, viewSize.height / ciImageSize.height);

        CGFloat offsetX = (viewSize.width - ciImageSize.width * scale) / 2;
        CGFloat offsetY = (viewSize.height - ciImageSize.height * scale) / 2;
        // Scale
        CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
        // Get the face's frame by flipping its bounds into top-left-origin coordinates
        CGRect faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        // Apply the aspect-fit scale and the centering offsets
        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, scaleTransform);
        faceViewBounds.origin.x += offsetX;
        faceViewBounds.origin.y += offsetY;

        NSLog(@"face frame after: %@", NSStringFromCGRect(faceViewBounds));
        [self.textureManager useProgram];
        
        // Activate unit 2 before binding so the sticker texture ends up on the unit the sampler expects
        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, _myTexture);
        glUniform1i(glViewUniforms[UNIFORM_TEMP_INPUT_IMG_TEXTURE], 2);
        
        CGFloat midX = CGRectGetMidX(self.layer.bounds);
        CGFloat midY = CGRectGetMidY(self.layer.bounds);
        
        CGFloat originX = CGRectGetMinX(faceViewBounds);
        CGFloat originY = CGRectGetMinY(faceViewBounds);
        CGFloat maxX = CGRectGetMaxX(faceViewBounds);
        CGFloat maxY = CGRectGetMaxY(faceViewBounds);
        
        // Sticker quad vertices: convert the face rect from view coordinates to normalized device coordinates
        GLfloat minVertexX = (originX - midX) / midX;
        GLfloat minVertexY = (midY - maxY) / midY;
        GLfloat maxVertexX = (maxX - midX) / midX;
        GLfloat maxVertexY = (midY - originY) / midY;
        GLfloat quadData[] = {
            minVertexX, minVertexY,
            maxVertexX, minVertexY,
            minVertexX, maxVertexY,
            maxVertexX, maxVertexY,
        };
        
        glVertexAttribPointer(glViewAttributes[ATTRIB_TEMP_VERTEX], 2, GL_FLOAT, GL_FALSE, 0, quadData);
        glEnableVertexAttribArray(glViewAttributes[ATTRIB_TEMP_VERTEX]);
        
        GLfloat quadTextureData[] = { // default texture coordinates
            0, 0,
            1, 0,
            0, 1,
            1, 1
        };
        glVertexAttribPointer(glViewAttributes[ATTRIB_TEMP_TEXCOORD], 2, GL_FLOAT, GL_FALSE, 0, quadTextureData);
        glEnableVertexAttribArray(glViewAttributes[ATTRIB_TEMP_TEXCOORD]);
        
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        dispatch_semaphore_signal(_lock);
    }
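
    The vertex math above maps the face rect from UIKit view coordinates (origin top-left, Y down) into OpenGL's normalized device coordinates (origin at the center, Y up). For example, on a 375×667 pt view (midX = 187.5, midY = 333.5), a face rect of {{100, 200}, {150, 150}} gives minVertexX = (100 − 187.5) / 187.5 ≈ −0.467 and maxVertexY = (333.5 − 200) / 333.5 ≈ 0.4, placing the sticker quad slightly left of center and above the vertical midpoint.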

    Here we only make simple use of the face bounds provided by the CIFaceFeature object that Core Image returns. In fact, CIFaceFeature carries much more information: the face bounds, whether each eye is open and where it sits, the mouth position, and so on. So for more elaborate stickers we can convert the eye and mouth positions the same way and render whatever sticker we want into the corresponding texture coordinates.
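
    For instance, a sketch of running the left-eye position through the same conversion as the face bounds inside renderTempTexture: (the 40 pt rect size is an arbitrary illustrative choice):

    if (faceFeature.hasLeftEyePosition) {
        // leftEyePosition uses the same bottom-left-origin image coordinates as bounds,
        // so the same flip + scale + offset pipeline applies
        CGPoint eye = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
        eye = CGPointApplyAffineTransform(eye, scaleTransform);
        eye.x += offsetX;
        eye.y += offsetY;
        // Arbitrary 40x40 pt rect centered on the eye; feed it to the same NDC conversion as faceViewBounds
        CGRect eyeViewBounds = CGRectMake(eye.x - 20, eye.y - 20, 40, 40);
        NSLog(@"eye rect: %@", NSStringFromCGRect(eyeViewBounds));
    }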

    This demo covers the techniques in more detail: iOS CoreImage -- face detection / background replacement / image matting / stickers / real-time video filters.

    The final effect is shown below (for simplicity, only one face-containing CIFaceFeature object per video frame is handled here).

  • Original article: https://www.cnblogs.com/neverMore-face/p/10185867.html