Disclaimer: this page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me): Stack Overflow.
Original URL: http://stackoverflow.com/questions/36727201/
Face filter implementation like MSQRD/SnapChat
Asked by Manish Agrawal
I want to develop live face filters like the MSQRD/Snapchat live filters, but I can't figure out how to proceed: should I use an augmented-reality framework and detect the face, or use Core Image to detect the face and process it accordingly? Please let me know if anyone has an idea of how to implement this.
Answered by Pau Senabre
I would recommend going with Core Image and CIDetector: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_detect_faces/ci_detect_faces.html It has been available since iOS 5 and it has great documentation.
Creating a face detector example:
CIContext *context = [CIContext contextWithOptions:nil];               // 1
NSDictionary *opts = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh }; // 2
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context
                                          options:opts];               // 3
opts = @{ CIDetectorImageOrientation :
          [[myImage properties] valueForKey:kCGImagePropertyOrientation] }; // 4
NSArray *features = [detector featuresInImage:myImage options:opts];   // 5
Here's what the code does:
1.- Creates a context; in this example, a context for iOS. You can use any of the context-creation functions described in Processing Images. You also have the option of supplying nil instead of a context when you create the detector.
2.- Creates an options dictionary to specify accuracy for the detector. You can specify low or high accuracy. Low accuracy (CIDetectorAccuracyLow) is fast; high accuracy, shown in this example, is thorough but slower.
3.- Creates a detector for faces. The only type of detector you can create is one for human faces.
4.- Sets up an options dictionary for finding faces. It's important to let Core Image know the image orientation so the detector knows where it can find upright faces. Most of the time you'll read the image orientation from the image itself, and then provide that value to the options dictionary.
5.- Uses the detector to find features in an image. The image you provide must be a CIImage object. Core Image returns an array of CIFeature objects, each of which represents a face in the image.
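To connect this back to the face-filter use case, here is a hedged sketch (not from the original answer) of reading landmark positions from the returned features. Each CIFaceFeature exposes the face bounds plus optional eye and mouth positions, which is what a filter overlay would typically be anchored to; `features` and `myImage` are assumed to come from the snippet above.

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Iterate the detected faces and read the landmark positions.
for (CIFaceFeature *face in features) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));

    // Landmark positions are optional; always check the has* flags first.
    if (face.hasLeftEyePosition) {
        NSLog(@"Left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
    }
    if (face.hasRightEyePosition) {
        NSLog(@"Right eye at %@", NSStringFromCGPoint(face.rightEyePosition));
    }
    if (face.hasMouthPosition) {
        // A filter sprite would be positioned relative to these points,
        // e.g. a mustache drawn just above the mouth position.
        NSLog(@"Mouth at %@", NSStringFromCGPoint(face.mouthPosition));
    }
}
```

Note that Core Image's coordinate system has its origin at the bottom-left, so these points usually need to be flipped before drawing in UIKit coordinates.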
Here are some open projects that could help you get started with Core Image, or with other technologies such as GPUImage or OpenCV:
1. https://github.com/aaronabentheuer/AAFaceDetection (CIDetector, Swift)
2. https://github.com/BradLarson/GPUImage (Objective-C)
3. https://github.com/jeroentrappers/FaceDetectionPOC (Objective-C; it has code deprecated as of iOS 9)
4. https://github.com/kairosinc/Kairos-SDK-iOS (Objective-C)
5. https://github.com/macmade/FaceDetect (OpenCV)
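Since the question asks about live filters rather than still images, the same detector can be run per camera frame. The following is a minimal, untested sketch (my addition, not part of the original answer) of an AVFoundation sample-buffer callback; the capture-session and delegate wiring are omitted, and `self.faceDetector` is assumed to be a CIDetector created once as shown above.

```objc
// In a class conforming to AVCaptureVideoDataOutputSampleBufferDelegate.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Reuse one CIDetector; creating it per frame is expensive.
    // @6 is the EXIF orientation for a portrait back camera (an assumption;
    // derive it from the device orientation in real code).
    NSArray *faces = [self.faceDetector
        featuresInImage:frame
                options:@{ CIDetectorImageOrientation : @6 }];

    // Position the filter overlay for each detected face here.
}
```

Passing the CIDetectorTracking option when creating the detector can also help keep face identities stable across frames.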
Answered by girish_pro
I am developing the same kind of app. I used the ofxFaceTracker library from openFrameworks for this. It provides a mesh containing the eyes, mouth, face border, and nose positions and points (vertices).
You can use this.
Answered by LucasRT
I am testing with Unity + OpenCV for Unity. Next I will try out how ofxFaceTracker does its gesture tracking. Filters can be done using the GLES shaders available in Unity, and there are also lots of plugins in the Asset Store that help with the real-time rendering you need.