iOS Swift - Custom camera overlay

Note: this page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): Stack Overflow.

Original question: http://stackoverflow.com/questions/34535452/

Asked by hellosheikh
Hello, I would like to open the camera in my app like this.

I want the camera to appear only in the middle section of the screen, so the user can take a snapshot only inside that rectangular section.

The code I am using is this:
import UIKit
import AVFoundation

class TakeProductPhotoController: UIViewController {

    let captureSession = AVCaptureSession()
    var previewLayer: AVCaptureVideoPreviewLayer?

    // If we find a device we'll store it here for later use
    var captureDevice: AVCaptureDevice?

    override func viewDidLoad() {
        super.viewDidLoad()

        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        let devices = AVCaptureDevice.devices()

        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if device.hasMediaType(AVMediaTypeVideo) {
                // Finally check the position and confirm we've got the back camera
                if device.position == AVCaptureDevicePosition.Back {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        print("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    }

    func updateDeviceSettings(focusValue: Float, isoValue: Float) {
        if let device = captureDevice {
            do {
                try device.lockForConfiguration()
            } catch let error as NSError {
                print("lockForConfiguration failed: \(error.localizedDescription)")
                return
            }

            device.setFocusModeLockedWithLensPosition(focusValue, completionHandler: nil)

            // Clamp the ISO between minISO and maxISO based on the active format
            let minISO = device.activeFormat.minISO
            let maxISO = device.activeFormat.maxISO
            let clampedISO = isoValue * (maxISO - minISO) + minISO

            device.setExposureModeCustomWithDuration(AVCaptureExposureDurationCurrent, ISO: clampedISO, completionHandler: nil)

            device.unlockForConfiguration()
        }
    }

    func touchPercent(touch: UITouch) -> CGPoint {
        // Get the dimensions of the screen in points
        let screenSize = UIScreen.mainScreen().bounds.size

        // The tapped position divided by the screen's width/height,
        // giving x and y values between 0 and 1
        var touchPer = CGPointZero
        touchPer.x = touch.locationInView(self.view).x / screenSize.width
        touchPer.y = touch.locationInView(self.view).y / screenSize.height

        return touchPer
    }

    func focusTo(value: Float) {
        if let device = captureDevice {
            do {
                try device.lockForConfiguration()
            } catch let error as NSError {
                print("lockForConfiguration failed: \(error.localizedDescription)")
                return
            }
            device.setFocusModeLockedWithLensPosition(value, completionHandler: nil)
            device.unlockForConfiguration()
        }
    }

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
        if let touch = touches.first {
            let touchPer = touchPercent(touch)
            updateDeviceSettings(Float(touchPer.x), isoValue: Float(touchPer.y))
        }
        super.touchesBegan(touches, withEvent: event)
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
        if let touch = touches.first {
            let touchPer = touchPercent(touch)
            updateDeviceSettings(Float(touchPer.x), isoValue: Float(touchPer.y))
        }
    }

    func configureDevice() {
        if let device = captureDevice {
            do {
                try device.lockForConfiguration()
            } catch let error as NSError {
                print("lockForConfiguration failed: \(error.localizedDescription)")
                return
            }
            device.focusMode = .Locked
            device.unlockForConfiguration()
        }
    }

    func beginSession() {
        configureDevice()

        // Bail out if the device input cannot be created, instead of
        // adding a nil input to the session
        let deviceInput: AVCaptureDeviceInput
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            print("error: \(error.localizedDescription)")
            return
        }
        captureSession.addInput(deviceInput)

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        self.view.layer.addSublayer(previewLayer!)
        previewLayer?.frame = self.view.layer.frame
        captureSession.startRunning()
    }
}
With this code, the camera preview takes up the whole screen.
Accepted answer by Teja Nandamuri
If you want to show the camera inside a custom UIView, you need to adjust the AVCaptureVideoPreviewLayer: you can change its bounds and its position, and you can also add a mask to it.
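As an illustration of the mask approach, a CAShapeLayer can restrict the visible preview to a rectangle. This is a minimal sketch, not code from the original answer; the rect values are placeholders for your own overlay frame:

// Sketch of masking the preview layer (not from the original answer).
// The rect values are placeholders -- substitute your overlay's frame.
let maskLayer = CAShapeLayer()
maskLayer.path = UIBezierPath(rect: CGRectMake(0, 150, view.bounds.width, 300)).CGPath
previewLayer?.mask = maskLayer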
Coming to your question, the capture layer fills the whole screen because you have:
previewLayer?.frame = self.view.layer.frame
Change this line to use the overlay view's frame:
previewLayer?.frame = self.overLayView.layer.frame
Or, if you want to position the camera layer manually using raw values:
previewLayer?.frame = CGRectMake(x, y, width, height)
Also note that if you want to show the camera inside the overlay view, you need to add the preview layer as a sublayer of that overlay view. So this line:
self.view.layer.addSublayer(previewLayer!)
becomes:
self.overLayView.layer.addSublayer(previewLayer!)
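Putting both changes together, the end of beginSession() would look roughly like this. A sketch, assuming overLayView is an outlet for the rectangular section; note that once the layer is a sublayer of overLayView, its frame is expressed in that view's local coordinate space, so bounds is the right rectangle:

// Sketch: assuming overLayView is an outlet for the rectangular section.
// The layer now lives inside overLayView, so size it with bounds
// (local coordinates) rather than frame (superview coordinates).
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer!.frame = self.overLayView.bounds
self.overLayView.layer.addSublayer(previewLayer!)
captureSession.startRunning()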
To stretch the layer so that it fills the preview view:
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

let bounds = cameraView.layer.frame
previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer!.bounds = bounds
previewLayer!.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds))
self.view.layer.addSublayer(previewLayer!)
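One caveat, assuming the views are laid out with Auto Layout: frames are not final in viewDidLoad, so a preview layer sized there can end up misplaced. Re-applying the geometry after layout has finished, for example in viewDidLayoutSubviews, avoids this (using the cameraView outlet from the snippet above):

// Frames set in viewDidLoad are not final under Auto Layout;
// re-apply the preview layer's geometry once layout has run.
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    previewLayer?.frame = cameraView.layer.frame
}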