Xcode: check whether the ARReferenceImage is no longer visible in the camera's view
Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/49997025/
Check whether the ARReferenceImage is no longer visible in the camera's view
Asked by KNV
I would like to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still considered visible when the ARReferenceImage is covered with another image or when the image is removed.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let node = self.currentImageNode else { return }

    if let pointOfView = sceneView.pointOfView {
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        print("Is node visible: \(isVisible)")
    }
}
So I need to check whether the image itself is no longer visible, rather than the visibility of the image's node, but I can't find out whether this is possible. The first screenshot shows three boxes that are added when the image beneath them is found. When the found image is covered (see screenshot 2), I would like to remove the boxes.
Accepted answer by KNV
I managed to fix the problem! I used a little of Maybe1's code and his concept for solving the problem, but in a different way. The following line of code is still used to reactivate the image recognition.
// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor)
Let me explain. First we need to add some variables.
// The scnNodeBarn variable will be the node to be added when the barn image is found. Add another scnNode when you have another image.
var scnNodeBarn: SCNNode = SCNNode()
// This variable holds the currently added scnNode (in this case scnNodeBarn when the barn image is found)
var currentNode: SCNNode? = nil
// This variable holds the UUID of the found Image Anchor that is used to add a scnNode
var currentARImageAnchorIdentifier: UUID?
// This variable is used to call a function when there is no new anchor added for 0.6 seconds
var timer: Timer!
The complete code, with comments, is below.
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // The following timer fires after 0.6 seconds, but it is restarted every time an anchor is found.
    // So when no ARImageAnchor has been found for 0.6 seconds, the timer completes, the current scene node is deleted and the variable is set to nil.
    DispatchQueue.main.async {
        if self.timer != nil {
            self.timer.invalidate()
        }
        self.timer = Timer.scheduledTimer(timeInterval: 0.6, target: self, selector: #selector(self.imageLost(_:)), userInfo: nil, repeats: false)
    }

    // Check whether a new image was found, based on the ARImageAnchor identifier. If so, delete the current scene node and set the variable to nil.
    if self.currentARImageAnchorIdentifier != imageAnchor.identifier &&
        self.currentARImageAnchorIdentifier != nil &&
        self.currentNode != nil {
        // Found a new image
        self.currentNode!.removeFromParentNode()
        self.currentNode = nil
    }

    updateQueue.async {
        // If currentNode is nil, there is currently no scene node
        if self.currentNode == nil {
            switch referenceImage.name {
            case "barn"?:
                self.scnNodeBarn.transform = node.transform
                self.sceneView.scene.rootNode.addChildNode(self.scnNodeBarn)
                self.currentNode = self.scnNodeBarn
            default: break
            }
        }
        self.currentARImageAnchorIdentifier = imageAnchor.identifier

        // Delete the anchor from the session to reactivate the image recognition
        self.sceneView.session.remove(anchor: anchor)
    }
}
Delete the node when the timer fires, indicating that no new ARImageAnchor was found:
@objc
func imageLost(_ sender: Timer) {
    self.currentNode?.removeFromParentNode()
    self.currentNode = nil
}
In this way the currently added scnNode will be deleted when the image is covered or when a new image is found.
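For completeness: the accepted answer assumes the session is already running with image detection enabled. A minimal setup sketch (my assumption, not part of the original answer) that loads the reference images from an asset catalog group named "AR Resources" could look like this:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Assumption: the reference images (e.g. the "barn" image) live in an
    // asset catalog resource group named "AR Resources".
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing expected asset catalog resources.")
    }

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}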
Unfortunately, this solution does not solve the positioning problem of images, because of the following:
ARKit doesn't track changes to the position or orientation of each detected image.
Answered by jlsiewert
I don't think this is currently possible.
From the Recognizing Images in an AR Experience documentation:
Design your AR experience to use detected images as a starting point for virtual content.
ARKit doesn't track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene.
New Answer for iOS 12.0
ARKit 2.0 and iOS 12 finally add this feature, either via ARImageTrackingConfiguration or via the ARWorldTrackingConfiguration.detectionImages property, which now also tracks the position of the images.
The Apple documentation for ARImageTrackingConfiguration lists the advantages of both methods:
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. ARWorldTrackingConfiguration can also detect images, but each configuration has its own strengths:
World tracking has a higher performance cost than image-only tracking, so your session can reliably track more images at once with ARImageTrackingConfiguration.
Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view.
World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations—for example, an advertisement inside a moving subway car.
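As a hedged illustration (not from the original answer), a minimal ARImageTrackingConfiguration setup on iOS 12 could look like the sketch below, assuming the reference images are stored in an asset catalog group named "AR Resources". With this configuration the anchor's node follows the image while it is in view, and ARImageAnchor.isTracked becomes false when the image is lost.

let configuration = ARImageTrackingConfiguration()

// Assumption: reference images live in an asset catalog group named "AR Resources".
if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.trackingImages = trackingImages
    configuration.maximumNumberOfTrackedImages = 2
}

sceneView.session.run(configuration)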
Answered by Abraham Torres
The correct way to check whether an image you are tracking is currently not being tracked by ARKit is to use the isTracked property of the ARImageAnchor inside the renderer(_:didUpdate:for:) delegate method.
For that, I use the following struct:
struct TrackedImage {
    var name: String
    var node: SCNNode?
}
And then an array of that struct with the names of all the images:
var trackedImages : [TrackedImage] = [ TrackedImage(name: "image_1", node: nil) ]
Then, in renderer(_:didAdd:for:), add the new content to the scene and store the node in the corresponding element of the trackedImages array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Check if the added anchor is a recognized ARImageAnchor
    if let imageAnchor = anchor as? ARImageAnchor {
        // Get the reference AR image
        let referenceImage = imageAnchor.referenceImage
        // Create a plane to match the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor(red: 1, green: 1, blue: 1, alpha: 0.5)
        // Create an SCNNode from the plane
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane to the scene.
        node.addChildNode(planeNode)
        // Store the node for this reference image in the tracked images array
        for (index, trackedImage) in trackedImages.enumerated() {
            if trackedImage.name == referenceImage.name {
                trackedImages[index].node = planeNode
            }
        }
    }
}
Finally, in renderer(_:didUpdate:for:), we look up the anchor's reference image name in the array and check whether isTracked is false:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        // Search for the node that corresponds to this AR image anchor
        for trackedImage in trackedImages {
            if trackedImage.name == imageAnchor.referenceImage.name {
                // Check whether tracking has been lost for the AR image
                if imageAnchor.isTracked {
                    // The image is being tracked
                    trackedImage.node?.isHidden = false // Show or add content
                } else {
                    // The image is lost
                    trackedImage.node?.isHidden = true // Hide or delete content
                }
                break
            }
        }
    }
}
This solution works when you want to track multiple images at the same time and know when any of them is lost.
Note: for this solution to work, maximumNumberOfTrackedImages in the AR configuration must be set to a nonzero number.
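As a minimal sketch of that configuration (my own illustration, assuming iOS 12 or later and an asset catalog group named "AR Resources"):

let configuration = ARWorldTrackingConfiguration()

// Assumption: reference images live in an asset catalog group named "AR Resources".
if let detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.detectionImages = detectionImages
}

// Must be nonzero so that ARKit keeps tracking detected images and
// updates ARImageAnchor.isTracked (iOS 12+).
configuration.maximumNumberOfTrackedImages = 2

sceneView.session.run(configuration)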
Answered by vx2ko
For what it's worth, I spent hours trying to figure out how to constantly check for image references. The didUpdate function was the answer. Then you just need to test whether the reference image is being tracked using the .isTracked property. At that point, you can set the .isHidden property to true or false. Here's my example:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.isTracked {
            trackedNode.isHidden = false
            print("\(trackedNode.name)")
        } else {
            trackedNode.isHidden = true
            //print("\(trackedImageName)")
            print("No image in view")
        }
    }
}
Answered by Дмитрий Акимов
This code works only if you hold the device strictly horizontally or vertically. If you hold the iPhone tilted, or start to tilt it, this code doesn't work:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets {
        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView) {
            print("Node Is Visible")
        } else {
            print("Node Is Not Visible")
        }
    }
}
Answered by BlackMirrorz
I'm not entirely sure I have understood what you're asking (so apologies), but if I have, then perhaps this might help...
It seems that for insideOfFrustum to work correctly, there must be some SCNGeometry associated with the node (an SCNNode alone will not suffice).
For example, if we do something like this in the delegate callback and save the added SCNNode into an array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected, Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Store The Node
    imageTargets.append(node)
}
And then use the insideOfFrustum method, 99% of the time it will report that the node is in view, even when we know it shouldn't be.
However, if we do something like this (whereby we create a transparent marker node, i.e. one that has some geometry):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected, Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Create A Transparent Geometry
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.clear

    //4. Store The Node
    imageTargets.append(node)
}
And then call the following method, it does detect whether the ARReferenceImage is in view:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets {
        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView) {
            print("Node Is Visible")
        } else {
            print("Node Is Not Visible")
        }
    }
}
In regard to your other point about an SCNNode being occluded by another one, the Apple docs state that insideOfFrustum:
does not perform occlusion testing. That is, it returns true if the tested node lies within the specified viewing frustum regardless of whether that node's contents are obscured by other geometry.
Again, apologies if I haven't understood you correctly, but hopefully it might help to some extent...
Update:
Now that I fully understand your question, I agree with @orangenkopf that this isn't possible, since, as the docs state:
ARKit doesn't track changes to the position or orientation of each detected image.
Answered by YMonnier
From the Recognizing Images in an AR Experience documentation:
ARKit adds an image anchor to a session exactly once for each reference image in the session configuration's detectionImages array. If your AR experience adds virtual content to the scene when an image is detected, that action will by default happen only once. To allow the user to experience that content again without restarting your app, call the session's remove(anchor:) method to remove the corresponding ARImageAnchor. After the anchor is removed, ARKit will add a new anchor the next time it detects the image.
So, maybe you can find a workaround for your case:
Let's say we have a structure that saves the detected ARImageAnchor and the associated virtual content:
struct ARImage {
    var anchor: ARImageAnchor
    var node: SCNNode
}
Then, when renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) is called, save the detected image into a temporary list of ARImage:
...
var tmpARImages: [ARImage] = []

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // If the ARImage does not exist yet
    if !tmpARImages.contains(where: { $0.anchor.referenceImage.name == referenceImage.name }) {
        let virtualContent = SCNNode(...)
        node.addChildNode(virtualContent)

        tmpARImages.append(ARImage(anchor: imageAnchor, node: virtualContent))
    }

    // Delete the anchor from the session to reactivate the image recognition
    sceneView.session.remove(anchor: anchor)
}
If you follow, as long as your camera can see the image/marker, the delegate function will keep being called in a loop (because we removed the anchor from the session).
The idea is to combine the image recognition loop, the detected images saved into the tmp list, and the sceneView.isNode(node, insideFrustumOf: pointOfView) function to determine whether a detected image/marker is no longer in view.
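A minimal sketch of the frustum-check part of that idea (my own illustration, not code from the original answer; it assumes the tmpARImages list and sceneView from above). Detecting a covered image would additionally require watching whether the re-detection loop keeps firing, for example with a timer as in the accepted answer:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }

    // Walk the temporary list backwards so entries can be removed while iterating.
    for (index, arImage) in tmpARImages.enumerated().reversed() {
        if !sceneView.isNode(arImage.node, insideFrustumOf: pointOfView) {
            // The content attached to this marker has left the camera frustum:
            // treat the image as lost and remove its virtual content.
            arImage.node.removeFromParentNode()
            tmpARImages.remove(at: index)
        }
    }
}

Note that, as other answers point out, isNode(_:insideFrustumOf:) does not perform occlusion testing, so this alone only detects when the marker leaves the frame, not when it is covered.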
I hope it was clear...