Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/11036106/
Three.js Projector and Ray objects
Asked by Cory Gross
I have been trying to work with the Projector and Ray classes in order to do some collision detection demos. I have started just trying to use the mouse to select objects or to drag them. I have looked at examples that use the objects, but none of them seem to have comments explaining what exactly some of the methods of Projector and Ray are doing. I have a couple questions that I am hoping will be easy for someone to answer.
What exactly is happening, and what is the difference between Projector.projectVector() and Projector.unprojectVector()? I notice that in all the examples using both projector and ray objects, the unproject method is called before the ray is created. When would you use projectVector?
I am using the following code in this demo to spin the cube when dragged on with the mouse. Can someone explain in simple terms what exactly is happening when I unproject with mouse3D and the camera and then create the Ray? Does the ray depend on the call to unprojectVector()?
/** Event fired when the mouse button is pressed down */
function onDocumentMouseDown(event) {
    event.preventDefault();
    mouseDown = true;
    mouse3D.x = mouse2D.x = mouseDown2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = mouseDown2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    /** Project from camera through the mouse and create a ray */
    projector.unprojectVector(mouse3D, camera);
    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
    var intersects = ray.intersectObject(crateMesh); // store intersecting objects
    if (intersects.length > 0) {
        SELECTED = intersects[0].object;
        var intersects = ray.intersectObject(plane);
    }
}
/** This event handler is only fired after the mouse down event and
    before the mouse up event, and only when the mouse moves */
function onDocumentMouseMove(event) {
    event.preventDefault();
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    projector.unprojectVector(mouse3D, camera);
    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
    if (SELECTED) {
        var intersects = ray.intersectObject(plane);
        dragVector.sub(mouse2D, mouseDown2D);
        return;
    }
    var intersects = ray.intersectObject(crateMesh);
    if (intersects.length > 0) {
        if (INTERSECTED != intersects[0].object) {
            INTERSECTED = intersects[0].object;
        }
    } else {
        INTERSECTED = null;
    }
}
/** Removes event listeners when the mouse button is let go */
function onDocumentMouseUp(event) {
    event.preventDefault();
    /** Update mouse position */
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    if (INTERSECTED) {
        SELECTED = null;
    }
    mouseDown = false;
    dragVector.set(0, 0);
}

/** Removes event listeners if the mouse runs off the renderer */
function onDocumentMouseOut(event) {
    event.preventDefault();
    if (INTERSECTED) {
        plane.position.copy(INTERSECTED.position);
        SELECTED = null;
    }
    mouseDown = false;
    dragVector.set(0, 0);
}
Answered by acarlon
I found that I needed to go a bit deeper under the surface to work outside of the scope of the sample code (such as having a canvas that does not fill the screen or having additional effects). I wrote a blog post about it here. This is a shortened version, but should cover pretty much everything I found.
How to do it
The following code (similar to that already provided by @mrdoob) will change the color of a cube when clicked:
var mouse3D = new THREE.Vector3( ( event.clientX / window.innerWidth ) * 2 - 1,   // x
                                 -( event.clientY / window.innerHeight ) * 2 + 1, // y
                                 0.5 );                                           // z
projector.unprojectVector( mouse3D, camera );
mouse3D.sub( camera.position );
mouse3D.normalize();
var raycaster = new THREE.Raycaster( camera.position, mouse3D );
var intersects = raycaster.intersectObjects( objects );
// Change color if hit block
if ( intersects.length > 0 ) {
    intersects[ 0 ].object.material.color.setHex( Math.random() * 0xffffff );
}
With the more recent three.js releases (around r55 and later), you can use pickingRay which simplifies things even further so that this becomes:
var mouse3D = new THREE.Vector3( ( event.clientX / window.innerWidth ) * 2 - 1,   // x
                                 -( event.clientY / window.innerHeight ) * 2 + 1, // y
                                 0.5 );                                           // z
var raycaster = projector.pickingRay( mouse3D.clone(), camera );
var intersects = raycaster.intersectObjects( objects );
// Change color if hit block
if ( intersects.length > 0 ) {
    intersects[ 0 ].object.material.color.setHex( Math.random() * 0xffffff );
}
Let's stick with the old approach, as it gives more insight into what is happening under the hood. You can see this working here; simply click on the cube to change its colour.
What's happening?
var mouse3D = new THREE.Vector3( ( event.clientX / window.innerWidth ) * 2 - 1,   // x
                                 -( event.clientY / window.innerHeight ) * 2 + 1, // y
                                 0.5 );                                           // z
event.clientX is the x coordinate of the click position. Dividing by window.innerWidth gives the position of the click as a proportion of the full window width. Basically, this translates from screen coordinates, which start at (0,0) at the top left and run to (window.innerWidth, window.innerHeight) at the bottom right, to Cartesian coordinates with center (0,0), ranging from (-1,-1) to (1,1), as shown below:
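The conversion just described can be sketched as a small helper (a hypothetical `toNDC` function, assuming the canvas fills the window):

```javascript
// Convert window (screen) coordinates to normalized device coordinates (NDC).
// Screen space runs from (0,0) at the top left to (width,height) at the
// bottom right; NDC runs from (-1,-1) to (1,1) with y pointing up,
// which is why the y term is negated.
function toNDC(clientX, clientY, width, height) {
    return {
        x: (clientX / width) * 2 - 1,
        y: -(clientY / height) * 2 + 1
    };
}
```

Clicking the top-left corner gives (-1, 1), the center gives (0, 0), and the bottom-right corner gives (1, -1).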
Note that z has a value of 0.5. I won't go into too much detail about the z value at this point except to say that this is the depth of the point away from the camera that we are projecting into 3D space along the z axis. More on this later.
Next:
projector.unprojectVector( mouse3D, camera );
If you look at the three.js code you will see that this is really an inversion of the projection matrix from the 3D world to the camera. Bear in mind that in order to get from 3D world coordinates to a projection on the screen, the 3D world needs to be projected onto the 2D surface of the camera (which is what you see on your screen). We are basically doing the inverse.
Note that mouse3D will now contain this unprojected value. This is the position of a point in 3D space along the ray/trajectory that we are interested in. The exact point depends on the z value (we will see this later).
At this point, it may be useful to have a look at the following image:
The point that we have just calculated (mouse3D) is shown by the green dot. Note that the size of the dots is purely illustrative; it has no bearing on the size of the camera or the mouse3D point. We are more interested in the coordinates at the center of the dots.
Now, we don't just want a single point in 3D space, but instead we want a ray/trajectory (shown by the black dots) so that we can determine whether an object is positioned along this ray/trajectory. Note that the points shown along the ray are just arbitrary points, the ray is a direction from the camera, not a set of points.
Fortunately, because we have a point along the ray and we know that the trajectory must pass from the camera through this point, we can determine the direction of the ray. Therefore, the next step is to subtract the camera position from the mouse3D position; this gives a directional vector rather than just a single point:
mouse3D.sub( camera.position );
mouse3D.normalize();
We now have a direction from the camera to this point in 3D space (mouse3D now contains this direction). This is then turned into a unit vector by normalizing it.
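In plain numbers, the subtract-and-normalize step looks like this (a minimal hand-rolled sketch, no three.js needed):

```javascript
// Unit direction from the camera position to the unprojected point.
// This mirrors mouse3D.sub(camera.position).normalize() from the code above.
function directionTo(cameraPos, point) {
    var dx = point.x - cameraPos.x;
    var dy = point.y - cameraPos.y;
    var dz = point.z - cameraPos.z;
    var len = Math.sqrt(dx * dx + dy * dy + dz * dz);
    return { x: dx / len, y: dy / len, z: dz / len };
}
```

For a camera at the origin and an unprojected point at (3, 4, 0), this returns (0.6, 0.8, 0): a unit vector pointing at the point.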
The next step is to create a ray (Raycaster) starting from the camera position and using the direction (mouse3D) to cast the ray:
var raycaster = new THREE.Raycaster( camera.position, mouse3D );
The rest of the code determines whether the objects in 3D space are intersected by the ray or not. Happily it is all taken care of for us behind the scenes by intersectObjects.
The Demo
OK, so let's look at a demo from my site here that shows these rays being cast in 3D space. When you click anywhere, the camera rotates around the object to show you how the ray is cast. Note that when the camera returns to its original position, you only see a single dot. This is because all the other dots are along the line of the projection and are therefore blocked from view by the front dot. This is similar to looking down the line of an arrow pointing directly away from you: all that you see is the base. Of course, the same applies when looking down the line of an arrow that is travelling directly towards you (you only see the head), which is generally a bad situation to be in.
The z coordinate
Let's take another look at that z coordinate. Refer to this demo as you read through this section and experiment with different values for z.
OK, let's take another look at this function:
var mouse3D = new THREE.Vector3( ( event.clientX / window.innerWidth ) * 2 - 1,   // x
                                 -( event.clientY / window.innerHeight ) * 2 + 1, // y
                                 0.5 );                                           // z
We chose 0.5 as the value. I mentioned earlier that the z coordinate dictates the depth of the projection into 3D. So, let's have a look at different values for z to see what effect it has. To do this, I have placed a blue dot where the camera is, and a line of green dots from the camera to the unprojected position. Then, after the intersections have been calculated, I move the camera back and to the side to show the ray. Best seen with a few examples.
First, a z value of 0.5:
Note the green line of dots from the camera (blue dot) to the unprojected value (the coordinate in 3D space). This is like the barrel of a gun, pointing in the direction that the ray should be cast. The green line essentially represents the direction that is calculated before being normalised.
OK, let's try a value of 0.9:
As you can see, the green line has now extended further into 3D space. 0.99 extends even further.
I do not know whether the magnitude of z matters much. It seems that a bigger value would be more precise (like a longer gun barrel), but since we are calculating the direction, even a short distance should be pretty accurate. The examples that I have seen use 0.5, so that is what I will stick with unless told otherwise.
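That intuition is easy to check directly: any two points at different depths along the same ray give the same direction once normalized. A hand-rolled sketch (the helper below is hypothetical, not a three.js API):

```javascript
// Unit direction from one point to another.
function unitDirection(from, to) {
    var dx = to.x - from.x, dy = to.y - from.y, dz = to.z - from.z;
    var len = Math.sqrt(dx * dx + dy * dy + dz * dz);
    return { x: dx / len, y: dy / len, z: dz / len };
}

// Walk along a ray from the camera: one point close by (a "short barrel"),
// one much further out (a "longer barrel").
var cameraPos = { x: 1, y: 2, z: 3 };
var dir = { x: 0, y: 0.6, z: -0.8 }; // unit direction of the ray
function along(t) {
    return { x: cameraPos.x + t * dir.x,
             y: cameraPos.y + t * dir.y,
             z: cameraPos.z + t * dir.z };
}
var nearDir = unitDirection(cameraPos, along(0.5));
var farDir  = unitDirection(cameraPos, along(100));
// nearDir and farDir agree with dir, up to floating point error.
```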
Projection when the canvas is not full screen
Now that we know a bit more about what is going on, we can figure out what the values should be when the canvas does not fill the window and is positioned on the page. Say, for example, that:
- the div containing the three.js canvas is offsetX from the left and offsetY from the top of the screen.
- the canvas has a width equal to viewWidth and height equal to viewHeight.
The code would then be:
var mouse3D = new THREE.Vector3( ( event.clientX - offsetX ) / viewWidth * 2 - 1,
-( event.clientY - offsetY ) / viewHeight * 2 + 1,
0.5 );
Basically, what we are doing is calculating the position of the mouse click relative to the canvas (for x: event.clientX - offsetX). Then we determine proportionally where the click occurred (for x: / viewWidth), similar to when the canvas filled the window.
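As a sketch, the offset-aware conversion can be wrapped up like this, using the hypothetical offsetX/offsetY/viewWidth/viewHeight values described above:

```javascript
// NDC conversion for a canvas that does not fill the window.
// (offsetX, offsetY) is the canvas's top-left corner in page coordinates;
// viewWidth and viewHeight are the canvas dimensions.
function toCanvasNDC(clientX, clientY, offsetX, offsetY, viewWidth, viewHeight) {
    return {
        x: ((clientX - offsetX) / viewWidth) * 2 - 1,
        y: -((clientY - offsetY) / viewHeight) * 2 + 1
    };
}
```

With a 400×300 canvas offset by (100, 50), clicking at page position (300, 200), i.e. the canvas center, yields (0, 0).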
That's it, hopefully it helps.
Answered by mrdoob
Basically, you need to project between 3D world space and 2D screen space.
Renderers use projectVector for translating 3D points to the 2D screen. unprojectVector is basically for doing the inverse, unprojecting 2D points into the 3D world. For both methods you pass the camera you're viewing the scene through.
So, in this code you're creating a normalised vector in 2D space. To be honest, I was never too sure about the z = 0.5 logic.
mouse3D.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse3D.y = -(event.clientY / window.innerHeight) * 2 + 1;
mouse3D.z = 0.5;
Then, this code uses the camera projection matrix to transform it to our 3D world space.
projector.unprojectVector(mouse3D, camera);
With the mouse3D point converted into the 3D space, we can now use it for getting the direction and then use the camera position to throw a ray from.
var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
var intersects = ray.intersectObject(plane);
Answered by Prabu Arumugam
As of release r70, Projector.unprojectVector and Projector.pickingRay are deprecated. Instead, we have raycaster.setFromCamera, which makes life easier when finding objects under the mouse pointer.
var mouse = new THREE.Vector2();
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera);
var intersects = raycaster.intersectObjects(scene.children);
intersects[0].object gives the object under the mouse pointer, and intersects[0].point gives the point on the object where the mouse pointer was clicked.
Answered by pailhead
Projector.unprojectVector() treats the vec3 as a position. During the process the vector gets translated, hence we use .sub(camera.position) on it. Plus we need to normalize it after this operation.
I will add some graphics to this post but for now I can describe the geometry of the operation.
Geometrically, we can think of the camera as a pyramid. We in fact define it with 6 planes: left, right, top, bottom, near and far (near being the plane closest to the tip).
If we were standing somewhere in 3D and observing these operations, we would see this pyramid in an arbitrary position with an arbitrary rotation in space. Let's say that this pyramid's origin is at its tip, and its negative z axis runs towards the bottom.
Whatever ends up being contained within those 6 planes will end up being rendered on our screen if we apply the correct sequence of matrix transformations. In OpenGL, that goes something like this:
NDC_or_homogenous_coordinates = projectionMatrix * viewMatrix * modelMatrix * position.xyzw;
This takes our mesh from its object space into world space, then into camera space, and finally applies the perspective projection matrix, which essentially puts everything into a small cube (NDC, with ranges from -1 to 1).
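The very last step of that pipeline, going from homogeneous clip space into the NDC cube, is just the perspective divide by w. A minimal sketch:

```javascript
// Homogeneous clip-space coordinates become normalized device coordinates
// (NDC) through division by the w component.
function clipToNDC(clip) {
    return { x: clip.x / clip.w, y: clip.y / clip.w, z: clip.z / clip.w };
}
```

For example, a clip-space point (2, -1, 4, 4) lands at (0.5, -0.25, 1) in NDC, i.e. right on the far plane.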
Object space can be a neat set of xyz coordinates: something you generate procedurally, or say a 3D model that an artist modeled using symmetry and that therefore sits neatly aligned with the coordinate space, as opposed to an architectural model obtained from something like REVIT or AutoCAD.
An objectMatrix could sit in between the model matrix and the view matrix, but this is usually taken care of ahead of time: say, flipping y and z, bringing a model that's far away from the origin into bounds, converting units, etc.
If we think of our flat 2D screen as if it had depth, it can be described the same way as the NDC cube, albeit slightly distorted. This is why we supply the aspect ratio to the camera. If we imagine a square the size of our screen height, the remainder is the aspect ratio by which we need to scale our x coordinates.
Now back to 3d space.
We're standing in a 3D scene and we see the pyramid. If we cut away everything around the pyramid, then take the pyramid along with the part of the scene contained in it, put its tip at (0,0,0), and point the bottom towards the -z axis, we end up here:
viewMatrix * modelMatrix * position.xyzw
Multiplying this by the projection matrix is the same as taking the tip and pulling it apart along the x and y axes, creating a square out of that one point and turning the pyramid into a box.
In this process the box gets scaled to the range -1 to 1, we get our perspective projection, and we end up here:
projectionMatrix * viewMatrix * modelMatrix * position.xyzw;
In this space, we have control over a 2 dimensional mouse event. Since it's on our screen, we know that it's two dimensional, and that it's somewhere within the NDC cube. If it's two dimensional, we can say that we know X and Y but not the Z, hence the need for ray casting.
So when we cast a ray, we are basically sending a line through the cube, perpendicular to one of its sides.
Now we need to figure out whether that ray hits something in the scene, and to do that we need to transform the ray from this cube into some space suitable for computation. We want the ray in world space.
A ray is an infinite line in space. It's different from a vector because it has a direction and must pass through a point in space, and indeed this is how the Raycaster takes its arguments.
So if we squeeze the top of the box, along with the line, back into the pyramid, the line will originate from the tip and run down to intersect the bottom of the pyramid somewhere between mouse.x * farRange and -mouse.y * farRange.
(-1 and 1 at first, but view space is in world scale, just rotated and moved)
Since this is, so to speak, the camera's default location (its object space), if we apply the camera's own world matrix to the ray, we transform it along with the camera.
Since the ray passes through (0,0,0), we only have its direction, and THREE.Vector3 has a method for transforming a direction:
THREE.Vector3.transformDirection()
It also normalizes the vector in the process.
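The idea behind transformDirection can be sketched by hand: apply only the rotation part (the upper 3×3) of a 4×4 matrix to the vector, ignore the translation column, then normalize. This is a simplified illustration, not three.js's actual implementation:

```javascript
// Apply the rotation part of a row-major 4x4 matrix (nested arrays) to a
// direction vector. The translation column (m[i][3]) is deliberately ignored,
// and the result is normalized.
function transformDirection(m, d) {
    var x = m[0][0] * d.x + m[0][1] * d.y + m[0][2] * d.z;
    var y = m[1][0] * d.x + m[1][1] * d.y + m[1][2] * d.z;
    var z = m[2][0] * d.x + m[2][1] * d.y + m[2][2] * d.z;
    var len = Math.sqrt(x * x + y * y + z * z);
    return { x: x / len, y: y / len, z: z / len };
}

// A 90-degree rotation about the z axis combined with a large translation;
// the translation has no effect on the transformed direction.
var world = [
    [0, -1, 0, 500],
    [1,  0, 0, 500],
    [0,  0, 1, 500],
    [0,  0, 0,   1]
];
```

transformDirection(world, {x: 1, y: 0, z: 0}) gives (0, 1, 0): the direction rotated 90 degrees, unaffected by the translation.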
The Z coordinate in the method above
This essentially works with any value, and acts the same because of the way the NDC cube works. The near plane and far plane are projected onto -1 and 1.
So when you say, shoot a ray at:
[ mouse.x | mouse.y | someZpositive ]
you send a line through the point (mouse.x, mouse.y, 1) in the direction (0, 0, someZpositive).
If you relate this to the box/pyramid example, this point is at the bottom, and since the line originates from the camera it goes through that point as well.
But in the NDC space this point is stretched to infinity, and the line ends up parallel with the left, top, right and bottom planes.
Unprojecting with the above method essentially turns this into a position/point. The far plane just gets mapped into world space, so our point sits somewhere at z = -1, between -cameraAspect and +cameraAspect on x, and between -1 and 1 on y.
Since it's a point, applying the camera's world matrix will not only rotate it but translate it as well. Hence the need to bring it back to the origin by subtracting the camera's position.