Cg Programming/Unity/Projection for Virtual Reality

This tutorial discusses off-axis perspective projection in Unity. It builds on the section "Vertex Transformations". Since only the view matrix and the projection matrix have to be changed, and this is implemented in C#, no shader programming is required.
The main application of off-axis perspective projection is virtual reality environments, for example a CAVE or a so-called fish-tank VR system. Usually, the user's head position is tracked, and for each display a perspective projection from a camera at the tracked position is computed. This way, the user experiences the illusion of looking through a window at a three-dimensional world instead of looking at a flat display.

On-axis projection refers to a camera position on the axis of symmetry of the view plane, i.e. the axis through the center of the view plane and orthogonal to it. This case is discussed in the section "Vertex Transformations".
In virtual reality environments, however, the virtual camera usually follows the user's tracked head position in order to create a parallax effect and therefore a more convincing illusion of a three-dimensional world. Since the tracked head position is not restricted to the axis of symmetry of the view plane, on-axis projection is insufficient for most virtual reality environments.
Off-axis perspective projection solves this problem by allowing arbitrary camera positions. While some low-level graphics APIs (e.g. older versions of OpenGL) support off-axis projection, they support on-axis projection better because it is the more common case. Similarly, many high-level tools such as Unity support off-axis projection but offer better support for on-axis projection: any on-axis projection can be specified with a few mouse clicks, while a script has to be written for an off-axis projection.
Off-axis perspective projection requires a view matrix and a projection matrix that differ from those of on-axis perspective projection. To compute the on-axis view matrix, the specified view direction is rotated onto the z axis, as described in the section "Vertex Transformations". The only difference for the off-axis view matrix is that this "view direction" is computed as the direction orthogonal to the specified view plane, i.e. the surface normal vector of the view plane.
The off-axis projection matrix has to be changed because the edges of the view plane are no longer symmetric about its intersection with the (technical) "view direction". Therefore, the four distances to the edges have to be computed and placed into a suitable projection matrix. For details, see Robert Kooima's description in his publication "Generalized Perspective Projection". The next section presents an implementation of this technique in Unity.
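This computation can be sketched independently of Unity. The following plain-Python fragment (hypothetical numeric values; it follows the right-handed convention of Kooima's paper, so the cross product is not negated as in the Unity script below) computes the screen basis vectors vr, vu, vn, the edge distances l, r, b, t, and applies the first two rows of the resulting projection matrix. A screen corner then lands exactly on the corresponding frustum edges in normalized device coordinates:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def normalize(v):
    m = math.sqrt(dot(v, v))
    return [x / m for x in v]

def off_axis_ndc(point, pa, pb, pc, pe, near):
    """Project a world-space point through the off-axis frustum defined
    by the screen corners pa (lower left), pb (lower right), pc (upper
    left) and the eye position pe; returns normalized device x and y."""
    vr = normalize(sub(pb, pa))    # right axis of the screen
    vu = normalize(sub(pc, pa))    # up axis of the screen
    vn = normalize(cross(vr, vu))  # screen normal (right-handed convention)
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)               # distance from eye to screen
    l = dot(vr, va) * near / d     # frustum extents on the near plane
    r = dot(vr, vb) * near / d
    b = dot(vu, va) * near / d
    t = dot(vu, vc) * near / d
    # view transform: translate by -pe, then rotate into the screen basis
    w = sub(point, pe)
    ex, ey, ez = dot(vr, w), dot(vu, w), dot(vn, w)
    # first two rows of the off-axis projection matrix, followed by the
    # perspective divide by w_clip = -ez
    ndc_x = (2 * near / (r - l) * ex + (r + l) / (r - l) * ez) / -ez
    ndc_y = (2 * near / (t - b) * ey + (t + b) / (t - b) * ez) / -ez
    return ndc_x, ndc_y

# unit screen in the plane z = 0, eye off-center at (0.2, 0.1, 1.0)
pa, pb, pc = [-0.5, -0.5, 0.0], [0.5, -0.5, 0.0], [-0.5, 0.5, 0.0]
pe = [0.2, 0.1, 1.0]
x, y = off_axis_ndc(pa, pa, pb, pc, pe, 0.1)
print(round(x, 6), round(y, 6))    # -1.0 -1.0: lower-left corner lies on the frustum edges
fx, fy = off_axis_ndc([0.2, 0.1, 0.0], pa, pb, pc, pe, 0.1)
print(round(fx, 6), round(fy, 6))  # 0.4 0.2: the point straight ahead of the eye is off-center
```

Note how the asymmetry shows up: the screen center still maps to the center of normalized device coordinates, but the point directly in front of the eye does not, which is exactly what distinguishes an off-axis from an on-axis projection.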
The following script is based on the code in Robert Kooima's publication. There are only a few differences in the implementation. One is that in Unity the view plane is most easily specified by the built-in Quad object, which has its corner vertices at (±0.5, ±0.5, 0) in object coordinates. Furthermore, the original code was written for a right-handed coordinate system while Unity uses a left-handed coordinate system; thus, the results of all cross products have to be multiplied by -1. Moreover, the code here takes into account that the camera might be looking at the back face of the Quad object.
Another difference is that the rotation of the camera GameObject and its fieldOfView parameter are used by Unity for view frustum culling; thus, the script should set them to appropriate values. (The values are irrelevant for the computation of the matrices.) Unfortunately, this can lead to problems if other scripts, e.g. a script that sets the tracked head position, also set the camera rotation. Therefore, the variable estimateViewFrustum can be used to deactivate this estimation (at the risk of incorrect view frustum culling by Unity).
If the parameter setNearClipPlane is set to true, the script sets the distance of the near clipping plane to the distance between the camera and the view plane plus the value of nearClipDistanceOffset. If that value is less than minNearClipDistance, however, it is set to minNearClipDistance. This feature is particularly useful when the script is used to render mirrors, as described in the section "Mirrors". In that case, nearClipDistanceOffset should be a negative number as close to 0 as possible while still avoiding rendering artifacts.
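As a minimal sketch of this clamping rule (plain Python, reusing the script's parameter names and default values):

```python
def near_clip(d, min_near_clip_distance=0.0001, near_clip_distance_offset=-0.01):
    """Near-plane distance: eye-to-screen distance d plus the (negative)
    offset, clamped from below to a small positive minimum."""
    return max(min_near_clip_distance, d + near_clip_distance_offset)

print(near_clip(2.0))    # 1.99: the near plane sits just in front of the view plane
print(near_clip(0.005))  # 0.0001: eye very close to the screen, clamped to the minimum
```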
// This script should be attached to a Camera object
// in Unity. Once a Quad object is specified as the
// "projectionScreen", the script computes a suitable
// view and projection matrix for the camera.
// The code is based on Robert Kooima's publication
// "Generalized Perspective Projection," 2009,
// http://csc.lsu.edu/~kooima/pdfs/gen-perspective.pdf
using UnityEngine;

// Use the following line to apply the script in the editor:
[ExecuteInEditMode]
public class ObliqueProjectionToQuad : MonoBehaviour {
   public GameObject projectionScreen;
   public bool estimateViewFrustum = true;
   public bool setNearClipPlane = false;
   public float minNearClipDistance = 0.0001f;
   public float nearClipDistanceOffset = -0.01f;

   private Camera cameraComponent;

   void OnPreCull () {
      cameraComponent = GetComponent<Camera> ();
      if (null != projectionScreen && null != cameraComponent) {
         Vector3 pa = projectionScreen.transform.TransformPoint (
            new Vector3 (-0.5f, -0.5f, 0.0f));
            // lower left corner in world coordinates
         Vector3 pb = projectionScreen.transform.TransformPoint (
            new Vector3 (0.5f, -0.5f, 0.0f));
            // lower right corner
         Vector3 pc = projectionScreen.transform.TransformPoint (
            new Vector3 (-0.5f, 0.5f, 0.0f));
            // upper left corner
         Vector3 pe = transform.position; // eye position
         float n = cameraComponent.nearClipPlane;
            // distance of near clipping plane
         float f = cameraComponent.farClipPlane;
            // distance of far clipping plane

         Vector3 va; // from pe to pa
         Vector3 vb; // from pe to pb
         Vector3 vc; // from pe to pc
         Vector3 vr; // right axis of screen
         Vector3 vu; // up axis of screen
         Vector3 vn; // normal vector of screen

         float l; // distance to left screen edge
         float r; // distance to right screen edge
         float b; // distance to bottom screen edge
         float t; // distance to top screen edge
         float d; // distance from eye to screen

         vr = pb - pa;
         vu = pc - pa;
         va = pa - pe;
         vb = pb - pe;
         vc = pc - pe;

         // are we looking at the backface of the plane object?
         if (Vector3.Dot (-Vector3.Cross (va, vc), vb) < 0.0f) {
            // mirror points along the x axis (most users
            // probably expect the y axis to stay fixed)
            vr = -vr;
            pa = pb;
            pb = pa + vr;
            pc = pa + vu;
            va = pa - pe;
            vb = pb - pe;
            vc = pc - pe;
         }

         vr.Normalize ();
         vu.Normalize ();
         vn = -Vector3.Cross (vr, vu);
         // we need the minus sign because Unity
         // uses a left-handed coordinate system
         vn.Normalize ();

         d = -Vector3.Dot (va, vn);
         if (setNearClipPlane) {
            n = Mathf.Max (minNearClipDistance, d + nearClipDistanceOffset);
            cameraComponent.nearClipPlane = n;
         }
         l = Vector3.Dot (vr, va) * n / d;
         r = Vector3.Dot (vr, vb) * n / d;
         b = Vector3.Dot (vu, va) * n / d;
         t = Vector3.Dot (vu, vc) * n / d;

         Matrix4x4 p = new Matrix4x4 (); // projection matrix
         p[0, 0] = 2.0f * n / (r - l);
         p[0, 1] = 0.0f;
         p[0, 2] = (r + l) / (r - l);
         p[0, 3] = 0.0f;
         p[1, 0] = 0.0f;
         p[1, 1] = 2.0f * n / (t - b);
         p[1, 2] = (t + b) / (t - b);
         p[1, 3] = 0.0f;
         p[2, 0] = 0.0f;
         p[2, 1] = 0.0f;
         p[2, 2] = (f + n) / (n - f);
         p[2, 3] = 2.0f * f * n / (n - f);
         p[3, 0] = 0.0f;
         p[3, 1] = 0.0f;
         p[3, 2] = -1.0f;
         p[3, 3] = 0.0f;

         Matrix4x4 rm = new Matrix4x4 (); // rotation matrix
         rm[0, 0] = vr.x;
         rm[0, 1] = vr.y;
         rm[0, 2] = vr.z;
         rm[0, 3] = 0.0f;
         rm[1, 0] = vu.x;
         rm[1, 1] = vu.y;
         rm[1, 2] = vu.z;
         rm[1, 3] = 0.0f;
         rm[2, 0] = vn.x;
         rm[2, 1] = vn.y;
         rm[2, 2] = vn.z;
         rm[2, 3] = 0.0f;
         rm[3, 0] = 0.0f;
         rm[3, 1] = 0.0f;
         rm[3, 2] = 0.0f;
         rm[3, 3] = 1.0f;

         Matrix4x4 tm = new Matrix4x4 (); // translation matrix
         tm[0, 0] = 1.0f;
         tm[0, 1] = 0.0f;
         tm[0, 2] = 0.0f;
         tm[0, 3] = -pe.x;
         tm[1, 0] = 0.0f;
         tm[1, 1] = 1.0f;
         tm[1, 2] = 0.0f;
         tm[1, 3] = -pe.y;
         tm[2, 0] = 0.0f;
         tm[2, 1] = 0.0f;
         tm[2, 2] = 1.0f;
         tm[2, 3] = -pe.z;
         tm[3, 0] = 0.0f;
         tm[3, 1] = 0.0f;
         tm[3, 2] = 0.0f;
         tm[3, 3] = 1.0f;

         // set matrices
         cameraComponent.projectionMatrix = p;
         cameraComponent.worldToCameraMatrix = rm * tm;
         // The original paper puts everything into the projection
         // matrix (i.e. sets it to p * rm * tm and the other
         // matrix to the identity), but this doesn't appear to
         // work with Unity's shadow maps.

         if (estimateViewFrustum) {
            // rotate camera to screen for culling to work
            Quaternion q = new Quaternion ();
            q.SetLookRotation ((0.5f * (pb + pc) - pe), vu);
            // look at center of screen
            cameraComponent.transform.rotation = q;

            // set fieldOfView to a conservative estimate
            // to make the frustum tall enough
            if (cameraComponent.aspect >= 1.0f) {
               cameraComponent.fieldOfView = Mathf.Rad2Deg *
                  Mathf.Atan (((pb - pa).magnitude + (pc - pa).magnitude) /
                  va.magnitude);
            } else {
               // take the camera aspect into account to
               // make the frustum wide enough
               cameraComponent.fieldOfView =
                  Mathf.Rad2Deg / cameraComponent.aspect *
                  Mathf.Atan (((pb - pa).magnitude + (pc - pa).magnitude) /
                  va.magnitude);
            }
         }
      }
   }
}
To use this script, choose Create > C# Script in the Project Window, name the script "ObliqueProjectionToQuad", double-click the new script to edit it, and copy & paste the code above into it. Then attach the script to your main camera (drag it from the Project Window onto the camera object in the Hierarchy Window). Furthermore, create a Quad object (GameObject > 3D Object > Quad in the main menu) and place it in the virtual scene to define the view plane. Deactivate the Quad's Mesh Renderer in the Inspector Window such that it is invisible (it only serves as a placeholder). Select the camera object and drag the Quad object onto Projection Screen in the Inspector. The script becomes active when the game is started. Add the line
[ExecuteInEditMode]
as described in the code to make the script also work in the editor.
Note that there are probably some parts of Unity that ignore the new projection matrix; these cannot be used in combination with this script.
Note that the following code was written for the built-in Plane object instead of the Quad object.
// This script should be attached to a Camera object
// in Unity. Once a Plane object is specified as the
// "projectionScreen", the script computes a suitable
// view and projection matrix for the camera.
// The code is based on Robert Kooima's publication
// "Generalized Perspective Projection," 2009,
// http://csc.lsu.edu/~kooima/pdfs/gen-perspective.pdf
// Use the following line to apply the script in the editor:
// @script ExecuteInEditMode()
#pragma strict

public var projectionScreen : GameObject;
public var estimateViewFrustum : boolean = true;
public var setNearClipPlane : boolean = false;
public var nearClipDistanceOffset : float = -0.01;

private var cameraComponent : Camera;

function LateUpdate() {
   cameraComponent = GetComponent(Camera);
   if (null != projectionScreen && null != cameraComponent)
   {
      var pa : Vector3 = projectionScreen.transform.TransformPoint(
         Vector3(-5.0, 0.0, -5.0));
         // lower left corner in world coordinates
      var pb : Vector3 = projectionScreen.transform.TransformPoint(
         Vector3(5.0, 0.0, -5.0));
         // lower right corner
      var pc : Vector3 = projectionScreen.transform.TransformPoint(
         Vector3(-5.0, 0.0, 5.0));
         // upper left corner
      var pe : Vector3 = transform.position; // eye position
      var n : float = cameraComponent.nearClipPlane;
         // distance of near clipping plane
      var f : float = cameraComponent.farClipPlane;
         // distance of far clipping plane

      var va : Vector3; // from pe to pa
      var vb : Vector3; // from pe to pb
      var vc : Vector3; // from pe to pc
      var vr : Vector3; // right axis of screen
      var vu : Vector3; // up axis of screen
      var vn : Vector3; // normal vector of screen

      var l : float; // distance to left screen edge
      var r : float; // distance to right screen edge
      var b : float; // distance to bottom screen edge
      var t : float; // distance to top screen edge
      var d : float; // distance from eye to screen

      vr = pb - pa;
      vu = pc - pa;
      va = pa - pe;
      vb = pb - pe;
      vc = pc - pe;

      // are we looking at the backface of the plane object?
      if (Vector3.Dot(-Vector3.Cross(va, vc), vb) < 0.0)
      {
         // mirror points along the z axis (most users
         // probably expect the x axis to stay fixed)
         vu = -vu;
         pa = pc;
         pb = pa + vr;
         pc = pa + vu;
         va = pa - pe;
         vb = pb - pe;
         vc = pc - pe;
      }

      vr.Normalize();
      vu.Normalize();
      vn = -Vector3.Cross(vr, vu);
      // we need the minus sign because Unity
      // uses a left-handed coordinate system
      vn.Normalize();

      d = -Vector3.Dot(va, vn);
      if (setNearClipPlane)
      {
         n = d + nearClipDistanceOffset;
         cameraComponent.nearClipPlane = n;
      }
      l = Vector3.Dot(vr, va) * n / d;
      r = Vector3.Dot(vr, vb) * n / d;
      b = Vector3.Dot(vu, va) * n / d;
      t = Vector3.Dot(vu, vc) * n / d;

      var p : Matrix4x4; // projection matrix
      p[0,0] = 2.0*n/(r-l);
      p[0,1] = 0.0;
      p[0,2] = (r+l)/(r-l);
      p[0,3] = 0.0;
      p[1,0] = 0.0;
      p[1,1] = 2.0*n/(t-b);
      p[1,2] = (t+b)/(t-b);
      p[1,3] = 0.0;
      p[2,0] = 0.0;
      p[2,1] = 0.0;
      p[2,2] = (f+n)/(n-f);
      p[2,3] = 2.0*f*n/(n-f);
      p[3,0] = 0.0;
      p[3,1] = 0.0;
      p[3,2] = -1.0;
      p[3,3] = 0.0;

      var rm : Matrix4x4; // rotation matrix
      rm[0,0] = vr.x;
      rm[0,1] = vr.y;
      rm[0,2] = vr.z;
      rm[0,3] = 0.0;
      rm[1,0] = vu.x;
      rm[1,1] = vu.y;
      rm[1,2] = vu.z;
      rm[1,3] = 0.0;
      rm[2,0] = vn.x;
      rm[2,1] = vn.y;
      rm[2,2] = vn.z;
      rm[2,3] = 0.0;
      rm[3,0] = 0.0;
      rm[3,1] = 0.0;
      rm[3,2] = 0.0;
      rm[3,3] = 1.0;

      var tm : Matrix4x4; // translation matrix
      tm[0,0] = 1.0;
      tm[0,1] = 0.0;
      tm[0,2] = 0.0;
      tm[0,3] = -pe.x;
      tm[1,0] = 0.0;
      tm[1,1] = 1.0;
      tm[1,2] = 0.0;
      tm[1,3] = -pe.y;
      tm[2,0] = 0.0;
      tm[2,1] = 0.0;
      tm[2,2] = 1.0;
      tm[2,3] = -pe.z;
      tm[3,0] = 0.0;
      tm[3,1] = 0.0;
      tm[3,2] = 0.0;
      tm[3,3] = 1.0;

      // set matrices
      cameraComponent.projectionMatrix = p;
      cameraComponent.worldToCameraMatrix = rm * tm;
      // The original paper puts everything into the projection
      // matrix (i.e. sets it to p * rm * tm and the other
      // matrix to the identity), but this doesn't appear to
      // work with Unity's shadow maps.

      if (estimateViewFrustum)
      {
         // rotate camera to screen for culling to work
         var q : Quaternion;
         q.SetLookRotation((0.5 * (pb + pc) - pe), vu);
         // look at center of screen
         cameraComponent.transform.rotation = q;

         // set fieldOfView to a conservative estimate
         // to make the frustum tall enough
         if (cameraComponent.aspect >= 1.0)
         {
            cameraComponent.fieldOfView = Mathf.Rad2Deg *
               Mathf.Atan(((pb-pa).magnitude + (pc-pa).magnitude)
               / va.magnitude);
         }
         else
         {
            // take the camera aspect into account to
            // make the frustum wide enough
            cameraComponent.fieldOfView =
               Mathf.Rad2Deg / cameraComponent.aspect *
               Mathf.Atan(((pb-pa).magnitude + (pc-pa).magnitude)
               / va.magnitude);
         }
      }
   }
}
If the positions of the left and right cameras of a stereo display are known, this script can be applied separately to each of the two cameras. However, if a Unity Camera (let us call it mycam) is used for stereo rendering, the position in mycam.transform.position specifies the midpoint between the left and right camera. In this case, the position of the left camera (as a four-dimensional vector) can be obtained with mycam.GetStereoViewMatrix(Camera.StereoscopicEye.Left).inverse.GetColumn(3), and the position of the right camera analogously with mycam.GetStereoViewMatrix(Camera.StereoscopicEye.Right).inverse.GetColumn(3). (Note that Unity's Matrix4x4 stores the translation in the fourth column, not the fourth row.) These positions can then be used to set up two separate cameras for off-axis projections.
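Why the inverted view matrix contains the eye position can be checked numerically. In the following plain-Python sketch (hypothetical rotation and eye position), the view matrix is a rotation times a translation by -pe, as in the scripts above; its inverse is therefore the translation by +pe times the transposed rotation, and the fourth column of that inverse is exactly pe:

```python
import math

def matmul(a, b):
    """4x4 matrix product for matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4))
             for j in range(4)] for i in range(4)]

c, s = math.cos(0.3), math.sin(0.3)
rm = [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # some rotation
pe = [1.5, -0.5, 2.0]                                           # eye position
tm = [[1, 0, 0, -pe[0]], [0, 1, 0, -pe[1]], [0, 0, 1, -pe[2]], [0, 0, 0, 1]]
view = matmul(rm, tm)  # world-to-camera matrix, as in the scripts above

# (rm * tm)^-1 = tm^-1 * rm^-1 = translation by +pe times the transpose of rm
rm_t = [[rm[j][i] for j in range(4)] for i in range(4)]
tm_inv = [[1, 0, 0, pe[0]], [0, 1, 0, pe[1]], [0, 0, 1, pe[2]], [0, 0, 0, 1]]
inv = matmul(tm_inv, rm_t)

# the fourth column of the inverted view matrix is the eye position
print([round(inv[i][3], 6) for i in range(3)])  # [1.5, -0.5, 2.0]
```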
In some applications of off-axis projections (e.g. mirrors, portals, or magic lenses), the off-axis camera might render into a render texture that is then used to texture a surface. In the case of stereo rendering, there are usually two render textures (one for each eye); thus, texturing with the resulting render textures has to use the correct render texture for each eye. To this end, Unity offers the built-in shader variable unity_StereoEyeIndex, which is 0 for the left eye and 1 for the right eye. For example, a shader might read a color leftColor from the render texture for the left eye and a color rightColor from the render texture for the right eye. The shader expression lerp(leftColor, rightColor, unity_StereoEyeIndex) then computes the correct color for stereo rendering with render textures. Complete shader code for this approach is included in the section "Mirrors".
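The selection trick works because lerp degenerates to one of its endpoints when the interpolation weight is exactly 0 or 1; a plain-Python stand-in for the shader expression (with hypothetical per-eye color values):

```python
def lerp(a, b, t):
    """Linear interpolation, matching HLSL's lerp(a, b, t)."""
    return a + (b - a) * t

left_color, right_color = 0.25, 0.75  # hypothetical per-eye samples
# unity_StereoEyeIndex is 0 for the left eye and 1 for the right eye,
# so lerp simply selects the sample of the eye currently being rendered
print(lerp(left_color, right_color, 0))  # 0.25 (left eye)
print(lerp(left_color, right_color, 1))  # 0.75 (right eye)
```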
In this tutorial, we have looked at:
- the application of off-axis perspective projection and its differences from on-axis perspective projection
- the computation of the view matrix and the projection matrix for an off-axis perspective projection
- an implementation of this computation in Unity and its limitations
If you still want to know more