I want to convert from a cubemap [figure1] to an equirectangular panorama [figure2].
[Figure 1: cubemap]
[Figure 2: equirectangular panorama]
It is possible to go from spherical to cubic (by following: Convert 2:1 equirectangular panorama to cube map), but I am lost on how to reverse that.
Figure 2 is meant to be rendered onto a sphere in Unity.
Assume the input image is in the following cubemap format (a horizontal cross: four faces across and three faces down, which is the layout the code below expects):
The goal is to project the image to the equirectangular format, like so:
The conversion algorithm is rather straightforward. Given a cubemap with 6 faces, to calculate the best estimate of the color of each pixel in the equirectangular image:
First, calculate the polar coordinates that correspond to each pixel of the spherical image.
Second, use the polar coordinates to form a vector and determine on which face of the cubemap, and on which pixel of that face, the vector lies; just like a raycast from the center of a cube hits one of its sides and a specific point on that side (a minimal sketch of this mapping follows below).
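To make these two steps concrete, here is a minimal sketch of the per-pixel mapping, assuming normalized equirectangular coordinates u and v in [0, 1] and the same axis conventions as the full implementation further below. The helper name DirectionForPixel is only for illustration and is not part of the original code.

using UnityEngine;

public static class EquirectangularMapping
{
    // Maps an output pixel (i, j) of a w x h equirectangular image to a unit
    // direction vector. The face lookup (step two) then amounts to finding the
    // dominant component of this vector, as the full converter below does.
    public static Vector3 DirectionForPixel(int i, int j, int w, int h)
    {
        float u = (float)i / w;           // column, 0..1, left to right
        float v = 1f - (float)j / h;      // row, 0..1, rows start at the bottom
        float phi = u * 2f * Mathf.PI;    // longitude, 0..2*pi
        float theta = v * Mathf.PI;       // colatitude, 0..pi

        // Spherical-to-Cartesian conversion, same sign conventions as the converter below.
        float x = -Mathf.Sin(phi) * Mathf.Sin(theta);
        float y = Mathf.Cos(theta);
        float z = -Mathf.Cos(phi) * Mathf.Sin(theta);
        return new Vector3(x, y, z);
    }
}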
Keep in mind that, given normalized coordinates (u,v) on a specific face of the cubemap, there are multiple methods to estimate the color of a pixel in the equirectangular image. The most basic method, a very raw approximation that will be used in this answer for simplicity, is to round the coordinates to a specific pixel and use that pixel. Other, more advanced methods could calculate an average of several neighbouring pixels.
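As one hedged illustration of the "average of neighbouring pixels" idea, the nearest-pixel lookup used below could be replaced by a manual bilinear blend of the four surrounding texels. The helper below is only a sketch and is not part of the original implementation.

using UnityEngine;

public static class BilinearSampling
{
    // Samples a readable Texture2D at a continuous pixel position (px, py)
    // by blending the four nearest texels. A drop-in alternative to rounding
    // to a single pixel; more advanced filters would average more neighbours.
    public static Color SampleBilinear(Texture2D tex, float px, float py)
    {
        int x0 = Mathf.Clamp(Mathf.FloorToInt(px), 0, tex.width - 1);
        int y0 = Mathf.Clamp(Mathf.FloorToInt(py), 0, tex.height - 1);
        int x1 = Mathf.Min(x0 + 1, tex.width - 1);
        int y1 = Mathf.Min(y0 + 1, tex.height - 1);

        float tx = px - x0; // horizontal blend weight
        float ty = py - y0; // vertical blend weight

        Color bottom = Color.Lerp(tex.GetPixel(x0, y0), tex.GetPixel(x1, y0), tx);
        Color top    = Color.Lerp(tex.GetPixel(x0, y1), tex.GetPixel(x1, y1), tx);
        return Color.Lerp(bottom, top, ty);
    }
}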
The implementation of the algorithm will vary depending on the context. I did a quick implementation in Unity3D C# that shows how to use the algorithm in a real-world scenario. It runs on the CPU and there is a lot of room for improvement, but it is easy to understand.
using UnityEngine;

public static class CubemapConverter
{
    public static byte[] ConvertToEquirectangular(Texture2D sourceTexture, int outputWidth, int outputHeight)
    {
        Texture2D equiTexture = new Texture2D(outputWidth, outputHeight, TextureFormat.ARGB32, false);
        float u, v; //Normalised texture coordinates, from 0 to 1, starting at lower left corner
        float phi, theta; //Polar coordinates
        int cubeFaceWidth, cubeFaceHeight;

        cubeFaceWidth = sourceTexture.width / 4; //4 horizontal faces
        cubeFaceHeight = sourceTexture.height / 3; //3 vertical faces

        for (int j = 0; j < equiTexture.height; j++)
        {
            //Rows start from the bottom
            v = 1 - ((float)j / equiTexture.height);
            theta = v * Mathf.PI;

            for (int i = 0; i < equiTexture.width; i++)
            {
                //Columns start from the left
                u = ((float)i / equiTexture.width);
                phi = u * 2 * Mathf.PI;

                float x, y, z; //Unit vector
                x = Mathf.Sin(phi) * Mathf.Sin(theta) * -1;
                y = Mathf.Cos(theta);
                z = Mathf.Cos(phi) * Mathf.Sin(theta) * -1;

                float xa, ya, za;
                float a;

                a = Mathf.Max(new float[3] { Mathf.Abs(x), Mathf.Abs(y), Mathf.Abs(z) });

                //Vector Parallel to the unit vector that lies on one of the cube faces
                xa = x / a;
                ya = y / a;
                za = z / a;

                Color color;
                int xPixel, yPixel;
                int xOffset, yOffset;

                if (xa == 1)
                {
                    //Right
                    xPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 2 * cubeFaceWidth; //Offset
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight; //Offset
                }
                else if (xa == -1)
                {
                    //Left
                    xPixel = (int)((((za + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = 0;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (ya == 1)
                {
                    //Up
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceHeight);
                    yOffset = 2 * cubeFaceHeight;
                }
                else if (ya == -1)
                {
                    //Down
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = 0;
                }
                else if (za == 1)
                {
                    //Front
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (za == -1)
                {
                    //Back
                    xPixel = (int)((((xa + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 3 * cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else
                {
                    Debug.LogWarning("Unknown face, something went wrong");
                    xPixel = 0;
                    yPixel = 0;
                    xOffset = 0;
                    yOffset = 0;
                }

                xPixel = Mathf.Abs(xPixel);
                yPixel = Mathf.Abs(yPixel);

                xPixel += xOffset;
                yPixel += yOffset;

                color = sourceTexture.GetPixel(xPixel, yPixel);
                equiTexture.SetPixel(i, j, color);
            }
        }

        equiTexture.Apply();
        var bytes = equiTexture.EncodeToPNG();
        Object.DestroyImmediate(equiTexture);

        return bytes;
    }
}
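As a usage sketch (not part of the original project), assuming the cross-layout source texture has been imported as readable, the converter above could be driven from an editor script like this. The menu path and output file name are just examples.

using System.IO;
using UnityEditor;
using UnityEngine;

// Place this file in an Editor folder, since it uses UnityEditor.
public static class CubemapConversionExample
{
    // Example menu item: converts the selected readable Texture2D and writes a PNG into the project.
    [MenuItem("Tools/Convert Cubemap To Equirectangular (example)")]
    private static void Convert()
    {
        Texture2D source = Selection.activeObject as Texture2D;
        if (source == null)
        {
            Debug.LogWarning("Select a readable Texture2D containing the cubemap cross layout.");
            return;
        }

        // Output at a 2:1 aspect ratio, sized from one cube face.
        int faceSize = source.width / 4;
        byte[] png = CubemapConverter.ConvertToEquirectangular(source, faceSize * 4, faceSize * 2);
        File.WriteAllBytes("Assets/equirectangular_example.png", png);
        AssetDatabase.Refresh();
    }
}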
To take advantage of the GPU I created a shader that does the same conversion. It is much faster than running the conversion pixel by pixel on the CPU, but unfortunately Unity imposes resolution limitations on cubemaps, so its usefulness is limited when a high-resolution input image is to be used.
Shader "Conversion/CubemapToEquirectangular" { Properties { _MainTex ("Cubemap (RGB)", CUBE) = "" {} } Subshader { Pass { ZTest Always Cull Off ZWrite Off Fog { Mode off } CGPROGRAM #pragma vertex vert #pragma fragment frag #pragma fragmentoption ARB_precision_hint_fastest //#pragma fragmentoption ARB_precision_hint_nicest #include "UnityCG.cginc" #define PI 3.141592653589793 #define TWOPI 6.283185307179587 struct v2f { float4 pos : POSITION; float2 uv : TEXCOORD0; }; samplerCUBE _MainTex; v2f vert( appdata_img v ) { v2f o; o.pos = mul(UNITY_MATRIX_MVP, v.vertex); o.uv = v.texcoord.xy * float2(TWOPI, PI); return o; } fixed4 frag(v2f i) : COLOR { float theta = i.uv.y; float phi = i.uv.x; float3 unit = float3(0,0,0); unit.x = sin(phi) * sin(theta) * -1; unit.y = cos(theta) * -1; unit.z = cos(phi) * sin(theta) * -1; return texCUBE(_MainTex, unit); } ENDCG } } Fallback Off }
The quality of the resulting image can be greatly improved by employing a more sophisticated method to estimate the color of a pixel during the conversion, by post-processing the resulting image, or both. For example, an image of a bigger size could be generated so that a blur filter can be applied, and the result then downsampled to the desired size.
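As one hedged example of that idea, the conversion could be run at twice the target resolution and the result box-downsampled 2x2 afterwards, with an optional blur pass in between. This is only a sketch, not part of the original project.

using UnityEngine;

public static class DownsampleExample
{
    // Produces a smoother equirectangular image by converting at 2x size
    // and averaging each 2x2 block down to one output pixel.
    public static Texture2D ConvertSupersampled(Texture2D source, int outWidth, int outHeight)
    {
        byte[] bigPng = CubemapConverter.ConvertToEquirectangular(source, outWidth * 2, outHeight * 2);
        var big = new Texture2D(2, 2);
        big.LoadImage(bigPng); // LoadImage resizes the texture to the PNG dimensions

        // (An optional blur filter could be applied to 'big' here before downsampling.)

        var result = new Texture2D(outWidth, outHeight, TextureFormat.ARGB32, false);
        for (int y = 0; y < outHeight; y++)
        {
            for (int x = 0; x < outWidth; x++)
            {
                Color c = (big.GetPixel(2 * x,     2 * y) +
                           big.GetPixel(2 * x + 1, 2 * y) +
                           big.GetPixel(2 * x,     2 * y + 1) +
                           big.GetPixel(2 * x + 1, 2 * y + 1)) / 4f;
                result.SetPixel(x, y, c);
            }
        }
        result.Apply();
        Object.DestroyImmediate(big);
        return result;
    }
}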
I created a simple Unity project with two editor wizards that show how to properly use either the C# code or the shader shown above. Get it here: https://github.com/Mapiarz/CubemapToEquirectangular
Remember to set the correct import settings in Unity for your input images (a small editor-script sketch that applies them follows the list):
Point filtering
Truecolor format
Mipmaps disabled
Non Power of 2: None (2D textures only)
Read/Write enabled (2D textures only)
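These settings can also be applied from an editor script. The following is a hedged sketch using Unity's TextureImporter; property names can differ slightly between Unity versions, and "Truecolor" corresponds to uncompressed output in newer versions.

using UnityEditor;
using UnityEngine;

// Place this file in an Editor folder, since it uses UnityEditor.
public static class ImportSettingsExample
{
    // Applies the import settings listed above to a texture asset at the given path.
    public static void ApplySettings(string assetPath)
    {
        var importer = AssetImporter.GetAtPath(assetPath) as TextureImporter;
        if (importer == null)
        {
            Debug.LogWarning("Not a texture asset: " + assetPath);
            return;
        }

        importer.filterMode = FilterMode.Point;                                // Point filtering
        importer.textureCompression = TextureImporterCompression.Uncompressed; // "Truecolor" / uncompressed
        importer.mipmapEnabled = false;                                        // Mipmaps disabled
        importer.npotScale = TextureImporterNPOTScale.None;                    // Non Power of 2: None (2D textures only)
        importer.isReadable = true;                                            // Read/Write enabled (2D textures only)

        importer.SaveAndReimport();
    }
}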