
Stereoscopic Rendering with support for MultiView Rendering

Updated: Dec 13, 2022


In this blog, I will show how I implemented stereoscopic rendering using multiview rendering in my Kurama Engine. This is not meant to be a complete tutorial, but rather a way to document my learning process. If you spot any mistakes in my explanations, please feel free to let me know!


Note: I am coding in Java using LWJGL, so some of the Vulkan method signatures may look a bit different from their C counterparts.


 

Stereoscopic Rendering


Stereoscopic rendering is the process of rendering the scene from two different perspectives that mimic the human eyes, and combining the two renders in a way that lets the user perceive depth and experience a "3D effect".


In this post, I will show how I implemented anaglyph stereo: a different color filter is applied to each of the two rendered images, and the user wears 3D glasses with the matching filters, which lets each eye see only its intended image and creates the perception of "3D".


Anaglyph Rendering as implemented in Kurama Engine

 

The Process


During each frame, we first generate two different camera matrices to simulate the left and right eye cameras. These are generated from the main camera's position and rotational information, while taking into account the eyeSeparation.



The math I implemented is taken and modified from Sascha Willems' multiview rendering example.



Here is my code:

var projMatrices =
        createStereoProjectionMatrices(imageWidth, imageHeight,
        fovX, eyeSeparation, focalLength,
        nearClippingPlane, farClippingPlane);

leftProjection = projMatrices[0];
rightProjection = projMatrices[1];

Matrix rotationMatrix = getOrientation().getRotationMatrix();
Matrix scalingMatrix = Matrix.getDiagonalMatrix(getScale());
Matrix rotScalMatrix = rotationMatrix.matMul(scalingMatrix);

Vector cameraRight = rotationMatrix.getColumn(0).normalise();

leftObjectToWorld = rotScalMatrix.
        addColumn(getPos().sub(cameraRight.scalarMul(eyeSeparation/2f))).
        addRow(new Vector(new float[]{0, 0, 0, 1}));

rightObjectToWorld = rotScalMatrix.
        addColumn(getPos().add(cameraRight.scalarMul(eyeSeparation/2f))).
        addRow(new Vector(new float[]{0, 0, 0, 1}));

leftWorldToCam = leftObjectToWorld.getInverse();
rightWorldToCam = rightObjectToWorld.getInverse();

public static Matrix[] createStereoProjectionMatrices(int width, 
    int height, float fov, float eyeSeparation, float focalLength, 
    float zNear, float zFar) {
    
    Matrix[] matrices = new Matrix[2];

    // Calculate some variables
    float aspectRatio = (width) / (float)height;
    float wd2 = (float) (zNear * tan(Math.toRadians(fov / 2.0f)));
    float ndfl = zNear / focalLength;
    float left, right;
    float top = wd2;
    float bottom = -wd2;

    // Left eye
    left = -aspectRatio * wd2 + 0.5f * eyeSeparation * ndfl;
    right = aspectRatio * wd2 + 0.5f * eyeSeparation * ndfl;

    matrices[0] = Matrix.buildPerspectiveProjectionMatrix(zNear, zFar,
                                             left, right, top, bottom);

    // Right eye
    left = -aspectRatio * wd2 - 0.5f * eyeSeparation * ndfl;
    right = aspectRatio * wd2 - 0.5f * eyeSeparation * ndfl;

    matrices[1] = Matrix.buildPerspectiveProjectionMatrix(zNear, zFar,
                                             left, right, top, bottom);

    return matrices;
}
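To sanity-check the frustum math above, here is a hypothetical, self-contained plain-Java sketch (not engine code; the class and method names are illustrative) that recomputes the left/right frustum bounds and confirms the two eyes get asymmetric frusta that are mirror images of each other:

```java
public class StereoFrustumCheck {

    // Returns {leftEye.left, leftEye.right, rightEye.left, rightEye.right},
    // using the same formulas as createStereoProjectionMatrices above
    public static float[] offsets(float fov, float zNear, float focalLength,
                                  float eyeSeparation, float aspectRatio) {
        float wd2 = (float) (zNear * Math.tan(Math.toRadians(fov / 2.0f)));
        float ndfl = zNear / focalLength;
        float shift = 0.5f * eyeSeparation * ndfl;
        return new float[]{
                -aspectRatio * wd2 + shift, aspectRatio * wd2 + shift,
                -aspectRatio * wd2 - shift, aspectRatio * wd2 - shift};
    }

    public static void main(String[] args) {
        // Illustrative values: 90 degree FOV, 16:9 aspect ratio
        float[] f = offsets(90f, 0.1f, 1f, 0.08f, 16 / 9f);

        // Each eye's frustum is shifted towards the nose; the two frusta
        // are mirror images about x = 0 and have equal widths
        System.out.println(f[0] == -f[3] && f[1] == -f[2]);
        System.out.println((f[1] - f[0]) == (f[3] - f[2]));
    }
}
```

The off-center shift grows with eyeSeparation and shrinks as focalLength increases, which matches the intuition that objects at the focal plane end up at zero parallax.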

Now that we have the left and right eye camera matrices, we render the scene twice, once from each eye's perspective, into two different images. We then do another pass that combines the two images by applying red-cyan color filters, generating the final image seen above. The result looks "right" only when viewed through the appropriate anaglyph 3D glasses.



Here is the vertex shader for the anaglyph generation pass:

#version 450

layout (location = 0) out vec2 outUV;

void main()
{
    outUV = vec2((gl_VertexIndex << 1) & 2, gl_VertexIndex & 2);
    gl_Position = vec4(outUV * 2.0f - 1.0f, 0.0f, 1.0f);
}

This is just vertex shader "trickery" to generate positions and UVs for a single triangle that spans the entire framebuffer.
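To see what the trick actually produces, here is a hypothetical plain-Java walkthrough (illustrative code, not part of the engine) of the same bit arithmetic for the three vertex indices. It yields UVs (0,0), (2,0), (0,2), which map to clip-space positions (-1,-1), (3,-1), (-1,3): one oversized triangle that fully covers the [-1, 1] viewport.

```java
public class FullscreenTriangle {

    // Same bit trick as the vertex shader: derive UV from the vertex index
    public static int[] uv(int vertexIndex) {
        return new int[]{(vertexIndex << 1) & 2, vertexIndex & 2};
    }

    // Same mapping as the shader: UV * 2 - 1 gives the clip-space position
    public static int[] position(int vertexIndex) {
        int[] uv = uv(vertexIndex);
        return new int[]{uv[0] * 2 - 1, uv[1] * 2 - 1};
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.printf("vertex %d: uv=(%d,%d) pos=(%d,%d)%n",
                    i, uv(i)[0], uv(i)[1], position(i)[0], position(i)[1]);
        }
    }
}
```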


This is how the draw call looks:

// Render a fullscreen triangle, so that the fragment shader runs for each pixel
vkCmdDraw(commandBuffer, 3, 1, 0, 0);

And here is the fragment shader:

#version 450

layout (set = 0, binding = 0) uniform sampler2DArray samplerView;


layout (location = 0) in vec2 inUV;
layout (location = 0) out vec4 outColor;

void main() {

    bool inside = ((inUV.x >= 0.0) && (inUV.x <= 1.0) && (inUV.y >= 0.0)
                                   && (inUV.y <= 1.0));

    // red - cyan
    outColor = inside ? vec4(texture(samplerView, vec3(inUV, 0)).x, 0,0,0)
                      : vec4(0.0);
    outColor += inside? vec4(0, texture(samplerView, vec3(inUV, 1)).yz, 1) 
                      : vec4(0.0);
}

Since a portion of the generated triangle lies outside the framebuffer, we first check whether the fragment's UV falls inside the [0, 1] range. If it does, we produce the final output color by applying a red filter to the left image, a cyan filter to the right image, and adding the two together.
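The channel selection can be summed up in one line: the output keeps the red channel of the left eye and the green/blue (cyan) channels of the right eye. Here is a hypothetical CPU-side sketch of that combine (names are illustrative, not engine code):

```java
public class AnaglyphCombine {

    // Red filter on the left image + cyan filter on the right image,
    // mirroring what the fragment shader does per pixel
    public static float[] combine(float[] leftRgb, float[] rightRgb) {
        return new float[]{leftRgb[0], rightRgb[1], rightRgb[2], 1f};
    }

    public static void main(String[] args) {
        float[] out = combine(new float[]{0.9f, 0.5f, 0.1f},
                              new float[]{0.2f, 0.6f, 0.8f});
        System.out.println(java.util.Arrays.toString(out));
    }
}
```

The red lens of the glasses passes only the red channel (the left eye's image), while the cyan lens passes only green and blue (the right eye's image), so each eye sees its own render.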


 

And that's it, you now have an anaglyph stereo renderer! Just throw on some 3D glasses, and have fun messing around with the parameters to see how they affect the 3D effect!


Though the renderer is complete, we could still do some optimizations.

 

Multiview Rendering



With our current setup, we render the entire scene twice each frame, effectively halving our FPS. But if we think about it, since the left and right cameras differ only slightly, the two renders are mostly the same.


But here comes VK_KHR_multiview to the rescue. This extension tells the graphics pipeline to render n views without the application having to record duplicate commands. Not having to run the graphics pipeline twice manually is already more efficient, but on top of that, the extension lets the developer tell the driver that the renders are related to each other, so the driver can do behind-the-scenes optimizations that make it cheaper than rendering the entire scene twice.



We first need to ensure that VK_KHR_MULTIVIEW_EXTENSION_NAME is enabled when creating the logical device. Then, enable multiview in the Vulkan 1.1 features:


...
var vkPhysicalDeviceVulkan11Features =
        VkPhysicalDeviceVulkan11Features.calloc(stack);
vkPhysicalDeviceVulkan11Features.sType(
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_1_FEATURES);
vkPhysicalDeviceVulkan11Features.multiview(true);

VkDeviceCreateInfo createInfo = VkDeviceCreateInfo.calloc(stack);
createInfo.sType(VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO);
createInfo.pNext(vkPhysicalDeviceVulkan11Features);
...

Then, when creating the render pass, we also need to chain a VkRenderPassMultiviewCreateInfo:


...
var viewMask = stack.ints(Integer.parseInt("00000011", 2));
var correlationMask = stack.ints(Integer.parseInt("00000011", 2));

var multiviewCreateInfo = VkRenderPassMultiviewCreateInfo.calloc(stack);
multiviewCreateInfo.sType(
                VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO);
multiviewCreateInfo.pViewMasks(viewMask);
multiviewCreateInfo.pCorrelationMasks(correlationMask);

renderPassInfo.pNext(multiviewCreateInfo);

...

The viewMask parameter specifies which views are active during the subpass; here, the two set bits enable views 0 and 1. The correlationMask is more interesting: it tells the driver that the specified views are spatially correlated, so it can render them concurrently more efficiently.
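On the shader side, the geometry pass can then select the per-eye matrices using the built-in gl_ViewIndex, which is 0 for the first view and 1 for the second. Here is a minimal GLSL vertex shader sketch; the UBO layout is an assumption modeled after Sascha Willems' multiview example, not necessarily how the engine lays out its uniforms:

#version 450
#extension GL_EXT_multiview : enable

// Assumed UBO holding one projection/view pair per eye
layout (set = 0, binding = 0) uniform UBO {
    mat4 projection[2];
    mat4 view[2];
} ubo;

layout (location = 0) in vec3 inPos;

void main() {
    // gl_ViewIndex picks the left (0) or right (1) eye matrices,
    // for each view enabled by the bits set in viewMask
    gl_Position = ubo.projection[gl_ViewIndex]
                * ubo.view[gl_ViewIndex]
                * vec4(inPos, 1.0);
}

With this, a single recorded draw is executed once per active view, each time with the matching eye's matrices.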



We will also render to a single image with two layers, instead of two completely separate images. Therefore, we need to modify the color and depth render targets to account for this:


Color Attachment:

(Note: multiViewNumLayers is 2 in this case)

...
var imageInfo = createImageCreateInfo(
        swapChainImageFormat,
        VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
        extent,
        1,
        VK_IMAGE_TILING_OPTIMAL,
        multiViewNumLayers,
        1,
        stack);

var memoryAllocInfo = VmaAllocationCreateInfo.calloc(stack)
        .usage(VMA_MEMORY_USAGE_GPU_ONLY)
        .requiredFlags(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

colorAttachment.allocatedImage = createImage(imageInfo, memoryAllocInfo,
                                                         vmaAllocator);

var viewInfo =
        createImageViewCreateInfo(
                swapChainImageFormat,
                colorAttachment.allocatedImage.image,
                VK_IMAGE_ASPECT_COLOR_BIT,
                1,
                multiViewNumLayers,
                VK_IMAGE_VIEW_TYPE_2D_ARRAY,
                stack
        );
colorAttachment.imageView = createImageView(viewInfo, device);
...

Depth Attachment:

var imageInfo = createImageCreateInfo(
        depthFormat,
        VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT,
        extent,
        1,
        VK_IMAGE_TILING_OPTIMAL,
        multiViewNumLayers,
        1,
        stack);

var memoryAllocInfo = VmaAllocationCreateInfo.calloc(stack)
        .usage(VMA_MEMORY_USAGE_GPU_ONLY)
        .requiredFlags(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

depthAttachment.allocatedImage = createImage(imageInfo, memoryAllocInfo, 
                                                        vmaAllocator);

var viewInfo =
        createImageViewCreateInfo(
                depthFormat,
                depthAttachment.allocatedImage.image,
                VK_IMAGE_ASPECT_DEPTH_BIT,
                1,
                multiViewNumLayers,
                VK_IMAGE_VIEW_TYPE_2D_ARRAY,
                stack
        );

depthAttachment.imageView = createImageView(viewInfo, device);

 

And that should be it! I have shown most of the important steps necessary for implementing anaglyph stereo with multiview rendering. Of course, I did not cover every detail, so I would recommend looking into Sascha Willems' multiview example or my Anaglyph Renderer's code if you are interested in learning more.

