
Necdet Yavaş

Vulkan API Notes

Vulkan Resources

Khronos Vulkan Tutorial

Vulkan Guide

Vulkan Hub

Vulkan Objects

Based on AMD GPUOpen Developer Guide

Objects
Instance

First object you create. Represents the connection from your application to the Vulkan runtime, and should exist only once per application. Stores all application-specific state required to use Vulkan. You must specify all layers (like the Validation Layer) and all extensions you want to enable when creating an Instance.
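A minimal sketch of what that creation can look like in C; the layer and extension choices here are only examples, and error checking is omitted:

```c
#include <vulkan/vulkan.h>

VkInstance create_instance(void)
{
    VkApplicationInfo appInfo = {0};
    appInfo.sType              = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName   = "MyApp";
    appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.apiVersion         = VK_API_VERSION_1_2;

    /* Layers and extensions must be listed up front; they cannot be added later. */
    const char *layers[]     = { "VK_LAYER_KHRONOS_validation" };
    const char *extensions[] = { VK_KHR_SURFACE_EXTENSION_NAME };

    VkInstanceCreateInfo createInfo = {0};
    createInfo.sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo        = &appInfo;
    createInfo.enabledLayerCount       = 1;
    createInfo.ppEnabledLayerNames     = layers;
    createInfo.enabledExtensionCount   = 1;
    createInfo.ppEnabledExtensionNames = extensions;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, NULL, &instance);
    return instance;
}
```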

PhysicalDevice

Represents a specific Vulkan-compatible device, like a graphics card. Enumerate these from the Instance and query them for their vendorID, deviceID, and supported features, as well as other properties and limits.

A PhysicalDevice can enumerate all of its available Queue Families. The graphics queue family is the main one, but there may also be additional families that support only Compute or Transfer.

A Memory Heap represents a specific pool of RAM. It may abstract your system RAM on the motherboard or a certain memory space in video RAM on a dedicated graphics card, or any other host- or device-specific memory the implementation wants to expose.

You must specify the Memory Type when allocating memory. It describes specific properties of the allocated block, like being visible to the host, coherent (between CPU and GPU) and cached. There may be an arbitrary combination of these, depending on the device driver.
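A sketch of how these properties might be queried; the helper name and the fixed-size arrays are just for illustration:

```c
#include <vulkan/vulkan.h>
#include <stdio.h>

void inspect_physical_devices(VkInstance instance)
{
    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, NULL);

    VkPhysicalDevice devices[16];
    if (deviceCount > 16) deviceCount = 16;
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices);

    for (uint32_t i = 0; i < deviceCount; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("%s vendorID=0x%x deviceID=0x%x\n",
               props.deviceName, props.vendorID, props.deviceID);

        /* Queue families: check which ones support graphics, compute or transfer. */
        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(devices[i], &familyCount, NULL);
        VkQueueFamilyProperties families[16];
        if (familyCount > 16) familyCount = 16;
        vkGetPhysicalDeviceQueueFamilyProperties(devices[i], &familyCount, families);
        for (uint32_t f = 0; f < familyCount; ++f) {
            printf("  family %u: graphics=%d compute=%d transfer=%d count=%u\n", f,
                   (families[f].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0,
                   (families[f].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0,
                   (families[f].queueFlags & VK_QUEUE_TRANSFER_BIT) != 0,
                   families[f].queueCount);
        }

        /* Memory heaps and memory types exposed by this device. */
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(devices[i], &mem);
        printf("  %u memory heaps, %u memory types\n",
               mem.memoryHeapCount, mem.memoryTypeCount);
    }
}
```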

Device

Can be thought of as a logical, or opened, device. It is the main object that represents an initialized Vulkan device, ready to create all other objects. During device creation you need to specify which features you want to enable, some of which are fundamental, like anisotropic texture filtering. You must also declare all queues that will be in use, their number and their Queue Families.
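A minimal sketch of logical device creation, assuming a graphics-capable queue family index has already been found:

```c
#include <vulkan/vulkan.h>

VkDevice create_device(VkPhysicalDevice physicalDevice, uint32_t graphicsFamilyIndex)
{
    /* One graphics queue from the chosen family; priorities are in [0.0, 1.0]. */
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo = {0};
    queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = graphicsFamilyIndex;
    queueInfo.queueCount       = 1;
    queueInfo.pQueuePriorities = &priority;

    /* Features must be requested explicitly, e.g. anisotropic filtering. */
    VkPhysicalDeviceFeatures features = {0};
    features.samplerAnisotropy = VK_TRUE;

    VkDeviceCreateInfo createInfo = {0};
    createInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    createInfo.queueCreateInfoCount = 1;
    createInfo.pQueueCreateInfos    = &queueInfo;
    createInfo.pEnabledFeatures     = &features;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physicalDevice, &createInfo, NULL, &device);
    return device;
}
```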

Queue

Represents a queue of commands to be executed on the Device. All the actual work to be done by the GPU is requested by filling CommandBuffers and submitting them to Queues using the function vkQueueSubmit. If you have multiple queues, like the main graphics queue and a compute queue, you can submit different CommandBuffers to each of them. This way you can enable asynchronous compute, which can lead to a substantial speed-up if done right.
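A small sketch of a submission, assuming the command buffer has already been recorded; the fence parameter is optional:

```c
#include <vulkan/vulkan.h>

void submit_work(VkDevice device, uint32_t graphicsFamilyIndex,
                 VkCommandBuffer commandBuffer, VkFence fence)
{
    /* Queues are retrieved from the Device, not created separately. */
    VkQueue queue = VK_NULL_HANDLE;
    vkGetDeviceQueue(device, graphicsFamilyIndex, 0, &queue);

    VkSubmitInfo submitInfo = {0};
    submitInfo.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers    = &commandBuffer;

    /* The fence (may be VK_NULL_HANDLE) is signaled when the submitted work finishes. */
    vkQueueSubmit(queue, 1, &submitInfo, fence);
}
```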

CommandPool

A simple object that is used to allocate CommandBuffers. It’s connected to a specific Queue Family.
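For example, a pool for a given queue family might be created like this; the reset flag is just one common choice:

```c
#include <vulkan/vulkan.h>

VkCommandPool create_command_pool(VkDevice device, uint32_t queueFamilyIndex)
{
    VkCommandPoolCreateInfo createInfo = {0};
    createInfo.sType            = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    /* Allow individual command buffers from this pool to be reset and re-recorded. */
    createInfo.flags            = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT;
    createInfo.queueFamilyIndex = queueFamilyIndex;

    VkCommandPool pool = VK_NULL_HANDLE;
    vkCreateCommandPool(device, &createInfo, NULL, &pool);
    return pool;
}
```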

CommandBuffer

Allocated from a specific CommandPool. It represents a buffer of various commands to be executed by a Device. You can call various functions on a command buffer, all of them starting with vkCmd. They are used to specify the order, type and parameters of tasks that should be performed when the CommandBuffer is submitted to a Queue and is finally consumed by the Device.
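A sketch of allocating and recording one primary command buffer; the actual vkCmd* work is elided:

```c
#include <vulkan/vulkan.h>

VkCommandBuffer record_command_buffer(VkDevice device, VkCommandPool pool)
{
    VkCommandBufferAllocateInfo allocInfo = {0};
    allocInfo.sType              = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    allocInfo.commandPool        = pool;
    allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;

    VkCommandBuffer cmd = VK_NULL_HANDLE;
    vkAllocateCommandBuffers(device, &allocInfo, &cmd);

    VkCommandBufferBeginInfo beginInfo = {0};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;

    vkBeginCommandBuffer(cmd, &beginInfo);
    /* Record work here with vkCmd* calls, e.g. vkCmdBindPipeline, vkCmdDraw, ... */
    vkEndCommandBuffer(cmd);
    return cmd;
}
```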

Buffer

A type of resource that occupies device memory. It is a container for arbitrary binary data; its only intrinsic property is its length, expressed in bytes.
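For example, a vertex buffer of a given byte length might be created like this; the usage flags are illustrative:

```c
#include <vulkan/vulkan.h>

VkBuffer create_vertex_buffer(VkDevice device, VkDeviceSize sizeInBytes)
{
    VkBufferCreateInfo createInfo = {0};
    createInfo.sType       = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    createInfo.size        = sizeInBytes;                   /* length in bytes */
    createInfo.usage       = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT |
                             VK_BUFFER_USAGE_TRANSFER_DST_BIT;
    createInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;     /* used by one queue family */

    VkBuffer buffer = VK_NULL_HANDLE;
    vkCreateBuffer(device, &createInfo, NULL, &buffer);
    return buffer;   /* no memory is bound yet, see DeviceMemory below */
}
```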

BufferView

Buffers aren’t always used directly in rendering. On top of them there is another layer, called views. You can think about them somewhat like views in databases – sets of parameters that can be used to look at a set of underlying data in a desired way.

BufferView is an object created based on a specific buffer. You can pass offset and range during creation to limit the view to only a subset of buffer data.
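A sketch, assuming the buffer was created with a texel-buffer usage flag; the format here is just an example:

```c
#include <vulkan/vulkan.h>

/* The buffer must have been created with a texel-buffer usage flag
   (e.g. VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) for a view to be valid. */
VkBufferView create_buffer_view(VkDevice device, VkBuffer buffer,
                                VkDeviceSize offset, VkDeviceSize range)
{
    VkBufferViewCreateInfo createInfo = {0};
    createInfo.sType  = VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
    createInfo.buffer = buffer;
    createInfo.format = VK_FORMAT_R32_SFLOAT;  /* how the data is interpreted */
    createInfo.offset = offset;                /* start of the visible subset */
    createInfo.range  = range;                 /* size of the subset, or VK_WHOLE_SIZE */

    VkBufferView view = VK_NULL_HANDLE;
    vkCreateBufferView(device, &createInfo, NULL, &view);
    return view;
}
```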

Image

A type of resource that occupies device memory. Represents a set of pixels. This is the object known in other graphics APIs as a texture. Many more parameters are needed to specify the creation of an Image. It can be 1D, 2D or 3D, have various pixel formats (like R8G8B8A8_UNORM or R32_SFLOAT) and can also consist of many discrete images, because it can have multiple array layers or MIP levels (or both). It doesn’t necessarily consist of just a linear set of pixels that can be accessed directly. Images can have a different implementation-specific internal format (tiling and layout) managed by the graphics driver.
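A sketch of creating a simple 2D texture image; the format, usage and other parameters are illustrative:

```c
#include <vulkan/vulkan.h>

VkImage create_texture_image(VkDevice device, uint32_t width, uint32_t height)
{
    VkImageCreateInfo createInfo = {0};
    createInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    createInfo.imageType     = VK_IMAGE_TYPE_2D;
    createInfo.format        = VK_FORMAT_R8G8B8A8_UNORM;
    createInfo.extent.width  = width;
    createInfo.extent.height = height;
    createInfo.extent.depth  = 1;
    createInfo.mipLevels     = 1;
    createInfo.arrayLayers   = 1;
    createInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
    /* OPTIMAL tiling lets the driver pick its internal layout; LINEAR is directly addressable. */
    createInfo.tiling        = VK_IMAGE_TILING_OPTIMAL;
    createInfo.usage         = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
    createInfo.sharingMode   = VK_SHARING_MODE_EXCLUSIVE;
    createInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkImage image = VK_NULL_HANDLE;
    vkCreateImage(device, &createInfo, NULL, &image);
    return image;   /* like Buffers, Images need DeviceMemory bound before use */
}
```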

ImageView

Images aren’t always used directly in rendering. On top of them there is another layer, called views. You can think about them somewhat like views in databases – sets of parameters that can be used to look at a set of underlying data in a desired way.

ImageView is a set of parameters referring to a specific image. There you can interpret pixels as having some other (compatible) format, swizzle any components, and limit the view to a specific range of MIP levels or array layers.
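A sketch of a 2D color view over a single MIP level and array layer:

```c
#include <vulkan/vulkan.h>

VkImageView create_image_view(VkDevice device, VkImage image, VkFormat format)
{
    VkImageViewCreateInfo createInfo = {0};
    createInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    createInfo.image    = image;
    createInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
    createInfo.format   = format;   /* must be compatible with the image's format */

    /* Component swizzle: identity keeps R, G, B, A as-is. */
    createInfo.components.r = VK_COMPONENT_SWIZZLE_IDENTITY;
    createInfo.components.g = VK_COMPONENT_SWIZZLE_IDENTITY;
    createInfo.components.b = VK_COMPONENT_SWIZZLE_IDENTITY;
    createInfo.components.a = VK_COMPONENT_SWIZZLE_IDENTITY;

    /* Limit the view to a subset of MIP levels and array layers. */
    createInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    createInfo.subresourceRange.baseMipLevel   = 0;
    createInfo.subresourceRange.levelCount     = 1;
    createInfo.subresourceRange.baseArrayLayer = 0;
    createInfo.subresourceRange.layerCount     = 1;

    VkImageView view = VK_NULL_HANDLE;
    vkCreateImageView(device, &createInfo, NULL, &view);
    return view;
}
```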

Sampler

Not bound to any specific Image. It is rather just a set of state parameters, like filtering mode (nearest or linear) or addressing mode (repeat, clamp-to-edge, clamp-to-border etc.).
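A sketch of a typical linear, repeating sampler; anisotropy assumes the corresponding Device feature was enabled:

```c
#include <vulkan/vulkan.h>

VkSampler create_sampler(VkDevice device)
{
    VkSamplerCreateInfo createInfo = {0};
    createInfo.sType        = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
    createInfo.magFilter    = VK_FILTER_LINEAR;                    /* filtering mode */
    createInfo.minFilter    = VK_FILTER_LINEAR;
    createInfo.mipmapMode   = VK_SAMPLER_MIPMAP_MODE_LINEAR;
    createInfo.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;      /* addressing mode */
    createInfo.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    createInfo.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    /* Anisotropy only works if the samplerAnisotropy feature was enabled on the Device. */
    createInfo.anisotropyEnable = VK_TRUE;
    createInfo.maxAnisotropy    = 16.0f;
    createInfo.maxLod           = VK_LOD_CLAMP_NONE;
    createInfo.borderColor      = VK_BORDER_COLOR_INT_OPAQUE_BLACK;

    VkSampler sampler = VK_NULL_HANDLE;
    vkCreateSampler(device, &createInfo, NULL, &sampler);
    return sampler;
}
```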

DeviceMemory

Creating a Buffer of a certain length or an Image with specific dimensions doesn’t automatically allocate memory for it. It is a 3-step process that you must perform manually. You can also choose to use AMD’s Vulkan Memory Allocator library, which takes care of the allocation for you.

The three steps are: allocate DeviceMemory, create a Buffer or Image, and bind them together using the function vkBindBufferMemory or vkBindImageMemory.

That’s why you must also create a DeviceMemory object. It represents a block of memory allocated from a specific memory type (as supported by PhysicalDevice) with a specific length in bytes. You shouldn’t allocate separate DeviceMemory for each Buffer or Image. Instead, you should allocate bigger chunks of memory and assign parts of them to your Buffers and Images. Allocation is a costly operation and there is also a limit on the maximum number of allocations, all of which can be queried from your PhysicalDevice.
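A sketch of the allocate-and-bind steps for a single Buffer. For clarity it uses one dedicated allocation, even though, as noted above, real applications should sub-allocate from larger blocks (or use the Vulkan Memory Allocator); the helper names are mine:

```c
#include <vulkan/vulkan.h>

/* Pick a memory type that is allowed by memoryTypeBits and has the requested properties. */
static uint32_t find_memory_type(VkPhysicalDevice physicalDevice,
                                 uint32_t memoryTypeBits,
                                 VkMemoryPropertyFlags required)
{
    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &mem);
    for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
        if ((memoryTypeBits & (1u << i)) &&
            (mem.memoryTypes[i].propertyFlags & required) == required)
            return i;
    }
    return 0; /* a real application should treat this as an error */
}

VkDeviceMemory allocate_and_bind(VkDevice device, VkPhysicalDevice physicalDevice,
                                 VkBuffer buffer)
{
    /* Query how much memory this Buffer needs and which memory types it accepts. */
    VkMemoryRequirements reqs;
    vkGetBufferMemoryRequirements(device, buffer, &reqs);

    VkMemoryAllocateInfo allocInfo = {0};
    allocInfo.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize  = reqs.size;
    allocInfo.memoryTypeIndex = find_memory_type(physicalDevice, reqs.memoryTypeBits,
                                                 VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

    VkDeviceMemory memory = VK_NULL_HANDLE;
    vkAllocateMemory(device, &allocInfo, NULL, &memory);

    /* Bind the whole allocation at offset 0; larger allocations can be sub-allocated
       by binding different Buffers/Images at different offsets. */
    vkBindBufferMemory(device, buffer, memory, 0);
    return memory;
}
```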

Surface, Swapchain

One exception to the obligation to allocate and bind DeviceMemory for every Image is the creation of a Swapchain. This is a concept used to present the final image on the screen or inside the window you’re drawing into on your operating system. As such, the way of creating it is platform dependent. If you already have a window initialized using a system API, you first need to create a SurfaceKHR object. It needs the Instance object, as well as some system-dependent parameters. For example, on Windows these are: instance handle (HINSTANCE) and window handle (HWND). You can think of the SurfaceKHR object as the Vulkan representation of a window.

From it you can create a SwapchainKHR. This object requires a Device. It represents a set of images that can be presented on the Surface, e.g. using double- or triple-buffering. You can query the swapchain for the Images it contains. These images already have their backing memory allocated by the system.
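A sketch of swapchain creation and image retrieval, assuming the SurfaceKHR already exists and the Device was created with the VK_KHR_swapchain extension; the format and image count should really be chosen from the surface’s queried capabilities:

```c
#include <vulkan/vulkan.h>

/* The VkSurfaceKHR is assumed to have been created with the platform-specific call
   (vkCreateWin32SurfaceKHR, vkCreateXcbSurfaceKHR, ...). */
VkSwapchainKHR create_swapchain(VkDevice device, VkSurfaceKHR surface,
                                uint32_t width, uint32_t height)
{
    VkSwapchainCreateInfoKHR createInfo = {0};
    createInfo.sType              = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    createInfo.surface            = surface;
    createInfo.minImageCount      = 3;                          /* triple buffering */
    createInfo.imageFormat        = VK_FORMAT_B8G8R8A8_UNORM;   /* query supported formats first */
    createInfo.imageColorSpace    = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
    createInfo.imageExtent.width  = width;
    createInfo.imageExtent.height = height;
    createInfo.imageArrayLayers   = 1;
    createInfo.imageUsage         = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    createInfo.imageSharingMode   = VK_SHARING_MODE_EXCLUSIVE;
    createInfo.preTransform       = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
    createInfo.compositeAlpha     = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
    createInfo.presentMode        = VK_PRESENT_MODE_FIFO_KHR;   /* always available */
    createInfo.clipped            = VK_TRUE;

    VkSwapchainKHR swapchain = VK_NULL_HANDLE;
    vkCreateSwapchainKHR(device, &createInfo, NULL, &swapchain);

    /* The Images already have backing memory; just query them. */
    uint32_t imageCount = 0;
    vkGetSwapchainImagesKHR(device, swapchain, &imageCount, NULL);
    VkImage images[8];
    if (imageCount > 8) imageCount = 8;
    vkGetSwapchainImagesKHR(device, swapchain, &imageCount, images);

    return swapchain;
}
```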

DescriptorSetLayout

The way Shaders can access Buffers, Images and Samplers is through descriptors. Descriptors don’t exist on their own, but are always grouped in descriptor sets. But before you create a descriptor set, its layout must be specified by creating a DescriptorSetLayout, which behaves like a template for a descriptor set.
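For example, a layout with a uniform buffer binding and a combined image/sampler binding might look like this; the binding choices are illustrative:

```c
#include <vulkan/vulkan.h>

VkDescriptorSetLayout create_set_layout(VkDevice device)
{
    /* Two bindings: a uniform buffer for the vertex shader and a
       combined image/sampler for the fragment shader. */
    VkDescriptorSetLayoutBinding bindings[2] = { {0}, {0} };
    bindings[0].binding         = 0;
    bindings[0].descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    bindings[0].descriptorCount = 1;
    bindings[0].stageFlags      = VK_SHADER_STAGE_VERTEX_BIT;

    bindings[1].binding         = 1;
    bindings[1].descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    bindings[1].descriptorCount = 1;
    bindings[1].stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;

    VkDescriptorSetLayoutCreateInfo createInfo = {0};
    createInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    createInfo.bindingCount = 2;
    createInfo.pBindings    = bindings;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &createInfo, NULL, &layout);
    return layout;
}
```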

DescriptorPool

Used to allocate descriptor sets. When creating a descriptor pool, you must specify the maximum number of descriptor sets and descriptors of different types that you are going to allocate from it.
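A sketch with example capacities; the counts are arbitrary:

```c
#include <vulkan/vulkan.h>

VkDescriptorPool create_descriptor_pool(VkDevice device)
{
    /* Capacity must be declared up front: how many descriptors of each type
       and how many sets in total can be allocated from this pool. */
    VkDescriptorPoolSize poolSizes[2];
    poolSizes[0].type            = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    poolSizes[0].descriptorCount = 100;
    poolSizes[1].type            = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    poolSizes[1].descriptorCount = 100;

    VkDescriptorPoolCreateInfo createInfo = {0};
    createInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    createInfo.maxSets       = 100;
    createInfo.poolSizeCount = 2;
    createInfo.pPoolSizes    = poolSizes;

    VkDescriptorPool pool = VK_NULL_HANDLE;
    vkCreateDescriptorPool(device, &createInfo, NULL, &pool);
    return pool;
}
```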

DescriptorSet

You need both a DescriptorPool and a DescriptorSetLayout to be able to create it. It represents memory that holds the actual descriptors. It can be configured so that a descriptor points to a specific Buffer, BufferView, Image or Sampler, which you do using the function vkUpdateDescriptorSets.

Several DescriptorSets can be bound as active sets in a CommandBuffer to be used by rendering commands. To do this, use the function vkCmdBindDescriptorSets. This function requires another object as well – PipelineLayout, because there may be multiple DescriptorSets bound and Vulkan wants to know in advance how many and what types of them it should expect.
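A sketch of allocating a set, pointing its binding 0 at a uniform buffer with vkUpdateDescriptorSets, and binding it with vkCmdBindDescriptorSets; the helper names are mine:

```c
#include <vulkan/vulkan.h>

VkDescriptorSet make_descriptor_set(VkDevice device, VkDescriptorPool pool,
                                    VkDescriptorSetLayout layout,
                                    VkBuffer uniformBuffer, VkDeviceSize bufferSize)
{
    /* Allocate the set from the pool using the layout as a template. */
    VkDescriptorSetAllocateInfo allocInfo = {0};
    allocInfo.sType              = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    allocInfo.descriptorPool     = pool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts        = &layout;

    VkDescriptorSet set = VK_NULL_HANDLE;
    vkAllocateDescriptorSets(device, &allocInfo, &set);

    /* Point descriptor binding 0 at the uniform buffer. */
    VkDescriptorBufferInfo bufferInfo = {0};
    bufferInfo.buffer = uniformBuffer;
    bufferInfo.offset = 0;
    bufferInfo.range  = bufferSize;

    VkWriteDescriptorSet write = {0};
    write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet          = set;
    write.dstBinding      = 0;
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    write.pBufferInfo     = &bufferInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, NULL);
    return set;
}

/* Later, while recording a CommandBuffer: */
void bind_set(VkCommandBuffer cmd, VkPipelineLayout pipelineLayout, VkDescriptorSet set)
{
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0 /* first set */, 1, &set, 0, NULL);
}
```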

PipelineLayout

Represents a configuration of the rendering pipeline in terms of what types of descriptor sets will be bound to the CommandBuffer. You create it from an array of DescriptorSetLayouts.
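A minimal sketch:

```c
#include <vulkan/vulkan.h>

VkPipelineLayout create_pipeline_layout(VkDevice device,
                                        const VkDescriptorSetLayout *setLayouts,
                                        uint32_t setLayoutCount)
{
    VkPipelineLayoutCreateInfo createInfo = {0};
    createInfo.sType          = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
    createInfo.setLayoutCount = setLayoutCount;   /* one DescriptorSetLayout per set index */
    createInfo.pSetLayouts    = setLayouts;
    /* Push constant ranges would also be declared here. */

    VkPipelineLayout layout = VK_NULL_HANDLE;
    vkCreatePipelineLayout(device, &createInfo, NULL, &layout);
    return layout;
}
```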

RenderPass

In other graphics APIs you can take the immediate mode approach and just render whatever comes next on your list. This is not possible in Vulkan. Instead, you need to plan the rendering of your frame in advance and organize it into passes and subpasses. Subpasses are not separate objects, so we won’t talk about them here, but they’re an important part of the rendering system in Vulkan. Fortunately, you don’t need to know every detail of your workload in advance; for example, the number of triangles to render can still be specified at submission time. The crucial part when defining a RenderPass in Vulkan is the number and formats of Attachments that will be used in that pass.

Attachment

Vulkan’s name for what you might otherwise know as a render target – an Image to be used as output from rendering. You don’t point to specific Images here – you just describe their formats. For example, a simple rendering pass may use a color attachment with format R8G8B8A8_UNORM and a depth-stencil attachment with format D16_UNORM. You also specify whether your attachment should have its content preserved, discarded or cleared at the beginning of the pass.
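A sketch of a RenderPass with a single color Attachment and one subpass; the format and load/store choices are illustrative:

```c
#include <vulkan/vulkan.h>

VkRenderPass create_render_pass(VkDevice device)
{
    /* Only the format and load/store behaviour of the attachment are described
       here; the actual Image is supplied later through a Framebuffer. */
    VkAttachmentDescription color = {0};
    color.format         = VK_FORMAT_R8G8B8A8_UNORM;
    color.samples        = VK_SAMPLE_COUNT_1_BIT;
    color.loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR;     /* clear at the start of the pass */
    color.storeOp        = VK_ATTACHMENT_STORE_OP_STORE;    /* keep the result */
    color.stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    color.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    color.initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED;
    color.finalLayout    = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

    VkAttachmentReference colorRef = {0};
    colorRef.attachment = 0;    /* index into the attachment array above */
    colorRef.layout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

    /* A single subpass writing to the color attachment. */
    VkSubpassDescription subpass = {0};
    subpass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpass.colorAttachmentCount = 1;
    subpass.pColorAttachments    = &colorRef;

    VkRenderPassCreateInfo createInfo = {0};
    createInfo.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
    createInfo.attachmentCount = 1;
    createInfo.pAttachments    = &color;
    createInfo.subpassCount    = 1;
    createInfo.pSubpasses      = &subpass;

    VkRenderPass renderPass = VK_NULL_HANDLE;
    vkCreateRenderPass(device, &createInfo, NULL, &renderPass);
    return renderPass;
}
```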

Framebuffer

(not to be confused with SwapchainKHR) Represents a link to actual Images that can be used as Attachments (render targets). You create a Framebuffer object by specifying the RenderPass and a set of ImageViews. Of course, their number and formats must match the specification of the RenderPass. Framebuffer is another layer on top of Images and basically groups these ImageViews together to be bound as attachments during rendering of a specific RenderPass. Whenever you begin rendering of a RenderPass, you call the function vkCmdBeginRenderPass and you also pass the Framebuffer to it.
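A sketch of creating a Framebuffer for a single-color-attachment render pass and passing it to vkCmdBeginRenderPass:

```c
#include <vulkan/vulkan.h>

VkFramebuffer create_framebuffer(VkDevice device, VkRenderPass renderPass,
                                 VkImageView colorView, uint32_t width, uint32_t height)
{
    /* The number and formats of the ImageViews must match the RenderPass attachments. */
    VkFramebufferCreateInfo createInfo = {0};
    createInfo.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    createInfo.renderPass      = renderPass;
    createInfo.attachmentCount = 1;
    createInfo.pAttachments    = &colorView;
    createInfo.width           = width;
    createInfo.height          = height;
    createInfo.layers          = 1;

    VkFramebuffer framebuffer = VK_NULL_HANDLE;
    vkCreateFramebuffer(device, &createInfo, NULL, &framebuffer);
    return framebuffer;
}

/* When recording, the Framebuffer is passed to vkCmdBeginRenderPass: */
void begin_pass(VkCommandBuffer cmd, VkRenderPass renderPass,
                VkFramebuffer framebuffer, uint32_t width, uint32_t height)
{
    VkClearValue clear = { .color = { .float32 = { 0.0f, 0.0f, 0.0f, 1.0f } } };

    VkRenderPassBeginInfo beginInfo = {0};
    beginInfo.sType                    = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    beginInfo.renderPass               = renderPass;
    beginInfo.framebuffer              = framebuffer;
    beginInfo.renderArea.extent.width  = width;
    beginInfo.renderArea.extent.height = height;
    beginInfo.clearValueCount          = 1;
    beginInfo.pClearValues             = &clear;

    vkCmdBeginRenderPass(cmd, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);
    /* ... draw calls ... */
    vkCmdEndRenderPass(cmd);
}
```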

Pipeline

Pipeline is the big one, as it composes most of the objects listed before. It represents the configuration of the whole pipeline and has a lot of parameters. One of them is PipelineLayout – it defines the layout of descriptors and push constants. There are two types of Pipelines – ComputePipeline and GraphicsPipeline. ComputePipeline is the simpler one, because all it supports is compute-only programs (sometimes called compute shaders). GraphicsPipeline is much more complex, because it encompasses all the shader stages (vertex, fragment, geometry, and tessellation where applicable), plus things like vertex attributes, primitive topology, backface culling, and blending mode, to name just a few. All those parameters that used to be separate settings in much older graphics APIs (DirectX 9, OpenGL) were later grouped into a smaller number of state objects as the APIs progressed (DirectX 10 and 11) and must now be baked into a single big, immutable object with today’s modern APIs like Vulkan. For each different set of parameters needed during rendering you must create a new Pipeline. You can then set it as the current active Pipeline in a CommandBuffer by calling the function vkCmdBindPipeline.
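Since ComputePipeline is the simpler of the two, here is a sketch of creating and binding one; a GraphicsPipeline instead fills in a much larger VkGraphicsPipelineCreateInfo with all the state mentioned above:

```c
#include <vulkan/vulkan.h>

VkPipeline create_compute_pipeline(VkDevice device, VkPipelineLayout layout,
                                   VkShaderModule computeShader, VkPipelineCache cache)
{
    /* A ComputePipeline needs only one shader stage and a PipelineLayout. */
    VkPipelineShaderStageCreateInfo stage = {0};
    stage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stage.stage  = VK_SHADER_STAGE_COMPUTE_BIT;
    stage.module = computeShader;
    stage.pName  = "main";                 /* entry point in the SPIR-V module */

    VkComputePipelineCreateInfo createInfo = {0};
    createInfo.sType  = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO;
    createInfo.stage  = stage;
    createInfo.layout = layout;

    VkPipeline pipeline = VK_NULL_HANDLE;
    vkCreateComputePipelines(device, cache, 1, &createInfo, NULL, &pipeline);
    return pipeline;
}

/* Setting the active pipeline in a CommandBuffer: */
void bind_pipeline(VkCommandBuffer cmd, VkPipeline pipeline)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
}
```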

PipelineCache

There is also a helper object called PipelineCache that can be used to speed up Pipeline creation. It is a simple object that you can optionally pass in during Pipeline creation, and it really helps to improve performance by reducing memory usage and the compilation time of your pipelines. The driver can use it internally to store some intermediate data, so that the creation of similar Pipelines could potentially be faster. You can also save and load the state of a PipelineCache object to a buffer of binary data, so you can store it on disk and use it the next time your application runs. We recommend you use them!
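A sketch of creating a cache from previously saved data and reading its contents back for saving; file I/O is left out:

```c
#include <vulkan/vulkan.h>
#include <stdlib.h>

VkPipelineCache create_pipeline_cache(VkDevice device,
                                      const void *savedData, size_t savedSize)
{
    /* savedData may be NULL on first run; otherwise pass the blob loaded from disk. */
    VkPipelineCacheCreateInfo createInfo = {0};
    createInfo.sType           = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO;
    createInfo.initialDataSize = savedSize;
    createInfo.pInitialData    = savedData;

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &createInfo, NULL, &cache);
    return cache;
}

/* Before shutdown, retrieve the cache contents so they can be written to disk. */
void *save_pipeline_cache(VkDevice device, VkPipelineCache cache, size_t *outSize)
{
    vkGetPipelineCacheData(device, cache, outSize, NULL);   /* query size first */
    void *data = malloc(*outSize);
    vkGetPipelineCacheData(device, cache, outSize, data);   /* then fetch the blob */
    return data;
}
```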

ShaderModule

Shader compilation is a multi-stage process in Vulkan. First, Vulkan doesn’t support any high-level shading language like GLSL or HLSL. Instead, Vulkan accepts an intermediate format called SPIR-V which any higher-level language can emit. A buffer filled with data in SPIR-V is used to create a ShaderModule. This object represents a piece of shader code, possibly in some partially compiled form, but it’s not anything the GPU can execute yet. Only when creating the Pipeline for each shader stage you are going to use (vertex, tessellation control, tessellation evaluation, geometry, fragment, or compute) do you specify the ShaderModule plus the name of the entry point function (like “main”).
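A sketch of creating a ShaderModule from a SPIR-V blob and attaching it to a vertex stage with its entry point name:

```c
#include <vulkan/vulkan.h>

/* spirvCode must point to the SPIR-V binary (a multiple of 4 bytes, uint32_t aligned). */
VkShaderModule create_shader_module(VkDevice device,
                                    const uint32_t *spirvCode, size_t codeSizeInBytes)
{
    VkShaderModuleCreateInfo createInfo = {0};
    createInfo.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    createInfo.codeSize = codeSizeInBytes;
    createInfo.pCode    = spirvCode;

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &createInfo, NULL, &module);
    return module;
}

/* The module only becomes executable as part of a Pipeline, where it is attached
   to a stage together with its entry point name: */
VkPipelineShaderStageCreateInfo vertex_stage(VkShaderModule module)
{
    VkPipelineShaderStageCreateInfo stage = {0};
    stage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stage.stage  = VK_SHADER_STAGE_VERTEX_BIT;
    stage.module = module;
    stage.pName  = "main";
    return stage;
}
```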