Working with User Interface (UI) in Unreal Engine

How do you work with the User Interface (UI) in Unreal Engine?

The UI (User Interface) or GUI (Graphical User Interface) is a dynamic graphical interface through which players interact with the game (controls, changing settings…) and through which the game informs them about important events.

The type, quantity, and structure of GUI elements are chosen based on the application's design. The golden rule is to keep the number and scope of GUI elements as small as possible: every GUI uses a render target to display its components on screen, and therefore consumes a certain amount of application performance and memory. On mobile devices this also translates into higher power consumption.

A specific way of placing the GUI in space is the head-up display, or HUD. As the name suggests, this means attaching the user interface to the camera of the player's object, the Player Pawn.

Text Render Actor (Isolated UI Element)

Using the Text Render Actor makes sense for standalone, simple text (names, short information) placed in the virtual 3D world. It supports Signed Distance Field fonts, or SDF fonts. These encode the glyph shapes into a texture, so texture sampling on the GPU can smooth the character edges → the text stays sharp at any viewing distance. Besides sharpness, rendering is faster and supports masked rendering with anti-aliasing. Configuration happens when importing the font into Unreal Engine [Right Mouse Button → User Interface → Font] (for use in a Text Render Actor), followed by setting the options:

  • Font Cache Type → Offline
  • Create Printable Only → enabled
  • Use Distance Field Alpha → enabled

An implementation example is part of Unreal's assets - in the "Engine Content" window, search for the term "textmaterial".

  • Predefined SDFs work well with Temporal Anti-Aliasing (TAA).
  • For MSAA, you need to change the material to the masked type.
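
A minimal C++ sketch of spawning such a label at runtime is shown below. The font and material parameters are assumptions: in practice they would point at a font imported with the options above and a distance-field text material (e.g. one found via the "textmaterial" search).

```cpp
// Sketch: spawn a Text Render Actor and wire up an SDF font and material.
#include "Engine/Font.h"
#include "Engine/TextRenderActor.h"
#include "Components/TextRenderComponent.h"
#include "Materials/MaterialInterface.h"

ATextRenderActor* SpawnWorldLabel(UWorld* World, const FVector& Location,
                                  UFont* OfflineFont, UMaterialInterface* SdfMaterial)
{
    ATextRenderActor* Label = World->SpawnActor<ATextRenderActor>(Location, FRotator::ZeroRotator);
    if (!Label)
    {
        return nullptr;
    }

    UTextRenderComponent* Text = Label->GetTextRender();
    Text->SetFont(OfflineFont);          // font imported with Font Cache Type → Offline
    Text->SetTextMaterial(SdfMaterial);  // material sampling the distance-field alpha
    Text->SetText(FText::FromString(TEXT("Checkpoint A")));
    Text->SetWorldSize(36.f);            // glyph height in world units
    Text->SetHorizontalAlignment(EHTA_Center);
    return Label;
}
```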

Unreal Motion Graphics (UMG)

UMG (Unreal Motion Graphics) is the modern, extensive toolkit for creating user interfaces in Unreal Engine, suitable for any type of user interface. Technically it builds on the proven concept of canvas elements (the canvas sits at the top of the UI element hierarchy); in Unreal Engine this role is filled by the Slate UI Framework. The potentially time-consuming implementation is offset by a lower number of draw calls, which typically results in better application performance.

UMG is managed through the Widget component. In terms of timing, a widget is first rendered to a render target and only then displayed in 3D space. Its subsequent updates are then sampled as part of regular frame rendering, which saves a significant amount of time.
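
As a rough illustration of this world-space path, the following sketch attaches a Widget Component to an actor and points it at an assumed widget class (the draw size and class reference are placeholders):

```cpp
// Sketch: display a UMG widget in 3D space through a Widget Component.
#include "Blueprint/UserWidget.h"
#include "Components/WidgetComponent.h"

UWidgetComponent* AttachWorldWidget(AActor* Owner, TSubclassOf<UUserWidget> WidgetClass)
{
    UWidgetComponent* Widget = NewObject<UWidgetComponent>(Owner);
    Widget->SetupAttachment(Owner->GetRootComponent());
    Widget->RegisterComponent();

    Widget->SetWidgetClass(WidgetClass);           // the widget rendered to the render target
    Widget->SetWidgetSpace(EWidgetSpace::World);   // place the resulting quad in the 3D world
    Widget->SetDrawSize(FVector2D(512.f, 256.f));  // render-target resolution in pixels
    return Widget;
}
```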

Two Widget Types in Unreal Engine

  • Widget Blueprint

    • Designed for in-game 2D user interfaces
    • It can be created by a right mouse button click in Content Drawer → User Interface → Widget Blueprint (a runtime instantiation sketch follows this list)
  • Editor Utility Widget

    • Designed for 2D user interfaces for Editor tools
    • It can be created by a right mouse button click in Content Drawer → Editor Utilities → Editor Utility Widget
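
The runtime instantiation sketch referenced above: instantiating a Widget Blueprint and adding it to the viewport. ShowMenu is a hypothetical helper, and MenuClass is assumed to reference a Widget Blueprint asset.

```cpp
// Sketch: create a Widget Blueprint instance at runtime for in-game 2D UI.
#include "Blueprint/UserWidget.h"
#include "GameFramework/PlayerController.h"

void ShowMenu(APlayerController* PC, TSubclassOf<UUserWidget> MenuClass)
{
    if (UUserWidget* Menu = CreateWidget<UUserWidget>(PC, MenuClass))
    {
        Menu->AddToViewport();        // standard path for in-game 2D UI
        PC->SetShowMouseCursor(true); // typical for menu-style widgets
    }
}
```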

UMG + Mip-maps

When using textures within canvas elements, the texture should always include mip-maps. UMG can generate them automatically once rendering completes and before the widget is used in 3D space (tick the MipMap checkbox). Mip-mapped textures must keep power-of-two dimensions (256, 512, 1024, 2048…); going above 2048 rarely makes sense given the resolution of typical screens. The generated mip chain consumes additional memory on top of the original texture (roughly one third more).
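
As a hedged illustration, an editor-only helper could validate and enable mips along these lines (EnsureUiTextureHasMips is a hypothetical name; the log message and settings choices are assumptions):

```cpp
// Sketch (editor-only): check power-of-two sizes and enable mip generation.
// MipGenSettings exists only in editor builds, so this belongs in an
// Editor Utility or an editor module.
#include "Engine/Texture2D.h"

void EnsureUiTextureHasMips(UTexture2D* Texture)
{
#if WITH_EDITORONLY_DATA
    const int32 Width  = Texture->GetSizeX();
    const int32 Height = Texture->GetSizeY();

    // Mip chains require power-of-two sizes (256, 512, 1024, 2048, ...).
    if (!FMath::IsPowerOfTwo(Width) || !FMath::IsPowerOfTwo(Height))
    {
        UE_LOG(LogTemp, Warning, TEXT("%s is %dx%d - resize to a power of two before enabling mips."),
               *Texture->GetName(), Width, Height);
        return;
    }

    if (Texture->MipGenSettings == TMGS_NoMipmaps)
    {
        Texture->MipGenSettings = TMGS_FromTextureGroup; // let the engine build the mip chain
        Texture->UpdateResource();
        Texture->MarkPackageDirty();
    }
#endif
}
```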

Smaller mip levels look better when viewed from a distance, and larger ones look better up close. Smooth transitions between individual mip levels are achieved with Trilinear sampling; for better readability at oblique viewing angles, anisotropic filtering can be added on top.

When working with UMG, account for some blurring of edges when zooming in or out significantly, caused by texture downsizing (glyph features end up covering fractions of pixels rather than whole pixels). Distance-field math handles isolated edges well but breaks down when multiple edges fall within a single pixel; severe aliasing can occur in such cases. At some cost in complexity and performance, the quality can be partly improved by enhancing the Distance Field Shader.

Stereo Layers

Stereo Layers are rendered into the 3D world in a separate rendering pass, outside the rest of the application's rendering. The point is that the compositor never "misses" a frame - in other words, it will not fail to deliver a frame within the allocated time of X ms defined by the minimum required FPS (usually 13.88 ms for Quest at 72 FPS / 12.5 ms for Rift S at 80 FPS). This feature is valuable in several types of use cases:

  1. Applications with a high degree of user freedom → risk of scene cluttering with elements
  2. UI during level loading / level switching (this can avoid the need for whiteout transitions)

The benefits of using Stereo Layers are often offset by the demanding management and blending of Stereo Layers with the rest of the 3D content due to stereo convergence (two objects are rendered in two different planes yet must converge in the same screen space) - simply making all elements fit together and harmonize. Multiple layers can be used and ordered by depth.
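
A minimal sketch of creating one such layer in C++ follows; the texture, quad size, priority, and offset are illustrative assumptions:

```cpp
// Sketch: attach a stereo layer to the VR camera so its quad is composited
// by the headset runtime in its own pass.
#include "Camera/CameraComponent.h"
#include "Components/StereoLayerComponent.h"

UStereoLayerComponent* AttachStereoLayer(UCameraComponent* Camera, UTexture* LayerTexture)
{
    UStereoLayerComponent* Layer = NewObject<UStereoLayerComponent>(Camera->GetOwner());
    Layer->SetupAttachment(Camera);
    Layer->RegisterComponent();

    Layer->SetTexture(LayerTexture);            // content handed to the compositor
    Layer->SetQuadSize(FVector2D(100.f, 50.f)); // layer size in world units
    Layer->SetPriority(1);                      // ordering when several layers overlap
    Layer->SetRelativeLocation(FVector(200.f, 0.f, 0.f)); // ~2 m in front of the camera
    return Layer;
}
```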

Head-up Display (HUD) in VR

The Head-up Display is UI displayed directly in front of the screen - that is, the camera of the Player Pawn - while following its movement. Often, this UI is an embedded object of the Player Pawn.

For the HUD, the design of the UI presentation is very important. The optimal placement is as close to the center of the displays as possible: the farther from the center the UI is shown, the worse the readability and the more it strains the eyes (the need to observe something in the "periphery"). Furthermore, when the UI is displayed near the screen edges with fixed foveated rendering (FFR) active, it will be rendered at reduced resolution.

If we move the "rendering plane" away from the camera, potential collisions between the UI and surrounding objects in the space must be handled. Also, a rigid coupling of UI movement to camera movement can induce Motion Sickness - a suitable solution is to let the UI follow camera movement with a slight delay and free motion (as if the UI had some mass that needs to be gradually set in motion and then braked), as sketched below.
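
A minimal sketch of that delayed-follow behavior, called from the HUD anchor actor's Tick. UpdateHudAnchor is a hypothetical helper, and Distance and FollowSpeed are illustrative tuning values:

```cpp
// Sketch: ease a free-floating HUD anchor toward a point in front of the
// camera instead of rigidly parenting it, so it trails head motion slightly.
#include "GameFramework/Actor.h"
#include "Camera/PlayerCameraManager.h"
#include "Kismet/GameplayStatics.h"

void UpdateHudAnchor(AActor* HudAnchor, float DeltaSeconds)
{
    APlayerCameraManager* Cam = UGameplayStatics::GetPlayerCameraManager(HudAnchor, 0);
    if (!Cam)
    {
        return;
    }

    const float Distance = 150.f;   // how far in front of the camera the HUD floats
    const float FollowSpeed = 4.f;  // lower = heavier, more "massive" feel

    const FVector CamLoc = Cam->GetCameraLocation();
    const FVector Target = CamLoc + Cam->GetCameraRotation().Vector() * Distance;
    const FRotator FaceCamera = (CamLoc - Target).Rotation(); // keep the HUD facing the camera

    // Interpolate instead of snapping, so the HUD eases into motion and eases out.
    HudAnchor->SetActorLocation(FMath::VInterpTo(HudAnchor->GetActorLocation(), Target, DeltaSeconds, FollowSpeed));
    HudAnchor->SetActorRotation(FMath::RInterpTo(HudAnchor->GetActorRotation(), FaceCamera, DeltaSeconds, FollowSpeed));
}
```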

In the HUD, you can use Text Render Actors, HUD components, and Stereo Layers.