Immediate-mode user interface
I have always been perplexed by how immediate-mode GUI libraries like Dear ImGui and Nuklear work. They are almost too good to be true. Their APIs are about as simple as they can get, yet they can provide everything a standard, retained-mode graphical user interface can. I wanted to understand how they work, so over a weekend I set out to write my own simple immediate-mode graphical user interface for a hypothetical application. To start off, I wrote an extremely simple renderer that only draws rectangles in different colors. That was all I needed to start writing the UI system.
Begin and End
Like I said before, the API for an immediate-mode user interface is very simple. It consists of a bunch of procedural calls that draw and handle the elements you want in the window, and creating a window is just as simple. All you need are Begin() and End() calls; every element call that happens between them ends up parented to that window.
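As a rough illustration, here is a minimal sketch of that call pattern. The names and the draw-list representation are hypothetical stand-ins for this post's hypothetical API, not Dear ImGui's actual implementation:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Per-frame context: a pretend draw list and a flag tracking whether
// we are currently between Begin() and End().
struct UIContext {
    std::vector<std::string> drawList;
    bool insideWindow = false;
};

UIContext ui;

void Begin(const char* title) {
    ui.insideWindow = true;
    ui.drawList.push_back(std::string("window: ") + title);
}

void Button(const char* label) {
    // Elements declared between Begin/End belong to that window.
    assert(ui.insideWindow && "elements must appear between Begin/End");
    ui.drawList.push_back(std::string("button: ") + label);
}

void End() { ui.insideWindow = false; }

// Called once per frame; the whole UI is re-declared from scratch
// each time, which is the defining trait of immediate mode.
void DoFrame() {
    ui.drawList.clear();
    Begin("Settings");
    Button("OK");
    End();
}
```

The key design point is that nothing persists in element objects: the draw list is rebuilt every frame from the same sequence of calls.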
Making windows moveable
Something that became apparent right after I created the window was that I wanted to be able to drag it around and reposition it. This meant I needed some way of identifying the window and storing its position between frames. In an immediate-mode user interface, identification can be whatever you want it to be. For simplicity, I decided to give each window its own ID as an int. This ID was then used to store the position and size of the window in a map, which allowed me to redraw the window at its last position every frame. The benefit of using a map to store the state is that it is very easy to change the identification type to something else.
Dear ImGui uses the labels users provide as identification for the elements. Since I am using a map, changing my own IDs to strings later would be trivial.
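The int-keyed state map described above might look something like this. The struct layout, default rect, and function names are my own guesses at a minimal version, not the post's actual code:

```cpp
#include <unordered_map>

struct WindowState { float x, y, w, h; };

// Retained window state, keyed by an int ID. Because the key type is a
// template parameter of the map, switching to string IDs (as Dear ImGui
// effectively does by hashing labels) is a small, localized change.
std::unordered_map<int, WindowState> gWindows;

// Fetch last frame's rect, or create a default one on first use.
WindowState& GetWindowState(int id) {
    auto it = gWindows.find(id);
    if (it == gWindows.end())
        it = gWindows.emplace(id, WindowState{100, 100, 300, 200}).first;
    return it->second;
}

// Dragging: apply this frame's mouse delta to the stored position,
// so the window is redrawn there on the next frame.
void DragWindow(int id, float dx, float dy) {
    WindowState& s = GetWindowState(id);
    s.x += dx;
    s.y += dy;
}
```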
Buttons and sliders
Creating a button is simple. Calling Button() creates a button, and if the button detects a click it returns true. This is how you execute code on a button click, as opposed to registering callback functions as you would in a retained-mode user interface.
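A sketch of that pattern, with assumed input-state and rectangle types (drawing is omitted; a real version would also push the button's rectangle and label into the frame's draw list):

```cpp
struct Rect { float x, y, w, h; };

struct Input {
    float mouseX = 0, mouseY = 0;
    bool mouseClicked = false; // mouse button went down this frame
};

Input gInput;

bool Contains(const Rect& r, float px, float py) {
    return px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h;
}

// The immediate-mode pattern: one call both "creates" the button for
// this frame and reports whether it was clicked.
bool Button(const Rect& r) {
    return gInput.mouseClicked && Contains(r, gInput.mouseX, gInput.mouseY);
}
```

Usage then reads like ordinary control flow: `if (Button(rect)) { /* click code */ }` instead of wiring up a callback.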
Creating a slider is also simple, kind of. I started by making a slider that changes its value when the user clicks on it and drags the cursor. However, this introduced a problem: if the user dragged the cursor outside of the slider's rectangle, the interaction would get cut off and the slider would stop responding. To solve this, I needed to remember which element was active between frames. When the user clicks on an element, it becomes the active element until the mouse button is released. This let the slider keep changing its value even after the mouse left its rectangle, because the slider no longer runs its code based on whether the mouse is over it; it continues the interaction as long as it is still the active item. Then all I had to do was clear the active state when the mouse button went up.
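The active-item logic can be sketched roughly as follows. I've reduced the slider to a horizontal track for brevity (a real one would also test the vertical extent), and the names are assumptions, not the post's code:

```cpp
#include <algorithm>

struct MouseState {
    float mouseX = 0;
    bool mouseDown = false;
};

MouseState gMouse;
int gActiveId = 0; // 0 = no element is active

// Horizontal slider: the track spans [x, x + w], value is in [0, 1].
// It becomes active on a press inside the track, then keeps tracking
// the mouse until release, even if the cursor leaves the rectangle.
void Slider(int id, float x, float w, float& value) {
    bool over = gMouse.mouseX >= x && gMouse.mouseX < x + w;

    if (gActiveId == 0 && over && gMouse.mouseDown)
        gActiveId = id; // pressed on this slider: grab it

    if (gActiveId == id) {
        if (gMouse.mouseDown)
            value = std::clamp((gMouse.mouseX - x) / w, 0.0f, 1.0f);
        else
            gActiveId = 0; // button released: let go
    }
}
```

Note that the `over` test only matters when the interaction starts; once the slider is the active item, only the mouse button state can end the interaction.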
Signed distance fields
The UI felt a little too "boxy", so I wanted to add some roundness to the corners of my elements. I had recently learned about signed distance fields and figured I could use them to achieve my goal. While researching, I stumbled across Inigo Quilez's YouTube video titled Rounding Corners in SDFs. This video was exactly what I needed. I jumped into ShaderToy and started writing an SDF for a rectangle. This was math I had never done before, but it was easy to implement since I knew the formula. Afterwards, rounding the edges was as simple as subtracting a number from the resulting SDF; this number controls how much the corners are rounded. I plugged that into my pixel shader and voilà! Rounded corners.
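The math translates directly to code. Here is a CPU-side sketch of the 2D box SDF following Inigo Quilez's formulation (in a shader this would be a few lines of GLSL over vec2s): negative values are inside the shape, positive outside, and subtracting a radius pushes the zero contour outward into rounded corners.

```cpp
#include <algorithm>
#include <cmath>

// Signed distance from point (px, py) to an axis-aligned box centered
// at the origin with half-extents (bx, by).
float sdBox(float px, float py, float bx, float by) {
    // Distance from the point to the box faces along each axis.
    float dx = std::fabs(px) - bx;
    float dy = std::fabs(py) - by;
    // Outside: Euclidean distance to the nearest corner/edge.
    float ox = std::max(dx, 0.0f);
    float oy = std::max(dy, 0.0f);
    // Inside: the (negative) distance to the nearest face.
    return std::sqrt(ox * ox + oy * oy) + std::min(std::max(dx, dy), 0.0f);
}

// Rounding is just an offset: the edge (the zero isocontour) moves
// outward by r, and corners become circular arcs of radius r.
float sdRoundedBox(float px, float py, float bx, float by, float r) {
    return sdBox(px, py, bx, by) - r;
}
```

In the pixel shader, the pixel is filled when the distance is below zero (or fed through a smoothstep for antialiased edges).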
What I learned from this project is that a simple immediate-mode user interface is not as complex as I thought it was. It is trivial to get something simple up and running, and expanding it later is always an option. Dear ImGui's complexity lies mostly in the breadth of features it supports, but the core principles stay the same. I have always implemented UIs as an object-oriented structure of element instances created when the application initializes, handling the graphical changes separately when the user interacts with an element. Having seen how simple an immediate-mode approach is, this is how I will try to implement any UI in the future. In a game engine, for example, it is very natural to want to reuse the graphics context you already have for rendering the game to also render the editor's UI and tools, and an immediate-mode approach facilitates that.