From a purely technical perspective, UE is an absolute monster. It's not even remotely in the same league as Unity, Godot, etc. when it comes to iteration speed and tooling friction.
I struggle with UE over the others for any project that doesn't demand an HDRP equivalent and nanometric mesh resolution. Unity isn't exactly a walk in the park either, but the iteration speed tends to be much higher if you aren't an AAA wizard with an entire army at your disposal. I've never once had a UE project on my machine that made me feel I was on a happy path.
Godot and Unity are like cheating by comparison. Near-instant play mode and a trivial debugging experience make a huge difference for solo and small teams. Any experienced .NET developer can become productive on a Unity project in under a day with reasonable mentorship. The best strategy I had for UE was to just use blueprints, but blueprints are really bad in source control and at code review time.
Any time a library in your code goes from being used by a couple of people to being used by everyone, you have to periodically audit it from then on.
A set of libraries in our codebase had come to account for 20% of response time through years of accretion. It took a couple of months to cut that in half, with no architectural or cache changes. Just about the largest and definitely the most cost-effective initiative we completed on that team.
Looking at flame charts is only step one. You also need to look at invocation counts, for things that seem to be getting called far more often than they should be. Profiling tools frequently (dare I say consistently) misattribute costs of functions due to pressures on the CPU subsystems. And most of the times I’ve found optimizations that were substantially larger improvements than expected, it’s been from cumulative call count, not run time.
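To make the invocation-count point concrete, here is a minimal sketch (all names are illustrative, not from any real profiler): wrapping a function so every call is counted surfaces "called far more often than it should be" hot spots that a purely time-based flame chart can hide, because each individual call is too cheap to stand out.

```typescript
// Map from function name to how many times it has been invoked.
const callCounts = new Map<string, number>();

// Wrap a function so each invocation increments its counter.
function counted<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    callCounts.set(name, (callCounts.get(name) ?? 0) + 1);
    return fn(...args);
  };
}

// A cheap function that becomes expensive purely through call volume,
// e.g. accidentally invoked per item per render.
const normalize = counted("normalize", (s: string) => s.trim().toLowerCase());

for (let i = 0; i < 10_000; i++) {
  normalize("  Hello  ");
}

console.log(callCounts.get("normalize")); // 10000
```

Real profilers expose the same signal as a "call count" or "ncalls" column; the value of looking at it is that a 0.05ms function called 38,000 times is a ~2s problem that no single flame-chart frame will reveal.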
This reminded me: I saw tooltips taking a large chunk when I profiled my React app. I should go and check that.
Similarly, adding a modal like this
{isOpen && <Modal isOpen={isOpen} onClose={onClose} />}
instead of
<Modal isOpen={isOpen} onClose={onClose} />
Seems to make the app smoother the more modals we had. Rendering the UI only when you need it (not downloading the code, this is still part of the bundle) seems to be low-hanging fruit for optimizing performance.
Alternatively, how many modals can be open at any given time? And is it a floating element? May be an option to make it a global single instance thing then, set the content when needed. Allows for in/out transitions, too, as another commenter pointed out. See also "Portals" in React.
I remember solving this problem before. These are both global components, so you create a single global instance and control them with a global context or function.
You basically have a global part of the component and a local part. The global part is what actually gets rendered when necessary and manages current state, the local part defines what content will be rendered inside the global part for a particular trigger and interacts with the global part when a trigger condition happens (eg hover timeout for a tooltip).
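The global/local split described above can be sketched framework-agnostically (names here are made up for illustration, not from any library): the global part owns the single rendered instance and its state, while each local trigger only supplies content and asks the global part to show it.

```typescript
type Content = string; // in a real UI this would be a component/element

// Global part: the one shared instance that actually gets rendered.
class GlobalTooltip {
  private content: Content | null = null;

  show(content: Content): void {
    this.content = content; // the single instance re-renders with new content
  }

  hide(): void {
    this.content = null;
  }

  get visible(): boolean {
    return this.content !== null;
  }

  get current(): Content | null {
    return this.content;
  }
}

// One instance for the whole app, created once at startup.
const tooltip = new GlobalTooltip();

// Local part: a trigger knows its own content and fires on,
// e.g., a hover timeout.
function makeTrigger(content: Content) {
  return {
    onHoverTimeout: () => tooltip.show(content),
    onLeave: () => tooltip.hide(),
  };
}

const saveButton = makeTrigger("Save the current file");
saveButton.onHoverTimeout();
console.log(tooltip.current); // "Save the current file"
```

In React the global part would typically live behind a context plus a portal so it can float above everything; the key property is that registering thousands of triggers is cheap because none of them owns a rendered tooltip.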
That breaks the out transition.
Even when using view transitions?
https://developer.mozilla.org/en-US/docs/Web/CSS/@starting-s...
So, win-win? I want a modal to get out of the way as fast as possible, any fade/transition animations are keeping me from what I want to look at. :)
Unless you set `isOpen` only when the transition has ended
Isn't isOpen = false what triggers the transition in the first place here?
In the Blazor space we use factories/managers to spawn new instances of a modal/tooltip instead of having something idle waiting for activation.
The tradeoff is for more complicated components, first renders can be slower.
Hmm, so what exactly is stored in that gigabyte of tooltips? Even 100,000 tooltips per language should take maybe a few tens of megabytes of space. How many localizations does the editor have?
Kinda annoying that the article doesn't really answer the core question, which is how much time was saved in the startup time. It does give a 0.05ms-per-tooltip figure, so I guess multiplied by 38,000 that gives ~2s saved, which is not too bad.
"Together, these two problems can result in the editor spending an extremely long time just creating unused tooltips. In a debug build of the engine, creating all of these tooltips resulted in 2-5 seconds of startup time. In comparison development builds were faster, taking just under a second."
Don't have access to read the code, but I think ideally there should be only one instance created at startup, right?
At most one instance at start up. Asynchronous creation or lazy creation on first use are two other potential options. Speaking generally, not Unreal-specific.
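Lazy creation on first use is easy to sketch (generic illustration, not Unreal's actual API): the expensive tooltip object is only built the first time it is requested, and every later request reuses it, so registering tens of thousands of controls costs almost nothing at startup.

```typescript
class Tooltip {
  constructor(public readonly text: string) {
    // imagine expensive widget construction here
  }
}

let constructions = 0; // instrumentation to show how many builds happen

// Returns an accessor that builds the Tooltip on first call only.
function lazyTooltip(text: string): () => Tooltip {
  let instance: Tooltip | null = null;
  return () => {
    if (instance === null) {
      constructions++;
      instance = new Tooltip(text); // built on first use
    }
    return instance;
  };
}

// Registering a control's tooltip is now just capturing a closure...
const getTip = lazyTooltip("Compiles the current blueprint");
// ...the construction cost is paid only when the user actually hovers:
getTip();
getTip();
console.log(constructions); // 1
```

The at-most-one-shared-instance approach goes further still: instead of one lazy tooltip per control, a single global tooltip widget gets its text swapped in on hover.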
I once made the mistake of buying some sound effects from Fab. I had to download the entire Unreal Engine and start it, create a project, and then import the assets...
It took the whole afternoon.
It's no wonder UE5 games have the reputation of being poorly optimized; you need an insane machine just to run the editor...
State-of-the-art graphics pipeline, but webdev levels of bloat when it comes to the software... I'd even argue Electron is a smoother experience than the Unreal Engine editor.
Insanity
It's just like your computer and IDE, you start it up and never shut it down again.
Wouldn't it taking the whole afternoon be because it's downloading and installing assets, creating caches, indexing, etc?
Like with IDEs, it really doesn't matter much once they're up and running, and the performance of the product ultimately has little to do with the tools used in making it. Poorly optimized games have the reputation of being poorly optimized, and that's rarely down to the engine. Maybe it's the complete package, where it's too easy to just plop down assets from the internet without tweaking for performance or having a performance budget per scene.
Yet it is the engine dominating the industry and beloved by artists of all kinds.
To get UE games that run well you either need your own engine team to optimise it or you drop all fancy new features.
I was around back in the days when LCDs replaced CRTs and learned the importance of native resolution. I feel like recent games have been saved too much by frame generation and all sorts of weird resolution hacks... mostly by Nvidia and AMD.
I am kinda sad we have reached the point where native resolution is not the standard for high-mid-tier/low-high-tier GPUs. Surely games should run natively at non-4K resolution on my 700€+ GPU...
Games haven't been running at full native resolution for quite some time, maybe even the last decade, as they tend to render to a smaller buffer and then upscale to the desired resolution in order to achieve better frame rates. This doesn't even include frame generation, which trades supposedly higher frame rates for worse response times, so games can feel worse to play.
By games I mean modern AAA first- or third-person games. 2D and other genres will often run at full resolution all the time.
This is one scenario where IMGUI approaches have a small win, even if it's by accident - since GUI elements are constructed on demand in immediate mode, invisible/unused elements won't have tooltip setup run, and the tooltip setup code will probably only run for the control that's showing a tooltip.
(Depending on your IMGUI API you might be setting tooltip text in advance as a constant on every visible control, but that's probably a lot fewer than 38000 controls, I'd hope.)
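The on-demand property can be shown with a toy immediate-mode sketch (not any real IMGUI library's API): tooltip content comes from a callback that only runs for the control actually hovered, so thousands of unhovered controls pay nothing per frame.

```typescript
let hoveredId: string | null = null; // set by input handling elsewhere
let tooltipBuilds = 0; // instrumentation: how many tooltips were built

// Immediate-mode style: the control is "declared" every frame, and only
// the hovered one runs its tooltip callback.
function button(id: string, tooltip: () => string): void {
  // ...draw the button here...
  if (hoveredId === id) {
    tooltipBuilds++;
    tooltip(); // drawTooltip(text) would consume the result here
  }
}

function frame(): void {
  for (let i = 0; i < 1000; i++) {
    // Passing a closure, not a string, keeps the text lazy.
    button(`btn${i}`, () => `Tooltip for button ${i}`);
  }
}

frame(); // nothing hovered: zero tooltip work for 1000 controls
hoveredId = "btn42";
frame(); // exactly one tooltip is built
console.log(tooltipBuilds); // 1
```

As the parenthetical above notes, if your API takes the tooltip as an eager string argument you still pay the string construction per visible control per frame; passing a callback (or interning constants) keeps that cost proportional to what's actually shown.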
It's interesting that every control previously had its own dedicated tooltip component, instead of having all controls share a single system wide tooltip. I'm curious why they designed it that way.
How does this compare to the React-like approach (React, Flutter, SwiftUI)?
It seems like those libraries do what IMGUI does, but in a more structured way.