Hacker News
Unanswered user makes question their MS thesis and answers self 2 years later (stackoverflow.com)
429 points by apitman on Jan 3, 2021 | hide | past | favorite | 97 comments


I'm really sorry for that guy.

Now that he knows for a fact that GUIs can be fast, he'll be suffering for the rest of his life when a GUI app is slow for any reason.

I'm only 50% joking here.


Lay tile just once and you'll spend the rest of your life seeing every fault in every tile job ever.


Oh wow, this is so true. Eating sausage and knowing how it's made are just very different things. And you can't quite go back to the former after having seen/done the latter.


I am assuming, based on the old saying, you are recommending NOT seeing how the sausage gets made?


Well, let me just say that they'll never taste quite the same. Obviously you'll need to renovate parts of your house over time, but as I summarize it to my friends, I care most about the parts of the renovation you won't be able to see. How is the paint job behind the wall-mounted heater? Did they clear out the rubble in that space under your kitchen? Did they tile over your old tiles (usually easy to see), and do you now have small spaces between the tiles where water can collect over time?


This is why my wife hates house hunting with me. I see all the flaws.


True. Source: My own experience.


Drywall, too.


And roofing.


Some defects can be tolerated and DIY’d away, but roofing, plumbing and foundations are something you really don’t want to discover later.


or bending and installing conduit


Also true for terminating a panel, punchdown, keystone, lacing, etc.


or painting


or cutting grass. my god. give me the tools and i'll give you a putting green fit for the pga


is this after you rip out the grass that is already there and then replace it with a grass that is amenable to a putting green? because if you can make my St. Augustine grass a putting green, then you, sir, are a god.


I can probably get it pretty playable and fast over the course of a season or two of working it shorter and shorter and rolling the playing surface smooth. At least municipal golf course tier.


Yep. Do it right once, or see it done right, and you'll never be the same. We need more people getting down to the fundamentals like this to keep everyone honest.

(Consider the Rick and Morty "true level" bit... and realize it's not that much of an exaggeration when it comes to stuff like this...)


Thanks, I hadn't seen that yet. Rick and Morty funny as usual.

For others interested: https://www.youtube.com/watch?v=Q1zBtJhgwBI


(OP here) The suffering hasn't increased that much. It was all painful before.


Glad to hear that, I guess :)


> Now that he knows for a fact that GUIs can be fast, he'll be suffering for the rest of his life when a GUI app is slow for any reason.

Not only slow GUIs but GUIs that demand tens or hundreds of MBs just to draw a few items


GUI libraries are heavyweight for a reason: lightweight GUIs do not adequately address accessibility, or internationalization.

If you were on my software project, and you weren't using native controls on macOS or Windows, one of the big two on Linux, or Electron, you'd better have a damn good reason.


Accessibility and internationalization are both very important, but I think it's possible to have both without bringing in an entire browser engine.


> GUI libraries are heavyweight for a reason: lightweight GUIs do not adequately address accessibility, or internationalization.

GUIs don't need to be heavyweight to be accessible or internationalized (except for text translations, which should be downloaded rather than installed)


I suspect the parent isn't talking about theoretical implementations but choosing an existing GUI framework for a project.

I don't know much about GUIs, but lightweight tools often fail to take accessibility into account, and hand-rolled tools almost always fail to.


Why should text translations be downloaded rather than installed? Oh, you mean not included by default in the installer? I'm not sure why that's very material here.


This isn't necessarily true; FreePascal's LCL integrates with native accessibility frameworks:

> The LCL should automatically provide accessibility for all of its standard controls, or allow the underlying widgetset to provide its own accessibility for the controls utilized directly from the widgetset.

https://wiki.freepascal.org/LCL_Accessibility


Win32 has accessibility and localization both, yet pure Win32 API programming results in tiny binaries that use a negligible amount of RAM compared to modern frameworks.


Hmm, what about Qt? Or video games?


Qt uses native controls on Windows or Mac. Not perfectly, but it's a viable solution. And it's one of Linux's big two.

Video games count as a damn good reason.


Qt doesn't use native controls, they're lookalike.

The point of the comment asking about Qt is most likely that Qt is not heavyweight (it does a great job even on embedded), thereby pointing out that there is a difference between Qt and the others you mentioned.

I'm not a fan of OOP and I think there are more interesting framework models - but Qt is definitely the best and most efficient GUI framework I've had the pleasure to work with.


I work with people to make great UXes. And now I live my life seeing horrible UXes everywhere and hating having to use them.


Color balance in photographs.


of course the first comment is someone "strongly" begging the user not to make a GUI library because they are "easy to get wrong" and "too big to be a fun hobby project"... i just don't get this mentality in programmers at all, but yet somehow it is rampant. i would maybe kind of understand this perverse form of insecurity / learned helplessness in the face of "complexity" for domains outside of one's training, experience, etc., but ... the whole point of programming is the sense that you can do whatever you want if you just write the code correctly. otherwise, what the hell are we doing here?


It’s not about helplessness, it’s about understanding that the difference between writing a toy _ over a week and writing the professional version is generally around 3+ orders of magnitude. Many people really don’t understand that gap.

I have worked with people actually trying to do several such projects, like implementing RSA encryption from scratch, a charting library, and several other similar projects they assumed were going to be a quick solution. It’s not that such projects are impossible, it’s that people starting them generally have no idea what’s actually involved.

That said, if you’re doing it for fun then feel free.


Pretty sure Linus never realised his toy operating system would turn into a 30 year project/career.

(I suspect RMS probably suspected the timeframe for GNU, at least for the collection of system utilities rather than the kernel, was going to be a multi decade piece of work.)


Linus was already doing a master’s thesis (like the OP.) He knew Linux wasn’t a weekend project, and he knew the orders of magnitude involved in creating a commercial OS.

No one can predict the longevity of a project, but that’s not the point.


I would bet good money that if you told Linus in late 1991 that he’d spend his entire career working on his “side project” and that it would be the major OS powering many billion and even trillion dollar new tech companies, and running on several billion battery powered and wireless networked pocket supercomputers in 2021 - and that he’d be some sort of software industry messiah - he’d have laughed at you (and probably viciously eviscerated you on a public mailing list for your stupidity, like he is wont to do...)


Hobby projects can be pretty much the same scale as master's thesis, especially for young people.


Implementing RSA is probably the only one I can get behind this advice for. It's easy to screw up, but it's also non-obvious that you screwed up. If someone actually needs the confidentiality you're trying to provide, the consequences can be far worse than the typical "program does not work".

Other than that, failure is how we learn, and maybe you will succeed.


A charting library is not that difficult.


I honestly can’t tell if you’re joking or not. I mean, at one end you can quickly get a toy version up and running; at the other end, success looks like: more than 3,200 open issues on GitHub, 4,112 commits, 22 branches, 228 releases, 121 contributors.

So, this is the perfect example of what I am talking about. Getting something you can generously call a charting library isn’t that hard, getting something worth the time building on it’s own merits is. If you want to have a fun side project go for it. However, if the goal is to actually make charts it’s extremely unlikely that starting from scratch is a good idea.


As a general rule, I don't trust "$thing is not that difficult" statements. Similar to statements of "Just do X", if it's not coming from someone who actually knows what it is that is being done, then I would assume it's more difficult than is being suggested.


I've been writing graphing/charting code off and on for over two decades. My code has been used in high end audio software. Sometimes it's faster and easier to write the simple for loop that draws lines or splines on a canvas than to incorporate a "full-featured" charting library that has to support every kind of chart, plot, graph, network, 2D, 3D, etc. imaginable. A charting library that takes an array and makes lines is not that hard.

And yet I still had a dev get pissed off by my offer to help with a charting problem. "Yeah, but was it D3?"

I'm generally an advocate for buy-before-build in software, but I don't trust the kind of dev who won't even acknowledge that the easy part of a problem is easy (or that true experience isn't framework specific).


Indeed.

A couple of months ago, a junior developer mentioned he'd need a library for a bar chart, and asked for directions on how to use it in an Xcode project. It turned out to be easier to draw the chart with a few rectangles, and the developer had one of those great a-ha moments.


It's a way to disguise your own ineptitude and limits of knowledge as sage advice or learned wisdom. Notice how that comment didn't answer his question or provide any extra context, but it still made the poster appear to be an authority.


The 'first comment' was in the context of the reddit!OP's reddit post that started as:

> I am currently working on a standard Windows desktop application <...>

> and have decided to write a GUI framework on my own

If you want to write an application, rolling your own GUI framework is obviously a horrible idea.

In this particular case the reddit!OP converted their GUI framework experiment into a master's thesis, but that does not make the advice in the 'first comment' less valid. Did they write the application they wanted?


(OP here) I did! And it was enjoyable and didn't take that long (about 2 or 3 weeks initially). I think in the end it actually was still faster to just write the GUI by myself than to use Qt, simply because I would have to get used to Qt and put up with all its bull. But even if it wasn't, I'd still prefer 3 weeks of challenge over 1 week of misery.


Congratulations! I wish you a long stay in the life situation where you can freely trade time for challenge ;)

Also, I wouldn’t call Qt ‘misery’. Drudgery, maybe?


It's a question and answer site, not a question and unasked-for advice site. If someone's taken the time to write a thought-out question, then people should assume they've put research into it!


> not a question and unasked-for advice site

In practice, I think this is not correct. You will almost always be told that your question is a bad idea and you need something different (even if you already know it is a bad fit and you've said so...). I've had to make peace with the fact that it's probably a cultural thing to feel like the questioner is in fact mentally handicapped in some profound way.


Do what you can't

(if you recognise the reference, you're right; if not, it's here: https://www.youtube.com/watch?v=jG7dSXcfVqE)


Go ahead just don’t waste other people’s time (colleagues) and money (stakeholders) - there’s opportunity cost to everything.


Especially for a hobby project. So what if you never finish or it never gets used for anything "real". Taking on crazy challenges can be really fun.


Perhaps it would have been more correct and more effective to write "there is no money in rolling your own GUI library".


Don’t listen to people with a “can’t do” attitude. There are a lot of them and guess what: they never accomplish anything and hate people who do.


How about an immediate-mode renderer that works like a Rust async executor?

The executor could run all of the rendering closures on one core and benefit from caching.

It could detect slow rendering closures and automatically switch them to buffered mode and run them on a "slow renderers" thread. This would keep the rest of the UI snappy. It could also tint those portions of the UI so users can see which app is the culprit of UI slowness.

I hope that Rust can someday be used as the basis of an OS that does not rely on sandboxing for security, but instead uses the compiler to enforce security. Such an OS could use cooperative multitasking and use a lot less energy and less expensive hardware than our current multi-core CPU + MMU + kernel ring model.
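A rough Rust sketch of the "demote slow renderers" idea above. Everything here (the Scheduler type, the per-closure time budget, the buffered list) is hypothetical, not an existing API: each rendering closure is timed, and any closure that blows the budget is demoted to a buffered slow path so it stops blocking the per-frame pass.

```rust
use std::time::{Duration, Instant};

/// Hypothetical scheduler: run render closures each frame and demote
/// the slow ones to a "buffered" list (a real implementation would
/// redraw those asynchronously on a slow-renderers thread and tint them).
struct Scheduler {
    budget: Duration,
    buffered: Vec<usize>, // indices of closures demoted to the slow path
}

impl Scheduler {
    fn new(budget: Duration) -> Self {
        Scheduler { budget, buffered: Vec::new() }
    }

    /// Run one frame; any closure exceeding the budget is demoted.
    fn frame(&mut self, renderers: &[Box<dyn Fn()>]) {
        for (i, render) in renderers.iter().enumerate() {
            if self.buffered.contains(&i) {
                continue; // already on the slow path, skip in the fast pass
            }
            let start = Instant::now();
            render();
            if start.elapsed() > self.budget {
                self.buffered.push(i); // demote: too slow for the frame budget
            }
        }
    }
}
```

The snappiness guarantee comes from the fast pass never waiting on a closure that has already proven itself slow.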


I was trying to write this while learning Rust: https://github.com/avantgardnerio/rsui

I now have some production experience in Rust and have learned to shy away from Piston. I think you've inspired me to try again with SDL2 and a better approach.


To use the compiler for security you would need a way to prove that a given piece of machine code was generated by a trustworthy compiler, and not modified after it was loaded.


Yes. The idea is that all software is distributed as source code. The OS compiles the software itself.

To prevent software modification, we need a new CPU with two memory controllers, one for data and one for code. The code memory controller can only read, not write. A separate CPU runs the compiler and writes the code RAM.


That’s pretty awesome.

Admittedly, all I know about immediate-mode and retained-mode GUIs I learned from this post, but I wonder why there’s no library that uses both. Could a library: 1. Use immediate mode to calculate the display. 2. Use retained mode as long as the amount of change is below a certain threshold. 3. Switch back to immediate mode above the threshold. And so on.


In practice they do tend to be mixed, if not in exactly the way you describe. Immediate mode systems often retain/cache a lot of state under the hood for performance reasons, and determine exactly what needs to be redrawn on each iteration. Virtual doms (React, Vue, etc) are a great example of this.


> but I wonder why there’s no library that uses both

Sciter (https://sciter.com) uses both, internally as well as at the user's level.

For example you may want to define this:

   var bodyElement = document.body;

   bodyElement.paintForeground = function(graphics) {
     ... draw something 
     ... on top of <body> content 
     ... using the graphics 
   }
This lets you benefit from both: retained mode (HTML DOM, cached layout) and immediate-mode rendering (paint handlers on DOM elements).


They meet in the middle when we talk about layout computation. Layout often flips between needing a globally-optimized solution of some kind (how to pack many boxes given some parameters of box size) and needing a fast, simple dynamic adjustment (the same drawing algorithm with different numbers passed in). And what happens is that when the layout gets complex, the simple dynamic adjustment impacts the overall solution. Parts of the layout may also contain their own state, like window positioning. So even in immediate-mode GUIs there are retained elements, and vice versa.

It's a complex tangle, all in all.


The difference is in the programming model. It's probably quite easy for the library itself to support both models.

If you have some UI in retained mode, chances are that rendering it live every frame will lead to unacceptable performance.

But if you have UI in immediate mode, you can probably switch to retained - essentially lazy rendering until something else happens. But you'll have to be very careful that your render calls are indeed pure functions, and cannot produce different outputs unless some input has indeed changed.
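A minimal Rust sketch of that lazy switch, assuming the render calls really are pure: hash the input state and skip the draw when the hash is unchanged. The Lazy type and its fields are made-up names for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical lazy wrapper: re-run a (pure) render function only
/// when its input state has actually changed.
struct Lazy {
    last_hash: Option<u64>,
    redraws: usize, // counts how many times we actually drew
}

impl Lazy {
    fn new() -> Self {
        Lazy { last_hash: None, redraws: 0 }
    }

    fn render<S: Hash>(&mut self, state: &S, draw: impl Fn(&S)) {
        let mut h = DefaultHasher::new();
        state.hash(&mut h);
        let hash = h.finish();
        if self.last_hash != Some(hash) {
            draw(state); // only reached when the state changed
            self.last_hash = Some(hash);
            self.redraws += 1;
        }
    }
}
```

Note this is exactly where the purity requirement bites: if `draw` reads anything not captured in `state`, the cache silently serves stale output.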


I don’t buy it. The creator of Dear IMGUI himself says it’s slower. These posts never talk about how their compositor (if they’re even using one) maintains performance.

Are you building vertex buffers? Are you creating backing layers? These are the critical details, not whether or not the API itself “looks” like an immediate mode or retained mode API.

Even the immediate mode APIs retain information!

As soon as you do compositor work, you need framebuffers, and you can kiss perf goodbye as you realize you can’t just destroy panels left and right without some abstraction to efficiently use video memory.

Oh, you can’t paint then scale later? Oh, so to do transparency effects you have to redraw the primitives every frame with new alphas instead of sending a different alpha to your shader? OK. Sure.

What’s your invalidation strategy? You don’t have one because it’s all “immediate,” sure. That’s really interesting. If there’s no performance difference, then well, hmm, why didn’t we do this back in the win32 era?

Oh well, of course, it’s because there is a performance difference. And it’s staggering.

The most critically important work I’ve seen in modern GUI architecture has nothing to do with the API itself and everything to do with the compositor. You can make the API almost always look like anything you want. Under the hood, it’s frametime life or death depending on how you rasterize and display large amounts of UI and typesetting, and panel layout.


(OP here)

>Are you building vertex buffers?

yes

>Are you creating backing layers?

I am not sure what those are.

>Even the immediate mode APIs retain information!

Yes, immediate mode only refers to the way the API is designed, it has nothing to do with how the framework works internally.

>Oh, you can’t paint then scale later? Oh, so to do transparency effects you have to redraw the primitives every frame with new alphas instead of sending a different alpha to your shader? OK. Sure.

Yes, in my implementation, I redraw every primitive every frame.

>What’s your invalidation strategy? You don’t have one because it’s all “immediate,” sure. That’s really interesting. If there’s no performance difference, then well, hmm, why didn’t we do this back in the win32 era?

Because the graphics hardware wasn't there. Today's GPUs are crazy fast, which enables you to just push tens of thousands of vertices to the GPU and be done in less than a millisecond. In the early days this all had to happen on the CPU, which was way too slow for this, so you had to be very careful about what to redraw.
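To make the "rebuild everything every frame" approach concrete, here is a minimal Rust sketch (the DrawList and build_frame names are hypothetical, not from the thesis): every widget re-emits its primitives into a fresh vertex buffer each frame, which would then be uploaded to the GPU in one draw call.

```rust
/// Minimal immediate-mode sketch: each frame, widgets re-emit their
/// primitives into a fresh vertex buffer.
#[derive(Default)]
struct DrawList {
    vertices: Vec<[f32; 2]>,
}

impl DrawList {
    /// Emit a rectangle as two triangles (six vertices).
    fn rect(&mut self, x: f32, y: f32, w: f32, h: f32) {
        let (x2, y2) = (x + w, y + h);
        self.vertices.extend_from_slice(&[
            [x, y], [x2, y], [x, y2],   // first triangle
            [x2, y], [x2, y2], [x, y2], // second triangle
        ]);
    }
}

fn build_frame() -> DrawList {
    let mut dl = DrawList::default();
    dl.rect(0.0, 0.0, 100.0, 20.0);  // e.g. a button background
    dl.rect(0.0, 30.0, 100.0, 20.0); // another widget
    dl // a real renderer would upload dl.vertices and issue one draw call
}
```

For a few thousand widgets this is tens of thousands of vertices per frame, which, as the comment above says, a modern GPU chews through in well under a millisecond.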


The answer compares resizing performance of their application to Spotify's Electron app, which has a Chrome browser doing the rendering. Not a very fair comparison. I assume their thesis doesn't do that.


As far as I understand Spotify does not use Electron, it uses the Chromium Embedded Framework.



(OP here) I addressed that caveat in the later part of the answer. I also noted that I don't even think the performance difference is attributable to immediate vs. retained mode. This was just a comparison against existing applications.

As far as I am concerned though, it is pretty irrelevant to the user if an application is slow because the developer made a mistake, or because they wanted it that way.


It's fair because they're both GUIs. It shouldn't have to be anyone's problem that a set of developers chose to pack a whole browser inside of their user interface.


If the question is about comparing the performance of immediate mode and retained mode, the comparison should be between two otherwise-similar applications. There are a lot of applications that are faster than Spotify, but that says more about Spotify than about those applications.


I don't understand this. Isn't it obviously slower if you redraw every window in every frame instead of just redrawing what changed?


In order to know "what changed", you need at the very minimum to maintain a meta-state about your application state.

In most retained-mode GUIs, this meta-state is often a copy of a significant portion of your application state, but with a very different structure. To maintain the integrity of both copies, you then need a layer of events and/or listeners on top of that, with a complex and dynamic control flow.

Retained mode optimizes for display throughput, which mattered a few decades ago. Immediate mode is better for latency and consistency, which are much more important now.


it's "obvious" until you learn that most games (with way higher perf requirements than anything else) are immediate mode rendered. as with everything it depends - in this case, on how much usually changes, and how nested your data is (DOM trees are O(n^3) to do proper retained mode diffing, whereas full rerendering every time is O(n))

React's reconciliation bridges this - immediate mode programming model, retained mode commit model, with key-based bailout to solve the O(n^3) problem. There are reasonable disagreements as to how necessary this is, of course.
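A toy Rust sketch of the key-based part, assuming keys are unique strings (reconcile and Op are made-up names): with keys, deciding which children to mount and unmount is a couple of O(n) set passes rather than a full tree-edit-distance diff.

```rust
use std::collections::HashSet;

/// Toy keyed reconciliation: compare old and new child key lists in
/// O(n) and report which children must be created or removed.
#[derive(Debug, PartialEq)]
enum Op {
    Create(&'static str),
    Remove(&'static str),
}

fn reconcile(old: &[&'static str], new: &[&'static str]) -> Vec<Op> {
    let old_set: HashSet<&str> = old.iter().copied().collect();
    let new_set: HashSet<&str> = new.iter().copied().collect();
    let mut ops = Vec::new();
    for &k in new {
        if !old_set.contains(k) {
            ops.push(Op::Create(k)); // key not seen before: mount it
        }
    }
    for &k in old {
        if !new_set.contains(k) {
            ops.push(Op::Remove(k)); // key gone: unmount it
        }
    }
    ops // keys present in both lists are kept (and diffed recursively)
}
```

The bailout is the point: a keyed child whose key survives is reused in place, so the expensive comparison never has to consider reorderings of unrelated subtrees.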


> it's "obvious" until you learn that most games (with way higher perf requirements than anything else) are immediate mode rendered.

Games also tend to be the only/main thing running at a time and constantly update large parts of their display. This is not true for most applications.


Tracking what changed has its own complexity.

If you have transparent or moving elements you have to identify the components behind them and redraw those as well. You could use buffers for each element to avoid some of it, but then you may have to update several buffers for various changes.

If you have elements that are updated a thousand times per frame (progress bars), do you update the whole element every time, update only the affected region on every change, or try to somehow collect changes that might affect different parts of it into a single update operation?
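One common answer to that last question is to coalesce every invalidation that arrives during a frame into a single dirty rectangle and redraw that region once. A minimal Rust sketch, where DirtyTracker and its methods are hypothetical names:

```rust
/// Sketch of coalescing many small invalidations (e.g. a progress bar
/// updated hundreds of times per frame) into one dirty rectangle.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Rect { x: i32, y: i32, w: i32, h: i32 }

#[derive(Default)]
struct DirtyTracker {
    dirty: Option<Rect>,
}

impl DirtyTracker {
    /// Grow the pending dirty region to cover another invalidated rect.
    fn invalidate(&mut self, r: Rect) {
        self.dirty = Some(match self.dirty {
            None => r,
            Some(d) => {
                // bounding box of the old dirty region and the new rect
                let x = d.x.min(r.x);
                let y = d.y.min(r.y);
                let x2 = (d.x + d.w).max(r.x + r.w);
                let y2 = (d.y + d.h).max(r.y + r.h);
                Rect { x, y, w: x2 - x, h: y2 - y }
            }
        });
    }

    /// Take the accumulated region; the frame then redraws it exactly once.
    fn take(&mut self) -> Option<Rect> {
        self.dirty.take()
    }
}
```

The trade-off is visible in the bounding-box merge: two small, far-apart invalidations produce one large region, so real systems often keep a short list of rects instead of a single box.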


Very cool, and as a lapsed GUI/graphics guy this makes complete sense to me. The bottleneck in GUIs is rasterizing and layout, not cycling through the lists of sliders.

How often you repaint or have to relayout should depend on only two things. How often is your data changing? And how soon does the user need to see that new data?

Even things like moving the mouse or view are just data changes. (In fact, the biggest messes you'll get into when writing GUI and render code is when you've failed to incorporate something into your scene or data model and end up handling it with a lot of special case code.)


It's interesting because this matches every GUI that has to respond in real-time.

If you look at ALL the digital workstations, they're doing their own rendering. And they all punt on text scaling.


What were the problems that repelled him? I see some discussion about immediate vs. retained mode, but what were the gripes that led to re-inventing something?

Maybe start with X and explain what's so bad about that. Athena is pretty damn bare bones. Fast forward 40 years, what about embedded Qt? It's small and fast. What's the problem with it?

I'd be curious to know his complaints and how he addressed them.


(OP here) The main issues I had with them had nothing to do with immediate vs. retained mode, but simply with the unnecessary constraints the big GUI frameworks put on you.

Usually the GUI framework wants to own the project. I need to build an application IN Qt or IN C# with WPF. That's just unnecessary. There is no reason why these libraries can't just provide a header with some functions that I can call, and that's it. The notion that I have to use C# for WPF seems completely ridiculous to me. Or that I don't get to use my own build system (Qt) or choose how I want my control flow to happen and so on. I want the GUI framework to be a library to my application, not the other way around.

I hate all of these build shenanigans. I had a C++ application, that I built with a .bat file. I was not gonna port that to C# or have some bullshit wrapper around C#, or port the whole thing to Qt, or switch my build system. That WPF and Qt required this of me is just unnecessary, and I was super annoyed that everybody thinks that is just ok and not a big deal. Stuff like this makes modern programming a slog and kills my productivity and morale.

I don't know about embedded Qt, but after seeing that the standard Qt dev download is 40 GB, I already knew this was not going to work, because 99% of that is going to be stuff I don't need or want, and (as it always goes with these things) it is going to cause a lot of friction, because, of course, I will still have to interact with those things.

So, yeah, basically I just couldn't find a small, simple, non-intrusive, no arbitrary constraints GUI library that looked halfway decent.


Would be nice to read the full thesis. This is a pretty naive question, but I wonder if retaining the current frame (just as pixels) and only redrawing when there are actual changes is a good middle ground? Is this something already being done in immediate mode GUI libraries?


Determining where the changes are would likely take up the bulk of the redraw work in this scenario, I would think, since you'd (to some extent or another) have to draw the new state to be able to compare it to the current state.


That's quite a feat.

The best I could manage was looking for an error message when I had a problem with mp3 ID3 tag handling, finding an unanswered, four-year-old question on SO which was exactly my problem, and eventually answering it after solving it myself.


Do popular GUI frameworks that present an immediate-mode interface actually use an immediate-mode implementation under the hood?


"Dear Imgui" is pure immediate mode both interface and implementation wise so far as I can tell.


Although it is worth noting that calls to Dear ImGui don’t directly render, but rather build up state to batch-render later with your graphics API of choice.


Fair.

"blitmediate"


The Windows API asks the application to redraw the window on certain events (e.g. resize).


The most popular immediate mode system I'm aware of is React, which definitely uses retained under the hood.


As a person who has used a slew of UI frameworks and written one or two: All of this is happening at a layer considerably lower than React.

This is what's happening when the browser redraws when you scroll for example. Arguably it has to do with the DOM, but certainly not how the DOM gets updated.


I think I'd argue that React is immediate mode with caching and then a retained-ish driver layer to talk to the underlying DOM.

But given the constraints of the "talking to the underlying DOM" part it's going to be a bit handwavey no matter how you argue it, I suspect.


Oftentimes immediate and retained can be mixed together (e.g. https://docs.rs/conrod_core/0.71.0/conrod_core/guide/chapter...)


Immediate mode rendering is similar to what the Firefox WebRender changes are about:

https://hacks.mozilla.org/2017/10/the-whole-web-at-maximum-f...


We need a stateful GUI markup standard so that each language doesn't have to reinvent GUI's or GUI adaptors.


That's cool. Thanks for sharing it.

I tend to use the built-in framework (Swift native API). WFM.



