In the mid-1980s, personal computers were released with graphical user interfaces controlled by a keyboard and a mouse. Take the Amiga 500, for example (https://en.wikipedia.org/wiki/Amiga).
In the 30+ years since, I think we can all agree that computers have become vastly more powerful, but the way we control them (a QWERTY keyboard and a mouse) has stayed essentially the same.
I would argue this has left us with a mess of large on-screen menus, awkward keyboard shortcuts and, in many cases, no choice but to move our hands back and forth between typing commands on the keyboard and clicking with the mouse.
So my question to you, dear HN reader, is whether you think this is a legitimate problem, or whether there are counter-arguments: for example, that our user interfaces are simply designed more efficiently nowadays.
Looking at Apple's development of the Touch Bar, I would say big companies believe this is a problem; but then again, Apple typically goes for very minimalist, 60%-style keyboards, so it's not clear-cut.
Let me know your thoughts!
I think the trend is clear: ideally you want nothing between you and your computer. Hence the rise of touchscreen phones, which merge the keyboard and pointing controls into the screen itself.
What I'm excited about is moving from 2D controls to 3D environments. It's still early days, but in VR you can create any sort of interface to your computer. Need more monitors? Just edit your scene. Need a bigger or smaller monitor? Easy, edit your config file. And those monitors you just created can be made to respond to touch or to your eye movements.
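To make that concrete, here's a toy sketch of what "monitors as config" could look like. Everything here is invented for illustration (the field names, units and `resize_monitor` helper are not from any real VR SDK); the point is just that a display becomes data you edit rather than hardware you buy:

```python
# Hypothetical scene config for virtual monitors in a VR workspace.
# Sizes are in metres, positions are (x, y, z) relative to the user.
scene = {
    "monitors": [
        {"id": "main", "width_m": 1.2, "height_m": 0.7, "position": (0.0, 1.5, -1.0)},
        {"id": "side", "width_m": 0.6, "height_m": 0.4, "position": (0.9, 1.5, -0.8)},
    ],
    "input": {"touch": True, "eye_tracking": True},
}

def resize_monitor(scene, monitor_id, scale):
    """Scale a virtual monitor: the VR equivalent of buying a bigger screen."""
    for monitor in scene["monitors"]:
        if monitor["id"] == monitor_id:
            monitor["width_m"] *= scale
            monitor["height_m"] *= scale
            return monitor
    raise KeyError(monitor_id)

# "Need a bigger monitor?" Just edit the data:
resize_monitor(scene, "main", 1.5)
```

Adding a third monitor would be appending one more dict to the list; no desk space required.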
The only hardware you'd need is a VR kit: a headset for vision and audio, plus gloves or Knuckles-style controllers for hand input.