
No, designers did not get lazy. Devices got weirder.

It used to be that you could check the width and height of the viewport and say something like “320px wide? Must be a touch interface, deploy the big buttons”. Then tablets got big and it was like “1024px? Could be a laptop, but it’s probably an iPad, which has a touch interface, deploy the big buttons”. Then laptops got touch screens, then the Surface Studio came in and was like “HAHAHAHA”.

Now the game is “1920x1080? Could be a big tablet with touch, or a 1080p monitor without touch, or a non-maximized window on a Surface Studio with touch, or maybe it’s a monitor without touch hooked up to a laptop with a touchscreen and our window could get moved between them at any time...”

Nowadays, there’s no single reliable way to tell if a page is going to have to support touch until it gets a touch event, by which point you’ve already rendered the UI and it’s too late.
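For what it's worth, the common workaround is to render the fine-pointer UI by default and upgrade the moment you see evidence of touch. A minimal sketch, assuming you key styles off a class (the `touch-ui` class name here is invented for illustration):

    // Render the mouse-oriented UI by default, then switch the
    // moment the first touch event arrives. `touch-ui` is a
    // hypothetical class the stylesheet would key off.
    window.addEventListener(
      'touchstart',
      () => document.documentElement.classList.add('touch-ui'),
      { once: true, passive: true }
    );

Which is exactly the "too late" problem: the big buttons only show up after the user has already fumbled their first tap.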



A simple solution would be to ask the user what they want. I genuinely don't understand why this isn't common, instead of trying to guess.


You don’t have to ask the user: there’s a media feature for querying whether the device’s primary pointer is coarse or fine[0] (though, of course, it relies on the OS not lying, which is not a given).
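The same query is reachable from script via matchMedia, and it fires change events, which in principle covers the window-dragged-between-screens case upthread (whether the OS actually reports the change is another matter). A sketch, reusing the hypothetical `touch-ui` class from above:

    // True when the primary pointer is coarse (e.g. a finger).
    // '(any-pointer: coarse)' would instead match if ANY attached
    // input device is coarse.
    const coarse = window.matchMedia('(pointer: coarse)');

    function applyPointerMode(isCoarse: boolean): void {
      document.documentElement.classList.toggle('touch-ui', isCoarse);
    }

    applyPointerMode(coarse.matches);
    // Re-run if the effective pointer changes, e.g. the window
    // gets moved to a touch screen.
    coarse.addEventListener('change', e => applyPointerMode(e.matches));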

[0] https://developer.mozilla.org/en-US/docs/Web/CSS/@media/poin...


Some websites probably do have settings for this, but that gets to a point someone else mentioned: you'd basically have to design, build, test, and maintain two UIs. Except now with the kicker that one of those layouts is only used by the 5% of your userbase that both knows the option exists and chooses to take you up on it.
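To be fair, the setting itself is the cheap part; the second UI is the expensive part. A hypothetical sketch of such a toggle, persisting the choice and falling back to the media query as the default guess (storage key and class name invented for illustration):

    const KEY = 'ui-pointer-mode'; // hypothetical storage key

    function setPointerMode(mode: 'touch' | 'fine'): void {
      localStorage.setItem(KEY, mode);
      document.documentElement.classList.toggle('touch-ui', mode === 'touch');
    }

    // On load, honor an explicit user choice; otherwise guess
    // from the pointer media query.
    const saved = localStorage.getItem(KEY);
    setPointerMode(
      saved === 'touch' || saved === 'fine'
        ? saved
        : window.matchMedia('(pointer: coarse)').matches ? 'touch' : 'fine'
    );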



