Hacker News

The previous approaches learned screen-space textures for different features, plus a feature mask to compose them.

Now it seems to actually learn the topology lines of the human face [0], much as 3D artists learn them [1] when they study anatomy. It even uses quad grids and places the edge loops and poles in similar locations.

[0] https://nvlabs-fi-cdn.nvidia.com/_web/alias-free-gan/img/ali... [1] https://i.pinimg.com/originals/6b/9a/0c/6b9a0c2d108b2be75bf7...
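For illustration, the "previous approach" described above (per-feature screen-space textures blended by a learned mask) can be sketched roughly like this; all names, shapes, and the softmax normalization are my assumptions, not details from the papers:

```python
import numpy as np

# Hypothetical sketch: each feature has its own screen-space texture,
# and a learned per-pixel mask blends them into the final image.
H, W, C, F = 4, 4, 3, 2               # height, width, channels, feature count

rng = np.random.default_rng(0)
textures = rng.random((F, H, W, C))   # one RGB texture per feature
mask_logits = rng.random((F, H, W))   # learned per-pixel mask logits (assumed)

# Softmax over the feature axis so per-pixel blend weights sum to 1.
weights = np.exp(mask_logits)
weights /= weights.sum(axis=0, keepdims=True)

# Composite: per-pixel weighted sum of the feature textures.
image = (weights[..., None] * textures).sum(axis=0)

assert image.shape == (H, W, C)
assert np.allclose(weights.sum(axis=0), 1.0)
```

The point of the contrast in the thread: a compositing scheme like this operates entirely in screen space, so nothing forces it to discover a consistent 3D surface parameterization the way the new model apparently does.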



Yes. It's interesting that imposing what are essentially 2D invariance constraints leads the network to learn what we regard as 3D concepts.


There are some interesting 2D cues our eyes use for 3D. If something rests on the ground, the horizon line crosses it at the viewer's eye height, so how much of it appears above versus below the horizon encodes its size and distance. Parallax, likewise, is a 2D phenomenon.



