Hacker News

It's good that we are getting visualization tools to explore artificial neural networks. The problem of "transparency" (understanding what a trained network actually does, so that it teaches us something instead of remaining a "black box" that "just somehow works") is in fact much reduced when the right visualization is applied, e.g. one that reveals the patterns the connection weights have come to identify.
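As a minimal sketch of what "weights identifying patterns" means (an illustrative toy, not any particular tool mentioned here): a single logistic neuron is trained to detect a vertical bar in 3x3 images; printing its weight matrix afterwards makes the learned pattern directly visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_example():
    # Noisy 3x3 image; half the time the middle column is a solid bar.
    img = (rng.random((3, 3)) < 0.3).astype(float)
    has_bar = rng.random() < 0.5
    img[:, 1] = 1.0 if has_bar else 0.0
    return img.ravel(), float(has_bar)

# One logistic neuron, trained by stochastic gradient descent.
w = rng.normal(0.0, 0.1, 9)
b = 0.0
for _ in range(2000):
    x, y = make_example()
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = p - y              # gradient of the logistic loss
    w -= 0.5 * grad * x
    b -= 0.5 * grad

# "Visualizing" the weights: the middle column dominates,
# i.e. the neuron has come to identify the bar pattern.
print(np.round(w.reshape(3, 3), 1))
```

The same idea scales up: reshaping a first-layer weight vector back into the input's shape and rendering it as an image is the classic way to see what feature each unit responds to.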

It is a primary goal to understand what causes function to emerge, which is necessary if we want to build further (and to tweak, hack, refine, and export what we have). This point of view should be a basic tenet, yet it does not seem to get the proper focus in this field.


