Hacker News

> are you teaching the class or taking it?

I teach it; my background is in my profile, and my research focuses on CS education.

Scale and impact do matter, I wholeheartedly agree. However, I stand by my point that genAI mirrors how humans learn: repetition of previously observed actions. As part of my dissertation, I argued that humans operate using 'templates', or previously established frameworks / systems. Even in higher cognitive tasks like problem solving, we rely on workflows we were trained on previously. Soloway referred to problem solving as a mental set of "basic recurring plans" [1], and the old 1980s Usborne children's books required kids to retype code [2]. For creative tasks, depending on the actor's background, Method acting and the Meisner technique both tell people to draw from previous experiences and observations to develop a character. This behavior is similar in many areas: music, dance, martial arts, cooking, language acquisition, etc.

I am not making an ethical argument that GenAI violating copyright is okay because that's what humans do. I'm arguing that GenAI mirrors how humans learn: we observe a behavior and attempt to recreate it. The difference is that humans can extract a fraction of a behavior and use it as part of something larger, while GenAI cannot to the degree humans do. I'm sure GenAI would struggle to recreate "Who Framed Roger Rabbit?" because of the film's two polar-opposite visual styles (cartoon and live action).

In regards to your "If you’re talking ethics, talk about impact" section, it's a bit of a loaded question. One side of the conversation could state that GenAI is helping many people who lack confidence in their creative ability to realize their ideas, while the other could state it's making things harder for artists.

Yes, it absolutely is hurting artists, and I fully support the recent writers' strike over AI concerns. But I do not believe that diminishes how the mathematical models used in GenAI mirror our own skill acquisition.

[1] https://ieeexplore.ieee.org/document/5010283

[2] https://usborne.com/us/books/computer-and-coding-books



I took an AI ethics course from a state-backed school (Georgia Tech), and the answer to questions that weren’t “that’s illegal based on protected status” was “well, it depends.” Which, sure, that’s true, but maybe not helpful.

In my view it encouraged nihilism and apathy instead of developing ethical frameworks. Through that lens, I feel teaching a course might leave you more limited in the range of heuristics you’re willing to accept or endorse. Though I'm happy to defer to your personal experience.

A paper that comes to mind often from HCI is “Do Artifacts Have Politics?”, which looks at the impacts of technologies divorced from creator intent. I feel that’s similar here.

You’re not wrong about the mechanism by which it’s created. But I would argue that’s the least important part, ethically anyway.

Saying “strip mining with heavy industrial machines mimics laborers using shovels” is true to a degree, but perhaps not the most important piece of information.

I’m not saying you’re making that argument. I guess I'm just not totally sure what outcome you were looking for in sharing your original comment. I hear your comparison, agree with it, and find it interesting to view through that lens. I just wasn’t sure if there was a deeper intent in sharing it.


Apologies for the delayed response, but on the bright side it's faster than I respond to some emails XD. I should preface this by saying the course I was referring to is "Intro to AI", not "Ethics in AI". I only have a single lecture dedicated to ethics, but I do try to pepper it in as we cover topics. My original comments were addressing "how humans learn" rather than any higher-level ethical concerns. Your last section on "deeper intent" is correct: there wasn't any.

I have a pretty neutral stance toward GenAI, partly due to personality, but it also stems from my background as well as recognizing students' interests. Prior to CS Education, my master's thesis involved computer vision for catching "high valued targets", but it was also funded to help minimize human trafficking. I have students in my classes who are very interested in going to work for defense companies like Lockheed and Raytheon, and others who are really interested in using AI for "social good" areas like healthcare and education. I try to stay neutral because: A) I hated the professors I had who would use their lecture time to express their political opinions, B) opinions opposite to a student's may otherwise discourage them from learning the material, and C) my primary focus is to make sure they learn the material and do it "right".

When I started teaching, I used the analogy that if they go on to write the software for the life support machine I'm hooked up to, it WORKS. If someone wants to go on to use AI to create weapons, I can't stop them any more than I can force them to read a chapter or convince the person beside me on the highway to slow down. I just work to ensure they do it correctly (which includes being mindful of the ethical ramifications of using algorithm X for task Y).

What would an ethical framework for designing AI for a drone even look like? I have no idea, nor is it something I'm interested in delving into. I got out of face recognition for those reasons. Does an ethical framework for GenAI require the same elements, a fraction of them, or a completely different set of guidelines? Who gets to decide them - the 'experts' in AI, the government, society as a whole?

Personally, I've made the comment that the current opinions on regulating AI amount to "everyone trying to be AI's parent". We're never going to agree, because everyone has a different opinion on the "right" way to handle AI. Plus, human cognition is so unknown and illogical that we may never figure out a way to perfectly replicate human intelligence. I instead try to stay somewhat optimistic and marvel at the math we've used to create "AI".



