Well, that required you to understand the code. My reply was disbelief at the idea of codebases that no one understands arriving in the near term, given the current output. People don't trust the output, so they do what you did: review it and catch the potential bug before going further with it.
The issue I take with it is the claim that we can trust the output of AI models the way we trust the output of optimizing compilers. It isn't even close: I can't just fire and forget a prompt the way I can write code and trust, with 99.999% confidence, that the optimizer will produce the desired binary that works as intended.