I agree. Most people fail to see it, because they focus on all the effort they have to put into producing good results (regardless of their actual job, by the way). Programmers keep thinking their jobs are secure because, after all, we are the ones who write the software, even if it's an ML system. (But ML systems don't necessarily need much coding.)
However, software development is probably the most thoroughly documented job: the one with the most information online about how to do it right, and the one with the best available training set. There is a lot of quality code out there (and bad code too), plus a huge number of questions and answers with sample code (Stack Overflow and the like). Maybe we've even already written most of the software needed in the near future; it's just not available to everyone who needs it, because no one knows everything that's out there, and the pieces are scattered across many repos.
Now, the one critical thing I think is still missing, based on how we actually create software, is an agent with reasonable memory: one that can handle back references to what it has been told earlier, i.e. one that can carry on an iterative conversation instead of responding to a static, one-time prompt.
This might be a big leap from where we are now, or it may just seem like one; AI/ML has kept surprising us with such leaps for the past decade or so. Another thing that may be needed is the ability to ask clarifying questions (again, as part of an iterative conversation). I'm not sure about this latter one, but it is definitely how we do the work.
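To make the "iterative conversation with memory" idea concrete, here's a minimal sketch of such a loop. Everything in it is hypothetical (the `generate` function is just a stand-in for whatever model backend would sit there); the point is only that the full history is passed on every turn, which is what lets a later request refer back to an earlier one.

```python
def generate(history):
    # Placeholder for a real model call. A real implementation would
    # feed the whole history to the model so it can resolve back
    # references and decide whether to ask a clarifying question.
    last = history[-1]["content"]
    if last.strip().endswith("?"):
        return "Could you clarify what you mean by: " + last
    return "Acknowledged: " + last

def chat(history, user_message):
    # Append the new message, generate a reply with the full history
    # in view, and record the reply so future turns can reference it.
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    history.append({"role": "agent", "content": reply})
    return reply

history = []
chat(history, "Write a CSV parser.")
chat(history, "Make it handle quoted fields too.")
# The second request only makes sense in light of the first --
# that back reference is exactly what keeping the history enables.
```

A static, one-time prompt is the degenerate case where `history` is thrown away after every call; the difference between the two is small in code but large in what kind of work the agent can do.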