I find AI to be extremely knowledgeable about everything, except anything I am actually knowledgeable about. Then it's like 80% wrong. Maybe only 50% wrong. Either way, it's significant.
So the C-suite sees it churning out some basic code - not realising that code is largely wrong - and decides they don't need as many junior devs. Hell, might as well get rid of some mid-level devs as well, because AI will make the remaining mid-level devs more efficient.
And when there aren't as many jobs for junior devs, there aren't as many people coming up the pipeline to become mid-level or senior devs.
I know it sounds like the whole "Immigrants are lazy and leech off benefits, but also immigrants are taking all our jobs" kind of contradiction.
But the reality is that LLMs are very good at predicting what the next word might be, not what it should be.
So the output looks correct to people who don't actually know the domain, while people who do know can see it's wrong (though maybe not in all the ways it's wrong) and have to spend as much time fixing it as they would have if they'd just fucking written it themselves in the first place.
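To put it another way, here's a toy sketch with made-up numbers (not how any real model is actually implemented): the model ranks possible continuations by how likely they are to follow the text so far, and nothing in that process checks whether the result is true.

```
# Toy sketch, fabricated probabilities: an LLM scores candidate next
# tokens by likelihood given the context, not by factual correctness.
context = "The function returns the list sorted in"

candidates = {
    "ascending": 0.55,   # most common continuation in similar text
    "descending": 0.40,  # also plausible -- and maybe what the code really does
    "reverse": 0.05,
}

# Pick the highest-probability continuation. Correctness never enters into it.
next_token = max(candidates, key=candidates.get)
print(context, next_token)
```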
Besides which, by the time a prompt has been crafted carefully enough to get the LLM to generate its approximation of a solution, most of the work is already done: the programmer has constrained the problem, and the coding part is trivial in comparison.