The same way they will impact the rest of developers and programmers, whatever that turns out to be.
At the risk of coming off as yet another contrarian: there is no AI. There is a buzzword, hyped to the sky and beyond with no sense of moderation or restraint, by the same folks who tried to cram their AR/VR nonsense down the throat of everyone who'd listen; who then pivoted to crypto/blockchain/decentralized/Web3 before pivoting yet again to our shiny new fancy term.
"Intelligence" is an abstract, ephemeral, chiefly human concept. Some of the intelligence we are born with is innate: a baby doesn't need to predict the next N "tokens" most likely to follow their brain's inner UTF-8 "prompt" to seek the attention of a parent. Some of it is based on experience: you can't encode a 20-year career of coding, debugging, brainstorming, and establishing viable communication channels with other people into an LLM/GPT trained on books and f* Reddit.
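For what it's worth, the "predict the next token" mechanic, stripped of its billions of parameters, is nothing more exotic than the following toy sketch (a bigram frequency table; it has nothing to do with any real model's internals, it just shows the shape of the trick):

```python
# Toy "next token" predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent follower. Real LLMs learn a
# far richer conditional distribution, but the output contract is the same:
# prompt in, statistically likely continuation out.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally every adjacent word pair (a bigram table).
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    followers = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(followers, key=followers.get)

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

No understanding, no intent: just frequencies. Scale the table up by a few orders of magnitude and you get the prompt box everyone is so excited about.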
Every single word you're reading now is an abstract, ephemeral, chiefly human concept: a mix of experience + thought + emotion + history + all manner of associations with other concepts in turn. Language itself is a projection of our intellectual capacity. Not a fully-featured, self-sufficient, stand-alone container capable of representing every facet of our collective knowledge on its own.
Whatever digitized UTF-8-based "intelligence" an LLM/GPT possesses has exceedingly little to do with the intelligence a human being is capable of. Why in the world a bunch of self-obsessed f*s in their Silicon Country of Hype and B* would choose to market and sell the former as in any way representative of the latter is as much your guess as it is mine.
As long as we agree not to spit on the entirety of our neurobiological prowess just to make a bunch of glorified auto-complete peddlers alongside their VC sugar daddies happy: let's continue.
Define "normal". The IT industry as a whole is one of the best embodiments of the adage "the only constant in life is change". People used to talk to rubber ducks when figuring out why their code wouldn't work. Now they talk to whatever prompt box their YT/IG/TT feed has sold them on as "The Most Advanced AI Companion" out there. The work remains the same.
None of it happens without extensive brainstorming + design + understanding of the interactions between the different parts involved. "Delegating" it to an "agent" built out of a bunch of LLMs gradient-optimized into producing the most likely UTF-8 output for a given prompt means someone will need to check + understand + correct or re-prompt each and every bit.
How in the world would a brain-fried "prompt engineer" who has never written / shipped / debugged a piece of code on their own do that? Would they spin up another "agent" to check on the work of the previous slop machine? Who is going to check the work of this last one? Another "agent"?
You can't derive your own optimization function from hearsay and second-hand experience. People who have been in the industry for 20+ years will have little to no clue and/or interest in the problems of folks just starting out. Senior SWEs with 100/250/500k+ stashed in the bank have all the freedom in the world to play with and talk about whatever shiny toys others are pumping out. Unless you're one of them, drooling over each and every article or tweet they post will get you nowhere.
Don't flap your ears left and right. Choose the sector you're interested in. Track what's going on in it. Not someone's impression of what's going on in it. Not someone's thought or reaction or hot take on whatever happened to hijack the attention span of a bunch of severely under-employed social media addicts who live and die by the amount of hype and drama they inject into their minds on a daily basis. Follow the raw data: as close to the source as you can get. Else you'll waste all the time and focus in the world on what ultimately has nothing to do with your own life whatsoever.
Be mindful of incentives. Reading a post by the CEO behind yet another LLM wrapper talking about how "AI WILL CHANGE THE WORLD" will do you as much good as listening to your barber explain why you should definitely get a haircut, from him, three times a day. Same goes for listening to anyone who "prides" themselves on "never" using a GPT because they're "above" any and all prompting. Do your own research. Conduct your own testing. Make your own choice.