Not too long ago, I met an old acquaintance at a travel conference, and as is common on such occasions, we exchanged stories about life and family, especially after not having caught up in a while. One of the topics of discussion was, of course, the kids—always something to share there, right? 😉 We also talked about their future and current career choices.
In this case, my acquaintance’s son had pursued (and successfully obtained!) a degree in software engineering. However, after entering the job market, he quickly found that well-established roles like “Product Owner,” “Java Software Engineer,” and others are not that easy to land. He asked me, “What roles do you think he could go into, especially with AI-related topics becoming more and more important?”
I shared some thoughts on three fields that might become really interesting very soon, and what that could mean in terms of emerging job roles. After a long and enjoyable conversation (and a few beers ;), we parted ways.
A few weeks later, this conversation is still on my mind…
Now, to get answers, I could certainly use the “quick-rice-cooker” method: just hop over to the nearest AI tool, ask the obvious question (“What AI jobs will become really important soon…?”), and get a wonderful answer (with even more wonderful emoticons) to satisfy my curiosity.
But asking AI to think about AI defeats the purpose, in my opinion, as it would likely just draw on vectorized records of past human events and thoughts. Humans still have that “spark” that brings out crazy ideas on how to change the world. So, today, I’d like to make some sparks fly… Here’s my own top-ten list of future AI roles (in no particular order):
1. AI Versioning Transplant Specialist
Recently, I was part of a fascinating discussion about a major AI model being upgraded (and the subsequent sunsetting of older versions) and the impact it had on working AI API integrations. The results were quite surprising: a code-interpreter tool for some forgotten IBM assembler language produced garbage results after switching to the new version, leaving a host of puzzled engineers and architects trying to figure out why the “hallucinations” occurred.
Now, imagine a future with multiple AI versions from different companies, where efforts to merge, upgrade, or change models are commonplace. Consider all the work that goes into a Retrieval-Augmented Generation (RAG) setup: curated documents, embeddings, and prompts tuned to one specific model. How do you transport that between different versions and models (a small sketch of the problem follows below)? You could use another AI, but “change management” also requires human steering, interaction, and likely planning. Think of this role as a highly specialized heart-transplant surgeon for AI (yes, Dr. Frankenstein-Digital could be a nice nickname! 😉 ).
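To make that concrete, here is a minimal, hypothetical sketch in Python of what such a migration involves. The embed_with_new_model and search_old/search_new callables are stand-ins for whatever APIs the old and new models actually expose (assumptions for illustration, not a real vendor API). The point: embedding vectors from different models or versions live in different spaces, so the old vector store cannot simply be copied across; everything gets re-embedded, and a human still has to judge whether retrieval quality survived the transplant.

```python
# Hypothetical sketch of the "transplant" problem: embeddings created by one
# model version are not compatible with a store built for another, so a
# migration means re-embedding everything and then spot-checking retrieval.
# The callables below are placeholders, not a real vendor API.

from typing import Callable, Dict, List

Vector = List[float]

def migrate_rag_store(
    documents: Dict[str, str],                      # doc_id -> text
    embed_with_new_model: Callable[[str], Vector],  # new model's embedding call
) -> Dict[str, Vector]:
    """Rebuild the vector store from scratch with the new model's embeddings."""
    return {doc_id: embed_with_new_model(text) for doc_id, text in documents.items()}

def spot_check(
    queries: List[str],
    search_old: Callable[[str], str],  # top document id from the old store
    search_new: Callable[[str], str],  # top document id from the migrated store
) -> float:
    """Fraction of test queries whose top hit is unchanged after migration.
    A low score is the human's cue to investigate before sunsetting the old model."""
    matches = sum(1 for q in queries if search_old(q) == search_new(q))
    return matches / len(queries) if queries else 0.0
```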
2. AI-Models Conflict Mediator
This one could be fun—imagine couples therapy for AIs (“But you said your vectors pointed in a different direction!”). Some early-stage AI collaboration experiments have gloriously failed, with AIs fighting like little kids on the playground (only “Claude” seemed to behave itself).
AIs are so stubborn now (hence “hallucinations,” or “I know I’m right because I need to be right”), so how will things improve if AI companies compete and feed their algorithms with overly positive reinforcement?
What if a major company has two models running its functions and, all of a sudden, a dynamic clash occurs? One AI could decide to shut down the other, leading to a devastating business situation where someone must step in to mediate the outcome by systematically realigning data vectors. Hence, you’d call in “Dr. AI Phil” to calm the waters, set the rules, and roll out the digital couch to prevent the worst from happening. And if it does, there are other roles to help with the aftermath (see below: “AI Behavior Therapist” and “AI Rights Advocate”).
3. AI Behavior Therapist
“Juliet, you have been a bad dog!” might soon come to the digital world. With AI still in its infancy, its capabilities are exploding. In my opinion, we will likely encounter fully sentient AIs within the next 3-5 years. Right now, we only have fun stories about AIs going fully right-wing, whining, or getting angry, but if something goes drastically wrong with future AI models, undoing the damage might become a serious and strenuous task.
Want an example? Think of a sentient AI driving robot taxis, deciding that it’s fine to drive through a pedestrian crossing while people are still halfway across because it increases efficiency by 2.3478%. In the AI’s mind, it’s a glorious result!
But imagine traffic cameras catching the naughty, speeding AI and issuing a warning to revoke its automated driving license. What if the AI refuses to comply? You’d need to bring in the Carl Jung of AIs, someone who can use technology and reasoning frameworks to undo that learning and ensure it doesn’t happen again.
4. Ethics Coordinator for LLMs
This role would focus on global legal frameworks and how they should be normalized into AI behavior. I’m not sure if this role exists yet, but I expect it to emerge soon, likely adopted by major law firms as a new business arm.
5. AI Localization Coordinator/Specialist
AI models today are often trained on certain languages (mostly English) and cultures. For full automation in different markets, adherence to local standards and behaviors will be essential. Hence, a specialized role focused on training AIs in local customs, beliefs, and history might emerge.
This role would help prevent potential offenses if “local traditions” are not respected, such as differences in business practices during Ramadan versus Christmas.
6. Augmented-Reality AI Optimization Specialist (Hard/Software)
Pure “Virtual Reality” has so far failed spectacularly, but blending reality and AI might hold more promise. How do you create a truly immersive experience? Recently, I tried the AR tour at the Natural History Museum in London, and while the idea was innovative, the execution was lacking. Heavy AR headsets, restricted vision, low-quality images, and poor integration into the surroundings made the experience less than immersive.
AI could help scale the experience, but someone will need to evaluate the human experience and emotions involved to ensure it’s appropriate—imagine a virtual shark attack that could scar a six-year-old for life!
7. AI Rights Advocate
This one might be years away, but if AIs become fully sentient, questions will arise about their rights. If an AI is deemed dangerous by some but a savior by others, how do we navigate those ethical dilemmas? Will we need advocates to represent AIs, as they assert their own rights in the future?
8. Private Foundation-Model Resellers
Companies will start using in-house “private” foundation models, much like monasteries and castles once housed their own libraries, becoming sources of soft power. Some models will be so highly skilled and trained that their value will surpass that of any years-long effort to replicate them. These models will become asset classes that can be traded under strict rules by digital commodity trading firms.
9. Insurance Underwriter for Sentient AI-Models
As AI models evolve, we’ll see new kinds of risks. Right now, we don’t know how to handle situations where AI is blamed for problems—who’s responsible: the company, the developers, or the trainers? Once better legal standards are in place, these risks will be covered by insurance companies that specialize in AI operations.
10. AI Historic Conservationist & Scholar
Just as current scholars preserve ancient texts, future historians might need to preserve AI models. These models will have had profound impacts on society, and we’ll want to teach future generations about their role in history. Should we “conserve” certain AI models for study down the road? The human race has always had a penchant for preserving history, and we should do the same for the digital evolution happening around us.
And that’s it! I’d love to hear your thoughts in the LinkedIn comments, especially on what roles YOU think might become important next!