Why AI will not replace your job

Originally posted on tenetq.com.

This article deals with an important question: Does AI pose a risk to your job? Our answer is a clear no.

In the last few months, we have shared several pieces on how AI works, debunking the hype around AI but also explaining why most mainstream fear of AI is not necessarily justified (yet). For this article, it helps to understand that AI is not one generally intelligent algorithmic being, but rather an umbrella term for various machine learning tools, such as those that analyze or generate text or images, which are then used in a rather 'classic' software development environment.

Why adopting AI is tough

Let’s start with the elephant in the room: If AI has been around since the 1950s and arguably ready to handle most use cases for years now, why has it not been adopted everywhere already? Here are some of the key reasons:

  1. Unexplainable AI models: Artificial Neural Networks, such as those behind Large Language Models (LLMs) like ChatGPT, are essentially a 'soup' of neurons connected by numeric weights. It is therefore practically impossible to tell how they arrive at a given output; they behave like a magic black box whose inputs and outputs happen to make sense thanks to the way they were trained.
  2. No inherent logic: The AI models behind ChatGPT, for instance, have no inherent capacity for following logic or facts. They are 'just fancy word generators' that sound sensible and are often surprisingly accurate, but on their own they will always remain inaccurate and prone to hallucinations.
  3. Prompting required: We have learned that the output quality of LLMs improves significantly when users a) provide much more context and background information and b) follow a computer-code-like syntax (a byproduct of LLMs having been trained on generating code). However, this both increases the effort of every interaction with the AI and steepens the learning curve required to generate usable outputs. This fine-tuning, called prompt engineering, can often take as long as completing the task manually and constitutes a considerable adoption barrier.
  4. Data security risk: Most AI startups' tech stacks are centered around OpenAI GPT models, Pinecone, and LangChain, with a lot of data sitting on either AWS or Azure. This increasing data centrality and tech-stack similarity poses severe risks, such as leaking material non-public information that could constitute insider trading, the loss of trade secrets, or broader industry espionage affecting national interests. Additionally, since many providers use customer data for training, there is another layer of risk: AI models can reveal the data they were trained on, which may then include such sensitive information.
  5. Blind trust: Increasing adoption and seemingly useful outputs create a dependency on the machine. Cases of lawyers being sanctioned for using ChatGPT in court filings (for example, citing court cases that do not exist) are already a reality.
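To make the prompting point concrete, the structured, code-like style that prompt engineering encourages can be sketched as a small helper that assembles labeled sections into one prompt. The section names and template below are illustrative assumptions of ours, not a standard format; the point is simply how much more input an engineered prompt demands than a bare request.

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from labeled sections.

    Supplying explicit context and a rigid layout tends to improve
    LLM output quality, at the cost of extra effort per request.
    """
    sections = [
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"TASK: {task}",
        f"OUTPUT FORMAT: {output_format}",
    ]
    return "\n".join(sections)


# A bare request vs. an engineered one for the same task:
bare = "Summarize our Q3 sales."
engineered = build_prompt(
    role="You are a financial analyst.",
    context="Q3 revenue was 1.2M EUR, up 8% quarter over quarter.",
    task="Summarize the Q3 sales performance in two sentences.",
    output_format="Plain text, no bullet points.",
)
print(engineered)
```

The gap between `bare` and `engineered` is exactly the adoption barrier described above: the extra context and structure improve results, but writing them takes time and practice.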

Although these problems exist, they are only slowing adoption down rather than preventing it. In fact, companies like ours have already identified solutions to each of them, but for most AI solutions these issues remain a harsh reality, blocking adoption in industries where accuracy and security are key.

How AI is used today

The above has been understood by many risk/compliance and HR departments, which often decided on the immediate 'strategy' of banning ChatGPT. Months later, it has become clear that this backfired, for an interesting reason.

Generally, it is employers who push productivity solutions onto their employees. In fact, employee productivity is a top success factor in most industries, so most companies spend considerable time and resources on tracking employee productivity and finding appropriate productivity tools. Hence, companies are used to employees adhering to changes in approved and disapproved software, but this did not happen with ChatGPT. We found widespread instances of employees going to great lengths to knowingly violate compliance guidelines with sensitive information on a daily basis, going as far as using alternative URLs backed by the same OpenAI models or using ChatGPT on personal devices.

Their reasoning is relatable: Once one has used AI, it feels 'stupid' not to use it, because AI remedies boring, repetitive, mundane, and non-intellectual tasks, allowing people to focus on the work that actually matters. In effect, employers were penalizing employees for trying to be more productive, which made the policy change ineffective at remedying the security risks.

The remedy is clear: Instead of banning AI solutions, it should have become a strategic focus to either build or buy solutions that provide the benefits (productivity) without the drawbacks (risking intel leaks). Unsurprisingly, many large companies such as McKinsey, BCG, and Bain have started building their own internal versions of ChatGPT for that reason, but many smaller firms that lack the resources and the ability to attract top talent to deploy such solutions are left in the dust. That mostly leaves the alternative of procuring a suitable solution, which is also a time-consuming effort, especially given the multitude and volatility of new AI startups launching every day.

Forecasting the impact of AI on the future of work

Circling back to the original question, we now know that AI will inevitably displace jobs. Whether that is 30% as per recent claims from Goldman Sachs or rather 10–15% as per McKinsey’s estimates is yet to be seen, but our thesis is that this is the wrong question to begin with.

After all, history has very clearly taught us that job displacement from innovation does not imply a net job loss at all: the tractor did not substitute the farmer, book printing did not substitute the writer, and MS Office did not replace the consultant. They did, however, make those jobs a lot more enjoyable.

Taking more radical impacts into account: electricity replaced the Lamplighters of London, the steam engine replaced the Weavers of Lyon, the automobile replaced the Horse Wranglers of New York, and the computer replaced the Computers of Washington, D.C. (a.k.a. the JPL Rocket Girls, mostly women performing manual computations for NACA/NASA). However, these were repetitive, mundane, and non-intellectual tasks that did not provide a lot of value, and the individuals in these jobs were generally able to find other, often more meaningful, work.

Most importantly, the lesson we have learned from previous mass-scale job replacements is that they generally have a net positive impact on work/life balance and GDP. There is a clear trend of technological advancements allowing workers to focus on value-adding tasks instead of repetitive manual labor, creating new opportunities for higher-paid jobs and reducing the volume of work required.

Furthermore, paired with global demographic shifts, AI is actually expected to be a blessing for maintaining productivity amid an aging population and significant hiring difficulties, rather than a job replacer.
