
How is AI augmenting more traditional automation technologies?


Generative artificial intelligence (AI) has had a transformational impact on how quickly and effectively intelligent automation technology is deployed. What was once seen as a potential replacement for automation technology has quickly become an enhancement to it. This article dives into how that shift is unfolding in 2024.

What is the point of AI?

The purpose of artificial intelligence is to support people; that is what it is designed to do. Humans have things they need to do and other things they like to do. The point is to use technologies like AI for the work people do not want to do, yet need to get done.

Most individuals in computer-based jobs have a specialty that demands their expertise. Their tasks typically divide into high-IQ tasks & low-IQ tasks. Leveraging AI & automation lets them maximize their time on high-IQ activities, rather than the drudgery that technology can take over. Now imagine this replicated at scale.

In a nutshell, this is the approach enterprises are taking as they bring generative AI & large language models (LLMs) into their organizations.

What are enterprises specifically using generative AI for?

Now, there are some misconceptions about what generative AI can do vs. what it should do in 2024. These are two very different things. Initially, generative AI was scrutinized heavily over data security & its potential for hallucination. However, those concerns have slowly fallen by the wayside as enterprises have found clear ways to leverage LLMs: keeping their data secure, setting up guardrails and selecting specific use cases that eliminate most of the material potential for hallucination.

For example, although enterprises can use generative AI to take in a host of information and make an underwriting decision on granting a loan to a particular individual, it should not be used for this; a person should be closely involved in any major decision with implications of this magnitude. Enterprises should use generative AI to take over tasks that are not worth a person's time: repetitious work that takes very little brainpower. The alternative is to have generative AI take in that host of loan information, organize it into a structured format, input the data into the relevant systems and spit out the necessary reports for a human to review before making the underwriting decision.

This is the same process, yet it reduces the risk of leaving decision-making power in the hands of technology while removing the drudgery around complex loan processing.
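The loan pattern above can be sketched in a few lines. This is a minimal illustration, not a real underwriting system: the LLM extraction step is mocked with a plain dictionary, and the field names, flag rule and status label are all hypothetical. The point is the structure, where technology organizes and annotates, and a person decides.

```python
# Sketch of the human-in-the-loop loan pattern described above.
# The LLM extraction step is mocked; field names and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LoanPacket:
    applicant: str
    income: float
    requested_amount: float
    flags: list = field(default_factory=list)
    status: str = "PENDING_HUMAN_REVIEW"  # technology never makes the decision

def organize_for_review(extracted: dict) -> LoanPacket:
    """Structure LLM-extracted fields and flag anomalies for the underwriter."""
    packet = LoanPacket(
        applicant=extracted["applicant"],
        income=float(extracted["income"]),
        requested_amount=float(extracted["requested_amount"]),
    )
    # The system only annotates; a human reviews the packet and decides.
    if packet.requested_amount > packet.income * 5:
        packet.flags.append("requested amount exceeds 5x annual income")
    return packet

packet = organize_for_review(
    {"applicant": "Jane Doe", "income": "85000", "requested_amount": "500000"}
)
print(packet.status, packet.flags)
```

However the flag rules are tuned, the output is always a structured packet routed to a person, which is the risk-reduction point made above.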

How is this impacting established, well-known automation technologies?

The two most well-known technologies across the intelligent automation space are robotic process automation (RPA) & intelligent document processing (IDP).

RPA - In terms of enterprise process automation, RPA is being used in tandem with generative AI to fulfill complex needs. RPA can neither think nor execute tasks that have even the least bit of subjectivity, yet generative AI can take on the subjective portions of a process. For end-to-end enterprise use cases, generative AI adds very limited value by itself. However, when paired with RPA software bots that take over data entry & data reconciliation tasks with 100% accuracy, it can add immense value.
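That division of labor can be shown with a toy sketch, assuming a workflow that routes incoming requests. The classifier stub stands in for an LLM call (the subjective judgment), and the routing function plays the deterministic RPA role; the queue names are invented for illustration.

```python
# Sketch of the gen-AI + RPA division of labor described above.
def classify_request(text: str) -> str:
    """Stub for the LLM step: the *subjective* call (what is this request?).
    A real deployment would prompt a hosted LLM here."""
    return "invoice" if "invoice" in text.lower() else "other"

def route_to_bot(text: str) -> str:
    """Deterministic 'RPA' step: act on the classification with no judgment."""
    intent = classify_request(text)
    queues = {"invoice": "AP_ENTRY_BOT", "other": "MANUAL_TRIAGE"}
    return queues[intent]

print(route_to_bot("Please find attached invoice #4417"))  # AP_ENTRY_BOT
```

Everything after the classification is linear and repeatable, which is exactly the kind of work RPA bots already do well.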

IDP - On the other hand, IDP is being impacted in that, historically, machine learning models had to be trained within IDP engines for each semi-structured & unstructured document type: invoices, insurance claims, legal complaints, bank statements, patient prescription documents, etc. This was a very limited approach, as a single enterprise may have 300+ different invoice types, which then requires creating 300+ different machine learning templates. That is cost-prohibitive, time-consuming and inefficient, and traditional machine learning is also limited in the accuracy of the data it extracts.

Instead, enterprises can now replace traditional machine learning training with LLMs that take in unstructured & semi-structured documents without being trained. Pairing IDP engines with LLMs enables these engines to understand the context of data within a document immediately. With a small bit of configuration, fine-tuning and prompt engineering, enterprises can achieve 95%+ accuracy on unstructured & even handwritten documents (assuming they're legible).
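A rough sketch of what replaces those 300+ templates: one prompt that asks the model for structured JSON, plus a parser that validates the reply. The model call itself is mocked below with a canned response, and the prompt wording and field names are assumptions, not any vendor's actual API.

```python
import json

# Sketch of prompt-driven extraction replacing per-template ML training.
# The LLM call is mocked with a canned reply; fields are illustrative.
PROMPT = (
    "Extract vendor, invoice_number, and total from the document below.\n"
    "Respond with JSON only.\n\nDocument:\n{document}"
)

def build_prompt(document: str) -> str:
    """One prompt covers every invoice layout -- no per-template training."""
    return PROMPT.format(document=document)

def parse_extraction(llm_response: str) -> dict:
    """Parse the model's JSON reply into the structured record IDP expects."""
    record = json.loads(llm_response)
    missing = {"vendor", "invoice_number", "total"} - record.keys()
    if missing:
        raise ValueError(f"model omitted fields: {sorted(missing)}")
    return record

mock_document = "INVOICE\nAcme Corp\nINV-001\nTotal: $129.50"
prompt = build_prompt(mock_document)
mock_reply = '{"vendor": "Acme Corp", "invoice_number": "INV-001", "total": 129.5}'
fields = parse_extraction(mock_reply)
print(fields["vendor"])
```

The validation step matters: it is one of the guardrails mentioned earlier, catching a reply where the model dropped or hallucinated fields before the data moves downstream.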

This is a revolutionary change for intelligent document processing and optical character recognition (OCR) technology. It is akin to what is happening today with self-driving cars: Tesla recently deprecated most of its original code and replaced it with AI-based technology for its self-driving features.

RPA, IDP & LLMs - Enough acronyms for you? It may sound complex, but it is more straightforward than it seems, because RPA & IDP have been used in tandem for years. The only difference now is that AI-based IDP is used in tandem with RPA. Nothing has changed on the RPA side, which simply moves data around once it has been extracted.

On the IDP side, it still takes care of the pre-processing & digitizing of documents, while LLMs are layered on top of IDP to bolster the accuracy and speed of data extracted from unstructured documents. This means implementation times are shorter, and even business users can quickly and efficiently configure complex documents for high-accuracy data extraction.

Previously, enterprises would entrust humans with sifting through large sets of unstructured data. The issue is that speed is limited by how quickly a human can read and understand the data, which makes this work ideal for technology. Thankfully, generative AI is very good at understanding large sets of unstructured data.

For example, in the legal space, LLMs are being used to summarize endless pages of legal contracts & documentation in seconds.

Once data is summarized or extracted with LLM-powered IDP, RPA can be used to emulate any human actions needed on a computer screen. This includes inputting that data into a specific website, Excel spreadsheet, Word document, PowerPoint or system of record.

This also includes using RPA to reconcile data from one location to another, for example, ensuring that the data on an invoice document matches the data entered into an accounting system of record. When it comes to making sense of large amounts of unstructured data, the impact this will have on the legal, healthcare, insurance and financial services industries is high and still difficult to fully measure.
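The invoice-vs-ledger check above reduces to a field-by-field comparison. This sketch assumes invented field names and a one-cent tolerance for currency amounts; a real bot would pull both records from live systems rather than literals.

```python
# Sketch of the invoice-vs-ledger reconciliation an RPA bot performs.
# Field names and the currency tolerance are illustrative assumptions.
def reconcile(invoice: dict, ledger_entry: dict, tolerance: float = 0.01) -> list:
    """Return the list of fields that disagree between the two records."""
    mismatches = []
    for key, expected in invoice.items():
        actual = ledger_entry.get(key)
        if isinstance(expected, (int, float)) and isinstance(actual, (int, float)):
            # Compare amounts within a tolerance to absorb rounding noise.
            if abs(expected - actual) > tolerance:
                mismatches.append(key)
        elif expected != actual:
            mismatches.append(key)
    return mismatches

invoice = {"invoice_number": "INV-001", "total": 129.50}
ledger = {"invoice_number": "INV-001", "total": 129.95}
print(reconcile(invoice, ledger))  # ['total']
```

An empty result means the records agree; anything else gets surfaced for a person to resolve, consistent with the human-in-the-loop pattern discussed earlier.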

Future of Automation

The future of intelligent automation & generative AI lies in technologies like large action models (LAMs). A large action model takes actions based on the specific prompts it receives. It still must be trained, much like RPA, but the training is far faster; the trade-off is that its output is more volatile and, for now, riskier. It uses a bit of subjectivity & inference to conduct actions, rather than operating in the purely linear fashion of RPA. This is what can make waves in the intelligent automation space once the technology can be deployed with less risk.

Gabriel Skelton is the Head of Artificial Intelligence Solutions at OpenBots. Gabriel assists firms in the selection & implementation of document processing automations involving unstructured & handwritten documents. Gabriel leads a team of automation consultants across the healthcare, insurance and financial services industries that specializes in automations extracting data from the most complex documents and porting that unstructured data directly into end systems of record.

Gabriel has a Master’s Degree in Entrepreneurial Leadership from Babson College’s Olin Graduate School of Business.

Gabriel resides in Coral Springs, Florida with his wife and three children.

