When establishing an enterprise robotic process automation (RPA) program, my ultimate ambition is scale.
The first automation is always, in a sense, a pilot. Building it requires answering questions and solving problems unique to the individual business’s processes and infrastructure. Subsequent builds reduce development time by piggybacking on that knowledge, and as the program matures, large amounts of technical and intellectual capital are stockpiled. Development becomes dramatically streamlined, and ideas can be realized in functioning code far sooner than one might be accustomed to.
There are obvious reasons to strive for scale like this: efficiency and reach. When you are developing at scale, you get the most out of your resources and backlogs shrink rather than grow. However, a recent project reminded me of another advantage that’s easy to overlook. The ability to quickly transform ideas into functioning code can invert the dynamic of discovery.
When developing that first automation, careful planning was required to avoid wasting time on dead ends. It was worth spending a little extra time upfront to make sure we weren’t spending weeks building something that ultimately wouldn’t work. Once scale is achieved, though, that equation can flip: theoretically exploring what the impact of automating a certain process would be can take more time than simply building the automation.
The example that made me realize we had achieved this scale involved a business request to keep certain shipments out of an automatic scheduling process. These shipping orders contained a number of complex, interacting variables that triggered the scheduling. The plan was to build an automation that would go into the orders and remove these trigger variables before the scheduling process picked them up. The biggest challenge was going to be identifying the logic for which variables to alter.
If this had been the first automation I was building for the business, I would have wanted to carefully analyze the logic and perfect it in theory before building. But we had reached a point where I knew we could quickly produce an automation that removed a prescribed set of variables. With that in operation, we could look at the results and refine our filtering logic as needed.
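To make the approach concrete, here is a minimal sketch of what "removing a prescribed set of variables" could look like. The field names and order structure are hypothetical illustrations, not the actual system's; the point is that the rule set lives in one place so it can be refined between iterations without rebuilding the bot.

```python
# Hypothetical "prescribed set" of trigger variables to clear. Keeping it
# in one spot means each refinement cycle only touches this set, not the
# automation logic itself.
TRIGGER_FIELDS = {"expedite_flag", "priority_code", "auto_schedule_hint"}

def scrub_order(order: dict) -> dict:
    """Return a copy of the order with the trigger variables removed,
    so the downstream scheduler never sees them."""
    return {k: v for k, v in order.items() if k not in TRIGGER_FIELDS}

# Illustrative orders only; real orders would come from the order system.
orders = [
    {"id": "SO-1001", "expedite_flag": True, "dest": "DAL"},
    {"id": "SO-1002", "dest": "CHI"},
]
scrubbed = [scrub_order(o) for o in orders]
print(scrubbed)
```

Each pass over real results would then tell us which fields to add to or drop from the trigger set, which is the iterative refinement described above.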
Sure enough, once the automation was stood up and put into use, we were able to iterate our way to an airtight set of rules that covered all the desired shipments. We had successfully leveraged our ability to build rapidly and shorten our discovery process.
Perhaps the biggest barrier to realizing this newfound efficiency is the reluctance to act hastily. This is, of course, a wise instinct. We establish program guidelines and practices to prevent costly mistakes, and lost time is not the only type of cost we need to minimize.
There is also the risk of putting something into production that does worse than simply failing to meet the business need. A poorly planned automation can do a tremendous amount of damage: compromising data, introducing security vulnerabilities, interrupting other processes, or simply wasting resources.
When at scale, though, those risks become greatly reduced. You are reusing ideas and code that are already proven. You can move quickly and safely because you have already benefited from the care and deliberation taken early in the program. Scale provides the stability to automate efficiently and dynamically, solving problems that would be nearly impossible for a less established program.
Contributed By: Tom Weaver
