WHAT TO KNOW
With inflation well above target levels and rising labor costs, automation investments are back at the forefront of budget discussions. The cost of automation has been slowly declining as more services become available on a subscription basis, allowing for automation of not only permanent processes but also temporary or transitional ones. Tying this together is the concept of hyperautomation, a term popularized in the 2019 Gartner Top 10 Strategic Technology Trends for 2020. Gartner defined hyperautomation as "a business-driven, disciplined approach that organizations use to rapidly identify, vet and automate as many business and IT processes as possible. Hyperautomation involves the orchestrated use of multiple technologies, tools or platforms, including: artificial intelligence (AI), machine learning, event-driven software architecture, robotic process automation (RPA), business process management (BPM) and intelligent business process management suites (iBPMS), integration platform as a service (iPaaS), low-code/no-code tools, packaged software, and other types of decision, process and task automation tools."
Let’s break this down for simplicity. Hyperautomation is the concept of using several tools to solve a business problem in a structured manner. This is not new. What is new is the availability of a wide range of tools via the cloud, through application programming interfaces (APIs) or other direct interfaces, and broader access to data through robotic process automation (RPA). These technologies have dramatically expanded access to data and capabilities in a cost-effective manner, allowing for greater end-to-end automation, or straight-through processing, but they do come with additional considerations.
IS HYPERAUTOMATION RIGHT FOR ME?
Automation can help solve a variety of problems, but must always be weighed against cost, timeline, and flexibility constraints relative to a similar manual effort. The concept of hyperautomation adds structure to the planning process and helps tilt investment decisions in favor of technology, but is best viewed as an evolution, not a revolution. We see three primary areas where hyperautomation can augment current automation efforts:
Further automate existing activities
Most automated processes still require manual effort. This could be at the beginning, where activities must be manually initiated or configured to support the automated processing steps; at the end, where results must be manually collected, reviewed, or passed to another team; or somewhere in between. These remaining manual efforts are candidates for hyperautomation: tools such as RPA can trigger processes or post results to a third-party platform, and AI can be incorporated into the review process to improve error identification and quality control.
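To make the quality-control idea concrete, here is a minimal, purely illustrative sketch of an automated check at the end of a processing run. A real deployment might use a trained anomaly-detection model; a simple statistical rule stands in here so the example is self-contained, and the function name, tolerance, and sample figures are all assumptions for illustration.

```python
# Illustrative QC gate: flag results that deviate sharply from the batch
# so only those items are routed to a human reviewer.

def flag_for_review(amounts, tolerance=2.0):
    """Return indices of results deviating enough to warrant human review."""
    mean = sum(amounts) / len(amounts)
    variance = sum((a - mean) ** 2 for a in amounts) / len(amounts)
    std = variance ** 0.5 or 1.0   # guard against a zero-variance batch
    return [i for i, a in enumerate(amounts) if abs(a - mean) / std > tolerance]

# A batch of processed amounts with one suspicious outlier
batch = [100.2, 99.8, 101.1, 100.5, 4500.0, 99.9]
review_queue = flag_for_review(batch)   # only the outlier goes to a person
```

The point of the design is that the automated run completes unattended, and human effort is concentrated on the small set of flagged exceptions rather than a full manual review.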
Critical to the management of any process is the ability to move work through a set of steps in a structured manner and report on the flow of this work. In traditional operations, these flows are often spread across several different platforms that each have their own proprietary workflow capabilities but don’t easily interconnect. These disconnected capabilities can be integrated through the use of general-purpose business process management (BPM) platforms, such as Appian, Camunda, or EvoluteIQ. These platforms allow the steps of a process that are completed by different systems to be connected, providing an end-to-end managed workflow that can track key performance metrics and rework efforts, or incorporate a human-in-the-loop step.
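The orchestration pattern described above can be sketched in a few lines. This is a hypothetical toy engine, not the API of Appian, Camunda, or any real BPM product: each step stands in for a call to a different system, the log provides the end-to-end tracking, and one step is marked as a human-in-the-loop checkpoint where the work item parks until a reviewer approves it.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of an end-to-end workflow with a human-in-the-loop step.
# In practice each step would call a different system (an RPA bot, a core
# platform, a reporting tool); plain functions keep the example self-contained.

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]
    human_in_loop: bool = False   # park the work item here for manual review

@dataclass
class Workflow:
    steps: list
    log: list = field(default_factory=list)   # end-to-end tracking for metrics

    def execute(self, case: dict) -> dict:
        for idx in range(case.get("_next", 0), len(self.steps)):
            step = self.steps[idx]
            if step.human_in_loop and not case.get("approved"):
                case["_next"] = idx            # resume here after approval
                self.log.append((step.name, "waiting_on_review"))
                return case
            case = step.run(case)
            case["_next"] = idx + 1
            self.log.append((step.name, "done"))
        case["complete"] = True
        return case

# Intake -> enrichment (a second system) -> human review -> posting
wf = Workflow(steps=[
    Step("intake", lambda c: {**c, "received": True}),
    Step("enrich", lambda c: {**c, "score": len(c["id"])}),
    Step("review", lambda c: c, human_in_loop=True),
    Step("post",   lambda c: {**c, "posted": True}),
])

case = wf.execute({"id": "CASE-1"})            # parks at the review step
case = wf.execute({**case, "approved": True})  # reviewer approves; flow completes
```

The value is in the shared log: because every system hands its step back to one coordinator, throughput, rework, and items waiting on review can all be reported from a single place.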
Automate lower volume activities
Traditionally, processes required relatively high volumes to justify the expense of automation, and lower volume processes tended to fall to the bottom of the priority list. In these low volume processes, most of the management reporting must be completed manually, which can lead to delays, inaccuracies, and inconsistencies. Using the hyperautomation approach, these low-volume processes can be automated using collaboration platforms, such as Jira, Asana, or Monday, which provide basic case management and reporting capabilities with limited configuration requirements, most of which can typically be done with minimal training. This approach brings the basic structure that is important to process management, while allowing for rapid deployment and simplified reporting. As capacity permits or as volume dictates, these processes can be migrated to dedicated, customized platforms, if appropriate.
Automate temporary activities, such as remediation efforts
Larger organizations often have a regular backlog of remediation work resulting from acquisitions, regulatory actions, or operational issues, and the work usually doesn’t get much attention from an automation perspective. The remediation teams typically rely on desktop tools such as Microsoft Excel or some proprietary tool provided by a professional services team. Both options suffer from similar limitations – access, version control, and transparency. Similar to the low-volume processes, it is often possible to effectively utilize a collaboration platform to manage these temporary activities, providing much broader access and transparency across the team, and built-in version control and reporting. In this scenario, we recommend that dedicated instances be established to allow rapid setup and simplified wind-down.
Usage costs and reliance on third parties
Depending upon the agreement structure, many capabilities will carry usage and subscription fees. This can be advantageous when you want to ramp up quickly or manage changing volumes, but the fees can add up quickly and should be closely monitored to ensure you continue to receive value for the cost. That may mean regularly evaluating usage and, in certain circumstances, gradually migrating to fixed-cost solutions. It is also critical to shut down temporary solutions promptly once they are no longer needed.
The wide availability of advanced capabilities via cloud-based services is a great strength of hyperautomation but also presents new challenges. When end-to-end workflows are composed of components that are managed and maintained outside your environment, individual components can become points of failure that may impact the smooth flow of work. The design and management of these solutions must take this vulnerability into account and provide for appropriate contingencies.
When using any solution, you will invest significant knowledge capital to configure, maintain, and realize value. It is critical to select platforms and partners that are a good medium- to long-term fit, and important to consider situations where you would need to migrate to alternate providers.
Technology has become easier to use and will likely become accessible to an even wider audience in the coming years. Many tools promote their low-code/no-code capabilities and suggest that a wide range of citizen developers can configure the platforms. Here we recommend caution: while a lot of general functionality can indeed be configured through visual user interfaces, some pieces of the configuration still require more technical know-how, and even the items a general user can configure should follow basic standards, such as the consistent use of data elements and the implementation of style guides. We recommend having resources available who understand the underlying technologies and have experience structuring solutions for growth and flexibility.
Third-party management is often fragmented within organizations, with purchasing teams, compliance teams, and user groups falling under different structures. As more reliance is placed on a wide array of third-party components within a larger process, it is critical that these teams have the appropriate skills and interaction to ensure high-availability of solutions and rapid issue remediation.
As the variety and location of service providers continue to expand, regular monitoring of security practices and regional issues becomes more important. The interconnected nature of cloud-based services can result in components being located in several regions simultaneously, and security protocols should be adjusted to reflect this structure.
Confirm your needs: identify at least a half dozen business issues that could benefit from additional automation. This is a journey that requires engagement and commitment from all the stakeholders to be successful. Clearly articulate the value, benefits, and approach to all involved at the beginning and throughout the journey.
Evaluate tools: identify tools that address the identified business issues, get a general sense of cost, and develop a high-level business case. Additional diligence should be applied to the workflow component, as this will be the “backbone” of your solutions.
Select a team: each organization has its own culture for enabling change, and the team structure must reflect this culture. In general, the team should, at minimum, include business process, technical, and marketing/communications (key for promoting adoption) skills.
© 2022 The Antares Services Company LLC. All Rights Reserved.