Every time you send an email or make a call, a service is being delivered. When organizations use the cloud or machine-to-machine communications, still more services are delivered under technology contracts. Capabilities like these have become so tightly woven into the fabric of enterprise operations that they now account for 70% of the average organization's total spend. In 2018 alone, global business purchases of services totaled 21 trillion dollars, and that figure is growing fast.
As organizations strive to innovate, the technology contracts and invoices tied to these services have grown ever more complex, and with that complexity comes inaccurate information. We call this bad data, and it is stripping value from organizations across industries at an alarming rate.
The root cause of bad data
Large organizations now spend more money on services than on goods, and have done so since 2016. This surge has created an ever-changing landscape of agreements that service suppliers and buyers alike struggle to model. Expectations and fluctuating demand shape the nature of these contracts, as do global events such as pandemics. Telecoms providers add complexity from the outset too, with one provider alone offering 5,267 products.
Some suppliers use multi-tiered pricing formulas to navigate this landscape; others provide bespoke offers or group discounts. All of it adds further layers of complexity. The result is a huge amount of data, scattered across many places and shared among many people. This makes it practically impossible for organizations to track whether services are delivering their intended value, especially when the data is incomplete. This faulty relationship between suppliers and buyers has become so commonplace that companies simply accept the problematic system.
Where the challenge is being felt most
With no team or system in place to monitor service value, frustration builds not only between suppliers and buyers, but also between purchasing and administration teams.
With value constantly falling through the gaps, administration teams lack the data they need to fully understand the contracts they manage, while purchasing teams negotiate new deals in the dark.
This bad data is then commonly fed into outdated Excel spreadsheets where it is subject to human error and mismanagement, further reducing its quality.
We have touched on some of the challenges faced by existing teams, but the problem also deserves a wider lens. Viewed that way, it is clear that the vast majority of large organizations cannot quantify the value of the many services they deem mission-critical.
Customers are also finding that the number of suppliers they deal with is far greater than ever before. In the technology space this is driven by the explosion of cloud services, which places massive strain on supplier management practices. Organizations that traditionally worked with a handful of suppliers may now deal with hundreds, if not thousands.
Taking the right approach
The key is to improve data quality. To do this, organizations need a system that collects all of the relevant information in a single, unified location.
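To make that concrete, here is a minimal sketch in Python of what such consolidation might look like, assuming (purely for illustration) that contract and invoice exports arrive as CSV files with differing column names. The file names, column mappings and pandas-based approach are all hypothetical, not a description of any particular product.

```python
import pandas as pd

# Hypothetical exports; real sources might be billing platforms,
# contract repositories, or ERP extracts.
SOURCES = {
    "billing_system.csv": {"InvoiceNo": "invoice_id", "Amt": "amount"},
    "contract_repo.csv": {"ContractRef": "contract_id", "Charge": "amount"},
}

frames = []
for path, column_map in SOURCES.items():
    df = pd.read_csv(path)
    df = df.rename(columns=column_map)  # normalize column names
    df["source"] = path                 # keep provenance for later tracing
    frames.append(df)

# One unified table, ready to be interrogated.
unified = pd.concat(frames, ignore_index=True)
```

Keeping a source column on every row matters later, when inconsistencies need to be traced back to wherever they came from.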
The next step is to interrogate and then organize that data to reveal the actual meaning behind it. Lawyers will be the first to tell you that inconsistencies in a contract are a sign of major underlying problems, so identifying all of them is crucial.
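As a simple illustration of what interrogating the data can mean in practice, the sketch below cross-checks invoice lines against contracted unit rates and flags any line that drifts beyond a small tolerance. The schema and figures are invented for the example.

```python
import pandas as pd

# Hypothetical schema: each invoice line references a contract
# that carries the agreed unit rate.
invoices = pd.DataFrame({
    "invoice_id": ["INV-001", "INV-002"],
    "contract_id": ["CT-10", "CT-11"],
    "units": [120, 40],
    "billed_amount": [6600.0, 2000.0],
})
contracts = pd.DataFrame({
    "contract_id": ["CT-10", "CT-11"],
    "unit_rate": [50.0, 50.0],
})

merged = invoices.merge(contracts, on="contract_id")
merged["expected"] = merged["units"] * merged["unit_rate"]

# Flag lines where the billed amount drifts from the contracted
# terms by more than a 1% tolerance.
tolerance = 0.01
merged["inconsistent"] = (
    (merged["billed_amount"] - merged["expected"]).abs()
    > tolerance * merged["expected"]
)
print(merged[merged["inconsistent"]])
```

Here INV-001 is flagged because 120 units at the contracted rate of 50 should cost 6,000, not 6,600; INV-002 matches its contract and passes.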
Once a system is in place that interrogates and organizes the data, multiple cycles of this process will quickly increase the hygiene and quality of your information.
Consistency is very important here, because a one-off approach will allow the rot of bad data to set back in. To achieve positive results when interrogating such huge datasets, it is crucial to use the right technology.
We take a sophisticated approach that uses machine learning, but it is also important to realize that technology alone is no silver bullet.
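Without going into the specifics of our system, anomaly detection is one common machine learning technique for surfacing suspect records in data like this. The sketch below uses scikit-learn's IsolationForest on a few invented invoice features; treat it as an assumption-laden illustration rather than a description of the actual product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical numeric features per invoice line: units billed,
# unit rate, and month-over-month change in amount.
X = np.array([
    [120, 50.0,  0.02],
    [118, 50.0,  0.01],
    [125, 50.0, -0.01],
    [119, 50.0,  0.03],
    [ 40, 95.0,  0.85],   # off-contract rate and a sharp jump
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(X)   # -1 marks likely anomalies

for row, label in zip(X, labels):
    if label == -1:
        print("review:", row)
```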
Your people are also vital on the journey to beating bad data. What I see succeeding is a hybrid approach: one that leverages the right technology while also educating teams in the right ways of working together.
At the heart of this will be a set of steps and workflows that enable teams to collaborate and trace inconsistencies back to their source, eliminating bad data at its core. With the technology autonomously identifying the inconsistencies, people of all skill levels, from experienced users to recent graduates, can undertake effective data interrogation.
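One lightweight way to support that kind of workflow is to attach provenance and triage state to every detected inconsistency, so anyone on the team can pick a finding up and trace it back. The structure below is a hypothetical sketch, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One autonomously detected inconsistency, carrying enough
    provenance for any team member to trace it to its source."""
    description: str
    source_system: str            # where the bad record came from
    source_record: str            # identifier within that system
    assigned_to: str = "unassigned"
    status: str = "open"
    notes: list[str] = field(default_factory=list)

finding = Finding(
    description="Billed amount exceeds contracted rate by 10%",
    source_system="billing_system.csv",
    source_record="INV-001",
)
finding.assigned_to = "graduate-analyst"
finding.notes.append("Confirmed against contract CT-10; raise with supplier.")
```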
Once the right technology, education and relationships are in place, the final step is to connect these processes to the supplier and make the whole system repeatable.
Unlocking hidden value
Thinking Machine Systems has developed a unique procurement intelligence software solution that tackles bad data directly. We do this with cutting-edge technologies like AI, machine learning and cloud architecture, which come together to create a natural way of interrogating information.
We take that information and analyze it, providing automated insights and recommendations that enable customers to optimize the value of the many services they use.
By spotting inconsistencies and generating intelligent recommendations, we save our customers up to 15% in unnecessary service spend, while reducing wasted technical effort by a factor of two to three.
What makes the Thinking Machine Systems solution truly unique is the targeted way it tackles bad data associated with enterprise services.
While other solutions take existing technology contract and invoice data at face value, we see that as sweeping the problem under the rug. We are changing that: by eliminating bad data, we can genuinely optimize your service value.