5 Signs Your Analytics Process Is Killing Your Procurement Workflow
How can we ensure that our data analyses support our strategic sourcing goals, and what mistakes do other procurement pros make that we can learn to avoid?
Strategic sourcing doesn’t happen in a vacuum. When procurement teams decide to evaluate supplier relationships and spend profiles, the available data is central to the overall success of the process. GL reports, purchase orders, invoices, and expense reports all contain rich information that procurement professionals can leverage for strategic sourcing intelligence — if they know how to use it.
Analyses of your spend data can provide your sourcing team with critical information, from historical purchasing trends to geographic spend distribution to demand across company business units or departments. Even if that data exists, though, digging into it can be a different story.
The “80/20 data science dilemma” describes the primary challenge that often slows the discovery of these valuable insights: most of the effort required for data analytics — 80% of it — lies in the collection, validation, organization, manipulation, and cleansing of data. This is a major hindrance to productivity and will eat away at your time.
Want to streamline the process of identifying savings opportunities and executing strategic sourcing initiatives? Recognizing and understanding how to deal with these five pain points will help you identify areas for improvement in your analytics process.
1. You often find that your spend data is missing transaction dates, PO numbers, or business unit names, or contains messy vendor names.
Clean, quality data is key to a robust analysis, and it reduces the maintenance and management required to gain insight into your spend. Ensure that data is clean from the get-go: as each line of spend is recorded, make every field as complete as possible. Even more important, confirm that the values being collected represent exactly what each field denotes. For example, confirm that vendor names contain the name only and exclude errant information like store numbers — that information can always be placed in another field.

Aside from increasing the time it takes to complete data management processes, messy, incomplete data can also cause problems with entity resolution, aggregation, and statistical significance. One of the best methods for ensuring data quality is standardizing data collection best practices across your team or organization.
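To make this concrete, here is a minimal sketch of that kind of vendor-name cleanup in Python. The function name and the store-number pattern are assumptions for illustration, not a prescribed tool; real vendor data will need rules tuned to your own systems.

```python
import re

def clean_vendor_name(raw: str) -> str:
    """Normalize a raw vendor string: trim whitespace, uppercase,
    and strip trailing store/location numbers (e.g. 'STARBUCKS #1234').
    The specific patterns here are illustrative assumptions."""
    name = raw.strip().upper()
    # Drop trailing store identifiers like '#1234', 'NO. 42', or 'STORE 7'
    name = re.sub(r"\s*(#|NO\.?\s*|STORE\s+)\d+\s*$", "", name)
    # Collapse runs of internal whitespace
    name = re.sub(r"\s{2,}", " ", name)
    return name

raw_vendors = ["Starbucks #1234", "starbucks  #98", "ACME STORE 7", "Acme"]
cleaned = [clean_vendor_name(v) for v in raw_vendors]
# Duplicate vendor spellings collapse to a single canonical form
print(sorted(set(cleaned)))  # ['ACME', 'STARBUCKS']
```

Standardizing names this way at the point of entry is what makes later aggregation (total spend per supplier) trustworthy.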
2. You discover partway through an analysis that you don’t have the fields or data needed to answer a key question about your spend.
Nothing is more frustrating than realizing mid-workflow that you don’t have the fields, or enough data, to answer a key question about your spend. Plan your data collection around the types of insights you want to glean, and keep the data’s end use in mind while designing your collection practices. More specifically, consider what types of visuals, analyses, and insights you want from your spend. These end uses should dictate not only which fields are collected but also how they are collected, as well as your standards for data quality. An excellent example is the immediate (and rigorous) classification of spend data along a procurement-specific taxonomy. Classifying spend at the outset reduces the effort required to classify data manually after the fact, and lets your team understand how money is being spent with much less delay.
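As a toy illustration of classifying spend at the point of capture, the sketch below tags each line with a category as it is recorded. The keyword map and category names are hypothetical; a real procurement taxonomy (such as UNSPSC) would be far larger and typically maintained by a dedicated tool.

```python
# Hypothetical keyword-to-category map; a real taxonomy would be much larger.
TAXONOMY_KEYWORDS = {
    "laptop": "IT Hardware",
    "software license": "IT Software",
    "freight": "Logistics",
    "legal": "Professional Services",
}

def classify_line(description: str, default: str = "Unclassified") -> str:
    """Assign a spend category as the line is recorded, so no one has to
    re-classify thousands of rows retroactively."""
    text = description.lower()
    for keyword, category in TAXONOMY_KEYWORDS.items():
        if keyword in text:
            return category
    return default

line = {"vendor": "Dell", "amount": 1200.00, "description": "Laptop for new hire"}
line["category"] = classify_line(line["description"])
print(line["category"])  # IT Hardware
```

Lines that fall through to "Unclassified" give you a built-in work queue for taxonomy gaps, rather than a surprise at analysis time.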
3. No one can tell exactly what each field in your spend data contains.
Field names in and of themselves often do not provide sufficient information about the content of your spend data. This is easily alleviated by pairing metadata with your spend data. Metadata can include any information not directly contained in a data field: field descriptions, ID-field relationships (for relational databases), or value constraints. A complete understanding of the data is essential for accurately managing it, performing analyses, and sharing information among groups and individuals.
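One lightweight way to pair metadata with spend data is a data dictionary that records each field’s description, type, and whether it is required, and then validates incoming records against it. The field names and rules below are illustrative assumptions, not a standard schema.

```python
# Hypothetical data dictionary: metadata kept alongside the spend table.
SPEND_FIELD_METADATA = {
    "vendor_name": {
        "description": "Canonical supplier name, no store numbers",
        "type": str,
        "required": True,
    },
    "amount": {
        "description": "Invoice line amount in USD",
        "type": float,
        "required": True,
    },
    "business_unit": {
        "description": "Internal business-unit code",
        "type": str,
        "required": False,
    },
}

def validate_record(record: dict) -> list[str]:
    """Return a list of metadata violations for one spend record."""
    errors = []
    for field, meta in SPEND_FIELD_METADATA.items():
        if field not in record or record[field] is None:
            if meta["required"]:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(record[field], meta["type"]):
            errors.append(f"wrong type for {field}")
    return errors

print(validate_record({"vendor_name": "ACME", "amount": 99.5}))  # []
print(validate_record({"amount": "99.5"}))
```

Because the dictionary holds human-readable descriptions alongside machine-checkable rules, the same artifact serves both documentation and quality assurance.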
4. Team members do not understand why standard data collection practices and quality assurance are necessary.
Healthy data practices can be tedious and easy to ignore. Data illiteracy in a team that relies heavily on data insights can significantly decrease productivity. At the most basic level, every team member should understand how the data that they come into contact with will be used, and why its use is essential to the performance of your team. It can also be incredibly useful to familiarize all team members with the tools being used to handle data. Data literacy across your organization helps bridge the data disconnect between business goals and analytics.
5. Changes to your data are made with no record of what changed, who changed it, or why.
Most analyses involve manual or automated alteration of data. There are several key practices you can employ to ensure that these changes don’t corrupt your data or derail an analysis. First, store your master data in a central database instead of saving it as an Excel document or CSV file. Any manipulation of this master data should be tracked: record what changes were made, why they were made, who was responsible for them, and when they occurred. Also store backups, or retain the ability to undo changes made at any step of the data handling process. Valuable data should be treated as a precious commodity, with full accountability and visibility into how it is handled.
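The tracking and undo practices above can be sketched as a toy audited table: every update records the field changed, the old and new values, who made the change, why, and when, and edits can be rolled back in reverse order. The class and field names here are hypothetical, not a specific product’s API.

```python
import datetime

class AuditedTable:
    """Toy change log: every edit records what, why, who, and when,
    and can be undone in reverse order."""

    def __init__(self, rows):
        self.rows = rows   # e.g. {row_id: {field: value, ...}}
        self.log = []

    def update(self, row_id, field, new_value, who, why):
        old_value = self.rows[row_id][field]
        self.rows[row_id][field] = new_value
        self.log.append({
            "row_id": row_id, "field": field,
            "old": old_value, "new": new_value,
            "who": who, "why": why,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def undo_last(self):
        # Revert the most recent change using its logged old value
        entry = self.log.pop()
        self.rows[entry["row_id"]][entry["field"]] = entry["old"]

table = AuditedTable({1: {"vendor_name": "Starbucks #1234"}})
table.update(1, "vendor_name", "STARBUCKS", who="ajones", why="vendor cleanup")
print(table.rows[1]["vendor_name"])   # STARBUCKS
table.undo_last()
print(table.rows[1]["vendor_name"])   # Starbucks #1234
```

In practice a database with audit triggers or versioned tables does this for you; the point is that every change carries its own who, why, and when, so nothing is silently overwritten.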
Image Credit: Freedomz/Shutterstock.com