Ventana Research uses the term “data pantry” to describe a method of data storage (and the technology and process blueprint for its construction) created for a specific set of users and use cases in business-focused software. It’s a pantry because all the data one needs is readily available and easily accessible, with labels that are immediately recognized and understood by the users of the application. In tech speak, this means the semantic layer is optimized for the intended audience. It is stocked with data gathered from multiple sources and immediately available for analysis, forecasting, planning and reporting. This does away with the need for analysts to repeatedly extract, enrich or transform data from the required source systems, all but eliminating the substantial amount of time analysts and business users routinely spend on data preparation.
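To make the semantic-layer point concrete, here is a minimal sketch, assuming a hypothetical receivables table with the cryptic column names typical of a source system, of how a curated view can relabel that metadata into terms a finance analyst immediately recognizes:

```python
import sqlite3

# A minimal sketch of a semantic layer: the underlying table uses the
# cryptic names typical of a source system, while the view exposes labels
# the intended audience recognizes. Table and column names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t_ar_hdr (cust_no TEXT, inv_dt TEXT, amt_os REAL)")
con.execute("INSERT INTO t_ar_hdr VALUES ('C-1001', '2023-06-30', 12500.0)")

# The pantry's semantic layer: one curated view per audience, with
# business-friendly names instead of t_ar_hdr / amt_os.
con.execute("""
    CREATE VIEW accounts_receivable AS
    SELECT cust_no AS customer_id,
           inv_dt  AS invoice_date,
           amt_os  AS amount_outstanding
    FROM t_ar_hdr
""")

for row in con.execute("SELECT * FROM accounts_receivable"):
    print(row)  # ('C-1001', '2023-06-30', 12500.0)
```

The analyst works only with accounts_receivable and never needs to decipher t_ar_hdr or amt_os.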
Automating data transformation also makes it practical to expand the scope of available data by removing the time constraint. Freeing up time and broadening accessible sources of data enables analysts and other business users to focus on creating useful, complete and timely analyses, insights and reports. Many software vendors already have taken steps in this direction without expressly calling the objective a data pantry. Ventana Research asserts that by 2026, almost all business-focused software vendors will offer a data pantry to facilitate the integration of operational and financial data, making a broader set of actionable information available to support data-driven decision-making.
The data pantry is consistent with the concept of the data fabric, a design approach to data platform architecture that embraces the now-practical opportunity to marshal data from across an enterprise to serve a specific set of users and use cases.
The data pantry is different in concept and construction from other data stores, most significantly in optimizing the semantic layer for a specific set of users or business domain to reduce or eliminate the ambiguity that can exist with functional but cryptic metadata. The dataset is curated for the users, domain and use cases, making it easier to navigate and therefore readily available for analysts and others to do useful work. It’s not a warehouse where users in effect wander the aisles, looking floor to ceiling to spot what they need, or a lake that brings to mind the phrase “boil the ocean” to extract what’s needed. Data warehouses and data lakes have their uses, and today it’s not an either-or situation: all are necessary. Nor is a data pantry like older forms of more narrowly defined data stores, such as data marts or financial data warehouses. First, the scope of available data is broader, including operational and external data; second, the movement of data is programmed to flow directly from each single authoritative system of record to the point of consumption by analysts and business users.
Data for the pantry can be taken from one or more data sources that might exist in multiple forms, including, for example, SQL databases, Snowflake or a cloud-based streaming service. Ideally, data is collected from the original authoritative source — the system where the initial entry or record was made — to ensure fidelity and consistency of related datasets. For example, data would be extracted from the customer relationship management application or ERP system where a transaction or record was made, or from each of the multiple such systems an organization might have in use. Other common sources might include manufacturing execution systems, supply chain management or human capital management software, or any other application an organization uses that is relevant to the intended use cases. Data also may be taken from external sources such as third-party proprietary data providers and from data lakes, especially if they are the destination for streaming data from internal or external sources. In some cases, already aggregated sources may be the best choice for reasons of speed or economy. However, while expedient, that choice might cause consistency or other issues later.
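As a sketch of how varied that sourcing can be, the snippet below registers a handful of hypothetical sources in a simple catalog and flags which ones are authoritative systems of record; every name and connection string here is an assumption for illustration:

```python
from dataclasses import dataclass

# A minimal sketch of a source catalog for a data pantry. Each entry
# records where the data originates and whether that system is the
# authoritative system of record. All names and URIs are hypothetical.
@dataclass
class DataSource:
    name: str
    kind: str            # "sql", "cloud_warehouse", "stream", "external"
    uri: str
    system_of_record: bool

CATALOG = [
    DataSource("erp_transactions", "sql", "postgresql://erp-db/prod", True),
    DataSource("crm_accounts", "sql", "postgresql://crm-db/prod", True),
    DataSource("sales_history", "cloud_warehouse", "snowflake://acct/sales", False),
    DataSource("clickstream", "stream", "kafka://broker:9092/web-events", False),
    DataSource("market_rates", "external", "https://provider.example/api/rates", False),
]

# Prefer authoritative systems of record when the same data is available
# from more than one place.
authoritative = [s.name for s in CATALOG if s.system_of_record]
print(authoritative)  # ['erp_transactions', 'crm_accounts']
```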
The data transformation steps from source to pantry are conventional, with data from sources continually extracted, enriched and, where necessary, transformed to create a highly normalized set. Movement is orchestrated using application programming interfaces (APIs) or robotic process automation (RPA), never manually. This wave-the-magic-wand description is not intended to minimize the considerable work required and the complexities that might be encountered in making this happen, nor the ongoing maintenance required to stock a pantry for evolving business needs, including additions, changes and subtractions of data sources. However, this effort, measured in person-hours, pales in comparison to the time saved and the considerable increase in effectiveness achieved by eliminating the need for analysts to wrangle the data every time they need it.
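A bare-bones sketch of that extract-enrich-load movement follows; in practice the extract step would call the source system’s API, but here it returns sample records (with hypothetical fields, currencies and rates) so the sketch runs as written:

```python
import sqlite3

def extract():
    # Stand-in for an API call to the authoritative source system.
    return [
        {"cust": "C-1001", "amount": "12,500.00", "ccy": "USD"},
        {"cust": "C-1002", "amount": "9,800.00", "ccy": "EUR"},
    ]

def enrich(record, fx_rates):
    # Normalize the amount and add a common reporting currency so
    # analysts never have to repeat this conversion themselves.
    amount = float(record["amount"].replace(",", ""))
    record["amount"] = amount
    record["amount_usd"] = round(amount * fx_rates[record["ccy"]], 2)
    return record

def load(records, con):
    con.executemany(
        "INSERT INTO receivables VALUES (:cust, :amount, :ccy, :amount_usd)",
        records,
    )

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE receivables (cust TEXT, amount REAL, ccy TEXT, amount_usd REAL)")
fx = {"USD": 1.0, "EUR": 1.09}  # hypothetical rates from an external provider
load([enrich(r, fx) for r in extract()], con)
print(con.execute("SELECT * FROM receivables").fetchall())
```

The point of the sketch is the shape of the pipeline, not the particulars: once programmed, this movement runs continuously without anyone repeating the preparation by hand.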
A key feature of the data pantry is that it presents itself to the user as if it were a single, logical source. However, the data might be held in multiple types of data structures (such as SQL tables, a data cloud and a data lake) that are part of the business-focused application or platform. In this respect, the important distinction between the data pantry and other data architectures is that it is primarily designed and organized for the user of the data to facilitate analysis, forecasting, planning and reporting by eliminating the need to perform data preparation, data validation and data quality steps.
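One way to picture “a single, logical source” is as a thin facade that resolves a business-friendly dataset name to wherever the data physically lives; the stores and datasets below are hypothetical placeholders:

```python
# A minimal sketch of a "single logical source": the analyst asks for a
# dataset by its business name, and the facade resolves which physical
# store holds it. Store names and datasets are hypothetical.
PHYSICAL_LOCATION = {
    "accounts_receivable": ("sql_tables", "finance.accounts_receivable"),
    "demand_history": ("data_cloud", "SALES.DEMAND_HISTORY"),
    "web_events": ("data_lake", "s3://pantry/web_events/"),
}

def read_dataset(name):
    store, address = PHYSICAL_LOCATION[name]
    # Each branch would delegate to the appropriate client library;
    # here we simply report the routing decision.
    print(f"{name!r} -> {store} at {address}")

# The user never needs to know which structure holds the data.
for dataset in ("accounts_receivable", "demand_history", "web_events"):
    read_dataset(dataset)
```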
The data pantry addresses long-standing issues that routinely sap the productivity of finance and business analysts and other users of business data. Our Analytics and Data Benchmark Research finds that 69% of organizations say that preparing data is the most time-consuming part of their analytics process, time that comes at the expense of the analysis itself.
The data pantry is also essential for supporting artificial intelligence and machine learning (AI/ML) systems used for predictive and prescriptive analytics or for any sort of forecasting and planning based on time-series analysis. The goal of AI in this instance is to reduce bias and increase accuracy and consistency to support better business decisions. For these purposes, ML assists by identifying the most useful statistical relationships in a dataset for review. AI can perform this step faster and more accurately than most humans, and it must do so on a continuous basis so statistical models always reflect the most recent experiences and conditions.
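As a toy illustration of that machine-assisted screening, the sketch below ranks synthetic candidate drivers by the strength of their correlation with a target series so the strongest relationships surface first for review; a real system would use richer measures and run continuously, and all of the data here is fabricated:

```python
import numpy as np

# Toy sketch of machine-assisted screening: rank candidate drivers of a
# target series by strength of statistical association. Data is synthetic.
rng = np.random.default_rng(7)
n = 200
features = {
    "orders": rng.normal(size=n),
    "headcount": rng.normal(size=n),
    "web_traffic": rng.normal(size=n),
}
# Revenue is driven mostly by orders and a little by web traffic.
revenue = 3.0 * features["orders"] + 0.5 * features["web_traffic"] + rng.normal(size=n)

ranked = sorted(
    features.items(),
    key=lambda kv: abs(np.corrcoef(kv[1], revenue)[0, 1]),
    reverse=True,
)
for name, values in ranked:
    print(f"{name:12s} |r| = {abs(np.corrcoef(values, revenue)[0, 1]):.2f}")
```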
In an earlier Analyst Perspective, I described how a data pantry facilitates machine learning by making it possible to amalgamate a broad set of information from many different sources to create more realistic and statistically relevant models that produce more accurate forecasts to support planning and decision-making. For business management purposes, drawing on a combination of financial, operational and external data enhances the ability of an ML system to identify meaningful correlations. This helps prevent spurious models built on correlations that omit variables with significant explanatory power. A silly example of this is the apparent close connection between cheese consumption and civil engineering PhD awards.
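A small numeric sketch shows the omitted-variable trap: two synthetic series that share only a common upward trend appear strongly correlated, but the relationship all but vanishes once the trend, the omitted variable, is regressed out:

```python
import numpy as np

# Synthetic illustration of a spurious correlation: a shared upward
# trend makes two otherwise unrelated series look tightly linked.
rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
cheese = 0.5 * t + rng.normal(scale=2.0, size=100)   # trends upward
phds = 0.3 * t + rng.normal(scale=2.0, size=100)     # also trends upward

print("raw correlation:", round(np.corrcoef(cheese, phds)[0, 1], 2))  # ~0.98

# Control for the omitted variable (the trend) by regressing it out.
detrend = lambda y: y - np.polyval(np.polyfit(t, y, 1), t)
print("after detrending:", round(np.corrcoef(detrend(cheese), detrend(phds))[0, 1], 2))  # ~0
```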
Except in the case of relatively simple models, it is necessary to have a data pantry as part of an application or platform so that the large volume of operations necessary for ongoing training can be completed within time constraints. Software that uses ML must also monitor the health of the models it creates to ensure that when the quality of forecasts degrades past a certain point, repairs can be made or new models can be created. A data pantry architecture is necessary for this process of continuous assessment, because ongoing data calls from an application to a plethora of data sources would introduce latencies in ML processes that can undermine the quality of the predictions and forecasts.
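In outline, that health check can be as simple as the following sketch: keep a rolling window of forecast errors and flag the model for retraining when the average error drifts past a tolerance; the metric (MAPE) and the threshold are illustrative assumptions, not a prescribed design:

```python
from collections import deque

# A minimal sketch of model-health monitoring: a rolling window of
# forecast errors triggers retraining when average error drifts past a
# tolerance. The MAPE metric and 10% threshold are illustrative choices.
class ModelHealthMonitor:
    def __init__(self, window=30, mape_threshold=0.15):
        self.errors = deque(maxlen=window)
        self.threshold = mape_threshold

    def record(self, actual, forecast):
        self.errors.append(abs(actual - forecast) / abs(actual))

    def needs_retraining(self):
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough evidence yet
        return sum(self.errors) / len(self.errors) > self.threshold

monitor = ModelHealthMonitor(window=5, mape_threshold=0.10)
for actual, forecast in [(100, 95), (102, 90), (105, 88), (110, 92), (112, 90)]:
    monitor.record(actual, forecast)
print("retrain?", monitor.needs_retraining())  # True: average error ~14%
```

Because the pantry already holds current, prepared data, a check like this can run on every cycle without the latency of repeated calls back to the source systems.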
I expect that business application software vendors will increasingly offer data pantries as an integral part of their platform or application. The pantry has the potential to significantly transform the work that business analysts do, especially those in financial planning and analysis. It will be an essential ingredient in making AI/ML a useful reality in business software. Although the concept is simple, the devil is in the details. Organizations should begin to anticipate how best to utilize data pantries, because they have the potential to substantially change what’s possible. At the same time, they must be able to separate vendor claims that are real from those that are only sort-of real, to be certain that the software will deliver what’s promised.
Regards,
Robert Kugel