From the customer-facing processes of order management to the trailing end of the value chain with supplies and supplier management, supply chain operators face decisions at every step. No matter what decision you are facing across the entire supply chain, it can be supported by one or more measurements falling into one of six categories of attributes of "supply chains to admire": reliable, resilient, agile, cost-efficient, sustainable, and ethical.
Let’s illustrate the power of Domain Driven Design (DDD) and Object-Oriented (OO) frameworks to aid in achieving these attributes, focusing on one example: resiliency.
Capabilities of the Digital Supply Chain of the Future
As we establish the data alignment to supply chain business processes and KPIs, we'll touch on another concept that is becoming critical: feature engineering. We will discuss how the application of OO concepts, such as polymorphism and abstraction, is relevant to feature engineering capabilities across various supply chain business applications, making AI/ML capabilities ready to operationalize. Feature engineering for reusability will be critical as we expand the maturity of supply chain applications from the traditional rear-view perspective of Supply Chain Operations Reference (SCOR) model-based KPIs to the types of advanced supply chain digital applications and capabilities covered in figure 2.
The traditional definition of resiliency is the ability to handle variability and uncertainty in the supply chain while still meeting a responsiveness measurement (e.g., order cycle time). The traditional responsiveness measurement was a single-dimensional measure. But as we start dissecting the contributing factors to that measurement, we find an application of probability and risk theory in almost every aspect of the supply chain, particularly when we consider the lessons learned in the COVID pandemic. The main business processes influencing the resiliency metrics are sourcing, manufacturing, logistics, and distribution.
Let’s dig deeper into the distribution function’s contribution to the resiliency measurements and evaluate how DDD and OO principles can be deployed.
The data needed to assess resiliency include customer orders, supply chain capacity, inventory, and cash flow. Each of these serves multiple business analytic capabilities; the most critical of the emerging needs, though, is scenario planning. Scenario planning allows us to anticipate and mitigate risks across the supply chain in all dimensions (i.e., capital, labor, capability, capacity, macroeconomics, geopolitics, and demand).
Analyzing the business process through a data lens (right to left) in the picture above clarifies the need for a DDD/OO-based design plan. Beyond what is indicated in the picture, some of the other necessary applications supported by order data are: Dynamic Pricing, New Product Introduction, Distribution Capacity Analysis, Expense Optimization, Freight Reporting and Analysis, Logistics Network Optimization, Multi-Channel Customer Demand with Product & Service Delivery coordination, Omni-Channel Forecasting and Order Analysis, etc.
Each of these use cases might vary slightly in scope, granularity, or the specific calculations applied to the order information. These subsets could vary by order type, time horizon, the nature of the product (make-to-order, make-to-stock, stock-to-order, etc.), pricing conditions, and customer type. Supporting a separate, custom data feed of order data for every use case outlined above would be an expensive exercise. But by explicitly planning for the reuse of a converged data pipeline for order information, and by exercising DDD/OO principles, we can treat Order as a base object class that can be reused extensively. Here are two examples:
- Abstraction – an Order class that always represents the shippable demand, or an Order class that represents all orders due for shipment in the calendar month. This allows you to hide the complexity of combining order headers, details, line-item dates, etc. Abstraction is best applied to use cases such as dynamic pricing and logistics network optimization.
- Inheritance and Polymorphism – the best application of inheritance is scenario planning. Scenario planning may need multiple features/variables exposed which, when layered on top of the base variables, make a robust application of inheritance. Extending this capability to hundreds of interdependent data points minimizes complexity and increases the ability to expand the scenario-modeling capability.
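The two principles above can be sketched in a few lines. This is an illustrative sketch, not a reference design: the class and field names (Order, ShippableOrder, ScenarioOrder, demand_multiplier) are assumptions chosen for demonstration.

```python
# Illustrative sketch of Order as a reusable base object class.
# Class and field names are hypothetical, chosen only to demonstrate
# abstraction, inheritance, and polymorphism on order data.
from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    """Base object: hides the complexity of order headers, details,
    and line-item dates behind a single, simple interface."""
    order_id: str
    order_date: date
    quantity: int
    unit_price: float

    def value(self) -> float:
        return self.quantity * self.unit_price

@dataclass
class ShippableOrder(Order):
    """Abstraction: exposes only shippable demand, e.g. for dynamic
    pricing or logistics network optimization."""
    ship_date: date = None

    def is_shippable(self, as_of: date) -> bool:
        return self.ship_date is not None and self.ship_date >= as_of

@dataclass
class ScenarioOrder(Order):
    """Inheritance + polymorphism: extends the base object with a
    scenario variable (a demand-shock multiplier) while keeping the
    same interface as every other Order."""
    demand_multiplier: float = 1.0

    def value(self) -> float:  # overrides the base calculation
        return super().value() * self.demand_multiplier

orders = [
    Order("A1", date(2022, 1, 5), 10, 4.0),
    ScenarioOrder("A2", date(2022, 1, 6), 10, 4.0, demand_multiplier=1.5),
]
# Polymorphic call: each object applies its own value() calculation,
# so scenario logic rides along without changing downstream code.
total = sum(o.value() for o in orders)
print(total)  # 40.0 + 60.0 = 100.0
```

Downstream consumers (pricing, network optimization, scenario planning) all iterate over the same Order interface; only the subclass changes, which is what makes the converged pipeline reusable.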
A comprehensive analysis of the application of data points to data and analytics use cases (supply chain and beyond) across manufacturing and CPG organizations reveals this: the most important "transaction" objects correspond to applications in over 50 analytical use cases. These objects include orders, GL transactions, goods movement, production/procurement orders, and customer interactions. A successful deployment of a DDD/OO-based model can radically simplify the data pipelines and products that provide data to downstream and upstream applications. This eliminates the enormous technical debt, redundant governance, and maintenance overhead inherent in the pipeline jungle of overlapping data feeds that you see in the "typical supply chain data ecosystem."
Building on the converged object model concept, the use of feature engineering is more critical than ever. As artificial intelligence and machine learning (AI/ML) applications become increasingly data-hungry, a DDD/OO architecture enables the build-and-reuse of pre-populated feature stores with relevant, pre-calculated features based on techniques such as:
- Outlier management – anticipating and eliminating outliers. Most can be handled with common statistical techniques like standard deviation, tiering, and percentiles. Exercising outlier management on the exposed objects from the previous section yields robust data on which a scenario-planning model can then be trained.
- Imputation – handling missing data and missing critical values through numerical and categorical imputation. These algorithms are quite predictable, and the data-readiness effort can be integrated into the object representation for some use cases.
- Transformation and standardization – vast resources and effort are expended enabling ML toolkits to generate intelligence. Capabilities that bring predictable structures with a finite set of possible values for reference data enable an effective ML capability.
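The three techniques above fit together into a short pre-processing pipeline. Here is a minimal sketch using only the standard library; the 2-standard-deviation outlier rule and the mean-imputation choice are assumptions for illustration, and the input values are made up.

```python
# Minimal sketch of the three feature-engineering techniques above.
# Thresholds, imputation strategy, and data values are illustrative.
import statistics

raw = [12.0, 11.5, None, 12.3, 95.0, 11.8, None, 12.1]  # e.g. daily order counts

# 1. Imputation: fill missing values with the mean of the observed ones.
observed = [x for x in raw if x is not None]
mean_obs = statistics.mean(observed)
imputed = [x if x is not None else mean_obs for x in raw]

# 2. Outlier management: drop points more than 2 standard deviations
#    from the mean (a common, simple statistical rule).
mu, sigma = statistics.mean(imputed), statistics.stdev(imputed)
cleaned = [x for x in imputed if abs(x - mu) <= 2 * sigma]

# 3. Standardization: z-score the cleaned series so ML toolkits see a
#    predictable, comparable structure.
mu_c, sigma_c = statistics.mean(cleaned), statistics.stdev(cleaned)
features = [(x - mu_c) / sigma_c for x in cleaned]
print(features)
```

Pre-computing steps like these once, against the converged Order objects, is what allows a feature store to serve many AI/ML use cases instead of each team re-implementing its own cleanup.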
The DDD/OO techniques we’ve described, done correctly, provide the first foundational steps in building a more resilient, cost-effective, and efficient data and analytics decision-support capability. This will be enormously valuable in a post-pandemic supply chain.
In the next couple of posts in this series, my colleague Cheryl Wiebe will explore how to construct a domain-specific data hub to implement and support the more resilient data-driven digital supply chain of the future.