Getting Data Basics Right

Why do we care about basic business data management?

We care about basic business data management for two important reasons. Firstly, information is nowadays considered the fourth main production factor, next to materials, labor and finance. Through the ages, most companies have learned how to manage the original three production factors, but many still struggle to manage their data properly. Badly managed data leads to bad information, which leads to bad decisions and, in turn, to sub-optimal business. Secondly, basic business data management lays the foundation for the digital transformation of any company. Data science, machine learning, artificial intelligence, blockchain, the internet of things and the like are some of today’s most hyped technologies, and new applications are being discovered as we speak. This has not gone unnoticed by Supply Chain Management professionals, who aim to put these technologies to use in their business. Yet doing so requires proper data management, which remains a struggle for many.

Data is often inaccurate, incomplete, or inconsistent. We still see essential business data being shared via mechanisms like USB sticks or email. Therefore, Data2Move invited its partners and professor Paul Grefen to discuss current company practices and how ‘basic’ data management can be improved as the first step towards proper data-driven business management and advanced data analytics. As the partners testified in a poll during our event, there is a lot to gain from proper data management.

How can we improve our data management?

In practice, data quality problems often occur as a result of decentralized and disconnected data. The Logistics department may record order prices excluding taxes, while Marketing stores order prices including tax, resulting in poor conformity. An operations manager may take a USB stick with HR data home, resulting in security vulnerabilities. Sales may only send updates once per week, giving rise to longer lead times as a result of poor timeliness. Conformity, security, and timeliness are just three of the common types of data quality problems.
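Conformity problems like the tax example above can be caught with simple automated checks. The sketch below is a minimal, hypothetical illustration (order IDs, prices, and the 21% VAT rate are all made up for the example): it flags orders where the tax-inclusive price recorded by one department does not match the tax-exclusive price recorded by another.

```python
# Hypothetical illustration: the same orders priced by two departments.
# Logistics records prices excluding VAT, Marketing including VAT.
VAT_RATE = 0.21  # assumed tax rate for this sketch

logistics_prices = {"ORD-001": 100.00, "ORD-002": 250.00}  # excl. tax
marketing_prices = {"ORD-001": 121.00, "ORD-002": 250.00}  # should be incl. tax

def check_conformity(excl, incl, vat=VAT_RATE, tol=0.01):
    """Flag orders where the tax-inclusive price does not match
    the tax-exclusive price times (1 + VAT)."""
    issues = []
    for order, price_excl in excl.items():
        expected_incl = round(price_excl * (1 + vat), 2)
        if abs(incl.get(order, 0.0) - expected_incl) > tol:
            issues.append(order)
    return issues

print(check_conformity(logistics_prices, marketing_prices))  # ['ORD-002']
```

Running such checks regularly, rather than once, is what turns them from a one-off cleanup into actual data quality management.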

Data quality problems can be improved by having one centralized enterprise database, in which the rights and responsibilities of each department are clearly defined. We can distinguish two components within this database: the data store and the data warehouse. The data store contains low-level data that can be used for operational decision making, for example the number of orders due this week. In the data warehouse, filtered and aggregated data is stored, on the basis of which more high-level management information can be generated.
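The distinction between the two components can be illustrated with a small sketch (the order records and schema are invented for the example): the data store holds one record per order, and a warehouse-style view aggregates those records per week for management reporting.

```python
from collections import defaultdict
from datetime import date

# Operational data store: one record per order (hypothetical schema).
data_store = [
    {"order_id": 1, "due": date(2024, 5, 6), "department": "Logistics", "value": 120.0},
    {"order_id": 2, "due": date(2024, 5, 8), "department": "Logistics", "value": 80.0},
    {"order_id": 3, "due": date(2024, 5, 7), "department": "Sales", "value": 200.0},
]

# Data-warehouse view: filtered and aggregated per ISO week,
# suitable for high-level management reporting.
def orders_per_week(records):
    summary = defaultdict(lambda: {"orders": 0, "total_value": 0.0})
    for rec in records:
        week = rec["due"].isocalendar()[1]
        summary[week]["orders"] += 1
        summary[week]["total_value"] += rec["value"]
    return dict(summary)

print(orders_per_week(data_store))  # {19: {'orders': 3, 'total_value': 400.0}}
```

An operational question ("which orders are due this week?") queries the detailed records; a management question ("how is order volume trending?") queries the aggregated view.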

What can you do now?

Although good data management is not rocket science, it does require effort and time. When data quality is not in order, data analytics cannot help us to make better decisions. A solid database management technology is key to guarantee a minimum level of data quality. There is no single solution that works for all companies: you need to be aware of the decisions being taken in your company and which information can help to improve decision making. Operational decision-making needs much more low-level information compared to decision-making at the tactical/strategic level, and you may thus need different solutions at different levels. To get you started, here is a checklist that can help you to get an overview of data management problems in your company:

Finally, it is good practice to appoint a data manager (preferably not an IT-only expert, but someone with business knowledge) to prioritize and design your company’s plan towards proper data management. Welcome to the Chief Data Officer!

Interview Hilti: Federico Scotti di Uccio and Lisa van Lierop

When Lisa van Lierop started her internship at Hilti, she already had a good understanding of Materials Management and Multi Echelon Inventory Optimization. To Federico Scotti di Uccio, Lisa and Hilti were a match made in heaven: “Lisa immediately expressed an enthusiasm to dive deep into Multi Echelon Inventory Optimization. This topic is of great interest to us at Hilti Logistics. On top of that, Lisa demonstrated a tenacity to collect and compute large amounts of data and information that is sometimes challenging to obtain because of the complexity of processes and stakeholders involved.”  

Inventory management is an important area of the supply chain and of Hilti’s business itself. As Federico explains, Lisa’s research is really beneficial to the company: “The right size/ volume and positioning of inventory means that we can better serve our customers in the most efficient manner. Too much inventory has an impact on working capital and costs, too little has an impact on service. It is crucial to have the right methodology and understanding of how the network and product characteristics influence the optimal result.”   

In the research Lisa conducted for Hilti, access to good data is vital. It forms the basis of all analysis and modelling. Without it, it is not possible to prove any theory or concepts and to operationalize them. Acquiring and simultaneously computing large amounts of data is however one of the main challenges companies generally face in this context. That is why it is important that talented students get the opportunity to prove themselves at companies such as Hilti. They have up-to-date knowledge that can really make a difference in solving these complex problems.

The past few years, the TU/e has proven to be an excellent talent pool for Hilti to fish in. And as far as Federico is concerned, Lisa certainly is not the last intern they will hire. “We have a strong collaboration with TU/e and we will continue hosting students depending on our priorities and availability. These collaborations are a good opportunity for the student and the university to combine theory and practice. But it is not only that. For us at Hilti, it is an opportunity to really investigate some important and prioritized topics. An internship also gives the student a chance to experience the corporate world for a few months. And in some cases, a successful internship can mark the beginning of a promising career here at Hilti.”

And much to the delight of all the parties involved, this is exactly what happened to Lisa after she graduated. Federico recalls: “During her internship, Lisa not only demonstrated a good understanding of her field and the Hilti Supply Chain, but she also integrated very well into our  corporate culture, her team and the different stakeholders she had to deal with. Her energy and willingness to learn made her go the extra mile with her project. That is why we were pleased to welcome her into our company.”   

And Lisa is also pleased. “My first contact with Hilti was during an event organized by ESCF. The international environment and the interesting supply chain triggered my interest for the company. During my master thesis, I got a chance to explore life at Hilti, and it convinced me that I wanted to continue my career with this company. Currently, I am working as a Global Materials Manager at Hilti in Liechtenstein.” 

According to Federico, Data2Move played a large part in the success they had with the recruitment of interns. “Data2move was an excellent support in recruiting talented students. The Data2move community enables us to stay in touch with the academic world and gives us the opportunity to work with driven and enthusiastic students full of knowledge. During the internship, Data2Move is a big support and that contributes to the development of the student and the success of the project.”

Data2Move Success Stories: Office Depot

For Laurens Kauffeld, it was a no-brainer to recruit Master student Stan Brugmans for their Multi Echelon Optimization project. “Anne (Recruitment and Talent Sourcer at Office Depot) and myself interviewed a couple of students and unanimously chose Stan. During his interview, Stan came across as professional and well prepared. He was able to explain why he chose our project and showed genuine interest in Office Depot.” An added bonus was that Laurens believed Stan would fit in nicely with the team.
Stan proved to be quite a catch for the company and the team at Office Depot felt very privileged to support him during his Master thesis research. During his studies Stan acquired excellent analytical and statistical skills. According to Laurens, these skills are critical for any supply chain optimization project. In addition, Stan also showed great commitment, dedication and work ethic. “Besides his theoretical qualities, Stan’s personality contributed enormously to the results and success of his project. Stan has a ‘can-do’ mentality, he works hard and always aims for the optimal solution.”

Stan started the Multi Echelon Optimization project with a great deal of enthusiasm. He dug right in and invested time to grind through all the data complexities. He also analyzed numerous approaches to calculate safety stock levels. This research was not only necessary, but also very beneficial to Office Depot. Office Depot’s Smart Choice product range is sourced in a Multi Echelon Supply Chain. By further optimizing their supply chain, they were able to continuously offer the best value to their customers.
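To give a flavor of the kind of calculations such research involves: a common textbook approach sets safety stock from a target service level and the variability of demand and lead time. This sketch is a generic illustration, not Stan's actual method or Office Depot's numbers.

```python
import math
from statistics import NormalDist

def safety_stock(service_level, sigma_demand, lead_time,
                 sigma_lead_time=0.0, mean_demand=0.0):
    """Textbook safety stock under normally distributed demand:
    SS = z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2)."""
    z = NormalDist().inv_cdf(service_level)  # safety factor for the target level
    return z * math.sqrt(lead_time * sigma_demand ** 2
                         + mean_demand ** 2 * sigma_lead_time ** 2)

# 95% cycle service level, daily demand std. dev. of 20 units, 4-day lead time:
print(round(safety_stock(0.95, 20, 4)))  # z ~ 1.645 -> about 66 units
```

In a multi-echelon setting the challenge is that such calculations at one stage interact with the stages upstream and downstream, which is exactly what makes the topic demanding.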

According to Laurens, Stan’s contribution to the research was vital. “Multi Echelon is a very challenging topic that requires dedicated time from an analyst. Stan’s research improved data quality, initiated further collaboration between the countries and gave us a direction in how to further optimize our Multi Echelon Supply Chain. These are three key factors if you continuously want to improve your business.”

Because the Multi Echelon project was a hundred percent about using data to optimize service levels, it comes as no surprise that data played an important role in solving the problem. As Laurens explains: “With data we are able to model, optimize and simulate the real world without waiting months for the results and risking bad performance. To get the right stock in the right place at the right time, we need to understand our customer demand distribution and supplier lead time performance.” Since this information is often hidden in big data sets, Office Depot needed Stan’s specialist knowledge and analytical skills.

In the end, Stan and Laurens both look back on Stan’s research period at Office Depot as a successful and mutually beneficial collaboration. And it is still continuing. Much to Stan’s delight, a vacancy opened up within the department at the end of his internship and he is now working as a supply and demand planner. “When I applied for the Multi Echelon project, I already considered Office Depot as a potential employer because of their international character. During my Master thesis it became clear that I wanted to stay at Office Depot. They gave me lots of learning opportunities, invested their time and commitment and showed me that they really value my work.”

Office Depot continues to develop and maintain high service and value in an increasingly competitive market. Data2Move plays an important part in keeping up these high standards for Office Depot. Thanks to the positive experience they had with Stan, Laurens is very open to other Data2Move projects in the future. “Data2Move helped us find interesting projects and recruit talented students. They enable Office Depot to get access to state-of-the-art and up-to-date academic knowledge, and the student projects give us the opportunity to work with driven and enthusiastic students. The support Data2Move showed during Stan’s thesis contributed to his development and the overall success of the project.”

Data2Move Research Stories: how to optimize the rail fleet composition using GPS-data?

Data science can really help to solve complex supply chain management challenges. In our Data2Move Research Stories, you can find out how our students tackle these challenges. This time we feature Bart Pierey’s Master thesis research at SABIC in Sittard. SABIC is a Saudi manufacturing company, active in petrochemicals, chemicals, industrial polymers, fertilizers and metals.

Where it started – challenge
With his research, Pierey wanted to optimize the rail fleet composition at SABIC based on, among other things, GPS data. This is because the rail yard close to one of SABIC’s production sites has limited capacity. Several companies share part of this yard, and available spaces are assigned on a ‘first come, first served’ basis. Consequently, the parking yard may be fully occupied when a new train arrives. In that case, the arriving train is rejected at the gate, which leads to operational problems. On the other hand, the fleet cannot be reduced too much, because a high availability of rail cars is needed to prevent production scale-downs.

Data validation and preparation
During his research, Pierey discovered that the quality of the GPS data, gathered from the fleet management system, was not optimal. He executed a data preparation and validation project which led to more accurate data. As Pierey states: “The initial data was not good enough but after the preparation and validation phase it was considered to be sufficient.”

Simulation model
In his Master thesis, Pierey explains the steps he took to tackle the rail freight car fleet problems SABIC runs into. To get an understanding of the issue at hand and the characteristics of the system, he interviewed several stakeholders. “Planners and business both had their interests, so I tried to solve the puzzle for SABIC, based mainly on GPS data,” Pierey explains. He developed a discrete event simulation model with stochastic holding and travel times and presented the final results to several business managers within the company. This model has not only been used to improve the general understanding of fleet behavior, but also to find an optimal fleet size which minimizes the utilization of parking space in the shared parking area.

Near-optimal fleet size
Besides determining the optimal fleet size and composition, Pierey also took the impact of the holding time and travel time parameters into account. Pierey: “The holding time, the time rail cars stay at the customer’s production facility, has a major influence on the optimal fleet size. Travel time only has limited influence. That is why I advise focusing on decreasing the holding times at customers.” Pierey concluded that SABIC’s current fleet size is near-optimal but their fleet composition should be adjusted.
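A back-of-envelope calculation shows why holding time dominates. By Little's law, the number of rail cars in circulation equals the shipment rate times the cycle time (travel out, holding at the customer, travel back). The numbers below are illustrative, not SABIC's, and this is a deliberately simplified deterministic view of what Pierey modelled stochastically.

```python
# Little's-law fleet sizing sketch: fleet = shipment rate x cycle time.
def fleet_size(shipments_per_day, travel_days, holding_days):
    cycle_time = 2 * travel_days + holding_days  # out + holding + back
    return shipments_per_day * cycle_time

base = fleet_size(shipments_per_day=10, travel_days=2, holding_days=6)
less_holding = fleet_size(10, 2, 3)  # halve the holding time
less_travel = fleet_size(10, 1, 6)   # halve the travel time

print(base, less_holding, less_travel)  # 100 70 80
```

With these (made-up) numbers, halving holding time frees 30 rail cars while halving travel time frees only 20, illustrating why reducing holding times at customers is the bigger lever.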

Shared parking yard planning
To the users of the rail yard, Pierey recommended sharing the multi-use parking space forecasts. “At present, the forecasts on the usage of space in the multi-use parking area are not shared between the various site users because these are confidential. If they are shared, the arrival and departure planning could be adjusted, resulting in fewer problems in this multi-use area. Therefore, I recommend open conversation and sharing these forecasts.”  

Modal shift
The train and rail car planning could also benefit from a modal shift, using other transport modes than just trains. As Pierey explains: “Another solution to decrease the safety stock at a company’s yard is to replace the train with another transport mode to fill the peaks in transport demand.”

Lessons learned from Pierey’s study:

  • Simulation can be of great value in understanding a system’s behavior. In addition, a simulation model can be used for testing several scenarios and policy adjustments.
  • Make sure that the data you enter in a system is correct. Data cleaning and validation actions are very important, as garbage in results in garbage out.

To make sure that data quality is guaranteed, it is key to properly maintain data collecting systems. Appointing a system owner will help, as this person will be responsible for the system and its functioning within the organization.

Data2Move Research Stories: When does multi-echelon inventory control pay off?

Data science helps to solve complex supply chain management challenges. In our Data2Move Research Stories, you can find out how our students tackle these challenges. This time, we feature Lisa van Lierop’s Master thesis research at Hilti AG in Liechtenstein. Hilti AG is a multinational company that develops, manufactures, and markets products and services for the construction, building maintenance and energy sector.

Where it started – challenge
When inventory is optimized locally, inventory control is often based on a single-echelon approach. But a single-echelon approach might not be optimal from a broader supply chain perspective. In order to optimize stock levels over multiple supply chain stages and still ensure a high service level to end customers, Van Lierop focused on the potential of multi-echelon inventory control. More precisely, she studied the potential of centralized inventory control under different settings. This should help to achieve optimized stock levels throughout the entire Hilti network, while at the same time ensuring a high service level to end customers.

Van Lierop addresses this challenge in her Master thesis ‘Quantifying the benefits of multi-echelon inventory control’.

Technical and organizational challenges
“Changing to a multi-echelon control policy is not easy”, Van Lierop states. “The main technical challenge is that you need to collect, and simultaneously process, a large amount of data. You also have to consider that in most supply chains, data needs to be exchanged between different firms.” On top of these technical challenges, organizational challenges pop up. When you optimize your inventory throughout the whole chain, local control is no longer necessary. Inventory should be managed centrally. However, this change requires someone, or some department, to take responsibility for the centralized inventory control. Another organizational challenge regards benefit sharing: ‘How are the benefits of the new inventory control approach shared or divided between the different supply chain stages?’

Multinationals such as Hilti AG that look for answers to these important questions might benefit from software tools to support them. Still, considering the complex inventory landscape and organizational responsibilities, according to Van Lierop “it is no surprise that published real-world applications of multi-echelon inventory control are scarce.”

‘When does a multi-echelon inventory control policy pay off?’
Before a company decides to switch to a multi-echelon approach, they need to have a good indication of the potential for their product portfolio, according to Van Lierop. For some products, the benefits of a multi-echelon approach might be higher than for others. That is why the aim of Van Lierop’s research was to identify the product/supply chain characteristics for which a multi-echelon inventory control policy pays off. More concretely, she studied scenarios based on combinations of the following five dimensions: demand, demand variability, lead-time, lead-time variability and holding costs.

Simulation model
Van Lierop used a software program designed by Prof. Dr. Ton de Kok (ChainScope) for the multi-echelon safety stock optimization. By using simulation, she was able to model and compare different safety stock procedures and multi-echelon distribution networks. In her research approach, Van Lierop also used an MRP-based replenishment policy with forecasted demand, because many companies replenish their stock based on forecasts.

Results revealed that the benefits of a multi-echelon inventory control approach are the highest for items with a high lead-time to the first location in the distribution network, a high demand rate and high inventory holding costs. Van Lierop: “Furthermore, the savings for low demand, low cost items were relatively low. Especially when the first location in the distribution network can be quickly resupplied. So when companies consider a pilot for a centralized safety stock procedure in their distribution networks, they should focus on ‘high-potential’ items first.”

Data2Move Research Stories: H&S

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time we feature Rijk van der Meulen’s master thesis research at H&S Group, an international and intermodal operating Logistics Service Provider in the liquid foodstuff industry.  

Where it started – challenges 
H&S Group asked Van der Meulen to focus on two operational challenges faced by many intermodal operating Logistics Service Providers:

– The efficient repositioning of empty tank containers

– Proactive planning of their drayage operations

Van der Meulen addressed these challenges in his thesis ‘Forecasting the required tank container and trucking capacity for an intermodal Logistics Service’. His research explores how you can predict demand more accurately and how these demand predictions facilitate better operational planning.

Insight into tank containers and trucking units per location and time
To tackle these challenges, it was important to extract valuable information from data. Van der Meulen needed insight into the expected number of loadings and deliveries. Also, he needed the corresponding requirements of trucking and tank container capacity. By combining these key aspects, he defined how many tank containers and trucking units are needed in a certain planning region at a given time.

Dynamic demand prediction
The truly innovative character of Van der Meulen’s prediction methodology lies in the dynamic updating of the predictions of loadings and deliveries. He used a Bayesian technique to dynamically adjust the initial prediction based on new orders as they enter the system. This ‘advance demand information’ represents future demand that is already known in the present. It ensures that planners have access to the most up-to-date and accurate loading and delivery predictions at any time.
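One simple way to operationalize this idea is shown below. It is a deliberately simplified sketch, not Van der Meulen's actual Bayesian model: the predicted total for a future day is the orders already booked (the advance demand information) plus an estimate of what is still to come, based on the fraction of final demand that is typically booked this far in advance. All names and numbers are illustrative.

```python
# Simplified advance-demand-information update (illustrative, not the
# thesis' exact model): total demand for a future day = orders already
# booked + an estimate of the orders still to come.
def updated_forecast(initial_forecast, booked_orders, booked_share):
    """Combine advance demand information with the initial forecast.

    booked_share: fraction of final demand typically already booked
    at this point in time (0 < booked_share <= 1).
    """
    still_to_come = (1 - booked_share) * initial_forecast
    return booked_orders + still_to_come

# 3 days ahead: initial forecast 40 loadings, 12 already booked, and
# historically 25% of demand is booked by this point.
print(updated_forecast(40, 12, 0.25))  # 12 + 0.75 * 40 = 42.0
```

As the day approaches, `booked_share` rises towards 1, so the prediction relies less on the initial forecast and more on actual orders, which is what keeps it up to date.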

In his next step, Van der Meulen used the adjusted forecast to predict the required tank container and trucking capacity. He relied on multiple additional models based on the hierarchical top-down forecast approach and multiple linear regression to assess the effectiveness of the complete forecasting methodology. Its accuracy was put to the test during a one-month test case for two planning regions.

The one-month test case showed that the dynamic prediction method increased the accuracy of the initial forecast by 65 percent. Using cost simulations, Van der Meulen estimates that this improved prediction accuracy can lead to a 5.2 percent reduction of the total costs associated with trucking operations. That’s a big step towards achieving the operational excellence necessary to survive in the low-margin industry of intermodal logistic service providers. Van der Meulen’s research strengthened H&S in their conviction that forecasting plays a vital role in addressing the challenges of empty tank container repositioning and drayage operations planning.

Spin-off project – implementation
As a result of these findings, H&S started a joint forecasting implementation project with Logistics Service Provider Den Hartogh and data science consultancy firm CQM. The goal of this collaboration is to implement the dynamic prediction methodology of Van der Meulen’s research and integrate the implementation with the planning software at H&S and Den Hartogh. This allows both companies to plan their trucking and container operations better and achieve significant cost reductions while maintaining the same service to their customers.

Data2Move Research Stories: the VMI effect within Heineken’s Dutch supply chain

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you’ll find out how students have managed to answer them. This time: Rolf van der Plas and his master thesis on the Vendor Managed Inventory effect within the Dutch supply chain of Heineken.

Where it started

The global beer market is consolidating, with fewer local and more global brewing companies. The remaining players engage in hyper-competition to innovate and differentiate themselves from each other, leveraging their scale to increase operational excellence.

As part of Heineken’s drive to improve operational excellence, a new enterprise resource planning system will be introduced. It has an optional module that supports collaboration based on the Vendor Managed Inventory (VMI) framework. Heineken has been exploring VMI collaboration with a number of customers. Rolf van der Plas aimed to validate the effectiveness of VMI for Heineken and their customers, like retailers, by quantifying the effect on supply chain performance. He investigated:

  • Heineken’s transport utilization
  • Stock levels in the distribution centers (DCs) of the customer
  • Out-of-Stock performance in the customer DCs

Rolf selected three techniques to investigate VMI collaboration:

  • Data analytics to analyze the current effect
  • A simulation model to redesign the VMI process
  • A simulation-based searching (SBS) model to enhance the parameter settings used in the VMI designs

Findings: a win-win situation

The current VMI collaboration at Heineken results in 15% higher transport utilization compared to deliveries to DCs of customers without VMI. A new variance-based VMI design with enhanced parameter settings results in even higher supply chain performance for Heineken and the customer compared to the current VMI implementation.

The benefits? A 7% higher truck utilization, meaning trucks are used more efficiently and transport costs go down. Also, a 70% reduction of the average stock levels in the customer DCs, while maintaining a 0% Out-of-Stock performance. The main finding: both supply chain partners benefit.

Further advice

In conclusion, the VMI collaboration is beneficial both for suppliers and for customers. Rolf’s study shows that his SBS model is an adequate method to consistently identify robust settings that enhance supply chain performance. Rolf endorses using the model for future challenges, and he recommends setting up (more) VMI collaborations with your supply chain partners as a way to improve your supply chain performance.

Also check out our previous research stories