Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time: Dalí Ploegmakers’ master thesis on using carrier event log data to improve the last mile at Hilti.
As a manufacturer of high-quality tools and materials for the construction world, Hilti aims to provide customers with the highest service level. In reaching that ambition, reliable last-mile delivery is vital. Ploegmakers’ thesis revolves around using state-of-the-art Process Mining techniques to improve the last mile and help Hilti live up to its customers’ expectations.
Where it started
An increasing number of international shipping companies, Hilti included, outsource their last-mile delivery process to independent carriers. In doing so, shippers lose a degree of control over this vital process. At the same time, new technological developments such as track-and-trace produce granular data on each event in the delivery process, providing more transparency. Dalí Ploegmakers was therefore tasked with finding out how Hilti could use this data to improve its last mile.
To find opportunities for improving the delivery process, Ploegmakers drew on the data science literature and applied Process Mining techniques. He transformed the raw event data into an interactive and easily interpretable process model, which can verify whether parcels arrive at the customer on time and pinpoint high-risk areas for delayed parcel delivery.
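As an illustration of the kind of check such a process model supports, here is a minimal sketch that flags parcels delivered after the promised time from a carrier event log. All parcel IDs, event names and timestamps are hypothetical, not Hilti’s actual data:

```python
from datetime import datetime

# Hypothetical carrier event log: (parcel_id, event, timestamp) scan records.
events = [
    ("P1", "depot_out", "2020-03-02 07:10"),
    ("P1", "delivered", "2020-03-02 11:45"),
    ("P2", "depot_out", "2020-03-02 07:20"),
    ("P2", "delivered", "2020-03-03 16:05"),
]
promised = {"P1": "2020-03-02 17:00", "P2": "2020-03-02 17:00"}

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# The last 'delivered' event per parcel is the actual delivery time.
delivered = {pid: parse(ts) for pid, ev, ts in events if ev == "delivered"}
late = {pid: delivered[pid] > parse(promised[pid]) for pid in delivered}
print(late)  # {'P1': False, 'P2': True}
```

Aggregating such flags per region or per carrier is what turns raw scans into the “high-risk areas” the thesis identifies.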
In addition to providing better customer service, the increased transparency also improves the relationship between Hilti and the carriers, as testified by Ploegmakers:
“Creating transparency within the carrier network allows us to have a factual and constructive discussion with the carrier on improving the last-mile delivery process.”
The research thus highlights the importance of transparency between Hilti and its carriers – increased transparency leads to both better customer service and better shipper-carrier relations.
Conclusion and next steps
Ploegmakers concludes that creating transparency in the carriers’ process and setting up a notification system are two key improvements that could enhance the external last-mile delivery process. Moreover, he expects that applying Process Mining will uncover further opportunities, once again confirming the importance of having accurate and complete datasets available.
Insights from the Data-Driven Inventory event of 27/10/2020
The current pandemic has put all firms in a challenging position: unpredictable customer demand and disrupted supply chains put extra pressure on firms to deliver their products on time. At the same time, the pandemic has also given rise to new opportunities, such as a growing focus on supply chain resilience and on data sharing between suppliers and customers.
Our second online Data2Move event took place on Tuesday, October 27th, 2020. The virtual meeting was organized around the topic of data-driven inventory, in collaboration with some of our partners. Jumbo, Pipple, Philips, MMGuide and Vendrig showcased exciting collaboration projects with our students and also participated in a panel discussion about COVID-19 and its impact on their businesses.
This article presents key themes that emerged from the panel discussion. Want to learn more about how Philips handled the ramp-up in demand for ventilators? Or how Jumbo managed hoarding at the start of the pandemic? Then keep on reading!
Challenges in Forecasting Data
Jumbo and Philips both operate in industries that have been heavily impacted by the COVID crisis. Where Philips experienced a huge increase in demand for ventilators from hospitals, Jumbo had to anticipate the hoarding behavior of its customers. Both companies were taken by surprise by the huge increases in demand for some of their products and were forced to act fast. According to Feyza Gilbaz, supply chain consultant at the innovation department at Philips, technologies like machine learning could not keep up with the growing demand for ventilators coming from the hospitals. Therefore, Philips had to switch to hardcore daily management, information tracking and simple planning dashboards. Jumbo, on the other hand, responded differently: it noticed that the demand for some products, such as toilet paper, resembled the demand for festive products around the Christmas period. Jumbo therefore applied a similar forecasting model to the SKUs that showed such hoarding patterns. Thus, both companies switched from machine learning methods to more manual forecasting methods.
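Jumbo’s Christmas analogy can be sketched as follows. All SKUs, demand figures, thresholds and the uplift factor below are invented for illustration and are not Jumbo’s actual numbers or model:

```python
# Toy sketch: reuse a festive-period demand uplift for SKUs whose current
# demand spikes like Christmas-week demand. All figures are hypothetical.
baseline = {"toilet_paper": 100, "pasta": 80, "shampoo": 50}   # normal weekly units
observed = {"toilet_paper": 240, "pasta": 190, "shampoo": 55}  # pandemic week
festive_uplift = 2.3  # uplift factor estimated from past Christmas weeks

# Flag SKUs whose demand jumped by more than 50% as 'hoarded'.
hoarded = {sku for sku in baseline if observed[sku] / baseline[sku] > 1.5}
forecast = {
    sku: baseline[sku] * (festive_uplift if sku in hoarded else 1.0)
    for sku in baseline
}
print(sorted(hoarded))      # ['pasta', 'toilet_paper']
print(forecast["shampoo"])  # 50.0 — non-hoarded SKUs keep the baseline forecast
```

The point is not the numbers but the mechanism: a manual, analogy-based override of the usual forecasting model for a well-defined subset of SKUs.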
Currently we are working with many suppliers from China which entails long lead times. We are looking to improve the resilience of our supply chain by incorporating suppliers closer by.
Rudolf Vendrig, CEO at Vendrig IJsselstein B.V.
Supply Chain Disruptions
While Philips redesigned their production processes for the ventilators, they realized they were highly dependent on the components coming from their suppliers. According to Feyza Gilbaz: “At that moment in time, information from our suppliers became more crucial than ever; we needed to see the availability of the suppliers to redesign our processes for the production of ventilators.” She also argues that supply is one of the biggest issues in such crises; this is what we call a supply disruption. According to Zumbul Atan, professor of supply chain management at TU Eindhoven, supply disruptions rarely happen, but their impact is drastic.
Supply Chain Resilience
Edwin Wenink, owner of SCenergy, argues that we are entering a new era of supply disruptions. To combat these, we should invest in supply chain resilience. Zumbul Atan adds that companies can get far ahead of the competition by taking the right actions to implement proactive strategies. One proactive strategy that could have helped Philips and Jumbo during the crisis is including more suppliers in their supply chains. Moreover, Feyza Gilbaz from Philips stresses that supply chains should work as a network: “A supply chain network should always be active and it should include multiple suppliers and multiple clients.” This increases resilience because the chain no longer depends on one major element.
Rudolf Vendrig, CEO at Vendrig IJsselstein B.V. laundries, describes how the company is looking to improve the resilience of its supply chain: “Currently we are working with many suppliers from China, which entails long lead times. We are looking to improve the resilience of our supply chain by incorporating suppliers closer by.” Supply chains thus need to become more resilient to respond to disruptions in a timely manner, and looking further ahead, investing in resilient supply chains is advisable.
What Lies Ahead – New Opportunities
For Roos Rooijakkers, data scientist at Pipple, new challenges have arisen through collaborations with new partners. Pipple has made new connections with public health services (GGD), governmental crisis teams (Dienst Testen), hospitals and the government. According to Roos, to work efficiently on COVID-related themes, these bodies must share data and information. For example, the government is more likely to make good decisions if it receives reliable data on hospital admissions and the number of infections. Pipple supports the different bodies by building data infrastructures, dashboards and models for sharing and explaining relevant data within and between the different stakeholders. In a similar way, Pipple supports the testing logistics. Roos explains: “If someone wants to do a test, they do the test at the testing location, from which the test must be transferred to the lab. This supply chain needs to be coordinated. Pipple is helping in this coordination process.” Roos has observed that the pandemic has created more opportunities for data science; the most crucial factor for success, however, is whether the company sees the value of data.
Vendrig has received proposals from international companies to start washing face masks. Currently, Rudolf Vendrig sees that at many companies, employees are responsible for their own masks. This is where Vendrig can step in and offer clean and hygienic masks to companies whose staff work intensively with face masks, such as supermarkets. To facilitate such services, Vendrig is currently talking to its soap suppliers, as well as the packaging suppliers involved, to make the new processes ready for this service.
All in all, the pandemic has made us realize how dependent companies are on their suppliers and what the consequences of supply disruptions can be. Resilience should therefore be a key aspect of supply chains to combat these types of events. Whereas most firms are now reacting to the consequences of the pandemic, they could have created a head start by having the right proactive measures in place. At the same time, the pandemic has also created new opportunities for firms like Pipple and Vendrig, from the growing importance of sharing data to offering clean and hygienic face masks as a service.
Panel discussion participants
Edwin Wenink – Data2Move orchestrator
Feyza Gilbaz – supply chain consultant at Philips Innovation Services
Rudolf Vendrig – CEO at Vendrig IJsselstein B.V. laundries
Roos Rooijakkers – data scientist & consultant at Pipple and WiDS ambassador
Zumbul Atan – associate professor of supply chain management at TU Eindhoven
Jan Leensen – supply chain developer at Jumbo.com
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time, we delve into the work of Marc Schmitz to learn more about how to optimize the newspaper distribution network at de Persgroep.
Where it started
“Today’s news is tomorrow’s history”, Marc quotes, referring to the perishable nature of newspapers. Schmitz was tasked with optimizing the design of the newspaper distribution network of de Persgroep.
Why this task? Improvements in technology have caused many people to read and receive their news via digital channels. However, a substantial number of people still want printed newspapers delivered to their homes. Additionally, a shift in demand means that more printed newspapers are requested on Saturdays while fewer and fewer are needed on weekdays. To deal with these societal and economic trends, a fresh and deep look into the newspaper distribution network was necessary to cut costs while maintaining a high service level.
Schmitz took the problem hands-on: through supply chain design analysis, he determined the optimal locations of depots and the corresponding vehicle routes. By combining the problem of finding the right facility locations with that of determining optimal vehicle routes, he set himself a challenging task.
The importance of data
To determine a suitable model, Schmitz had access to different types of data. Geographical data were used to determine possible locations for depots and to gather insights on customer locations. Moreover, Schmitz used data on carrier capacity and costs to find the best routes for different locations.
Findings and advice
Schmitz’s main goal was to simultaneously choose depot locations and sizes. He concluded that a combination of many smaller and a few larger depots was optimal. With this design, most vehicles are fully loaded and carriers receive their newspapers well in advance, such that high service levels are maintained. Finally, he concluded that implementing the redesign could cut total costs by 20 percent!
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time, we delve into the work of Thijmen Bruist to learn more about the operational challenges for on-demand laundry, with Data2Move partner TKT.
Where it started
On-demand laundry for consumers is a growing business. Think for example of the elderly who are no longer able to do their own laundry, the always-busy households, or the young professionals who simply wish to use their time for things more fun than laundry. Yet there are still only a handful of companies that offer such on-demand laundry services.
It is important for laundry companies to be ready for the continued growth of on-demand laundry. However, the processes for on-demand laundry are very different from standard laundry processes, since a much larger variety of textiles needs to be cleaned. Many laundry companies that wish to serve this growing market lack the knowledge (and resources) to change their production process overnight.
Therefore, Thijmen Bruist used a bottom-up approach, analyzing the individual production steps of two laundry companies to gain more insight into the operations involved in on-demand laundry and the innovations the new process requires. These thorough analyses of throughput times and costs give insight into the effectiveness and feasibility of each process innovation.
The importance of data
To analyze throughput times and costs, Thijmen used data from two case studies. The first case study predicted the expected change in costs under increased demand. The second provided data on the time spent in each production step. From these data, Thijmen calculated the throughput time and the time to expedition for each item, from which the profit-maximizing volume was derived.
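A minimal sketch of how throughput time and a profit-maximizing volume could be derived from such data; all step times, the price and the cost curve below are hypothetical, not figures from the thesis:

```python
# Hypothetical step times (minutes per item) for an on-demand laundry line.
step_times = {"sort": 1.5, "wash": 4.0, "dry": 3.0, "fold": 2.0, "pack": 1.0}
throughput_time = sum(step_times.values())  # minutes from intake to expedition
print(throughput_time)  # 11.5

# Profit-maximizing volume: with a price p per item and a convex cost curve
# (overtime and outsourcing make extra items increasingly expensive), keep
# producing while the marginal cost of one more item stays below the price.
price = 6.0
def marginal_cost(q):            # cost of the q-th item, rising with volume
    return 2.0 + 0.01 * q

q = 0
while marginal_cost(q + 1) < price:
    q += 1
print(q)  # profit-maximizing daily volume under these toy assumptions
```

Past that volume, each extra item costs more than it earns, which is exactly why the transition model below distinguishes the profit-maximization point from the revenue-maximization point.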
Findings and advice
Based on the data analysis of costs and throughput times, Thijmen proposed a transition model: “The most effective way forward is to gradually invest in automation. This should, in turn, allow for more production and more profit”. The transition model consists of the following steps:
1. Gain insight into the cost structure of the internal logistics.
2. Process a volume close to the profit-maximization point, thereby using the current production facilities as efficiently as possible.
3. Invest in process automation once the company already processes a volume between its profit-maximization point and its revenue-maximization point. Investing in process automation allows for the higher production volumes that are essential for higher profits.
4. Use a Return-on-Investment model to determine which production step to automate.
Steps 1 to 4 are then repeated.
Thus, the model guides laundry facilities in upgrading their laundry processes to profitable on-demand laundry and provides them with implementation steps accordingly.
Why do we care about basic business data management?
We care about basic business data management for two important reasons. Firstly, information is nowadays considered the fourth main production factor, next to materials, labor force and finance. Through the ages, most companies have learned how to manage the original three production factors, but many are still struggling with managing their data properly. Badly managed data leads to bad information, which leads to bad decisions and in turn to sub-optimal business. Secondly, basic business data management lays the foundation for the digital transformation in any company. Data science, machine learning, artificial intelligence, blockchain, the internet-of-things and the like are some of today’s most hyped technologies, and new applications are being discovered as we speak. This has not gone unnoticed by Supply Chain Management professionals as they aim to put these technologies to use in their business. Yet doing so requires proper data management, which is still a struggle for many.
Data is often inaccurate, incomplete, or inconsistent, and we still see essential business data shared via mechanisms like USB sticks or email. Therefore, Data2Move invited its partners and professor Paul Grefen to discuss current company practices and how ‘basic’ data management can be improved as the first step towards proper data-driven business management and advanced data analytics. As the partners testified in a poll during our event, there is a lot to gain from proper data management.
How can we improve our data management?
In practice, data quality problems often occur as a result of decentralized and disconnected data. The Logistics department may record order prices excluding taxes, while Marketing stores order prices including tax, resulting in poor conformity. An operations manager may take a USB stick with HR data home, resulting in security vulnerabilities. Sales may only send updates once per week, giving rise to longer lead times as a result of poor timeliness. Conformity, security, and timeliness are just three of the common types of data quality problems.
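The conformity problem above can be made concrete with a small sketch that flags orders where the two departments’ figures no longer agree under the agreed conversion rule. The order IDs, prices and the 21% VAT rate are hypothetical:

```python
# Hypothetical order prices recorded by two departments: Logistics stores
# prices excluding VAT, Marketing stores them including 21% VAT.
logistics = {"A100": 50.00, "A101": 120.00, "A102": 80.00}
marketing = {"A100": 60.50, "A101": 145.20, "A102": 85.00}  # A102 deviates

VAT = 0.21
# A record conforms if Marketing's price equals Logistics' price plus VAT
# (within a cent, to absorb rounding).
mismatches = [
    order for order in logistics
    if abs(marketing[order] - logistics[order] * (1 + VAT)) > 0.01
]
print(mismatches)  # ['A102'] — records that violate the conversion rule
```

Running such rule-based checks routinely is a first, cheap step towards the centralized setup described next.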
Data quality problems can be reduced by having one centralized enterprise database, in which the rights and responsibilities of each department are clearly defined. We can distinguish two components within this database: the data store and the data warehouse. The data store contains low-level data that can be used for operational decision making, for example the number of orders due this week. In the data warehouse, filtered and aggregated data is stored, on the basis of which more high-level management information can be generated.
What can you do now?
Although good data management is not rocket science, it does require effort and time. When data quality is not in order, data analytics cannot help us to make better decisions. A solid database management technology is key to guarantee a minimum level of data quality. There is no single solution that works for all companies: you need to be aware of the decisions being taken in your company and which information can help to improve decision making. Operational decision-making needs much more low-level information than decision-making at the tactical or strategic level, and you may thus need different solutions at different levels. To get started, make an overview of the data management problems in your company.
Finally, it is good practice to appoint a data manager (preferably not an IT-only expert, but someone with business knowledge) to prioritize and design your company’s plan towards proper data management. Welcome to the Chief Data Officer!
When Lisa van Lierop started her internship at Hilti, she already had a good understanding of Materials Management and Multi Echelon Inventory Optimization. To Federico Scotti di Uccio, Lisa and Hilti were a match made in heaven: “Lisa immediately expressed an enthusiasm to dive deep into Multi Echelon Inventory Optimization. This topic is of great interest to us at Hilti Logistics. On top of that, Lisa demonstrated a tenacity to collect and compute large amounts of data and information that is sometimes challenging to obtain because of the complexity of processes and stakeholders involved.”
Inventory management is an important area of the supply chain and of Hilti’s business itself. As Federico explains, Lisa’s research is really beneficial to the company: “The right size/volume and positioning of inventory means that we can better serve our customers in the most efficient manner. Too much inventory has an impact on working capital and costs, too little has an impact on service. It is crucial to have the right methodology and understanding of how the network and product characteristics influence the optimal result.”
In the research Lisa conducted for Hilti, access to good data is vital. It forms the basis of all analysis and modelling. Without it, it is not possible to prove any theory or concepts and to operationalize them. Acquiring and simultaneously computing large amounts of data is, however, one of the main challenges companies generally face in this context. That is why it is important that talented students get the opportunity to prove themselves at companies such as Hilti: they have up-to-date knowledge that can really make a difference in solving these complex problems.
The past few years, the TU/e has proven to be an excellent talent pool for Hilti to fish in. And as far as Federico is concerned, Lisa certainly is not the last intern they will hire. “We have a strong collaboration with TU/e and we will continue hosting students depending on our priorities and availability. These collaborations are a good opportunity for the student and the university to combine theory and practice. But it is not only that. For us at Hilti, it is an opportunity to really investigate some important and prioritized topics. An internship also gives the student a chance to experience the corporate world for a few months. And in some cases, a successful internship can mark the beginning of a promising career here at Hilti.”
And much to the delight of all the parties involved, this is exactly what happened to Lisa after she graduated. Federico recalls: “During her internship, Lisa not only demonstrated a good understanding of her field and the Hilti Supply Chain, but she also integrated very well into our corporate culture, her team and the different stakeholders she had to deal with. Her energy and willingness to learn made her go the extra mile with her project. That is why we were pleased to welcome her into our company.”
And Lisa is also pleased. “My first contact with Hilti was during an event organized by ESCF. The international environment and the interesting supply chain triggered my interest for the company. During my master thesis, I got a chance to explore life at Hilti, and it convinced me that I wanted to continue my career with this company. Currently, I am working as a Global Materials Manager at Hilti in Liechtenstein.”
According to Federico, Data2Move played a large part in the success they had with the recruitment of interns. “Data2move was an excellent support in recruiting talented students. The Data2move community enables us to stay in touch with the academic world and gives us the opportunity to work with driven and enthusiastic students full of knowledge. During the internship, Data2Move is a big support and that contributes to the development of the student and the success of the project.”
For Laurens Kauffeld, it was a no-brainer to recruit Master student Stan Brugmans for their Multi Echelon Optimization project. “Anne (Recruitment and Talent Sourcer at Office Depot) and myself interviewed a couple of students and unanimously chose Stan. During his interview, Stan came across as professional and well prepared. He was able to explain why he chose our project and showed genuine interest in Office Depot.” An added bonus was that Laurens believed Stan would fit in nicely with the team. Stan proved to be quite a catch for the company, and the team at Office Depot felt very privileged to support him during his Master thesis research. During his studies Stan acquired excellent analytical and statistical skills. According to Laurens, these skills are critical for any supply chain optimization project. In addition, Stan also showed great commitment, dedication and work ethic. “Besides his theoretical qualities, Stan’s personality contributed enormously to the results and success of his project. Stan has a ‘can-do’ mentality, he works hard and always aims for the optimal solution.”
Stan started the Multi Echelon Optimization project with a great deal of enthusiasm. He dug right in and invested time to grind through all the data complexities. He also analyzed numerous approaches to calculate safety stock levels. This research was not only necessary, but also very beneficial to Office Depot. Office Depot’s Smart Choice product range is sourced in a Multi Echelon Supply Chain. By further optimizing their supply chain, they were able to continuously offer the best value to their customers.
According to Laurens, Stan’s contribution to the research was vital. “Multi Echelon is a very challenging topic that requires dedicated time from an analyst. Stan’s research improved data quality, initiated further collaboration between the countries and gave us a direction in how to further optimize our Multi Echelon Supply Chain. These are three key factors if you continuously want to improve your business.”
Because the Multi Echelon project was a hundred percent about using data to optimize service levels, it comes as no surprise that data played an important role in solving the problem. As Laurens explains: “With data we are able to model, optimize and simulate the real world without waiting months for the results and risking bad performance. To get the right stock in the right place at the right time, we need to understand our customer demand distribution and supplier lead time performance.” Since this information is often hidden in big data sets, Office Depot needed Stan’s specialist knowledge and analytical skills.
In the end, Stan and Laurens both look back on Stan’s research period at Office Depot as a successful and mutually beneficial collaboration. And it is still continuing. Much to Stan’s delight, a vacancy opened up within the department at the end of his internship and he is now working as a supply and demand planner. “When I applied for the Multi Echelon project, I already considered Office Depot as a potential employer because of their international character. During my Master thesis it became clear that I wanted to stay at Office Depot. They gave me lots of learning opportunities, invested their time and commitment and showed me that they really value my work.”
Office Depot continues to develop and maintain high service and value in an increasingly competitive market. Data2Move plays an important part in keeping up these high standards for Office Depot. Thanks to the positive experience they had with Stan, Laurens is very open to other Data2Move projects in the future. “Data2Move helped us find interesting projects and recruit talented students. They enable Office Depot to get access to state-of-the-art, up-to-date academic knowledge, and the student projects give us the opportunity to work with driven and enthusiastic students. The support Data2Move showed during Stan’s thesis contributed to his development and the overall success of the project.”
Data science can really help to solve complex supply chain management challenges. In our Data2Move Research Stories, you can find out how our students tackle these challenges. This time we feature Bart Pierey’s Master thesis research at SABIC in Sittard. SABIC is a Saudi manufacturing company, active in petrochemicals, chemicals, industrial polymers, fertilizers and metals.
Where it started – challenge
With his research, Pierey wanted to optimize the rail fleet composition at SABIC based on, among other things, GPS data. This is because the rail yard close to one of SABIC’s production sites has limited capacity. Several companies share part of this yard, and available spaces are assigned on a ‘first come, first served’ basis. Consequently, the parking yard can be fully occupied when a new train arrives. In that case, the arriving train is rejected at the gate, which leads to operational problems. On the other hand, the fleet cannot be decreased too much, because a high availability of rail cars is needed to prevent production scale-downs.
Data validation and preparation
During his research, Pierey discovered that the quality of the GPS data, gathered from the fleet management system, was not optimal. He executed a data preparation and validation project which led to more accurate data. As Pierey states: “The initial data was not good enough, but after the preparation and validation phase it was considered to be sufficient.”
Simulation model
In his Master thesis, Pierey explains what steps he took to tackle the rail freight car fleet problems SABIC runs into. To get an understanding of the issue at hand and the characteristics of the system, he interviewed several stakeholders. “Planners and business both had interests, so I tried to solve the puzzle for SABIC, based mainly on GPS data,” Pierey explains. He developed a discrete event simulation model with stochastic holding and travel times and presented the final results to several business managers within the company. This model has not only been used to improve the general understanding of fleet behavior, but also to find an optimal fleet size which minimizes the utilization of parking space in the shared parking area.
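Pierey’s actual model is not public, but the idea of a stochastic fleet simulation can be sketched in a few lines. The demand rate, trip-time distributions and fleet size below are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def simulate(fleet_size, days=365, demand_per_day=3):
    """Toy sketch: cars leave the yard to serve daily demand and return
    after a stochastic trip (travel time plus holding at the customer)."""
    yard = fleet_size          # cars currently parked at the shared yard
    returns = {}               # day -> number of cars coming back that day
    rejected = 0               # demand that found no car available
    occupancy = []
    for day in range(days):
        yard += returns.pop(day, 0)
        for _ in range(demand_per_day):
            if yard == 0:
                rejected += 1
                continue
            yard -= 1
            trip = random.randint(2, 4) + random.randint(1, 6)  # travel + holding
            returns[day + trip] = returns.get(day + trip, 0) + 1
        occupancy.append(yard)
    return sum(occupancy) / days, rejected

avg_parked, missed = simulate(fleet_size=30)
print(round(avg_parked, 1), missed)
```

Sweeping `fleet_size` and comparing average yard occupancy against missed demand is the essence of the trade-off Pierey studied: enough cars to avoid production scale-downs, but not so many that the shared yard fills up.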
Near-optimal fleet size
Besides determining the optimal fleet size and composition, Pierey also took the impact of the holding and travel time parameters into account. Pierey: “The holding time, the time rail cars stay at the customer’s production facility, has a major influence on the optimal fleet size. Travel time only has limited influence. That is why I advise focusing on decreasing the holding times at customers.” Pierey concluded that SABIC’s current fleet size is near-optimal, but their fleet composition should be adjusted.
Shared parking yard planning
To the users of the rail yard, Pierey recommended sharing the multi-use parking space forecasts. “At present, the forecasts on the usage of space in the multi-use parking area are not shared between the various site users because these are confidential. If they were shared, the arrival and departure planning could be adjusted, resulting in fewer problems in this multi-use area. Therefore, I recommend open conversation and sharing these forecasts.”
Modal shift
The train and rail car planning could also benefit from a modal shift, i.e. using other transport modes than just trains. As Pierey explains: “Another solution to decrease the safety stock at a company’s yard is to replace the train with another transport mode to fill the peaks in transport demand.”
Lessons learned from Pierey’s study:
Simulation can be of great value in understanding a system’s behavior. In addition, a simulation model can be used for testing several scenarios and policy adjustments.
Make sure that the data you enter in a system is correct. Data cleaning and validation actions are very important, as garbage in results in garbage out.
To make sure that data quality is guaranteed, it is key to properly maintain data collecting systems. Appointing a system owner will help, as this person will be responsible for the system and its functioning within the organization.
Data science helps to solve complex supply chain management challenges. In our Data2Move Research Stories, you can find out how our students tackle these challenges. This time, we feature Lisa van Lierop’s Master thesis research at Hilti AG in Liechtenstein. Hilti AG is a multinational company that develops, manufactures, and markets products and services for the construction, building maintenance and energy sector.
Where it started – challenge
When inventory is optimized locally, inventory control is often based on a single-echelon approach. But a single-echelon approach might not be optimal from a broader supply chain perspective. In order to optimize stock levels over multiple supply chain stages and still ensure a high service level to end customers, Van Lierop focused on the potential of multi-echelon inventory control. More precisely, she studied the potential of centralized inventory control under different settings, which should help optimize stock levels throughout the entire Hilti network while maintaining a high service level to end customers.
Van Lierop addresses this challenge in her Master thesis ‘Quantifying the benefits of multi-echelon inventory control’.
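The single-echelon baseline that multi-echelon control tries to improve on is often the classic safety stock formula under demand and lead-time uncertainty. A small sketch of that textbook baseline, applied to a hypothetical item (the figures are not from the thesis):

```python
from math import sqrt
from statistics import NormalDist

# Classic single-echelon safety stock under demand and lead-time uncertainty:
#   SS = z * sqrt(L * sigma_D^2 + D^2 * sigma_L^2)
# where z is the service-level quantile of the standard normal distribution,
# D and sigma_D the mean and sd of periodic demand, L and sigma_L of lead time.
def safety_stock(service_level, mean_demand, sd_demand, mean_lt, sd_lt):
    z = NormalDist().inv_cdf(service_level)
    return z * sqrt(mean_lt * sd_demand**2 + mean_demand**2 * sd_lt**2)

# Hypothetical item: 100 units/week demand (sd 20), 4-week lead time (sd 1).
ss = safety_stock(0.95, mean_demand=100, sd_demand=20, mean_lt=4, sd_lt=1)
print(round(ss))  # ≈ 177 units at a 95% cycle service level
```

Applying this formula independently at every stocking location is exactly the local optimization the thesis questions: a multi-echelon policy sets these buffers jointly across the network instead.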
Technical and organizational challenges
“Changing to a multi-echelon control policy is not easy”, Van Lierop states. “The main technical challenge is that you need to collect, and simultaneously process, a large amount of data. You also have to consider that in most supply chains, data needs to be exchanged between different firms.” On top of these technical challenges, organizational challenges pop up. When you optimize your inventory throughout the whole chain, local control is no longer necessary. Inventory should be managed centrally. However, this change requires someone, or some department, to take responsibility for the centralized inventory control. Another organizational challenge regards benefit sharing: ‘How are the benefits of the new inventory control approach shared or divided between the different supply chain stages?’
Multinationals such as Hilti AG that look for answers to these important questions might benefit from software tools to support such decisions. Still, considering the complex inventory landscape and organizational responsibilities, according to Van Lierop “it is no surprise that published real-world applications of multi-echelon inventory control are scarce.”
‘When does a multi-echelon inventory control policy pay off?’
Before a company decides to switch to a multi-echelon approach, it needs a good indication of the potential for its product portfolio, according to Van Lierop. For some products, the benefits of a multi-echelon approach might be higher than for others. That is why the aim of Van Lierop’s research was to identify the product and supply chain characteristics for which a multi-echelon inventory control policy pays off. More concretely, she studied scenarios based on combinations of the following five dimensions: demand, demand variability, lead time, lead-time variability, and holding costs.
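Studying combinations of five dimensions amounts to a full-factorial scenario design. A short sketch of how such a scenario grid could be enumerated; the low/high levels here are illustrative placeholders, not the values from the thesis:

```python
from itertools import product

# Hypothetical low/high levels for each of the five dimensions
# (placeholders for illustration, not values from the thesis).
dimensions = {
    "demand": ["low", "high"],
    "demand_variability": ["low", "high"],
    "lead_time": ["short", "long"],
    "lead_time_variability": ["low", "high"],
    "holding_costs": ["low", "high"],
}

# One scenario per combination of dimension levels.
scenarios = [dict(zip(dimensions, combo))
             for combo in product(*dimensions.values())]

print(len(scenarios))  # 2**5 = 32 scenarios
```

Each of the resulting scenarios can then be run through the simulation model to see where the multi-echelon policy yields the largest savings.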
Simulation model
For the multi-echelon safety stock optimization, Van Lierop used ChainScope, a software program designed by Prof. Dr. Ton de Kok. By using simulation, she was able to model and compare different safety stock procedures and multi-echelon distribution networks. In her research approach, Van Lierop also used an MRP-based replenishment policy with forecasted demand, because many companies replenish their stock based on forecasts.
Findings
Results revealed that the benefits of a multi-echelon inventory control approach are highest for items with a long lead time to the first location in the distribution network, a high demand rate, and high inventory holding costs. Van Lierop: “Furthermore, the savings for low-demand, low-cost items were relatively low, especially when the first location in the distribution network can be resupplied quickly. So when companies consider a pilot for a centralized safety stock procedure in their distribution networks, they should focus on ‘high-potential’ items first.”
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time we feature Rijk van der Meulen’s master thesis research at H&S Group, an internationally and intermodally operating Logistics Service Provider in the liquid foodstuffs industry.
Where it started – challenges
H&S Group asked Van der Meulen to focus on two operational challenges faced by many intermodally operating Logistics Service Providers:
– The efficient repositioning of empty tank containers
– Proactive planning of their drayage operations
Van der Meulen addressed these challenges in his thesis ‘Forecasting the required tank container and trucking capacity for an intermodal Logistics Service’. His research explores how you can predict demand more accurately and how these demand predictions facilitate better operational planning.
Insight into tank containers and trucking units per location and time
To tackle these challenges, it was important to extract valuable information from data. Van der Meulen needed insight into the expected number of loadings and deliveries, as well as the corresponding requirements for trucking and tank container capacity. By combining these key aspects, he determined how many tank containers and trucking units are needed in a given planning region at a given time.
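Turning expected loadings and deliveries into per-region, per-day capacity requirements is, at its core, an aggregation step. A simplified sketch of that step, with made-up order counts and an assumed rule of thumb (one tank container per movement, four movements per trucking unit per day) that stand in for the actual operational parameters:

```python
import math
from collections import defaultdict

# Hypothetical expected orders: (region, day, loadings, deliveries).
expected_orders = [
    ("Rotterdam", "day 1", 12, 8),
    ("Rotterdam", "day 1", 5, 3),
    ("Antwerp",   "day 1", 7, 6),
]

MOVES_PER_TRUCK = 4  # assumed: one trucking unit handles 4 movements per day

# Total movements (loadings + deliveries) per (region, day).
movements = defaultdict(int)
for region, day, loadings, deliveries in expected_orders:
    movements[(region, day)] += loadings + deliveries

# Assume each movement occupies one tank container; round trucks up.
for key in sorted(movements):
    containers = movements[key]
    trucks = math.ceil(movements[key] / MOVES_PER_TRUCK)
    print(key, containers, trucks)
```

For the made-up numbers above, Rotterdam needs 28 containers and 7 trucking units on day 1, and Antwerp 13 containers and 4 trucking units.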
Dynamic demand prediction
The truly innovative character of Van der Meulen’s prediction methodology lies in the dynamic updating of the predictions of loadings and deliveries. He used a Bayesian technique to dynamically adjust the initial prediction based on new orders as they enter the system. This ‘advance demand information’ represents future demand that is already known in the present. It ensures that planners have access to the most up-to-date and accurate loading and delivery predictions at any time.
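A simple way to picture this kind of dynamic update (a simplified sketch, not Van der Meulen’s actual Bayesian model): treat the final demand for a day as the orders already booked plus an estimate of the orders still to come, and shrink that remaining estimate as more advance orders arrive. All numbers and booking fractions below are illustrative assumptions:

```python
# Illustrative dynamic forecast update using advance demand information.
# Simplified sketch, not the Bayesian model from the thesis:
# prediction = orders already booked + expected still-unbooked orders.

initial_forecast = 100.0  # prior expectation of total loadings for a day

# Assumed historical pattern: fraction of total demand typically booked
# this many days ahead of the delivery day (made-up values).
typical_booked_fraction = {5: 0.2, 3: 0.5, 1: 0.9}

def updated_forecast(days_ahead: int, booked: float) -> float:
    """Known orders plus the expected not-yet-booked remainder."""
    frac = typical_booked_fraction[days_ahead]
    expected_remaining = initial_forecast * (1.0 - frac)
    return booked + expected_remaining

# Five days ahead: 25 booked, 80% of demand typically still to come.
print(updated_forecast(5, booked=25.0))  # 25 + 80 = 105.0
# One day ahead: 95 booked, only 10% typically still to come.
print(updated_forecast(1, booked=95.0))  # 95 + 10 = 105.0
```

As the delivery day approaches, the prediction leans increasingly on actual booked orders rather than the prior forecast, which is what makes it more accurate over time.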
In the next step, Van der Meulen used the adjusted forecast to predict the required tank container and trucking capacity, relying on additional models based on a hierarchical top-down forecasting approach and multiple linear regression. The accuracy of the complete forecasting methodology was put to the test in a one-month test case for two planning regions.
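The top-down idea in a hierarchical forecast is to predict at an aggregate level and then distribute that prediction over the lower levels, typically using historical proportions. A minimal sketch of that disaggregation step, with made-up shares and totals:

```python
# Top-down hierarchical forecasting sketch: forecast the aggregate,
# then split it over planning regions by historical demand shares
# (all numbers are made-up placeholders).

total_forecast = 200.0  # aggregate forecast of loadings across regions

historical_share = {  # assumed historical proportion of demand per region
    "region_A": 0.5,
    "region_B": 0.3,
    "region_C": 0.2,
}

regional_forecast = {region: total_forecast * share
                     for region, share in historical_share.items()}

print(regional_forecast)  # {'region_A': 100.0, 'region_B': 60.0, 'region_C': 40.0}
```

The regional forecasts obtained this way can then feed the capacity models for each planning region.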
The one-month test case showed that the dynamic prediction method increased the accuracy of the initial forecast by 65 percent. Using cost simulations, Van der Meulen estimates that this improved prediction accuracy can lead to a 5.2 percent reduction in the total costs associated with trucking operations. That is a big step towards the operational excellence necessary to survive in the low-margin industry of intermodal logistics service providers. Van der Meulen’s research strengthened H&S in their conviction that forecasting plays a vital role in addressing the challenges of empty tank container repositioning and drayage operations planning.
Spin-off project – implementation
As a result of these findings, H&S started a joint forecasting implementation project with Logistics Service Provider Den Hartogh and data science consultancy firm CQM. The goal of this collaboration is to implement the dynamic prediction methodology from Van der Meulen’s research and integrate it with the planning software at H&S and Den Hartogh. This allows both companies to better plan their trucking and container operations and achieve significant cost reductions while maintaining the same service to their customers.