Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time: Dalí Ploegmakers’ master thesis on using carrier event log data to improve the last mile at Hilti.
As a manufacturer of high-quality tools and materials for the construction world, Hilti aims to provide customers with the highest service level. In reaching that ambition, reliable last-mile delivery is vital. Ploegmakers’ thesis revolves around using state-of-the-art Process Mining techniques to improve the last mile and help Hilti live up to its customers’ expectations.
Where it started
An increasing number of international shipping companies, Hilti included, outsource their last-mile delivery process to independent carriers. In doing so, shippers lose a degree of control over this vital process. On the other hand, new technological developments such as track-and-trace produce granular data on each event in the delivery process, providing more transparency. Dalí Ploegmakers was therefore tasked with finding out how Hilti could use this data to improve their last mile.
To find opportunities for improving the delivery process, Ploegmakers drew on the Data Science literature and applied Process Mining techniques. He transformed the data into an interactive and easily interpretable process model. This process model can verify whether parcels arrive at the customer on time and determine high-risk areas for delayed parcel delivery.
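The thesis itself is not public, but the core building block of such a process model is the directly-follows relation: how often one delivery event is immediately followed by another in the same parcel’s trace. A minimal sketch, with a hypothetical carrier event log and invented activity names:

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often one delivery event directly follows another
    within the same parcel's trace (a core Process Mining building block)."""
    traces = {}
    # Group events per parcel, in timestamp order.
    for parcel_id, timestamp, activity in sorted(event_log, key=lambda e: (e[0], e[1])):
        traces.setdefault(parcel_id, []).append(activity)
    dfg = Counter()
    for activities in traces.values():
        for a, b in zip(activities, activities[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical carrier event log: (parcel_id, timestamp, activity).
log = [
    ("P1", 1, "picked_up"), ("P1", 2, "at_hub"), ("P1", 3, "delivered"),
    ("P2", 1, "picked_up"), ("P2", 2, "at_hub"),
    ("P2", 3, "delivery_failed"), ("P2", 4, "delivered"),
]
dfg = directly_follows(log)
print(dfg)
```

Edges such as `("delivery_failed", "delivered")` in the resulting graph are exactly the kind of deviation from the expected process that flags high-risk delivery areas.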
In addition to providing better customer service, the increased transparency also improves the relationship between Hilti and the carriers, as testified by Ploegmakers:
“Creating transparency within the carrier network allows us to have a factual and constructive discussion with the carrier on improving the last-mile delivery process.”
The research thus highlights the importance of transparency between Hilti and its carriers – increased transparency leads to both better customer service and better shipper-carrier relations.
Conclusion and next steps
Ploegmakers concludes that creating transparency in the carriers’ process and creating a notification system are two key improvements that could enhance the external last-mile delivery process. Moreover, he states that further application of Process Mining is likely to uncover more opportunities, once again confirming the importance of having accurate and complete datasets available.
Insights from the Data-Driven Inventory event of 27/10/2020
The current pandemic has put all firms in a challenging position. From unpredictable customer demand to disrupted supply chains, firms face extra pressure to deliver their products on time. At the same time, the pandemic has also given rise to new opportunities, such as an increased need for supply chain resilience and for data sharing between suppliers and customers.
Our second online Data2Move event took place on Tuesday, October 27th, 2020. The virtual meeting was organized around the topic of Data-Driven Inventory in collaboration with some of our partners. Jumbo, Pipple, Philips, MMGuide and Vendrig showcased exciting collaboration projects with our students and also participated in a panel discussion about COVID-19 and its impact on their businesses.
This article presents key themes that emerged from the panel discussion. Want to learn more about how Philips handled the ramp-up in demand for ventilators? Or how Jumbo managed hoarding at the start of the pandemic? Then keep on reading!
Challenges in Forecasting Data
Jumbo and Philips both operate in industries that have been heavily impacted by the COVID crisis. Where Philips experienced a huge increase in demand for ventilators from hospitals, Jumbo had to anticipate the hoarding behavior of its customers. Both companies were caught off guard by the huge increases in demand for some of their products and were forced to act fast. According to Feyza Gilbaz, supply chain consultant at the innovation department at Philips, technologies like machine learning could not keep up with the growing demand for ventilators coming from the hospitals. Philips therefore had to switch to hands-on daily management, information tracking and simple planning dashboards. Jumbo, on the other hand, responded differently: it noticed that the demand for some products, such as toilet paper, resembled the demand for festive products around the Christmas period. Jumbo therefore applied a similar forecasting model to the SKUs that showed such hoarding patterns. Thus, both companies switched from machine learning methods to more manual forecasting methods.
Currently we are working with many suppliers from China which entails long lead times. We are looking to improve the resilience of our supply chain by incorporating suppliers closer by.
Rudolf Vendrig, CEO at Vendrig IJsselstein B.V.
Supply Chain Disruptions
While Philips redesigned their production processes for the ventilators, they realized they were highly dependent on the components coming from their suppliers. According to Feyza Gilbaz: “At that moment in time, information from our suppliers became more crucial than ever; we needed to see the availability of the suppliers to redesign our processes for the production of ventilators.” She also argues that supply is one of the biggest issues in such crises; this is what we call supply disruptions. According to Zumbul Atan, professor of supply chain management at TU Eindhoven, supply disruptions rarely happen, but their impact is drastic.
Supply Chain Resilience
Edwin Wenink, owner of SCenergy, argues that we are entering a new era of supply disruptions. To combat such disruptions, we should invest in supply chain resilience. Zumbul Atan adds that companies that take the right proactive measures can get far ahead of the competition. One proactive strategy that could have helped Philips and Jumbo during the crisis is including more suppliers in their supply chains. Moreover, Feyza Gilbaz from Philips mentions that it is important that supply chains work as a supply chain network. “A supply chain network should always be active and it should include multiple suppliers and multiple clients,” according to Feyza. This increases the resilience of a supply chain because the chain is no longer dependent on one major element.
Rudolf Vendrig, CEO at Vendrig IJsselstein B.V. laundries, describes how the company is looking to improve the resilience of its supply chain: “Currently we are working with many suppliers from China, which entails long lead times. We are looking to improve the resilience of our supply chain by incorporating suppliers closer by.” Supply chains, in short, should be more resilient so that firms can respond to disruptions in a timely manner. Looking ahead, investing in resilient supply chains is advisable.
What Lies Ahead – New Opportunities
For Roos Rooijakkers, data scientist at Pipple, new challenges have arisen through collaborations with new partners. Pipple has made new connections with public health services (GGD), governmental crisis teams (Dienst Testen), hospitals and the government. According to Roos, these bodies must share data and information to work efficiently on COVID-related themes. For example, the government is more likely to make better decisions if it receives reliable data on hospital admissions and the number of infections. Pipple supports the different bodies by building data infrastructures, dashboards and models for sharing and explaining relevant data within and between different stakeholders. In a similar way, Pipple supervises the testing logistics. Roos explains: “If someone wants to do a test, they do the test at the testing location, from which the test must be transferred to the lab. This supply chain needs to be coordinated. Pipple is helping in this coordination process.” Roos has observed that the pandemic has created more opportunities for data science; however, the most crucial factor for success is whether the company sees the value of data.
Vendrig has received proposals from international companies to start washing mouth masks. Currently, Rudolf Vendrig sees that at many companies, employees are responsible for their own masks. This is where Vendrig can jump in and offer clean and hygienic masks to companies that work intensively with mouth masks, such as supermarkets. To facilitate such services, Vendrig is currently talking to its soap and packaging suppliers to make new processes ready for this service.
All in all, the pandemic has made us realize how dependent companies are on their suppliers and what the consequences of supply disruptions may be. Resilience should therefore be a key aspect of supply chains to combat these types of events. Whereas most firms are now reacting to the consequences of the pandemic, they could have created a head start if they had had the right proactive measures in place. At the same time, the pandemic has also created new opportunities for firms like Pipple and Vendrig, from the growing importance of data sharing to providing clean and hygienic mouth masks as a service.
Panel discussion participants
Edwin Wenink – Data2Move orchestrator
Feyza Gilbaz – supply chain consultant at Philips Innovation Services
Rudolf Vendrig – CEO at Vendrig IJsselstein B.V. laundries
Roos Rooijakkers – data scientist & consultant at Pipple and WiDS ambassador
Zumbul Atan – associate professor of supply chain management at TU Eindhoven
Jan Leensen – supply chain developer at Jumbo.com
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time, we delve into the work of Marc Schmitz to learn more about how to optimize the newspaper distribution network at de Persgroep.
Where it started
“Today’s news is tomorrow’s history”, Marc quotes, referring to the perishable nature of newspapers. Schmitz was tasked with optimizing the design of the newspaper distribution network of de Persgroep.
Why this task? Improvements in technology have led many people to read and receive their news via digital channels. However, a substantial number of people still want printed newspapers delivered to their homes. Additionally, a shift in demand means that more printed newspapers are requested on Saturdays, while fewer and fewer are needed on weekdays. To deal with these societal and economic trends, a fresh and deep look into the newspaper distribution network was necessary to cut costs and maintain a high service level.
Schmitz took the problem hands-on: through supply chain design analysis, he determined the optimal locations of depots and vehicle routes. By combining the problem of finding the right locations for facilities with that of determining optimal vehicle routes, he set himself a challenging task.
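Schmitz’s exact model is not published here, but the facility-location half of such a problem can be illustrated with a classic greedy heuristic: repeatedly pick the depot site that most reduces the total customer-to-nearest-depot distance. A minimal sketch, with made-up coordinates:

```python
import math

def greedy_depot_selection(depot_sites, customers, num_depots):
    """Greedily pick depot sites that most reduce total
    customer-to-nearest-depot distance (a classic heuristic for
    the uncapacitated facility location problem)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    chosen = []
    for _ in range(num_depots):
        best_site, best_cost = None, float("inf")
        for site in depot_sites:
            if site in chosen:
                continue
            trial = chosen + [site]
            # Each customer is served by its nearest chosen depot.
            cost = sum(min(dist(c, d) for d in trial) for c in customers)
            if cost < best_cost:
                best_site, best_cost = site, cost
        chosen.append(best_site)
    return chosen

# Hypothetical coordinates (km on a map grid).
sites = [(0, 0), (10, 0), (5, 8)]
customers = [(1, 1), (9, 1), (5, 7), (0, 2)]
chosen = greedy_depot_selection(sites, customers, 2)
print(chosen)
```

A full location-routing model as in the thesis would additionally score each candidate depot set by the cost of the vehicle routes it induces, rather than by straight-line distances.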
The importance of data
To determine a suitable model, Schmitz had access to different types of data. Geographical data were used to determine possible locations for depots and to gather insights on customer locations. Moreover, Schmitz used data on carrier capacity and costs to find the best routes for different locations.
Findings and advice
Schmitz’s main goal was to simultaneously choose depot locations and sizes. He concluded that a combination of many smaller and a few larger depots was optimal. With this design, most vehicles are fully loaded and carriers receive their newspapers well in advance, such that high service levels are maintained. Finally, he concluded that implementing the redesign could cut total costs by 20 percent!
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time, we delve into the work of Thijmen Bruist to learn more about the operational challenges for on-demand laundry, with Data2Move partner TKT.
Where it started
On-demand laundry for consumers is a growing business. Think for example of the elderly who are no longer able to do their own laundry, the always-busy households, or the young professionals who simply wish to use their time for things more fun than laundry. Yet there are still only a handful of companies that offer such on-demand laundry services.
It is important for laundry companies to be ready for the continued growth of on-demand laundry. However, the processes for on-demand laundry are very different from standard laundry processes, since a much larger variety of textiles needs to be cleaned. Many laundry companies that wish to serve this growing market lack the knowledge (and resources) to change their production process overnight.
Therefore, Thijmen Bruist used a bottom-up approach to analyze the individual production steps of two laundry companies, gaining more insight into the operations involved in on-demand laundry and the innovations the new process requires. Through a thorough analysis of throughput times and costs, he gained insight into the effectiveness and feasibility of each process innovation.
The importance of data
In order to analyze throughput times and costs, Thijmen used data from two case studies. The first case study predicted the expected change in costs under increased demand. The second provided data on the time spent in each production step. From these data, Thijmen calculated the throughput time and the time to expedition for each item, from which the profit-maximizing volume was calculated.
Findings and advice
Based on the data analysis of costs and throughput times, Thijmen proposed a transition model: “The most effective way forward is to gradually invest in automation. This should, in turn, allow for more production and more profit”. The transition model consists of the following steps:
1. Gain insight into the cost structure of the internal logistics.
2. Process a volume close to the profit maximization point, thereby using the current production facilities as efficiently as possible.
3. Invest in process automation once the company processes a volume between its profit maximization point and its revenue maximization point. Process automation allows for the higher production volumes that are essential for higher profits.
4. Use a Return-On-Investment model to determine which production step to automate.
5. Repeat steps 1 to 4.
Thus, the model guides laundry facilities in upgrading their laundry processes to profitable on-demand laundry and provides them with implementation steps accordingly.
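The profit maximization point in step 2 can be illustrated with a toy calculation: with a fixed price per item and a per-item cost that rises as the facility nears capacity, profit peaks at the volume where the marginal cost reaches the price. A minimal sketch, with invented numbers (not Thijmen’s actual cost data):

```python
def profit_maximizing_volume(price, cost_per_item):
    """Return the volume with the highest cumulative profit,
    given a per-item cost that rises as the facility nears capacity."""
    best_volume, best_profit, profit = 0, 0.0, 0.0
    for volume, cost in enumerate(cost_per_item, start=1):
        profit += price - cost          # marginal profit of this item
        if profit > best_profit:
            best_volume, best_profit = volume, profit
    return best_volume, best_profit

# Hypothetical: flat price of 5 per item; marginal cost rises with congestion.
costs = [2, 2, 3, 4, 5, 7, 9]
volume, profit = profit_maximizing_volume(5.0, costs)
print(volume, profit)
```

Past this point, each extra item costs more than it earns, which is exactly when step 3 says investment in automation becomes worthwhile.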
Why do we care about basic business data management?
We care about basic business data management for two important reasons. Firstly, information is nowadays considered the fourth main production factor, next to materials, labor and finance. Through the ages, most companies have learned how to manage the original three production factors, but many are still struggling to manage their data properly. Badly managed data leads to bad information, which leads to bad decisions and in turn to sub-optimal business. Secondly, basic business data management lays the foundation for the digital transformation of any company. Data science, machine learning, artificial intelligence, blockchain, the internet of things and the like are some of today’s most hyped technologies, and new applications are being discovered all the time. This has not gone unnoticed by Supply Chain Management professionals, who aim to put these technologies to use in their business. Yet doing so requires proper data management, which is still a struggle for many.
Data is often inaccurate, incomplete, or inconsistent. We still see essential business data being shared via mechanisms like USB sticks or email. Therefore, Data2Move invited its partners and professor Paul Grefen to discuss current company practices and how ‘basic’ data management can be improved as the first step towards proper data-driven business management and advanced data analytics. As a poll during our event testified, the partners see a lot to gain from proper data management.
How can we improve our data management?
In practice, data quality problems often occur as a result of decentralized and disconnected data. The Logistics department may record order prices excluding taxes, while Marketing stores order prices including tax, resulting in poor conformity. An operations manager may take a USB stick with HR data home, resulting in security vulnerabilities. Sales may only send updates once per week, giving rise to longer lead times as a result of poor timeliness. Conformity, security, and timeliness are just three of the common types of data quality problems.
Data quality problems can be addressed by having one centralized enterprise database, in which the rights and responsibilities of each department are clearly defined. We can distinguish two components within this database: the data store and the data warehouse. The data store contains low-level data that can be used for operational decision making, for example the number of orders due this week. The data warehouse stores filtered and aggregated data, on the basis of which more high-level management information can be generated.
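The split between the two components can be made concrete with a small sketch (the schema and numbers are invented for illustration): low-level orders answer the operational question directly, while a grouped aggregate of the same data serves the management view.

```python
import sqlite3

# Low-level order records: this is the "data store" side.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, due_week INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, "North", 120.0, 14), (2, "North", 80.0, 14), (3, "South", 200.0, 15)],
)

# Operational question, answered straight from the data store.
due_this_week = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE due_week = 14").fetchone()[0]

# Management question, answered from a warehouse-style aggregate.
conn.execute("""CREATE TABLE warehouse_revenue AS
                SELECT region, SUM(amount) AS revenue
                FROM orders GROUP BY region""")
for region, revenue in conn.execute(
        "SELECT region, revenue FROM warehouse_revenue ORDER BY region"):
    print(region, revenue)
print("orders due this week:", due_this_week)
```

In a real enterprise database, the warehouse tables would be refreshed on a schedule from the store, with departmental access rights defined on each.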
What can you do now?
Although good data management is not rocket science, it does require effort and time. When data quality is not in order, data analytics cannot help us to make better decisions. A solid database management technology is key to guarantee a minimum level of data quality. There is no single solution that works for all companies: you need to be aware of the decisions being taken in your company and which information can help to improve decision making. Operational decision-making needs much more low-level information compared to decision-making at the tactical/strategic level, and you may thus need different solutions at different levels. To get you started, here is a checklist that can help you to get an overview of data management problems in your company:
Finally, it is good practice to appoint a data manager (preferably not an IT-only expert, but someone with business knowledge) to prioritize and design your company’s plan towards proper data management. Welcome to the Chief Data Officer!
When Lisa van Lierop started her internship at Hilti, she already had a good understanding of Materials Management and Multi Echelon Inventory Optimization. To Federico Scotti di Uccio, Lisa and Hilti were a match made in heaven: “Lisa immediately expressed an enthusiasm to dive deep into Multi Echelon Inventory Optimization. This topic is of great interest to us at Hilti Logistics. On top of that, Lisa demonstrated a tenacity to collect and compute large amounts of data and information that is sometimes challenging to obtain because of the complexity of processes and stakeholders involved.”
Inventory management is an important area of the supply chain and of Hilti’s business itself. As Federico explains, Lisa’s research is really beneficial to the company: “The right size/ volume and positioning of inventory means that we can better serve our customers in the most efficient manner. Too much inventory has an impact on working capital and costs, too little has an impact on service. It is crucial to have the right methodology and understanding of how the network and product characteristics influence the optimal result.”
In the research Lisa conducted for Hilti, access to good data is vital. It forms the basis of all analysis and modelling. Without it, it is not possible to prove any theory or concept, let alone operationalize it. Acquiring and simultaneously computing large amounts of data is, however, one of the main challenges companies generally face in this context. That is why it is important that talented students get the opportunity to prove themselves at companies such as Hilti. They have up-to-date knowledge that can really make a difference in solving these complex problems.
Over the past few years, TU/e has proven to be an excellent talent pool for Hilti to fish in. And as far as Federico is concerned, Lisa certainly is not the last intern they will hire. “We have a strong collaboration with TU/e and we will continue hosting students depending on our priorities and availability. These collaborations are a good opportunity for the student and the university to combine theory and practice. But it is not only that. For us at Hilti, it is an opportunity to really investigate some important and prioritized topics. An internship also gives the student a chance to experience the corporate world for a few months. And in some cases, a successful internship can mark the beginning of a promising career here at Hilti.”
And much to the delight of all the parties involved, this is exactly what happened to Lisa after she graduated. Federico recalls: “During her internship, Lisa not only demonstrated a good understanding of her field and the Hilti Supply Chain, but she also integrated very well into our corporate culture, her team and the different stakeholders she had to deal with. Her energy and willingness to learn made her go the extra mile with her project. That is why we were pleased to welcome her into our company.”
And Lisa is also pleased. “My first contact with Hilti was during an event organized by ESCF. The international environment and the interesting supply chain triggered my interest for the company. During my master thesis, I got a chance to explore life at Hilti, and it convinced me that I wanted to continue my career with this company. Currently, I am working as a Global Materials Manager at Hilti in Liechtenstein.”
According to Federico, Data2Move played a large part in the success they had with the recruitment of interns. “Data2move was an excellent support in recruiting talented students. The Data2move community enables us to stay in touch with the academic world and gives us the opportunity to work with driven and enthusiastic students full of knowledge. During the internship, Data2Move is a big support and that contributes to the development of the student and the success of the project.”
For Laurens Kauffeld, it was a no-brainer to recruit Master student Stan Brugmans for their Multi Echelon Optimization project. “Anne (Recruitment and Talent Sourcer at Office Depot) and myself interviewed a couple of students and unanimously chose Stan. During his interview, Stan came across as professional and well prepared. He was able to explain why he chose our project and showed genuine interest in Office Depot.” An added bonus was that Laurens believed Stan would fit in nicely with the team. Stan proved to be quite a catch for the company, and the team at Office Depot felt very privileged to support him during his Master thesis research. During his studies Stan acquired excellent analytical and statistical skills. According to Laurens, these skills are critical for any supply chain optimization project. In addition, Stan also showed great commitment, dedication and work ethic. “Besides his theoretical qualities, Stan’s personality contributed enormously to the results and success of his project. Stan has a ‘can-do’ mentality, he works hard and always aims for the optimal solution.”
Stan started the Multi Echelon Optimization project with a great deal of enthusiasm. He dug right in and invested time to grind through all the data complexities. He also analyzed numerous approaches to calculate safety stock levels. This research was not only necessary, but also very beneficial to Office Depot. Office Depot’s Smart Choice product range is sourced in a Multi Echelon Supply Chain. By further optimizing their supply chain, they were able to continuously offer the best value to their customers.
According to Laurens, Stan’s contribution to the research was vital. “Multi Echelon is a very challenging topic that requires dedicated time from an analyst. Stan’s research improved data quality, initiated further collaboration between the countries and gave us a direction in how to further optimize our Multi Echelon Supply Chain. These are three key factors if you continuously want to improve your business.”
Because the Multi Echelon project was a hundred percent about using data to optimize service levels, it comes as no surprise that data played an important role in solving the problem. As Laurens explains: “With data we are able to model, optimize and simulate the real world without waiting months for the results and risking bad performance. To get the right stock in the right place at the right time, we need to understand our customer demand distribution and supplier lead time performance.” Since this information is often hidden in big data sets, Office Depot needed Stan’s specialist knowledge and analytical skills.
In the end, Stan and Laurens both look back on Stan’s research period at Office Depot as a successful and mutually beneficial collaboration. And it is still continuing. Much to Stan’s delight, a vacancy opened up within the department at the end of his internship and he is now working as a supply and demand planner. “When I applied for the Multi Echelon project, I already considered Office Depot as a potential employer because of their international character. During my Master thesis it became clear that I wanted to stay at Office Depot. They gave me lots of learning opportunities, invested their time and commitment and showed me that they really value my work.”
Office Depot continues to develop and maintain high service and value in an increasingly competitive market. Data2Move plays an important part in keeping up these high standards for Office Depot. Thanks to the positive experience they had with Stan, Laurens is very open to other Data2Move projects in the future. “Data2Move helped us find interesting projects and recruit talented students. They enable Office Depot to get access to state-of-the-art and up-to-date academic knowledge, and the student projects give us the opportunity to work with driven and enthusiastic students. The support Data2Move showed during Stan’s thesis contributed to his development and the overall success of the project.”
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered these. This time we feature Rijk van der Meulen’s master thesis research at H&S Group, an international and intermodal operating Logistics Service Provider in the liquid foodstuff industry.
Where it started – challenges
H&S Group asked Van der Meulen to focus on two operational challenges faced by many intermodal operating Logistics Service Providers:
– The efficient repositioning of empty tank containers
– Proactive planning of their drayage operations
Van der Meulen addressed these challenges in his thesis, ‘Forecasting the required tank container and trucking capacity for an intermodal Logistics Service’. His research explores how demand can be predicted more accurately and how these demand predictions facilitate better operational planning.
Insight into tank containers and trucking units per location and time
To tackle these challenges, it was important to extract valuable information from data. Van der Meulen needed insight into the expected number of loadings and deliveries, as well as the corresponding trucking and tank container capacity requirements. By combining these key aspects, he defined how many tank containers and trucking units are needed in a certain planning region at a given time.
Dynamic demand prediction
The truly innovative character of Van der Meulen’s prediction methodology lies in the dynamic updating of the predictions of loadings and deliveries. He used a Bayesian technique to dynamically adjust the initial prediction based on new orders as they enter the system. This ‘advance demand information’ is demand for the future that is already known in the present. It ensures that planners have access to the most up-to-date and accurate loading and delivery predictions at any time.
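The thesis’s exact model is not reproduced here, but the idea can be sketched as a Bayesian-style shrinkage: blend the initial forecast with an extrapolation from the orders already booked, trusting the booked-order signal more as a larger share of demand is in the system. All parameter names and numbers below are illustrative assumptions:

```python
def updated_forecast(prior_mean, prior_weight, booked_orders, fraction_booked):
    """Blend a prior forecast with advance demand information.

    booked_orders / fraction_booked is the naive extrapolation of total
    demand from the orders already in the system; prior_weight controls
    how strongly we trust the initial forecast. A simple Bayesian-style
    shrinkage, not the thesis's exact model.
    """
    extrapolated = booked_orders / fraction_booked
    # Trust the advance-order signal more as more demand is booked.
    weight_signal = fraction_booked
    posterior = (prior_weight * prior_mean + weight_signal * extrapolated) / (
        prior_weight + weight_signal)
    return posterior

# Hypothetical: initial forecast of 100 loadings; 30 already booked at a
# point where, historically, 25% of orders are in the system.
posterior = updated_forecast(prior_mean=100.0, prior_weight=1.0,
                             booked_orders=30, fraction_booked=0.25)
print(posterior)
```

Re-running this update each time an order arrives is what keeps the planners’ loading and delivery predictions current.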
In his next step, Van der Meulen used the adjusted forecast to predict the required tank container and trucking capacity. He relied on multiple additional models based on the hierarchical top-down forecast approach and multiple linear regression to assess the effectiveness of the complete forecasting methodology. Its accuracy was put to the test during a one-month test case for two planning regions.
The one-month test case showed that the dynamic prediction method increased the accuracy of the initial forecast by 65 percent. Using cost simulations, Van der Meulen estimates that this improved prediction accuracy can lead to a 5.2 percent reduction of the total costs associated with trucking operations. That’s a big step towards achieving the operational excellence necessary to survive in the low-margin industry of intermodal logistic service providers. Van der Meulen’s research strengthened H&S in their conviction that forecasting plays a vital role in addressing the challenges of empty tank container repositioning and drayage operations planning.
Spin-off project – implementation
As a result of these findings, H&S started a joint forecasting implementation project with Logistics Service Provider Den Hartogh and data science consultancy firm CQM. The goal of this collaboration is to implement the dynamic prediction methodology of van der Meulen’s research and integrate the implementation with the planning software at H&S and Den Hartogh. This allows both companies to plan their trucking and container operations better and achieve significant cost reductions while maintaining the same service to their customers.
Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you’ll find out how students have managed to answer them. This time: Rolf van der Plas and his master thesis on the Vendor Managed Inventory effect within the Dutch supply chain of Heineken.
Where it started
The global beer market is consolidating, with fewer local and more global brewing companies. The remaining players engage in hyper-competition to innovate and differentiate themselves from each other, leveraging their scale to increase operational excellence.
As part of Heineken’s drive to improve operational excellence, a new enterprise resource planning system will be introduced. It has an optional module that supports collaboration based on the Vendor Managed Inventory (VMI) framework. Heineken has been exploring VMI collaboration with a number of customers. Rolf van der Plas aimed to validate the effectiveness of VMI for Heineken and its customers, such as retailers, by quantifying the effect on supply chain performance. He investigated:
Heineken’s transport utilization
Stock levels in the distribution centers (DCs) of the customer
Out-of-Stock performance in the customer DCs
Rolf selected three techniques to investigate VMI collaboration:
Data analytics to analyze the current effect
A simulation model to redesign the VMI process
A simulation-based search (SBS) model to enhance the parameter settings used in the VMI designs
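The SBS idea can be sketched in miniature: simulate a simple replenishment policy on demand data and search over its parameter settings for the lowest cost. The policy, cost rates, and demand below are invented for illustration and are far simpler than Rolf’s actual VMI designs:

```python
import itertools
import random

def simulate_cost(reorder_point, order_up_to, demands,
                  holding_cost=1.0, stockout_cost=20.0):
    """One run of a simple (s, S) replenishment policy: the vendor
    refills the customer DC to S whenever stock drops to s or below."""
    stock, cost = order_up_to, 0.0
    for demand in demands:
        if stock <= reorder_point:
            stock = order_up_to              # vendor replenishes to S
        shortfall = max(demand - stock, 0)   # unmet demand this period
        stock = max(stock - demand, 0)
        cost += holding_cost * stock + stockout_cost * shortfall
    return cost

def simulation_based_search(demands):
    """Grid-search the (s, S) parameters on simulated demand --
    a bare-bones version of simulation-based search."""
    best = None
    for s, S in itertools.product(range(0, 30, 5), range(10, 60, 10)):
        if S <= s:
            continue
        cost = simulate_cost(s, S, demands)
        if best is None or cost < best[0]:
            best = (cost, s, S)
    return best

random.seed(42)
demands = [random.randint(5, 15) for _ in range(200)]
cost, s, S = simulation_based_search(demands)
print("best (s, S):", s, S)
```

A full SBS model would replace the grid with a smarter search and the toy policy with the variance-based VMI design, but the loop structure — simulate, score, adjust parameters — is the same.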
Findings: a win-win situation
The current VMI collaboration at Heineken results in 15% higher transport utilization compared to deliveries to DCs of customers without VMI. A new variance-based VMI design with enhanced parameter settings results in an even higher supply chain performance for Heineken and the customer compared to the current VMI implementation.
The benefits? A 7% higher truck utilization, meaning trucks are used more efficiently and transport costs go down. Also, a 70% reduction of the average stock levels in the customer DCs, while maintaining a 0% Out-of-Stock performance. The main finding: both supply chain partners benefit.
In conclusion, the VMI collaboration benefits both suppliers and customers. Rolf’s study shows that his SBS model is an adequate method to consistently identify robust settings that enhance supply chain performance, and he endorses using the model for future challenges. He recommends setting up (more) VMI collaboration with your supply chain partners as a tool to improve your supply chain performance.
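For readers curious what simulation-based search over policy parameters looks like in practice, here is a minimal, hypothetical sketch. It is not Rolf’s actual model: a toy periodic-review VMI policy is simulated under invented random demand, and a grid search picks the parameter setting with the lowest average stock that still avoids stock-outs.

```python
import random

def simulate(reorder_point, order_up_to, days=365, seed=0):
    """Simulate a toy periodic-review VMI policy under random daily demand.

    Returns (average stock, stock-out rate, number of shipments).
    """
    rng = random.Random(seed)
    stock, stockouts, shipments, stock_sum = order_up_to, 0, 0, 0
    for _ in range(days):
        demand = rng.randint(5, 15)        # hypothetical daily demand
        if demand > stock:
            stockouts += 1
            stock = 0
        else:
            stock -= demand
        if stock <= reorder_point:         # the vendor tops the stock back up
            shipments += 1
            stock = order_up_to
        stock_sum += stock
    return stock_sum / days, stockouts / days, shipments

def search(reorder_points, order_up_tos, max_stockout_rate=0.0):
    """Grid search: lowest average stock that still meets the service target."""
    best = None
    for rp in reorder_points:
        for out in order_up_tos:
            if out <= rp:
                continue
            avg_stock, so_rate, _ = simulate(rp, out)
            if so_rate <= max_stockout_rate and (best is None or avg_stock < best[0]):
                best = (avg_stock, rp, out)
    return best

best = search(range(10, 40, 5), range(20, 80, 10))
print(best)  # (average stock, reorder point, order-up-to level)
```

A real SBS model would of course use a far richer simulation and a smarter search than this exhaustive grid, but the structure, simulate a candidate setting and keep the best feasible one, is the same.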
On May the 14th, Data2Move hosted its sixth event at The Student Hotel in Eindhoven. In line with the previous events, this one was themed ‘Customer Sensing and Responding’ after the latest Data2Move research charter.
As our greatest common Data2Move denominator prescribes, the event had a strong focus on using data to boost supply chain performance. Our ‘data of choice’ for this event was customer data, and the questions to tackle sounded something like: “What do we already know about our customers that can help us improve our logistics operations?” and “What else should we know about them?” Since applying customer data to improve supply chain operations is still an underexplored field for many companies, expectations were high.
Meanwhile, in the Data2Move community…
Because we want to keep everyone well informed, we started the event with a short overview of what’s going on in the Data2Move community. We are proud to announce that we have ten ongoing student projects. Furthermore, the community has teamed up with ESCF to bring additional knowledge and unite forces to recruit the best students for our projects. In terms of partner collaborations, MMGuide, CTAC and TKT are setting up a first pilot, and many more small-scale collaborations are happening as we speak. And last but not least: we have five new partners: TLN, Evofenedex, RoyalHaskoning, DAF and RHI Magnesita. All are full ESCF members.
“Moving Consumer Goods, Not Vehicles”
The keynote of the day was in the capable hands of the newest Data2Move team member, Assistant Professor Virginie Lurkin (TU/e). Lurkin showed how both customers and suppliers can benefit from taking consumer behavior and preferences into account in operational decision-making. While most operational models focus on the ‘average’ customer, it is actually better to start celebrating the heterogeneity within your customer base. For example, it is important to realize that if the average service level is 97%, this does not mean that all of your customers enjoy an equally high service level. Even more importantly, you should always ask yourself how high the service level for your most valuable customers is.
Lurkin showed that if you do not take this heterogeneity into account, it will surely result in more miles, more inventory, higher costs and, most importantly, disappointed customers. But if you do take customer behavior into account, these issues can be resolved.
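Lurkin’s point about averages masking poor service is easy to demonstrate. In this small sketch, with invented fill-rate numbers, the volume-weighted overall service level looks healthy while one customer is served far worse:

```python
# Hypothetical (filled, ordered) order counts per customer.
orders_filled = {"A": (980, 1000), "B": (995, 1000), "C": (80, 100)}

# Per-customer fill rates versus the overall volume-weighted fill rate.
per_customer = {c: filled / total for c, (filled, total) in orders_filled.items()}
overall = (sum(f for f, _ in orders_filled.values())
           / sum(t for _, t in orders_filled.values()))

print(overall)       # roughly 0.979 overall...
print(per_customer)  # ...while customer C sits at 0.80
```

If C happens to be your most valuable customer, the comfortable overall figure is hiding exactly the problem Lurkin warns about.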
Shaping Research Stories
After the keynote, it was time to hand over the stage to a selection of our most excellent students. In short presentations, the students outlined the research projects they are currently conducting at one of the Data2Move partner companies. Afterwards, there was time to actively brainstorm with the Data2Move partners that were present: the students got a chance to ask for advice and guidance and to pick the partners’ expert brains. These brainstorm sessions proved to be an excellent educational ‘win-win situation’ for both the students and the Data2Move partners. In the text blocks below, you will find short abstracts of all the research projects that were discussed.
“The value of expiration date information for a grocery retailer” by Gijs Bastiaansen @ Jumbo
Gijs Bastiaansen’s project revolves around the expiration dates of products that customers need on a day-to-day basis: perishables. In his thesis, he aims to determine the percentage of customers that exhibit “grabbing” behavior: a grabbing customer does not bother to look at the expiration date, but just “grabs” whichever product is easiest to take. Although Gijs has access to some helpful data, there is no easy way to extract the desired numbers, as barcodes on perishables do not hold any information about expiration dates. As such, he is still looking for additional ways to support his assumptions, and during the event he got helpful suggestions on how to do so.
“Inventory optimization in a two-echelon supply chain” by Stan Brugmans @ Office Depot
Stan Brugmans is applying multi-echelon inventory theory to the internal supply chain of Office Depot. The supply chain within the scope of his project consists of the central distribution center and multiple local distribution centers. Office Depot is subject to a phenomenon that is typical for companies in locally controlled supply chains: the Bullwhip Effect. Stan explained that this effect causes small fluctuations in customer demand to result in highly variable demand at the start of the supply chain. The multi-echelon theory that he is applying focuses on product availability to customers and inventory cost reduction by integrally controlling the supply chain.
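The Bullwhip Effect Stan describes can be reproduced with a few lines of simulation. This hedged sketch uses a generic order-up-to policy with an exponentially smoothed forecast and made-up demand, not Stan’s model; each echelon reacts to changes in its forecast, so the variance of its orders exceeds the variance of the demand it faces:

```python
import random
import statistics

def echelon_orders(demands, alpha=0.4, lead_time=2):
    """One supply chain stage: order-up-to policy with exponential smoothing.

    Orders chase forecast changes, amplifying variability upstream.
    """
    forecast = demands[0]
    orders = []
    for d in demands:
        prev = forecast
        forecast = alpha * d + (1 - alpha) * forecast
        # Order covers demand plus the adjustment of the pipeline target.
        orders.append(max(0.0, d + (lead_time + 1) * (forecast - prev)))
    return orders

rng = random.Random(1)
customer_demand = [10 + rng.gauss(0, 1) for _ in range(2000)]
retailer_orders = echelon_orders(customer_demand)
wholesaler_orders = echelon_orders(retailer_orders)

# Variance grows at every step upstream: the bullwhip.
print(statistics.variance(customer_demand))
print(statistics.variance(retailer_orders))
print(statistics.variance(wholesaler_orders))
```

Running this shows demand variance growing at each echelon, which is exactly why small retail-level fluctuations become wild swings at the start of the chain, and why the integral control Stan applies can dampen them.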
“Forecasting tankcontainer and trucking capacity for an intermodal carrier” by Rijk van der Meulen @ H&S
Rijk van der Meulen is currently working on improving the container capacity planning at H&S by implementing statistical forecasting techniques. One of the key strengths of Rijk’s research is that he enhances these forecasts by taking into account that some information about future demand may already be available. By doing so, the mean forecast error is reduced by almost 50%! He also found out that only two of the partners were using advance demand information in practice. Hopefully, his research triggers more Data2Move partners to improve their capacity planning by taking this information into account.
“Determining the optimal rail fleet size and composition” by Bart Pierey @ Sabic
Bart Pierey recently started his research at SABIC. He focuses on determining the optimal size of the rail container fleet that SABIC uses to ship its products to its customers. Two aspects are crucial in determining this optimum: if there are too few Rail Tank Containers (RTCs) available on-site, SABIC is forced to scale down production, while having too many RTCs on site results in SABIC having to use the capacity of its competitors, something that will be penalized in the near future. This research is an excellent example of a Data2Move project: Bart will use the vast amount of GPS tracker data collected over the past two years to build a simulation model of the site.
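As a rough illustration of how GPS-derived cycle times feed into fleet sizing, here is a Little’s-law back-of-the-envelope sketch with made-up numbers (not Bart’s simulation model): the fleet needed is roughly the shipment rate times the average container round-trip time.

```python
import random
import statistics

# Stand-in for round-trip times (in days) extracted from GPS traces.
rng = random.Random(7)
round_trip_days = [rng.uniform(10, 20) for _ in range(1000)]

shipments_per_day = 12  # hypothetical required shipment rate

# Little's law: units in circulation = throughput x average cycle time.
mean_cycle = statistics.mean(round_trip_days)
fleet_needed = shipments_per_day * mean_cycle
print(round(fleet_needed))
```

A simulation model like Bart’s goes well beyond this average-based estimate, since it can capture variability in cycle times and the asymmetric costs of having too few versus too many RTCs on site.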
Results from student projects
The follow-up to the brainstorm sessions also featured two of our most talented (ex)student members. In two plenary student presentations, Lisa van Lierop and Rolf van der Plas shared their Master thesis research.
First up was former Data2Move student assistant Lisa van Lierop with an interesting presentation on a multi-echelon inventory approach at Hilti. Lisa explained the basics, struggles and opportunities that come with switching from a single-echelon to a multi-echelon inventory approach. She researched how integrally controlling your supply chain could lead to serious benefits such as higher revenue, better service levels and lower stocks.
During her presentation, the partners were asked to fill in a short survey on whether they were using multi-echelon inventory management and, if so, how. Some important and interesting results emerged from this survey:
Two participants stated that they use a multi-echelon approach and that it has led to improved service levels to their customers, as well as cost savings;
Most of the partners would consider switching to multi-echelon inventory management if it could promise at least a 5% reduction in physical stock;
Participants estimated that multi-echelon inventory management would have the largest effect in situations with high demand variability and high holding costs.
The thesis is titled “Quantifying the benefits of a multi-echelon inventory approach”. As soon as this research is finalized, it will be made available to the community.
Next, Rolf van der Plas explained how Vendor Managed Inventory could help Heineken and its customers achieve mutual benefits. If Heineken controls the inventory levels of its customers, this could result in lower transportation and inventory costs. At the same time, this will lead to fewer stock-outs, which ultimately results in happier customers! He supported his statements with the results of state-of-the-art simulation models that he implemented at Heineken. The full thesis can be found here.
Exciting and promising results can really work up an appetite. So after a short wrap-up, everybody was allowed some time to process this new information while enjoying some bites and beverages!
On October the 29th we will celebrate our second anniversary. Two years of Data2Move is an excellent reason to celebrate, reflect and re-evaluate the topics we have been working on. So make sure to save the date! If you have any suggestions for the program of the event, or topics that you would like the community to focus on, do not hesitate to send us an email!
For now, we look back at a very successful event. It was great to welcome so many new faces and partners and we hope to see you on the 29th of October!