Data2Move Article: Getting Data Basics Right

Why do we care about basic business data management?

We care about basic business data management for two important reasons. First, information is nowadays considered the fourth main production factor, next to materials, labor and finance. Through the ages, most companies have learned how to manage the original three production factors, but many still struggle to manage their data properly. Badly managed data leads to bad information, which leads to bad decisions and, in turn, to sub-optimal business. Second, basic business data management lays the foundation for the digital transformation of any company. Data science, machine learning, artificial intelligence, blockchain, the internet of things and the like are among today’s most hyped technologies, and new applications are being discovered as we speak. This has not gone unnoticed by Supply Chain Management professionals, who aim to put these technologies to use in their business. Yet doing so requires proper data management, which is still a struggle for many.

Data is often inaccurate, incomplete, or inconsistent. We still see essential business data being shared via mechanisms like USB sticks or email. Therefore, Data2Move invited its partners and professor Paul Grefen to discuss current company practices and how ‘basic’ data management can be improved as the first step towards proper data-driven business management and advanced data analytics. As the partners testified in a poll during our event, there is a lot to gain from proper data management.

How can we improve our data management?

In practice, data quality problems often occur as a result of decentralized and disconnected data. The Logistics department may record order prices excluding taxes, while Marketing stores order prices including tax, resulting in poor conformity. An operations manager may take a USB stick with HR data home, resulting in security vulnerabilities. Sales may only send updates once per week, giving rise to longer lead times as a result of poor timeliness. Conformity, security, and timeliness are just three of the common types of data quality problems.
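A conformity problem like the tax example above can be caught with a simple automated check. The sketch below is a minimal illustration with made-up order data and a 21% tax rate, not any partner’s actual schema:

```python
# Hypothetical records: Logistics stores prices excluding tax,
# Marketing stores the same orders including 21% tax.
TAX_RATE = 0.21

logistics = {"A-001": 100.00, "A-002": 250.00}   # price excl. tax
marketing = {"A-001": 121.00, "A-002": 302.50}   # price incl. tax

def conformity_issues(excl, incl, tax_rate, tolerance=0.01):
    """Return order ids whose prices disagree after removing tax,
    plus orders missing from the tax-exclusive records."""
    issues = []
    for order_id, price_incl in incl.items():
        if order_id not in excl:
            issues.append(order_id)
            continue
        if abs(price_incl / (1 + tax_rate) - excl[order_id]) > tolerance:
            issues.append(order_id)
    return issues

print(conformity_issues(logistics, marketing, TAX_RATE))  # → []
```

Running such a check nightly across departmental systems turns a silent conformity problem into an actionable exception list.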

Data quality can be improved by maintaining one centralized enterprise database, in which the rights and responsibilities of each department are clearly defined. We can distinguish two components within this database: the data store and the data warehouse. The data store contains low-level data that can be used for operational decision making, for example the number of orders due this week. The data warehouse stores filtered and aggregated data, on the basis of which more high-level management information can be generated.
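The split between the two components can be illustrated with a minimal sketch: low-level order records (the data store) are rolled up into a per-region summary of the kind a data warehouse would hold. The records and fields are invented; a real setup would of course use database technology rather than in-memory dicts:

```python
from collections import defaultdict
from datetime import date

# Operational data store: one low-level row per order (hypothetical schema).
orders = [
    {"due": date(2021, 3, 1), "region": "NL", "value": 120.0},
    {"due": date(2021, 3, 3), "region": "NL", "value": 80.0},
    {"due": date(2021, 3, 2), "region": "BE", "value": 200.0},
]

def build_warehouse_view(rows):
    """Aggregate operational rows into a per-region management summary,
    the kind of filtered, aggregated view a data warehouse stores."""
    summary = defaultdict(lambda: {"orders": 0, "revenue": 0.0})
    for row in rows:
        bucket = summary[row["region"]]
        bucket["orders"] += 1
        bucket["revenue"] += row["value"]
    return dict(summary)

print(build_warehouse_view(orders))
```

The operational rows answer “which orders are due this week?”, while the aggregated view answers the management question “how is each region performing?”.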

What can you do now?

Although good data management is not rocket science, it does require time and effort. When data quality is not in order, data analytics cannot help us make better decisions. Solid database management technology is key to guaranteeing a minimum level of data quality. There is no single solution that works for all companies: you need to be aware of the decisions being taken in your company and which information can help to improve that decision making. Operational decision-making needs much more low-level information than decision-making at the tactical/strategic level, so you may need different solutions at different levels. Finally, it is good practice to appoint a data manager (preferably not an IT-only expert, but someone with business knowledge) to prioritize and design your company’s plan towards proper data management. Welcome, Chief Data Officer!

Interview Hilti: Federico Scotti di Uccio and Lisa van Lierop

When Lisa van Lierop started her internship at Hilti, she already had a good understanding of Materials Management and Multi Echelon Inventory Optimization. To Federico Scotti di Uccio, Lisa and Hilti were a match made in heaven: “Lisa immediately expressed an enthusiasm to dive deep into Multi Echelon Inventory Optimization. This topic is of great interest to us at Hilti Logistics. On top of that, Lisa demonstrated a tenacity to collect and compute large amounts of data and information that is sometimes challenging to obtain because of the complexity of processes and stakeholders involved.”  

Inventory management is an important area of the supply chain and of Hilti’s business itself. As Federico explains, Lisa’s research is really beneficial to the company: “The right size/volume and positioning of inventory means that we can better serve our customers in the most efficient manner. Too much inventory has an impact on working capital and costs, too little has an impact on service. It is crucial to have the right methodology and understanding of how the network and product characteristics influence the optimal result.”

In the research Lisa conducted for Hilti, access to good data is vital. It forms the basis of all analysis and modelling. Without it, it is not possible to prove any theory or concept, let alone operationalize it. Acquiring and simultaneously processing large amounts of data is, however, one of the main challenges companies generally face in this context. That is why it is important that talented students get the opportunity to prove themselves at companies such as Hilti. They have up-to-date knowledge that can really make a difference in solving these complex problems.

The past few years, the TU/e has proven to be an excellent talent pool for Hilti to fish in. And as far as Federico is concerned, Lisa certainly is not the last intern they will hire. “We have a strong collaboration with TU/e and we will continue hosting students depending on our priorities and availability. These collaborations are a good opportunity for the student and the university to combine theory and practice. But it is not only that. For us at Hilti, it is an opportunity to really investigate some important and prioritized topics. An internship also gives the student a chance to experience the corporate world for a few months. And in some cases, a successful internship can mark the beginning of a promising career here at Hilti.”

And much to the delight of all the parties involved, this is exactly what happened to Lisa after she graduated. Federico recalls: “During her internship, Lisa not only demonstrated a good understanding of her field and the Hilti Supply Chain, but she also integrated very well into our  corporate culture, her team and the different stakeholders she had to deal with. Her energy and willingness to learn made her go the extra mile with her project. That is why we were pleased to welcome her into our company.”   

And Lisa is also pleased. “My first contact with Hilti was during an event organized by ESCF. The international environment and the interesting supply chain triggered my interest for the company. During my master thesis, I got a chance to explore life at Hilti, and it convinced me that I wanted to continue my career with this company. Currently, I am working as a Global Materials Manager at Hilti in Liechtenstein.” 

According to Federico, Data2Move played a large part in the success they had with the recruitment of interns. “Data2Move was an excellent support in recruiting talented students. The Data2Move community enables us to stay in touch with the academic world and gives us the opportunity to work with driven and enthusiastic students full of knowledge. During the internship, Data2Move is a big support, and that contributes to the development of the student and the success of the project.”

Data2Move Success Stories: Office Depot

For Laurens Kauffeld, it was a no-brainer to recruit Master student Stan Brugmans for their Multi Echelon Optimization project. “Anne (Recruitment and Talent Sourcer at Office Depot) and myself interviewed a couple of students and unanimously chose Stan. During his interview, Stan came across as professional and well prepared. He was able to explain why he chose our project and showed genuine interest in Office Depot.” An added bonus was that Laurens believed Stan would fit in nicely with the team.
Stan proved to be quite a catch for the company and the team at Office Depot felt very privileged to support him during his Master thesis research. During his studies Stan acquired excellent analytical and statistical skills. According to Laurens, these skills are critical for any supply chain optimization project. In addition, Stan also showed great commitment, dedication and work ethic. “Besides his theoretical qualities, Stan’s personality contributed enormously to the results and success of his project. Stan has a ‘can-do’ mentality, he works hard and always aims for the optimal solution.”

Stan started the Multi Echelon Optimization project with a great deal of enthusiasm. He dug right in and invested time to grind through all the data complexities. He also analyzed numerous approaches to calculate safety stock levels. This research was not only necessary, but also very beneficial to Office Depot. Office Depot’s Smart Choice product range is sourced in a Multi Echelon Supply Chain. By further optimizing their supply chain, they were able to continuously offer the best value to their customers.

According to Laurens, Stan’s contribution to the research was vital. “Multi Echelon is a very challenging topic that requires dedicated time from an analyst. Stan’s research improved data quality, initiated further collaboration between the countries and gave us a direction in how to further optimize our Multi Echelon Supply Chain. These are three key factors if you continuously want to improve your business.”

Because the Multi Echelon project was a hundred percent about using data to optimize service levels, it comes as no surprise that data played an important role in solving the problem. As Laurens explains: “With data we are able to model, optimize and simulate the real world without waiting months for the results and risking bad performance. To get the right stock in the right place at the right time, we need to understand our customer demand distribution and supplier lead time performance.” Since this information is often hidden in big data sets, Office Depot needed Stan’s specialist knowledge and analytical skills.

In the end, Stan and Laurens both look back on Stan’s research period at Office Depot as a successful and mutually beneficial collaboration. And it is still continuing. Much to Stan’s delight, a vacancy opened up within the department at the end of his internship and he is now working as a supply and demand planner. “When I applied for the Multi Echelon project, I already considered Office Depot as a potential employer because of their international character. During my Master thesis it became clear that I wanted to stay at Office Depot. They gave me lots of learning opportunities, invested their time and commitment and showed me that they really value my work.”

Office Depot continues to develop and maintain high service and value in an increasingly competitive market. Data2Move plays an important part in keeping up these high standards for Office Depot. Thanks to the positive experience they had with Stan, Laurens is very open to other Data2Move projects in the future. “Data2Move helped us find and recruit interesting projects and talented students. They enable Office Depot to get access to state-of-the-art and up-to-date academic knowledge and the student projects give us the opportunity to work with driven and enthusiastic students. The support Data2Move showed during Stan’s thesis contributed to his development and the overall success of the project.”

Data2Move Research Stories: how to optimize the rail fleet composition using GPS-data?

Data science can really help to solve complex supply chain management challenges. In our Data2Move Research Stories, you can find out how our students tackle these challenges. This time we feature Bart Pierey’s Master thesis research at SABIC in Sittard. SABIC is a Saudi manufacturing company, active in petrochemicals, chemicals, industrial polymers, fertilizers and metals.

Where it started – challenge
With his research, Pierey wanted to optimize the rail fleet composition at SABIC based on, among other things, GPS data. This is because the rail yard close to one of SABIC’s production sites has limited capacity. Several companies share part of this yard, and available spaces are assigned on a ‘first come, first served’ basis. Consequently, the parking yard may be fully occupied when a new train arrives. If this is the case, the arriving train is rejected at the gate, which leads to operational problems. On the other hand, the fleet cannot be decreased too much, because a high availability of rail cars is needed to prevent production scale-downs.

Data validation and preparation
During his research, Pierey discovered that the quality of the GPS data, gathered from the fleet management system, was not optimal. He executed a data preparation and validation project which led to more accurate data. As Pierey states: “The initial data was not good enough but after the preparation and validation phase it was considered to be sufficient.”
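A preparation and validation step of this kind typically filters out implausible records before any modelling starts. The sketch below is a generic illustration with invented fields, coordinates and thresholds, not Pierey’s actual pipeline:

```python
from datetime import datetime

# Hypothetical raw GPS pings from a fleet management system.
raw_pings = [
    {"car": "RTC-1", "lat": 50.99, "lon": 5.86, "ts": datetime(2019, 4, 1, 8, 0)},
    {"car": "RTC-1", "lat": 0.0,   "lon": 0.0,  "ts": datetime(2019, 4, 1, 8, 5)},   # sensor default
    {"car": "RTC-1", "lat": 51.01, "lon": 5.87, "ts": datetime(2019, 4, 1, 8, 10)},
    {"car": "RTC-1", "lat": 51.01, "lon": 5.87, "ts": datetime(2019, 4, 1, 8, 10)},  # duplicate
]

def clean_pings(pings, bbox=(49.0, 54.0, 2.0, 8.0)):
    """Keep pings inside a plausible bounding box (min_lat, max_lat,
    min_lon, max_lon), dropping (0, 0) sensor defaults, out-of-range
    fixes and exact duplicates; return the result sorted by timestamp."""
    min_lat, max_lat, min_lon, max_lon = bbox
    seen, cleaned = set(), []
    for p in sorted(pings, key=lambda p: p["ts"]):
        if not (min_lat <= p["lat"] <= max_lat and min_lon <= p["lon"] <= max_lon):
            continue
        key = (p["car"], p["ts"])
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(p)
    return cleaned

print(len(clean_pings(raw_pings)))  # → 2
```

Simple rules like these catch the bulk of GPS data problems; garbage in really does mean garbage out for any model built on top.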

Simulation model
In his Master thesis, Pierey explains what steps he took to tackle the rail freight car fleet problems SABIC runs into. To get an understanding of the issue at hand and the characteristics of the system, he interviewed several stakeholders. “Planners and business both had interests, so I tried to solve the puzzle for SABIC, based mainly on GPS data,” Pierey explains. He developed a discrete event simulation model with stochastic holding and travel times and presented the final results to several business managers within the company. This model has not only been used to improve the general understanding of fleet behavior, but also to find an optimal fleet size which minimizes the utilization of parking space in the shared parking area.
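To give a flavor of such a model, here is a heavily simplified Monte Carlo sketch of cars cycling between the yard and customers. All parameters are invented, and a real discrete event model would be far richer:

```python
import random

def simulate_peak_parking(fleet_size, horizon=365.0, seed=1,
                          mean_travel=3.0, mean_holding=5.0, yard_stay=1.5):
    """Toy fleet simulation (times in days, all parameters invented):
    each car repeatedly parks at the home yard for a fixed stay, then
    travels to a customer, is held there, and travels back. Travel and
    holding times are exponentially distributed. Returns the peak number
    of cars parked at the yard simultaneously over the horizon."""
    rng = random.Random(seed)
    intervals = []  # (arrival_at_yard, departure_from_yard) per yard visit
    for _ in range(fleet_size):
        t = rng.uniform(0, yard_stay)  # desynchronize the fleet at t = 0
        while t < horizon:
            intervals.append((t, min(t + yard_stay, horizon)))
            # depart: outbound travel, customer holding, inbound travel
            t += yard_stay \
                 + rng.expovariate(1 / mean_travel) \
                 + rng.expovariate(1 / mean_holding) \
                 + rng.expovariate(1 / mean_travel)
    # sweep-line over arrivals (+1) and departures (-1) to find the peak
    events = sorted([(a, 1) for a, _ in intervals] + [(d, -1) for _, d in intervals])
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak
```

Sweeping fleet sizes through such a model is one way to locate an optimum that balances parking pressure against rail car availability.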

Near-optimal fleet size
Besides determining the optimal fleet size and composition, Pierey also took the impact of the holding time and travel time parameters into account. Pierey: “The holding time, the time rail cars stay at the customer’s production facility, has a major influence on the optimal fleet size. Travel time has only limited influence. That is why I advise focusing on decreasing the holding times at customers.” Pierey concluded that SABIC’s current fleet size is near-optimal, but that its fleet composition should be adjusted.

Shared parking yard planning
To the users of the rail yard, Pierey recommended sharing the multi-use parking space forecasts. “At present, the forecasts on the usage of space in the multi-use parking area are not shared between the various site users because these are confidential. If they are shared, the arrival and departure planning could be adjusted, resulting in fewer problems in this multi-use area. Therefore, I recommend open conversation and sharing these forecasts.”  

Modal shift
The train and rail car planning could also benefit from a modal shift, using other transport modes than just trains. As Pierey explains: “Another solution to decrease the safety stock at a company’s yard is to replace the train with another transport mode to fill the peaks in transport demand.”

Lessons learned from Pierey’s study:

  • Simulation can be of great value in understanding a system’s behavior. In addition, a simulation model can be used for testing several scenarios and policy adjustments.
  • Make sure that the data you enter in a system is correct. Data cleaning and validation actions are very important, as garbage in results in garbage out.

To make sure that data quality is guaranteed, it is key to properly maintain data collecting systems. Appointing a system owner will help, as this person will be responsible for the system and its functioning within the organization.

Data2Move Research Stories: When does multi-echelon inventory control pay off?

Data science helps to solve complex supply chain management challenges. In our Data2Move Research Stories, you can find out how our students tackle these challenges. This time, we feature Lisa van Lierop’s Master thesis research at Hilti AG in Liechtenstein. Hilti AG is a multinational company that develops, manufactures, and markets products and services for the construction, building maintenance and energy sector.

Where it started – challenge
When inventory is optimized locally, inventory control is often based on a single-echelon approach. But a single-echelon approach might not be optimal from a broader supply chain perspective. In order to optimize stock levels over multiple supply chain stages and still ensure a high service level to end customers, Van Lierop focused on the potential of multi-echelon inventory control. More precisely, she studied the potential of centralized inventory control under different settings. This should help to strive for optimized stock levels throughout the entire Hilti network, while at the same time ensuring a high service level to end customers.

Van Lierop addresses this challenge in her Master thesis ‘Quantifying the benefits of multi-echelon inventory control’.

Technical and organizational challenges
“Changing to a multi-echelon control policy is not easy”, Van Lierop states. “The main technical challenge is that you need to collect, and simultaneously process, a large amount of data. You also have to consider that in most supply chains, data needs to be exchanged between different firms.” On top of these technical challenges, organizational challenges pop up. When you optimize your inventory throughout the whole chain, local control is no longer necessary. Inventory should be managed centrally. However, this change requires someone, or some department, to take responsibility for the centralized inventory control. Another organizational challenge concerns benefit sharing: ‘How are the benefits of the new inventory control approach shared or divided between the different supply chain stages?’

Multinationals such as Hilti AG that look for answers to these important questions might benefit from software tools to support them. Still, considering the complex inventory landscape and organizational responsibilities, according to Van Lierop “it is no surprise that published real-world applications of multi-echelon inventory control are scarce.”

‘When does a multi-echelon inventory control policy pay off?’
Before a company decides to switch to a multi-echelon approach, they need to have a good indication of the potential for their product portfolio, according to Van Lierop. For some products, the benefits of a multi-echelon approach might be higher than for others. That is why the aim of Van Lierop’s research was to identify the product/supply chain characteristics for which a multi-echelon inventory control policy pays off. More concretely, she studied scenarios based on combinations of the following five dimensions: demand, demand variability, lead-time, lead-time variability and holding costs.
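Enumerating a full-factorial grid over such dimensions is straightforward; the levels below are illustrative only, as the thesis’s actual levels are not given here:

```python
from itertools import product

# Illustrative two-level grid over the five study dimensions.
levels = {
    "demand": ["low", "high"],
    "demand_variability": ["low", "high"],
    "lead_time": ["short", "long"],
    "lead_time_variability": ["low", "high"],
    "holding_costs": ["low", "high"],
}

# One scenario per combination of dimension levels.
scenarios = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(scenarios))  # → 32
```

Each of the 32 scenarios can then be fed into the same simulation, making the comparison across product/supply chain characteristics systematic.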

Simulation model
Van Lierop used a software program designed by Prof. Dr. Ton de Kok (ChainScope) for the multi-echelon safety stock optimization. By using simulation, she was able to model and compare different safety stock procedures and multi-echelon distribution networks. In her research approach, Van Lierop also used an MRP-based replenishment policy with forecasted demand, because many companies replenish their stock based on forecasts.

Findings
Results revealed that the benefits of a multi-echelon inventory control approach are the highest for items with a high lead-time to the first location in the distribution network, a high demand rate and high inventory holding costs. Van Lierop: “Furthermore, the savings for low demand, low cost items were relatively low. Especially when the first location in the distribution network can be quickly resupplied. So when companies consider a pilot for a centralized safety stock procedure in their distribution networks, they should focus on ‘high-potential’ items first.”

Data2Move Research Stories: H&S

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you will find out how students have answered them. This time we feature Rijk van der Meulen’s master thesis research at H&S Group, an international, intermodally operating Logistics Service Provider in the liquid foodstuff industry.

Where it started – challenges 
H&S Group asked Van der Meulen to focus on two operational challenges faced by many intermodally operating Logistics Service Providers:

– The efficient repositioning of empty tank containers

– Proactive planning of their drayage operations

Van der Meulen addressed these challenges in his thesis ‘Forecasting the required tank container and trucking capacity for an intermodal Logistics Service’. His research explores how you can predict demand more accurately and how these demand predictions facilitate better operational planning.

Insight into tank containers and trucking units per location and time
To tackle these challenges, it was important to extract valuable information from data. Van der Meulen needed insight into the expected number of loadings and deliveries. Also, he needed the corresponding requirements of trucking and tank container capacity. By combining these key aspects, he defined how many tank containers and trucking units are needed in a certain planning region at a given time.

Dynamic demand prediction
The true innovative character of Van der Meulen’s prediction methodology lies in the dynamic updating of the predictions of loadings and deliveries. He used a Bayesian technique to dynamically adjust the initial prediction based on new orders as they enter the system. This ‘advance demand information’ is demand for the future that is already known in the present. It ensures that planners have access to the most up-to-date and accurate loading and delivery predictions at any time.
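One simple way to formalize such a dynamic update is a normal-normal Bayesian combination of the prior forecast with the orders already booked. This is a simplified stand-in for the thesis method, with invented numbers:

```python
def update_forecast(prior_mean, prior_var, booked, booked_fraction, obs_var):
    """Normal-normal Bayesian update: total demand D has a normal prior,
    and the advance demand information 'booked' is modeled as
    booked ~ Normal(booked_fraction * D, obs_var). Returns the
    posterior mean and variance of D."""
    f = booked_fraction
    precision = 1 / prior_var + f * f / obs_var
    mean = (prior_mean / prior_var + f * booked / obs_var) / precision
    return mean, 1 / precision

# Example: prior forecast of 100 loadings; 30 already booked at a lead
# time where typically 25% of orders are in the system (implying ~120).
mean, var = update_forecast(100.0, 400.0, booked=30.0,
                            booked_fraction=0.25, obs_var=25.0)
print(round(mean, 1))  # → 110.0, pulled from 100 toward 30 / 0.25 = 120
```

As more orders arrive, the observation term dominates and the prediction converges on the realized demand, which is exactly the behavior planners want close to the delivery date.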

In his next step, Van der Meulen used the adjusted forecast to predict the required tank container and trucking capacity. He relied on multiple additional models based on the hierarchical top-down forecast approach and multiple linear regression to assess the effectiveness of the complete forecasting methodology. Its accuracy was put to the test during a one-month test case for two planning regions.
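The hierarchical top-down step can be sketched as follows (region names and shares are illustrative, not H&S data): an aggregate capacity forecast is split over planning regions in proportion to their historical shares.

```python
def top_down_forecast(total_forecast, historical_share):
    """Hierarchical top-down sketch: disaggregate an aggregate forecast
    over sub-series in proportion to their historical shares (weights
    are assumed to sum to 1)."""
    return {region: total_forecast * share
            for region, share in historical_share.items()}

# Illustrative: 100 forecast trucking units split over two regions.
regional = top_down_forecast(100.0, {"North": 0.6, "South": 0.4})
print(regional)  # → {'North': 60.0, 'South': 40.0}
```

The regional figures can then feed a regression that maps predicted loadings and deliveries to the required tank container and trucking capacity.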

Findings
The one-month test case showed that the dynamic prediction method increased the accuracy of the initial forecast by 65 percent. Using cost simulations, Van der Meulen estimates that this improved prediction accuracy can lead to a 5.2 percent reduction of the total costs associated with trucking operations. That’s a big step towards achieving the operational excellence necessary to survive in the low-margin industry of intermodal logistic service providers. Van der Meulen’s research strengthened H&S in their conviction that forecasting plays a vital role in addressing the challenges of empty tank container repositioning and drayage operations planning.

Spin-off project – implementation
As a result of these findings, H&S started a joint forecasting implementation project with Logistics Service Provider Den Hartogh and data science consultancy firm CQM. The goal of this collaboration is to implement the dynamic prediction methodology of van der Meulen’s research and integrate the implementation with the planning software at H&S and Den Hartogh. This allows both companies to plan their trucking and container operations better and achieve significant cost reductions while maintaining the same service to their customers.

Data2Move Research Stories: the VMI effect within Heineken’s Dutch supply chain

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you’ll find out how students have managed to answer them. This time: Rolf van der Plas and his master thesis on the Vendor Managed Inventory effect within the Dutch supply chain of Heineken.

Where it started

The global beer market is consolidating, with fewer local and more global brewing companies. The remaining players engage in hyper-competition, aiming to innovate, differentiate themselves from each other, and leverage their scale to increase operational excellence.

As part of Heineken’s drive to improve operational excellence, a new enterprise resource planning (ERP) system will be introduced. It has an optional module that supports collaboration based on the Vendor Managed Inventory (VMI) framework. Heineken has been exploring VMI collaboration with a number of customers. Rolf van der Plas aimed to validate the effectiveness of VMI for Heineken and its customers, such as retailers, by quantifying the effect on supply chain performance. He investigated:

  • Heineken’s transport utilization
  • Stock levels in the distribution centers (DCs) of the customer
  • Out-of-Stock performance in the customer DCs

Rolf selected three techniques to investigate VMI collaboration:

  • Data analytics to analyze the current effect
  • A simulation model to redesign the VMI process
  • A simulation-based searching (SBS) model to enhance the parameter settings used in the VMI designs

Findings: a win-win situation

The current VMI collaboration at Heineken results in 15% higher transport utilization compared to deliveries to DCs of customers without VMI. A new variance-based VMI design with enhanced parameter settings results in an even higher supply chain performance for Heineken and the customer compared to the current VMI implementation.

The benefits? A 7% higher truck utilization, meaning trucks are used more efficiently and transport costs go down. Also, a 70% reduction of the average stock levels in the customer DCs, while maintaining a 0% Out-of-Stock performance. The main finding: both supply chain partners benefit.

Further advice

In conclusion, the VMI collaboration is beneficial both for suppliers and for customers. Rolf’s study shows that his SBS model is an adequate method to consistently identify robust settings that enhance supply chain performance. Rolf endorses using the model for future challenges and recommends setting up (more) VMI collaborations with your supply chain partners as a tool to improve supply chain performance.


Event Report: Customer Sensing and Responding

14-05-2019, The Student Hotel, Eindhoven

On May 14th, Data2Move hosted its sixth event at The Student Hotel in Eindhoven. In line with the previous events, this event was themed ‘Customer Sensing and Responding’, after the latest Data2Move research charter.

As our greatest common Data2Move denominator prescribes, the event had a strong focus on using data to boost supply chain performance. Our ‘data of choice’ for this event was customer data, and the questions to tackle sounded something like: “What do we already know about our customers that can help us improve our logistics operations?” and “What else should we know about them?” Since applying customer data to improve supply chain operations is still an underexplored field for many companies, expectations were high.

Meanwhile, in the Data2Move community…

Because we want to keep everyone well informed, we started the event with a short overview of what’s going on in the Data2Move community. We are proud to announce that we have ten ongoing student projects. Furthermore, the community has teamed up with ESCF to bring additional knowledge and unite forces to recruit the best students for our projects. In terms of partner collaborations, MMGuide, CTAC and TKT are setting up a first pilot, and many more small-scale collaborations are happening as we speak. And last but not least: we have five new partners: TLN, Evofenedex, RoyalHaskoning, DAF and RHI Magnesita. All are full ESCF members.

“Moving Consumer Goods, Not Vehicles”

The keynote of the day was in the capable hands of the newest Data2Move team member, Assistant Professor Virginie Lurkin (TU/e). Lurkin showed how both customers and suppliers can benefit from taking consumer behavior and preferences into account in operational decision making. While most operational models focus on the ‘average’ customer, it is actually better to start celebrating the heterogeneity within your customer base. For example, it is important to realize that if the average service level is 97%, this does not mean that all of your customers enjoy an equally high service level. Even more importantly, you should always ask yourself how high the service level for your most valuable customers is.

Lurkin showed that if you do not take this heterogeneity into account, it will surely result in more miles, more inventory, higher costs and, most importantly, disappointed customers. But if you do take customer behavior into account, these issues can be resolved.

Shaping Research Stories

After the keynote, it was time to hand over the stage to a selection of our most excellent students. In short presentations, the students outlined the research projects they are currently conducting at one of the Data2Move partner companies. Afterwards, there was time to actively brainstorm with the Data2Move partners present. The students got a chance to ask for advice and guidance and to pick the partners’ expert brains. These brainstorm sessions proved to be an excellent educational ‘win-win’ for both the students and the Data2Move partners. In the text blocks below, you will find short abstracts of all the research projects that were discussed.

 

“The value of expiration date information for a grocery retailer” by Gijs Bastiaansen @ Jumbo

 

Gijs Bastiaansen’s project revolves around the expiration dates of products that customers need on a day-to-day basis: perishables. In his thesis, he tries to find out how to determine the percentage of customers that exhibit “grabbing” behavior; grabbing means that a customer does not bother to look at the expiration date, but just “grabs” whichever product is easiest to take. Although Gijs has access to some helpful data, there is no easy way to extract the desired numbers, as barcodes on perishables do not hold any information about expiration dates. As such, he is still looking for additional ways to support his assumptions. During the event he got helpful suggestions to do this.

 

“Inventory optimization in a two-echelon supply chain” by Stan Brugmans @ Office Depot

 

Stan Brugmans is applying multi-echelon inventory theory to the internal supply chain of Office Depot. The supply chain within the scope of his project is the central distribution center and multiple local distribution centers. Office Depot is subject to a phenomenon that is typical for companies in locally controlled supply chains: the Bullwhip Effect. Stan explained that this effect causes small fluctuations in customer demand to result in highly variable demand at the start of the supply chain. The multi-echelon theory that he is applying focusses on product availability to customers and inventory cost reduction by integrally controlling the supply chain.
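The Bullwhip Effect Stan describes can be reproduced with a textbook order-up-to policy driven by a moving-average forecast. The sketch below is a generic illustration of that mechanism, not Office Depot’s actual replenishment logic:

```python
import random
import statistics

def order_up_to_orders(demand, window=4, lead_time=2):
    """Moving-average forecast plus order-up-to policy: each period the
    retailer orders the observed demand plus the shift in its base-stock
    level. The forecast chases demand, which amplifies variability in
    the orders placed on the upstream echelon."""
    orders = []
    for t in range(window + 1, len(demand)):
        ma_now = sum(demand[t - window:t]) / window
        ma_prev = sum(demand[t - window - 1:t - 1]) / window
        base_stock_shift = (lead_time + 1) * (ma_now - ma_prev)
        orders.append(demand[t] + base_stock_shift)
    return orders

rng = random.Random(7)
demand = [rng.gauss(100, 10) for _ in range(500)]  # small customer fluctuations
orders = order_up_to_orders(demand)
# Order variance exceeds customer demand variance: the Bullwhip Effect.
print(statistics.pvariance(orders) > statistics.pvariance(demand))
```

Integrally controlled (multi-echelon) policies avoid this amplification because upstream stages see end-customer demand directly instead of the distorted order stream.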

“Forecasting tankcontainer and trucking capacity for an intermodal carrier” by Rijk van der Meulen @ H&S

 

Rijk van der Meulen is currently working on improving the container capacity planning at H&S. He is doing so by implementing statistical forecasting techniques. One of the key strengths of Rijk’s research is that he enhances these forecasts by taking into account that some information about future demand may already be available. By doing so, the main forecast error is reduced by almost 50%! He also found out that only two of the partners were using advance demand information in practice. Hopefully, his research triggers more Data2Move partners to improve their capacity planning by taking this information into account.

 

“Determining the optimal rail fleet size and composition” by Bart Pierey @ SABIC

 

Bart Pierey recently started his research at SABIC. He focuses on determining the optimal size of the rail container fleet that SABIC uses to ship its products to its customers. Two aspects are crucial in determining this optimum: if there are too few Rail Tank Containers (RTCs) available on-site, SABIC is forced to scale down production, while having too many RTCs on-site results in SABIC having to use the capacity of its competitors, something that will be penalized in the near future. This research is an excellent example of a Data2Move project. Bart will use the vast amount of GPS tracker data that has been collected over the past two years to build a simulation model of the site.

 

Results from student projects

The follow-up to the brainstorm sessions also featured two of our most talented (ex-)student members. In two plenary presentations, Lisa van Lierop and Rolf van der Plas shared their Master’s thesis research.

First up was former Data2Move student assistant Lisa van Lierop with an interesting presentation on the multi-echelon inventory approach at Hilti. Lisa explained the basics, struggles and opportunities that come with switching from a single-echelon to a multi-echelon inventory approach. She researched how integrally controlling your supply chain can lead to substantial benefits such as higher revenue, better service levels and lower stocks.

During her presentation, the partners were asked to fill in a short survey about whether they were using multi-echelon inventory management and, if so, how. Some important and interesting results emerged from this survey:

  • Two participants stated that they use a multi-echelon approach and that it has led to improved service levels to their customers, as well as cost savings;
  • Most of the partners would consider switching to multi-echelon inventory management if it could promise a reduction in physical stock of at least 5%;
  • Participants estimated that demand variability and holding costs are the factors through which multi-echelon inventory management would have its largest effect.

The thesis is titled “Quantifying the benefits of a multi-echelon inventory approach”. As soon as this research is finalized, it will be made available to the community.

Next, Rolf van der Plas explained how Vendor Managed Inventory could help Heineken and its customers achieve mutual benefits. If Heineken controls the inventory levels of its customers, this could result in lower transportation and inventory costs. At the same time, it will lead to fewer stock-outs, which ultimately results in happier customers! He supported his statements with the results of state-of-the-art simulation models that he implemented at Heineken. The full thesis can be found here.

Exciting and promising results can really work up an appetite. So after a short wrap-up, everybody was allowed some time to process this new information while enjoying some bites and beverages!

Next steps

On October 29th we will celebrate our second anniversary. Two years of Data2Move is an excellent reason to celebrate, reflect and re-evaluate the topics that we have been working on. So make sure to save the date! If you have any suggestions for the programme of the event, or topics that you would like the community to focus on, do not hesitate to send us an email!

For now, we look back at a very successful event. It was great to welcome so many new faces and partners and we hope to see you on the 29th of October!

 

Data2Move Research Stories: A decision support tool to deal with imbalances in container flows

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories you will find out how students have managed to answer them. This time: Auke Holle’s master thesis, undertaken at Den Hartogh Logistics, an important player in the containerized transportation industry. The focus of this thesis: how can demand price elasticities be exploited to achieve a better balance of container flows in the transport network?

Where it started – challenges & strategies

Like many other logistics service providers, Den Hartogh Logistics operates within complex supply chains. One of the important challenges they face is the imbalance in container flows: if more freight flows from location A to B than from B to A, empty containers pile up at location B. Logistics service providers commonly apply two strategies:

  • The repositioning of empty containers from surplus to shortage locations.
  • Pricing strategies to influence demand such that fewer repositioning movements – and thus lower costs – are necessary.

Holle’s work focuses on these pricing strategies.

The need for quantitative models

Holle developed a simulation model that measures the impact of several pricing strategies on the company’s profit. An important advantage of this model is that it not only considers imbalances at a single location, but also includes the spill-over effects of changes at one location on the profit generated at other locations. It thus considers the total profit generated throughout the entire transport network.
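Holle’s actual model is far richer, but the mechanism – prices shifting demand per lane via an elasticity, and the lane imbalance driving repositioning cost – can be sketched for a hypothetical two-location network (all parameters are invented):

```python
# Toy two-lane network A<->B with linear price-elastic demand; illustrative only.
def lane_demand(base, price, base_price=1000.0, elasticity=-1.5):
    """Demand on a lane reacts linearly to the relative price change."""
    return max(0.0, base * (1 + elasticity * (price - base_price) / base_price))

def network_profit(price_ab, price_ba, move_cost=600.0, repo_cost=400.0):
    d_ab = lane_demand(120, price_ab)  # structurally heavy lane A -> B
    d_ba = lane_demand(80, price_ba)   # light backhaul lane B -> A
    loaded = d_ab * (price_ab - move_cost) + d_ba * (price_ba - move_cost)
    empties = abs(d_ab - d_ba)         # empty containers piling up at B
    return loaded - empties * repo_cost

print(network_profit(1000, 1000))  # status quo: equal prices, imbalanced flows
print(network_profit(1050, 900))   # surcharge the heavy lane, discount the backhaul
```

In this sketch the surcharge/discount pair lifts total network profit even though it gives away margin on every backhaul shipment – exactly the network-wide trade-off such a simulation model is built to evaluate.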

Findings

An important insight is that ‘solving’ an imbalance in one location does not necessarily increase the total profit. Instead, the model considers the entire network to determine the discount levels that should be given to customers that ship from surplus locations, as well as the price increases to charge to customers that ship from shortage locations. While the model does not maximize profit, it ensures increased profit compared to the status quo across many possible demand scenarios – even in extreme cases when, for instance, all customers want to ship from the same location.

Simulation can benefit the total supply chain

While Holle’s research focuses on the profit improvements for Den Hartogh, the implementation of the simulation model as a decision support tool will also benefit Den Hartogh’s customers. More balanced container flows reduce repositioning costs, which, in turn, allows Den Hartogh to charge lower prices while maintaining the same service level – potentially leading to a considerable competitive advantage.

A personal note

According to Holle, having insight into the effects of price setting is of major importance. “Den Hartogh Logistics was aware of the existence of the effects of a demand adjustment. They knew that solving an imbalance is no guarantee for an improvement in profit, but the company had not quantified the impact. In some cases, the effects of solving an imbalance surprised me, as we identified large differences in the effect of solving an imbalance from either an import or export perspective.”

Event Report: Data Driven Last Mile

19-02-2019 – Evoluon

On Tuesday the 19th of February, Data2Move community members gathered at the Evoluon in Eindhoven for a brand new Data2Move event. After events on Collaboration and Data Driven Inventory, this event revolved around the topic of transportation. Themed Data Driven Last Mile, the event aimed to show the participants how different variables affect transportation and, in particular, how data can help enhance transport planning decisions.

After a delicious lunch, Prof. Luuk Veelenturf opened the event by introducing the programme of the day. However, there was no time to sit back and relax for too long because the first part of the scheduled workshop called everyone into action. The goal of this workshop? To show the participants how they can use data to increase the quality of their transportation decisions.

Workshop Part I: Travel Time Data Analysis

Before the workshop started, Veelenturf shared the results of the guessing exercise that was part of the registration procedure for the event. On the registration form, every participant had to make an educated guess of the average speed of a truck on Dutch roads during its delivery route, in two-hour intervals throughout the day. The winner, Mr. Ingmar Scholten (CTAC), had an impressive score of 59%. Could a data-driven approach beat this excellent score…?

At the start of the workshop, Veelenturf briefly addressed the planning of delivery routes (routing) and how some routes may take longer depending on factors such as length, weather, time of day and day of the week. After this, the participants were divided into small teams of four to five people. Based on a large dataset of truck delivery routes (including speed, time and distance), each team had to predict the average speed within two-hour time windows from 6:00 until 18:00, as well as a delivery time window that would best match those average speeds. This input was then compared against approximately a thousand randomly pre-picked routes. Each prediction was assessed by calculating the percentage of trips that would actually have arrived within the time window promised to customers, given the forecasted speeds and the chosen time window.
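The scoring logic can be sketched roughly as follows (the trip data, slot structure and function are hypothetical, not the workshop’s actual tool):

```python
# Hypothetical sketch of the workshop's on-time scoring; all data invented.
def on_time_share(trips, speed_forecast, window_hours):
    """trips: list of (departure_hour, distance_km, actual_duration_h).
    speed_forecast: dict mapping two-hour slot start (6, 8, ..., 16) -> km/h.
    A trip counts as on time if its actual duration falls within the promised
    window centred on the predicted duration."""
    hits = 0
    for departure, distance, actual in trips:
        slot = min(16, max(6, departure - departure % 2))  # two-hour slot start
        predicted = distance / speed_forecast[slot]
        if abs(actual - predicted) <= window_hours / 2:
            hits += 1
    return hits / len(trips)

trips = [(7, 60, 1.1), (9, 80, 1.3), (11, 90, 1.8), (13, 100, 1.4), (17, 50, 0.8)]
forecast = {6: 55, 8: 60, 10: 65, 12: 70, 14: 70, 16: 60}
print(f"{on_time_share(trips, forecast, window_hours=0.5):.0%}")  # → 80%
```

The trade-off the teams faced is visible here: a wider window raises the on-time share but is less attractive to customers, while a narrow window punishes every forecasting error.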

All teams were supervised by a BSc/MSc/PhD/PDEng student with access to the dataset. It was up to the partners to discuss and think of ways to filter the data to arrive at average speeds and a suitable delivery time window. The students were equipped with a pre-programmed tool to aid the process. Most teams arrived at suitable average speeds. However, the winning team of this first part of the workshop did not succeed in beating Mr. Scholten’s score… yet. The question remained whether this score would be beaten at all in the second part of the workshop.

Keynote by Stefan Minner (TU Munich) on Routing with Uncertain Travel and Service Times

Anyone under the impression that, after the exciting first part of the workshop, they could finally sit back and let someone else do the work was most certainly wrong. The keynote by Prof. Stefan Minner was very engaging but challenging. Minner talked about a data-driven approach to the routing of delivery services under uncertain travel and service times. Most models use deterministic travel and service times, but according to Minner this produces incomplete results in practice. Minner pointed out how machine learning can be applied to logistics and transportation. He also addressed the difference between predictive analytics (a sequential approach) and prescriptive analytics (an integrated approach) and pointed out that a significant increase in forecasting accuracy does not necessarily lead to a significant increase in performance. In conclusion, Minner explained the data-driven approach to the vehicle routing problem with time windows and showed some computational results and numerical tests of this approach. After this, it was time for a well-deserved coffee break and some networking opportunities.

Different kinds of analyses on data

Workshop Part II: One last time to improve your estimates

After the break, the results of the first workshop were discussed in the second part. Veelenturf shared some of the results of a study by one of his students on the same dataset. This student had derived 18 different speed profiles based on differences in distance, urbanisation and day of the week. By using these profiles, the student managed to predict travel speeds such that almost 90% of routes arrived within the specified time window. Based on this prediction, the student also managed to optimise the truck planning using software developed by TU/e.

Inspired by this example, the teams then got a chance to improve their scores by making different speed profiles based on distance. Each team again had to produce speed profiles and a time window, but now they also had to choose three distance categories, each with its own speed profile.
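The profile-building step can be sketched as follows (the distance breakpoints and trip records are invented): group historical trips into distance categories and use the mean observed speed per category and time slot as the forecast.

```python
# Rough sketch of distance-based speed profiles; all numbers invented.
from collections import defaultdict

def build_profiles(trips, breakpoints=(30, 100)):
    """trips: (distance_km, slot_start_hour, speed_kmh) tuples.
    Returns the mean speed per (distance category, slot): short urban trips
    tend to be slower than long highway trips, so separate profiles beat one
    overall average."""
    def category(distance):
        return sum(distance > b for b in breakpoints)  # 0=short, 1=mid, 2=long
    sums = defaultdict(lambda: [0.0, 0])
    for distance, slot, speed in trips:
        key = (category(distance), slot)
        sums[key][0] += speed
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

trips = [(15, 6, 35), (20, 6, 40), (150, 6, 80), (160, 6, 85), (60, 8, 60)]
profiles = build_profiles(trips)
print(profiles[(0, 6)], profiles[(2, 6)])  # urban vs long-haul morning speed
```

Choosing the breakpoints is exactly the kind of parameter setting the teams experimented with: more categories capture more structure in the data, but each category needs enough trips to give a reliable average.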

Different Speed Profiles

Compared to the initial guesswork and the results of the first part of the workshop, every team showed a large increase in its on-time score. This shows that further exploration of the data and the determination of additional speed profiles was beneficial to the on-time scores. A powerful conclusion that illustrates the importance of good data analytics and critical thinking.

After all this data-crunching, it was finally time to announce the winning team. Every member of the winning team received a small device that enables you to track your own speed throughout the day. We will have to find out at the next event whether the results of their personal speed tracking are just as impressive as their prediction accuracy.

Next steps

The theme of this Data2Move event was transportation. The topic of the upcoming Data2Move event in May is Customer Sensing and Responding. This next event will feature a number of ongoing student projects (Bachelor and Master). Students will share the valuable insights they have discovered, and there is room to give them your input as a professional. As Customer Sensing and Responding is the last charter, for the event after May we, as a community, can go in any direction we want. Please do not hesitate to share your ideas for possible topics.