Insights

Data2Move Research Stories: the VMI effect within Heineken’s Dutch supply chain

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you’ll find out how students have managed to answer them. This time: Rolf van der Plas and his master thesis on the Vendor Managed Inventory effect within the Dutch supply chain of Heineken.

Where it started

The global beer market is consolidating, with fewer local and more global brewing companies. The remaining players enter into hyper-competition to innovate and differentiate themselves from each other, in order to leverage their scale for increasing operational excellence.

As part of Heineken’s drive to improve operational excellence, a new enterprise resource planning (ERP) system will be introduced. It has an optional module that supports collaboration based on the Vendor Managed Inventory (VMI) framework. Heineken has been exploring VMI collaboration with a number of customers. Rolf van der Plas aimed to validate the effectiveness of VMI for Heineken and its customers, such as retailers, by quantifying the effect on supply chain performance. He investigated:

  • Heineken’s transport utilization
  • Stock levels in the distribution centers (DCs) of the customer
  • Out-of-Stock performance in the customer DCs

Rolf selected three techniques to investigate VMI collaboration:

  • Data analytics to analyze the current effect
  • A simulation model to redesign the VMI process
  • A simulation-based search (SBS) model to tune the parameter settings used in the VMI designs (a minimal sketch of this idea follows below)
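
To give a flavour of what a simulation-based search over VMI parameters looks like, here is a minimal sketch in Python. The demand model, cost weights, truck capacity and parameter grid are illustrative assumptions, not taken from Rolf’s thesis or Heineken data.

```python
# Minimal sketch of a simulation-based search (SBS) over VMI parameter
# settings. Demand, cost weights and the grid are illustrative assumptions.
import random

def simulate_vmi(reorder_point, order_up_to, days=365, seed=42):
    """Simulate a single-SKU VMI policy at a customer DC and return KPIs."""
    rng = random.Random(seed)
    stock, stockouts, total_stock, shipments, units_shipped = order_up_to, 0, 0, 0, 0
    for _ in range(days):
        demand = max(0, int(rng.gauss(100, 30)))   # assumed daily demand
        if demand > stock:
            stockouts += 1
        stock = max(0, stock - demand)
        if stock <= reorder_point:                 # vendor reviews and replenishes
            units_shipped += order_up_to - stock
            shipments += 1
            stock = order_up_to
        total_stock += stock
    truck_capacity = 1000                          # assumed units per truck
    utilization = units_shipped / (shipments * truck_capacity) if shipments else 0.0
    return {"avg_stock": total_stock / days,
            "stockout_days": stockouts,
            "truck_utilization": min(1.0, utilization)}

def search(grid):
    """Score every parameter setting by simulation and return the best one."""
    def score(kpi):  # illustrative weights: penalize stock and stockouts
        return kpi["avg_stock"] + 1000 * kpi["stockout_days"] - 500 * kpi["truck_utilization"]
    return min(grid, key=lambda p: score(simulate_vmi(*p)))

if __name__ == "__main__":
    grid = [(r, s) for r in range(100, 600, 100) for s in range(600, 1600, 200)]
    best = search(grid)
    print("best (reorder point, order-up-to):", best, simulate_vmi(*best))
```

The same loop generalizes to any VMI design: swap in a different replenishment rule, replay it against (simulated or historical) demand, and keep the parameter setting with the best score.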

Findings: a win-win situation

The current VMI collaboration at Heineken results in 15% higher transport utilization compared to deliveries to DCs of customers without VMI. A new variance-based VMI design with tuned parameter settings results in an even higher supply chain performance for Heineken and the customer compared to the current VMI implementation.

The benefits? A 7% higher truck utilization, meaning trucks are used more efficiently and transport costs go down. Also, a 70% reduction of the average stock levels in the customer DCs, while keeping Out-of-Stock occurrences at 0%. The main finding: both supply chain partners benefit.

Further advice

In conclusion, the VMI collaboration is beneficial for both suppliers and customers. Rolf’s study shows that his SBS model is an adequate method to consistently identify robust settings that enhance supply chain performance. Rolf recommends using the model for future challenges, and setting up (more) VMI collaborations with your supply chain partners as a tool to improve supply chain performance.

Also check out our previous research stories

Event Report: Customer Sensing and Responding

14-05-2019, The Student Hotel, Eindhoven

On May 14th, Data2Move hosted its sixth event at The Student Hotel in Eindhoven. In line with the previous events, this one was themed ‘Customer Sensing and Responding’, after the last Data2Move research charter.

As our common Data2Move denominator prescribes, the event had a strong focus on using data to boost supply chain performance. Our ‘data of choice’ for this event was customer data, and the questions to tackle sounded something like: “What do we already know about our customers that can help us improve our logistics operations?” and “What else should we know about them?” Since applying customer data to improve supply chain operations is still an underexplored field for many companies, expectations were high.

Meanwhile, in the Data2Move community…

Because we want to keep everyone well informed, we started the event with a short overview of what’s going on in the Data2Move community. We are proud to announce that we have ten ongoing student projects. Furthermore, the community has teamed up with ESCF to bring in additional knowledge and to unite forces in recruiting the best students for our projects. In terms of partner collaborations, MMGuide, CTAC and TKT are setting up a first pilot, and many more small-scale collaborations are happening as we speak. And last but not least: we have five new partners: TLN, Evofenedex, RoyalHaskoning, DAF and RHI Magnesita. All are full ESCF members.

“Moving Consumer Goods, Not Vehicles”

The keynote of the day was in the capable hands of the newest Data2Move team member, Assistant Professor Virginie Lurkin (TU/e). Lurkin showed how both customers and suppliers can benefit from taking consumer behavior and preferences into account in operational decision making. While most operational models focus on the ‘average’ customer, it is actually better to start celebrating the heterogeneity within your customer base. For example, it is important to realize that if the average service level is 97%, this does not mean that all of your customers enjoy an equally high service level. Even more important, you should always ask yourself how high the service level for your most valuable customers is.

Lurkin showed that if you do not take this heterogeneity into account, it will surely result in more miles, more inventory, higher costs and, most importantly, disappointed customers. But if you do take customer behavior into account, these issues can be resolved.
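
As a back-of-the-envelope illustration of Lurkin’s point (with made-up numbers, not from the keynote): an average service level of 97% can hide a much lower level for the customers that matter most.

```python
# Illustrative numbers only: a 97% average service level can mask a weak
# service level for a small but valuable customer segment.
segments = {                      # assumed customer segments
    "small accounts": {"orders": 900, "filled": 882},   # 98%
    "key accounts":   {"orders": 100, "filled":  88},   # 88%
}
total_orders = sum(s["orders"] for s in segments.values())
total_filled = sum(s["filled"] for s in segments.values())
print(f"average service level: {total_filled / total_orders:.1%}")   # 97.0%
for name, s in segments.items():
    print(f"{name}: {s['filled'] / s['orders']:.1%}")
```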

Shaping Research Stories

After the keynote, it was time to hand over the stage to a selection of our most excellent students. In short presentations, the students outlined the research projects they are currently conducting at one of the Data2Move partner companies. Afterwards, there was time to actively brainstorm with the Data2Move partners present. The students got the chance to ask for advice and guidance and to pick the partners’ expert brains. These brainstorm sessions proved to be an excellent educational ‘win-win situation’ for both the students and the Data2Move partners. In the text blocks below, you will find short abstracts of all the research projects that were discussed.


“The value of expiration date information for a grocery retailer” by Gijs Bastiaansen @ Jumbo


Gijs Bastiaansen’s project revolves around the expiration dates of products that customers need on a day-to-day basis: perishables. In his thesis, he tries to determine the percentage of customers that exhibit “grabbing” behavior; grabbing means that a customer does not bother to look at the expiration date, but just “grabs” whichever product is easiest to take. Although Gijs has access to some helpful data, there is no easy way to extract the desired numbers, as barcodes on perishables do not hold any information about expiration dates. As such, he is still looking for additional ways to support his assumptions. During the event, he received helpful suggestions on how to do this.


“Inventory optimization in a two-echelon supply chain” by Stan Brugmans @ Office Depot


Stan Brugmans is applying multi-echelon inventory theory to the internal supply chain of Office Depot. The supply chain within the scope of his project consists of the central distribution center and multiple local distribution centers. Office Depot is subject to a phenomenon that is typical for locally controlled supply chains: the Bullwhip Effect. Stan explained that this effect causes small fluctuations in customer demand to result in highly variable demand at the start of the supply chain. The multi-echelon theory he is applying focuses on product availability to customers and inventory cost reduction by controlling the supply chain integrally.
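
For readers unfamiliar with the Bullwhip Effect, the toy simulation below illustrates the mechanism Stan described: a store that forecasts with a short moving average and replenishes with an order-up-to rule passes far more variable orders to the DC than the demand it actually sees. All numbers and the policy are simplified assumptions for illustration, not Office Depot data.

```python
# Toy bullwhip illustration: compare the variability of customer demand with
# the variability of the orders a store places on its DC.
import random, statistics

rng = random.Random(1)
demand = [max(0.0, rng.gauss(100, 10)) for _ in range(500)]   # customer demand

orders, inventory, window, cover = [], 400.0, 5, 4
for t, d in enumerate(demand):
    inventory -= d
    forecast = statistics.mean(demand[max(0, t - window + 1):t + 1])
    target = cover * forecast                 # order-up-to level
    order = max(0.0, target - inventory)      # order placed on the DC
    inventory += order                        # assume immediate delivery
    orders.append(order)

print("std of customer demand :", round(statistics.pstdev(demand), 1))
print("std of orders on the DC:", round(statistics.pstdev(orders), 1))   # roughly twice as high
```

Running this shows the order stream roughly doubling the standard deviation of demand, which is exactly the amplification a multi-echelon, integrally controlled policy tries to dampen.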

“Forecasting tankcontainer and trucking capacity for an intermodal carrier” by Rijk van der Meulen @ H&S


Rijk van der Meulen is currently working on improving the container capacity planning at H&S. He is doing so by implementing statistical forecasting techniques. One of the key strengths of Rijk’s research is that he enhances these forecasts by taking into account that some information about future demand may already be available. By doing so, the forecast error is reduced by almost 50%! He also found out that only two of the partners were using advance demand information in practice. Hopefully, his research triggers more Data2Move partners to improve their capacity planning by taking this information into account.
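
The general idea of using advance demand information can be sketched as follows. This is a hedged illustration of the concept, not H&S’s actual forecasting method: forecast only the part of demand that is still unknown, and add it to the orders that are already booked.

```python
# Sketch: combine firm (already booked) orders with a simple baseline forecast
# of the historically unbooked part of demand. All figures are toy numbers.
def forecast_with_adi(history_unknown_part, firm_orders_next_period):
    """history_unknown_part: past demand that was NOT booked in advance."""
    n = min(8, len(history_unknown_part))
    baseline = sum(history_unknown_part[-n:]) / n      # naive moving average
    return firm_orders_next_period + baseline

# toy example: 120 containers already booked, unseen walk-in demand averages ~40
print(forecast_with_adi([38, 45, 41, 37, 44, 39, 42, 40], 120))   # ≈ 160.75
```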


“Determining the optimal rail fleet size and composition” by Bart Pierey @ Sabic


Bart Pierey recently started his research at SABIC. He focuses on determining the optimal size of the rail container fleet that SABIC uses to ship its products to its customers. Two aspects are crucial in determining this optimum: if there are too few Rail Tank Containers (RTCs) available on-site, SABIC is forced to scale down production, while having too many RTCs on site results in SABIC having to use the capacity of its competitors – something that will be penalized in the near future. This research is an excellent example of a Data2Move project. Bart will use the vast amount of GPS tracker data collected during the past two years to build a simulation model of the site.


Results from student projects

The follow-up to the brainstorm sessions also featured two of our most talented (ex-)student members. In two plenary presentations, Lisa van Lierop and Rolf van der Plas shared their master’s thesis research.

First up was former Data2Move student assistant Lisa van Lierop with an interesting presentation on a multi-echelon inventory approach at Hilti. Lisa explained the basics, struggles and opportunities that come with switching from a single-echelon to a multi-echelon inventory approach. She researched how controlling your supply chain integrally could lead to serious benefits such as higher revenue, better service levels and lower stocks.

During her presentation, the partners were asked to fill in a short survey on whether – and if so, how – they are using multi-echelon inventory management. Some important and interesting results emerged from this survey:

  • Two participants stated that they use a multi-echelon approach and that it has led to improved service levels to their customers, as well as cost savings;
  • Most of the partners would consider switching to multi-echelon inventory management if it at least could promise a reduction of 5% of physical stock;
  • Participants estimated that multi-echelon inventory management would have the largest effect as a result of demand variability and holding costs.

The thesis is titled “Quantifying the benefits of a multi-echelon inventory approach”. As soon as this research is finalized, it will be made available to the community.

Next, Rolf van der Plas explained how Vendor Managed Inventory could help Heineken and its customers achieve mutual benefits. If Heineken controls the inventory levels of its customers, this could result in lower transportation and inventory costs. At the same time, it leads to fewer stock-outs, which ultimately results in happier customers! He supported his statements by showing the results of state-of-the-art simulation models that he implemented at Heineken. The full thesis can be found here.

Exciting and promising results can really work up an appetite. So after a short wrap-up, everybody was allowed some time to process this new information while enjoying some bites and beverages!

Next steps

On October 29th we will celebrate our second anniversary. Two years of Data2Move is an excellent reason to celebrate, reflect and re-evaluate the topics we have been working on. So make sure to save the date! If you have any suggestions for the program of the event, or topics that you would like the community to focus on, do not hesitate to send us an email!

For now, we look back at a very successful event. It was great to welcome so many new faces and partners and we hope to see you on the 29th of October!


Data2Move Research Stories: A decision support tool to deal with imbalances in container flows

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories you will find out how students have managed to answer them. This time: Auke Holle’s master thesis, undertaken at Den Hartogh Logistics, an important player in the containerized transportation industry. The focus of this thesis: how can demand price elasticities be used to achieve a better balance of container flows in the transport network?

Where it started – challenges & strategies

Den Hartogh Logistics – and many other logistics service providers – operate within complex supply chains. One of the important challenges they face is the imbalance in container flows: if more freight flows from location A to B than from B to A, empty containers pile up at location B. Logistics service providers commonly apply two strategies:

  • The repositioning of empty containers from surplus to shortage locations.
  • Pricing strategies to influence demand such that fewer repositioning movements – and thus lower costs – are necessary.

Holle’s work focuses on these pricing strategies.

The need for calculating models

Holle developed a simulation model that measures the impact of several pricing strategies on the company’s profit. An important advantage of this model is that it not only considers imbalances at a single location, but also includes the spill-over effects of changes at one location on the profit generated at other locations. It thus considers the total profit generated throughout the entire transport network.
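
The sketch below illustrates the general idea of evaluating a pricing strategy on the whole network rather than lane by lane. The lanes, elasticities and costs are made-up assumptions, not Den Hartogh data or Holle’s actual model.

```python
# Sketch: total network profit under a pricing strategy, including the cost of
# repositioning the empty containers that pile up at surplus locations.
from collections import defaultdict

# lane (origin, destination) -> base demand, base price, price elasticity (toy values)
lanes = {
    ("A", "B"): {"demand": 100, "price": 1000, "elasticity": -1.5},
    ("B", "A"): {"demand":  60, "price": 1000, "elasticity": -1.5},
    ("B", "C"): {"demand":  80, "price":  900, "elasticity": -1.2},
    ("C", "B"): {"demand":  90, "price":  900, "elasticity": -1.2},
}
REPOSITION_COST = 400        # assumed cost of moving one empty container
VARIABLE_COST = 600          # assumed cost of one loaded move

def network_profit(price_changes):
    """price_changes: {lane: relative change, e.g. -0.05 for a 5% discount}."""
    revenue, cost, balance = 0.0, 0.0, defaultdict(float)
    for lane, p in lanes.items():
        change = price_changes.get(lane, 0.0)
        demand = p["demand"] * (1 + p["elasticity"] * change)
        price = p["price"] * (1 + change)
        revenue += demand * price
        cost += demand * VARIABLE_COST
        origin, destination = lane
        balance[origin] -= demand        # loaded containers leave the origin
        balance[destination] += demand   # and accumulate at the destination
    empties = sum(b for b in balance.values() if b > 0)   # must be repositioned
    return revenue - cost - empties * REPOSITION_COST

print(round(network_profit({})))                     # status quo
print(round(network_profit({("B", "A"): -0.05})))    # 5% discount on the thin B->A lane
```

In this toy network the discount on the under-used B→A lane sacrifices a little revenue per container but fills empties stuck at B, so the network-wide profit still goes up – the kind of spill-over effect a lane-by-lane view would miss.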

Findings

An important insight is that ‘solving’ an imbalance at one location does not necessarily increase total profit. Instead, the model considers the entire network to determine the discount levels that should be given to customers that ship from surplus locations, as well as the price increases to charge customers that ship from shortage locations. While the model does not maximize profit, it ensures increased profit compared to the status quo across many possible demand scenarios – even in extreme cases when, for instance, all customers want to ship from the same location.

Simulation can benefit total supply chain

While Holle’s research focuses on the profit improvements for Den Hartogh, the implementation of the simulation model as a decision support tool will also benefit Den Hartogh’s customers. More balanced container flows reduce repositioning costs, which, in turn, allows Den Hartogh to charge lower prices while maintaining the same service level – potentially leading to a considerable competitive advantage.

A personal note

According to Holle, having insight into the effects of price setting is of major importance. “Den Hartogh Logistics was aware of the existence of the effects of a demand adjustment. They knew that solving an imbalance is no guarantee for an improvement in profit, but the company had not quantified the impact. In some cases, the effects of solving an imbalance surprised me, as we identified large differences in the effect of solving an imbalance from either an import or export perspective.”

Event Report: Data Driven Last Mile

19-02-2019 – Evoluon

On Tuesday the 19th of February, Data2Move community members gathered at the Evoluon in Eindhoven for a brand new Data2Move event. After events on Collaboration and Data Driven Inventory, this event revolved around the topic of transportation. Themed Data Driven Last Mile, the event aimed to show the participants how different variables affect transportation and, in particular, how data can help enhance transport planning decisions.

After a delicious lunch, Prof. Luuk Veelenturf opened the event by introducing the programme of the day. However, there was no time to sit back and relax for too long because the first part of the scheduled workshop called everyone into action. The goal of this workshop? To show the participants how they can use data to increase the quality of their transportation decisions.

Workshop Part I: Travel Time Data Analysis

Before the workshop started, Veelenturf shared the results of the guessing exercise that was part of the registration procedure for the event. On the registration form, every participant had to make an educated guess of the average speed of a truck on Dutch roads during its delivery route, in two-hour intervals. The winner, Mr. Ingmar Scholten (CTAC), had an impressive score of 59%. Could a data-driven approach beat this excellent score?

At the start of the workshop, Veelenturf briefly addressed the planning of delivery routes (routing) and how some routes may take longer depending on factors such as length, weather, time of the day and day of the week. After this, the participants were divided into small teams of four to five people. Based on a large dataset of truck delivery routes (including speed, time and distance), the teams had to predict the average speed within each two-hour time window from 6:00 until 18:00, and choose a delivery time window that would best match those average speeds. This input was then compared to the data of approximately a thousand randomly pre-picked routes. Each prediction was assessed by calculating the percentage of trips that actually arrived within the time window a company would give to its customers based on the forecasted speeds and the chosen window.
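
For readers who were not in the room, the sketch below shows roughly how such a prediction can be scored. The toy trips and speeds are assumptions, not the workshop dataset or the pre-programmed tool.

```python
# Sketch: given a forecast speed per two-hour slot and a promised time window
# width, what fraction of trips actually arrives inside that window?
def on_time_share(trips, forecast_speed_kmh, window_hours):
    """trips: list of dicts with departure slot, distance (km), actual duration (h)."""
    hits = 0
    for trip in trips:
        eta = trip["distance_km"] / forecast_speed_kmh[trip["slot"]]   # promised midpoint
        low, high = eta - window_hours / 2, eta + window_hours / 2
        if low <= trip["actual_hours"] <= high:
            hits += 1
    return hits / len(trips)

forecast = {"06-08": 70, "08-10": 55}                  # assumed km/h per slot
trips = [
    {"slot": "06-08", "distance_km": 140, "actual_hours": 2.1},
    {"slot": "08-10", "distance_km": 110, "actual_hours": 2.4},
    {"slot": "08-10", "distance_km":  55, "actual_hours": 1.6},
]
print(f"{on_time_share(trips, forecast, window_hours=1.0):.0%}")   # 67%
```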

All of the teams were supervised by a BSc/MSc/PhD/PDEng student with access to the dataset. It was up to the partners to discuss and think of ways to filter the data and arrive at average speeds and a suitable delivery time window. The students were equipped with a pre-programmed tool to aid the process. Most teams arrived at suitable average speeds. However, the winning team of this first part of the workshop did not succeed in beating Mr. Scholten’s score… yet. The question remained whether this score would be beaten at all in the second part of the workshop.

Keynote by Stefan Minner (TU Munich) on Routing with Uncertain Travel and Service Times

Anyone under the impression that, after the exciting first part of the workshop, they could finally sit back and let someone else do the work was most certainly wrong. The keynote by Prof. Stefan Minner was engaging but challenging. Minner talked about a data-driven approach for routing delivery services under uncertain travel and service times. Most models use deterministic travel and service times, but according to Minner this produces incomplete results in practice. Minner pointed out how machine learning can be applied to logistics and transportation. He also addressed the difference between predictive analytics (a sequential approach) and prescriptive analytics (an integrated approach), and pointed out that a significant increase in forecasting accuracy does not necessarily lead to a significant increase in performance. In conclusion, Minner explained the data-driven approach to the vehicle routing problem with time windows and showed computational results and numerical tests of this approach. After this, it was time for a well-deserved coffee break and some networking opportunities.

Different kinds of analyses on data

Workshop Part II: One last time to improve your estimates

After the break, the results of the first part were highlighted in the second part of the workshop. Veelenturf shared some results of a study done by one of his students on the same dataset. This student had arrived at 18 different speed profiles, based on differences in distance, urbanisation and day of the week. By using these profiles, the student managed to predict travel speeds that resulted in almost 90% of routes arriving within the specified time window. Based on these predictions, the student also managed to optimise the truck planning using software developed by TU/e.

Inspired by this example, the teams then got a chance to improve their scores by making different speed profiles based on distance. Every team again had to produce speed profiles and a time window, but now they also had to choose three distance categories, each with its own speed profile.
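
Continuing the toy example from the first part of the workshop, the sketch below builds a separate average-speed profile per distance category and time slot instead of one speed per slot. The distance thresholds and trip records are assumptions for illustration.

```python
# Sketch of the second-round idea: average observed speed per
# (distance category, departure slot), with assumed thresholds.
from collections import defaultdict

def speed_profiles(trips, thresholds=(50, 150)):
    """Return average observed speed per (distance category, departure slot)."""
    def category(km):                       # 0 = short, 1 = medium, 2 = long
        return sum(km > t for t in thresholds)
    acc = defaultdict(lambda: [0.0, 0])
    for t in trips:
        key = (category(t["distance_km"]), t["slot"])
        acc[key][0] += t["distance_km"] / t["actual_hours"]
        acc[key][1] += 1
    return {key: total / n for key, (total, n) in acc.items()}

print(speed_profiles([
    {"slot": "06-08", "distance_km": 140, "actual_hours": 2.1},   # medium/long trip
    {"slot": "06-08", "distance_km":  30, "actual_hours": 0.8},   # short trip
]))
```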

Different speed profiles

Compared to the initial guesswork and the results of the first part of the workshop, each team showed a large increase in its on-time score. This shows that further exploration of the data and the determination of more profiles by setting parameters were beneficial to the on-time scores. A powerful conclusion, illustrating the importance of good data analytics and critical thinking.

After all this data-crunching, it was finally time to announce the winning team. Every member of the winning team received a small device that lets you track your own speed throughout the day. We will have to find out at the next event whether the results of their personal speed tracking are just as impressive as their prediction accuracy.

Next steps

The theme of this Data2Move event was transportation. The topic of the upcoming Data2Move event in May is Customer Sensing and Responding. This next event will feature a number of ongoing student projects (Bachelor and Master). Students will share the valuable insights they have gained, and there is room to give them your input as a professional. As Customer Sensing and Responding is the last charter, the community can go in any direction it wants for the event after May. Please do not hesitate to share your ideas about possible topics.

Data2Move Research Stories: Automated Store Ordering to improve a supermarket’s inventory management

Where supply chain management and data science meet, interesting questions arise. In our Data2Move Research Stories, you’ll find out how students have managed to answer them. This time: Bob van Beuningen’s master thesis on Automated Store Ordering versus Manual Store Ordering at Jumbo Supermarkten.

Ever since the Dutch retail market entered a fierce price war in 2003, retailers have continuously been looking for ways to save costs while maintaining the high service level that is demanded by customers.

That’s where, for instance, an Automated Store Ordering (ASO) system comes in: it can reduce food waste and stock-outs, and save employees a significant amount of work.

Where it started

Every one of Jumbo’s supermarkets relies on such an ASO system to predict the amount of goods that should be in stock on any given day. A challenge, however, is that 9% of the generated orders are manually adjusted by store managers.

Jumbo wanted to find out why store managers make these adjustments. With that knowledge, the system could be changed to create a so-called hands-off policy, meaning adjustments would never be necessary.

Bob focuses on this in his thesis – and also delivers recommendations for Jumbo to judge the ‘correctness’ of the adjusted orders (i.e. whether an adjustment adds value or only costs more money).

Findings

Bob conducted interviews and performed a logistic regression analysis. Three main reasons were found for managers to adjust the orders:

  • The product is on promotion
  • The product is on second placing (i.e. store managers have allocated extra shelf space, typically at the head of an aisle)
  • The inventory in the ASO system was incorrect

Additionally, it was tested whether these order adjustments added value, i.e. whether they were good for the company. It turned out that 75% of the adjustments increased the order size and 25% decreased it. Only 15% of the upward adjustments added value, versus 65% of the downward adjustments. Whether or not a product was perishable seemed to contribute to the latter: downward adjustments for perishable products are more likely to add value.
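
Below is a hedged sketch of the kind of logistic regression described above, run on fabricated data: the predictors and the assumed relationship are illustrative, not Jumbo’s actual variables, data or results.

```python
# Sketch: logistic regression on whether a manual order adjustment adds value,
# using a few binary drivers. The data below is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
promo      = rng.integers(0, 2, n)          # product on promotion
second     = rng.integers(0, 2, n)          # product on second placing
perishable = rng.integers(0, 2, n)          # perishable product
downward   = rng.integers(0, 2, n)          # adjustment made the order smaller

# assumed relationship: downward adjustments on perishables add value more often
logit = -1.0 + 1.2 * downward * perishable + 0.4 * promo - 0.3 * second
added_value = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([promo, second, perishable, downward])
model = LogisticRegression(max_iter=1000).fit(X, added_value)
print(dict(zip(["promo", "second", "perishable", "downward"], model.coef_[0].round(2))))
```

The fitted coefficients then indicate which drivers push an adjustment towards “adds value”, which is the kind of evidence Bob used to recommend when the system should accept an adjustment automatically.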

“Store managers are more likely to add value for perishable products than for non-perishable products,” Bob wrote.

Further advice

In order to get better results, Bob advises Jumbo to change some of their Key Performance Indicators: “It is recommended to use the KPIs ‘process trustworthiness’, ‘added value of order adaptions’, and ‘order acceptance’ to move to a hands-off situation. It is important to use these KPIs to find out whether a specific store is able to move to a hands-off policy or not.”

“If an adaptation was correct, and thus added value, then the system should be able to recognize this and make the adaptation itself in the future. By means of these KPIs, Jumbo should be able to get more insights in what needs to be changed in the system in order to achieve this goal.”

How do companies successfully set up partnerships?

What makes partnerships work? During ‘Collaboration: Discovering the Potential’, a Data2Move community event, prof. dr. Ard-Pieter de Man (Vrije Universiteit Amsterdam) shared valuable insights from extensive research. A certain attitude is important: “You don’t want to be in control, you want to be up to speed. To do that, you need to work together.”

Both researcher and consultant, professor De Man is an expert on partnerships between organizations. He is highly interested in organizations’ capability to change – and how partnerships can help make that happen.

Partnerships are on the rise, said prof. De Man while discussing current trends. Companies in the IT and pharmaceutical industries are taking the lead. “And not only that: they also make an effort to find out how to manage these collaborations. At the beginning of our studies, the companies we tracked used 11 tools to manage their collaborations – evaluations, legal aspects, etcetera. By the end, ten years later, they had 30 tools.”

One of the other trends: there are more and more multi-company partnerships. “Six, seven, sometimes even ten companies work together.”

3 core elements of success

Many partnerships fail, prof. De Man said, because of a mismatch between strategies or cultures, or because of a lack of trust. But then what makes partnerships successful? He shared three core elements of success:

  • Structure
  • Relationship
  • Collaborative capability

You can properly take care of these by answering certain questions. For instance, in the case of structure: Who talks to whom? Which goals do we discuss? How are we going to share costs and revenue? Or in terms of relationship: How do we build trust? Is everyone committed? And collaborative capability: Are we willing to share knowledge? Do we have the right tools? Are we able to collaborate?

Everyone benefits: norms for collaboration

De Man emphasized the importance of the relational aspect. Introducing his list of norms for successful collaborative behavior, deduced from research, he started with empathy. Can you understand how the collaboration affects your partner(s)?

This is related to mutuality. “Your company benefits, and so does your partner,” De Man explained. “This is still a problem for many companies.”

In comes flexibility. Markets, needs, goals: circumstances may change. “Are you willing to evolve with them? Or do you want to stick to the contract?”

Other norms include commitment, a willingness to solve conflicts, and a strategic outlook: “If you can actually look ahead by two or three years, all partners involved can reap tremendous benefits – but all too often the focus is on the short term.”

To show how these success factors and norms translate to daily behavior, De Man shared the Abbott-Reata Behavioral Principles, including…

  • try a talk before you e-mail
  • celebrate achievements together
  • make it a habit to share information

Increasingly interdependent

Discussing examples from Air France and KLM (initially offering their customers more destinations; eventually learning from each other) and John Deere and Kespry (drones gathering topographic data; tractor drivers benefiting from data while planting), professor De Man showed how companies are becoming increasingly interdependent. And partnerships like these are just the beginning. All together, they help shape big ecosystems, where the fate of individual companies becomes intertwined.

“We still think of competition as something between companies, but it’s becoming something between ecosystems,” said De Man. “In a data-driven economy, your success depends on your partner ecosystem. An ecosystem with the different parts supporting and strengthening each other.”

Getting up to speed

De Man makes it clear: managing diverse partnerships is becoming a competitive advantage – and the human element becomes even more important than it already was. This was an idea that resonated throughout our event. Asking our community members which take-away they found especially noteworthy, many of them responded along these lines:

  • “It’s not about technology, it’s about empowering people.”
  • “Collaborations start with relationships.”
  • “It’s the people who collaborate, not the companies.”

For companies who are hesitant to start forming partnerships, professor De Man provides a solid reminder: “With today’s rapid developments, you don’t want to be in control – you want to be up to speed. To do that, you need to work together.”

Become a part of the Data2Move community and join us for our next event in February.

More on professor De Man’s work can be found on his Vrije Universiteit Amsterdam profile.

Data2Move Research Stories: improving demand planning for an international manufacturer

Where supply chain management and data science meet, and where theory and practice meet – that’s where you’ll find Data2Move. At intersections like these, interesting questions arise. In our Data2Move Research Stories, you’ll find out how students have answered them.

This time, we look into Nazli Akgül’s master thesis: a study of the relationship between a company’s demand planning process and its supply chain performance.

Where it started

This international manufacturer of pharmaceutical products aimed to improve its inventory management and planning performance – and to save costs thanks to this improvement.

Where did this objective come from? The company noticed that its demand planning took too much time. As a consequence, its decision-making process was often hurried and led to suboptimal outcomes. This problem was found throughout the company, across multiple countries. It affected inventory levels, which were often inaccurate, and raised costs unnecessarily.

Akgül tackled the problem with an in-depth analysis of the company’s planning activities and the way they relate to inventory levels and performance. She found that some activities held back performance – and others turned out to be more important than initially thought.

Findings and advice

By using simulation techniques, Akgül investigated possible changes to see what effect they would have. Of course, reality may differ from the simulations, but they generally prove to be valuable. It was concluded that the company should try to reduce reporting tasks for planners and, instead, allocate more time for backorder analysis and demand consensus meetings with supply chain partners.

By doing so, predicting demand would become less of an issue, the company’s inventory would be managed more accurately, and performance would increase significantly.

Furthermore, thorough data analysis showed that demand planning performance decreased significantly when planners had to spend time on assuring data quality. Therefore, it is strongly recommended that the manufacturer adopt a more accurate data system.

The new approach Akgül recommends, including the new data system to support it, would require change on a large scale: an investment at first, but a major time-saver later on.