Fully connected and flexible systems – the concept behind Industry 4.0 – provide a wealth of opportunities for continuous improvement in food safety culture, assuming that the data gathered is interpreted effectively.
For independent certification advisors, such as LRQA, there is a role to play in helping the food supply chain analyse, contextualise and apply lessons held in its data.
Big food retailers are already capitalising on the promise of big data. There are many examples of how they are using information from global networks of retail scanners to gain better visibility of the supply chain and identify contamination sources in record time.
There is also ample evidence that AI start-ups are helping restaurants to fill seats by staying on top of trends such as fluctuations in the public taste palate.
These initiatives use data to inform current market and crisis-response strategies. The Holy Grail for food-safety specialists, however, is data-driven intelligence that identifies trends to predict events such as food contamination and crop failures. This remains largely elusive, with only rare exceptions reported.
Few if any food companies are using existing data and modern algorithmic tools to build analytical models that could indicate when and where the next issue is likely to happen, setting the wheels in motion to prevent food crises and recalls.
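To make the idea concrete, a predictive model of this kind could begin as something very simple: an incident-rate score per region and product, with high-scoring combinations flagged for preventive action. The sketch below is purely illustrative – the field names, data and 5% threshold are assumptions, not industry figures – but it shows the shape of the analysis the article has in mind.

```python
from collections import defaultdict

# Illustrative sketch: estimate contamination risk per (region, product)
# from historical incident records, then flag combinations above a threshold.
# All names, data and thresholds here are assumptions for demonstration.

def risk_scores(incidents, shipments):
    """Naive incident rate: incidents observed / shipments inspected."""
    counts = defaultdict(int)
    for region, product in incidents:
        counts[(region, product)] += 1
    return {key: counts[key] / total for key, total in shipments.items()}

def flag_high_risk(scores, threshold=0.05):
    """Return (region, product) pairs whose incident rate meets the threshold."""
    return sorted(key for key, score in scores.items() if score >= threshold)

incidents = [("EU", "dairy"), ("EU", "dairy"), ("ASIA", "seafood")]
shipments = {("EU", "dairy"): 20, ("ASIA", "seafood"): 100, ("EU", "grain"): 50}

scores = risk_scores(incidents, shipments)
print(flag_high_risk(scores))  # [('EU', 'dairy')] – only dairy exceeds 5%
```

A production model would of course use richer features and proper statistical methods, but even a frequency table like this requires the pooled, quality-assured data that the rest of this article argues is missing.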
The issue is not technological. The tools to support supply-chain-wide predictive practices already exist, and they are improving by the minute. However, efforts to identify and assure the quality of the industry’s data at most points in the food supply chain are embryonic.
Changing that will require several significant logistical and cultural hurdles to be overcome.
When old meets new
The food industry’s safety-related data is held by millions of companies around the world in various types of assets, data formats and written languages. That which is in digital format can be held in anything from modern phones and laptops to old servers and hard drives with obsolete operating systems.
Translation and conversion tools to improve the quality of the data are readily available. However, because simply converting everything would be a waste of resources, the most useful data needs to be identified, which means that even if its owners consent to sharing, they will also need to reach consensus on what they hope to learn. This is no small task.
The underlying logic of collaboration is that exchanging information reduces uncertainty. In this context, one of the top priorities for reducing risk in the food supply chain has to be the creation of a community that willingly exchanges information.
This is easier said than done. Historically, food supply chains have been characterised by arms-length, even adversarial, relationships between participants who are unwilling to share information, either with suppliers or customers.
The process of anonymising data to protect commercial secrets is comparatively simple in the age of digitalisation. Agreeing a universal format to create the metadata required for dependable, global analyses may prove more difficult; agreeing collective goals in a fiercely competitive industry may prove harder still.
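One standard way to anonymise commercially sensitive identifiers before pooling data is pseudonymisation with a keyed hash: the data owner keeps the key, so partners can correlate records without learning real identities. The sketch below uses HMAC-SHA256 for this; the record fields and key handling are assumptions for illustration only.

```python
import hashlib
import hmac

# Illustrative sketch: pseudonymise supplier identifiers with a keyed hash
# (HMAC-SHA256) before sharing records. The secret key stays with the data
# owner; the same input always maps to the same stable token, so shared
# records remain linkable without exposing the underlying identity.

SECRET_KEY = b"owner-held secret"  # assumption: managed securely by the owner

def pseudonymise(supplier_id: str) -> str:
    return hmac.new(SECRET_KEY, supplier_id.encode(), hashlib.sha256).hexdigest()

record = {"supplier": "Acme Dairy Ltd", "batch": "B-2041", "result": "negative"}
shared = {**record, "supplier": pseudonymise(record["supplier"])}
print(shared["supplier"][:16])  # opaque 64-character token, truncated for display
```

Because the token is deterministic, analyses such as "how many incidents trace back to the same supplier" still work on the shared data – which is exactly what a cross-industry platform would need.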
Too sensitive to share
At present, there is no meaningful collaboration in the food supply chain when it comes to sharing the data that would enable a predictive tool. The industry is dominated by arm’s-length relationships, which have so far prevented the development of any data-sharing platforms.
In the eyes of most executives, data remains too sensitive to share.
Trust issues aside, the potential for predictive analytics to shift industry fortunes is pretty persuasive, potentially providing the impetus to create data platforms and online ecosystems. Still, there are some who believe the creation of those systems needs to be driven by the food industry’s most influential members, the big brands.
Trusted independent certification firms such as LRQA have a role to play in the creation of shared data platforms, as do new technologies such as blockchain. Distributed ledger technologies combine the openness of the Internet with the security of cryptography to potentially give all partners in the food supply chain a safer way to verify and share information, and establish trust.
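The integrity property that makes distributed ledgers attractive here can be shown in a few lines: each entry embeds the hash of the previous one, so tampering with any record invalidates everything after it. This is only a sketch of the hash-chaining idea – a real blockchain adds consensus, signatures and distribution, none of which is modelled below.

```python
import hashlib
import json

# Minimal sketch of hash chaining, the integrity mechanism behind
# distributed ledgers: each entry stores the hash of the previous entry,
# so altering any record breaks verification of the whole chain.

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"payload": payload, "prev": prev}
    entry["hash"] = entry_hash({"payload": payload, "prev": prev})
    chain.append(entry)

def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        expected = entry_hash({"payload": entry["payload"], "prev": entry["prev"]})
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"event": "shipment", "batch": "B-2041"})
append(chain, {"event": "lab_test", "batch": "B-2041", "result": "negative"})
print(verify(chain))  # True
chain[0]["payload"]["result"] = "tampered"
print(verify(chain))  # False – the altered record breaks the chain
```

For supply-chain partners, the point is that no single participant can quietly rewrite history: any retrospective edit is detectable by everyone holding a copy of the chain.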
Certification companies, like governments, also own comprehensive data sets that track types of food events – such as strains of bacterial contamination, allergens and food fraud – across an array of private and public organisations, as well as geographical regions. Their data sets are rich with insights that run deeper than those held by individual private companies.
Grasping that role will require some changes as conventional assessment is blended with digital monitoring. For one, audit solutions will need to support the delivery of a type of real-time assurance that transitions from calendar-based audits to those that are scheduled when risks are more probable.
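The shift from calendar-based to risk-triggered audits can be pictured as a simple scoring loop: each site accumulates a rolling risk score from indicators such as recent non-conformities, and an audit is scheduled whenever the score crosses a threshold. The indicators, weights and threshold below are all hypothetical, chosen only to illustrate the scheduling logic.

```python
# Illustrative sketch of risk-triggered audit scheduling: audit a site when
# its rolling risk score crosses a threshold, rather than on a fixed date.
# Indicator names, weights and the threshold are assumptions, not a standard.

def risk_score(site: dict) -> float:
    """Weighted sum of simple risk indicators; weights are illustrative."""
    return (0.5 * site["recent_nonconformities"]
            + 0.3 * site["months_since_audit"] / 12
            + 0.2 * site["supplier_alerts"])

def sites_due_for_audit(sites: dict, threshold: float = 1.0) -> list:
    """Return the names of sites whose current risk score meets the threshold."""
    return sorted(name for name, data in sites.items()
                  if risk_score(data) >= threshold)

sites = {
    "plant_a": {"recent_nonconformities": 3, "months_since_audit": 6, "supplier_alerts": 0},
    "plant_b": {"recent_nonconformities": 0, "months_since_audit": 4, "supplier_alerts": 1},
}
print(sites_due_for_audit(sites))  # ['plant_a'] – only plant_a crosses the threshold
```

In practice the score would be fed by live monitoring data rather than hand-entered counts, but the scheduling decision – audit when risk is elevated, not when the calendar says so – is the same.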
Clearly, snapshot verifications based on sampling will no longer be sufficient in a rapidly digitalising industry that is dedicated to safety systems that predict risk while continuing to encourage robust change management practices and continuous improvement.
A visible change among the certification community will be employee skillsets. Food specialists will continue to work side by side with traditional partners, such as certified auditors. But the new faces are destined to include employees who have been trained as data analysts, data scientists and mathematicians.
Those who know how to recognise the logic inherent in numbers will hold one of the keys to learning the lessons of the past and making the global food chain safer.