Dialog beats passive acceptance every time
by Cynthia Kalina-Kaminsky, Ph.D., CEO Process Strategy Solutions
Author’s Note: I’m using this article to open a dialog on the supply chain risks posed by integrated, cascading AI decisions. I hope you’ll join in.
A bit of background:
I’ve been involved with AI since an early project with the Army Corps of Engineers to design and build a robotic arc welder. Master welders were scarce and extra welding capacity was needed. The AI we worked with was a type of early machine learning.
I was responsible for the software, some electronic design, and turning then-current AI theory into something that worked. AI has always required processing speed (which did not yet exist), lots of data (which we didn’t have), and a clear definition of the end goal (nailed this part). There were also a lot of assumptions being made.
Today:
AI is currently being scattered throughout your supply chain and business in general.
There are two major risks I see, and I am eager to read your insights on both:
Agentic AI
Agentic AI is defined by Wikipedia as “…class of artificial intelligence that focuses on autonomous systems that can make decisions and perform tasks without human intervention. The independent systems automatically respond to conditions, to produce process results.”
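To make that definition concrete, here is a minimal sketch of the sense-decide-act loop that makes a system “agentic.” The thresholds, function names, and data sources are all hypothetical; the point is only that the loop reads conditions, decides, and acts with no human approval step anywhere in it.

```python
# Minimal sketch of an agentic loop: sense conditions, decide, act.
# All names, numbers, and data sources here are hypothetical illustrations.
import random
import time

REORDER_POINT = 100   # on-hand units that trigger replenishment
ORDER_QTY = 250       # fixed order quantity for this sketch

def read_inventory() -> int:
    """Stand-in for a live feed from a WMS/ERP system."""
    return random.randint(50, 300)

def place_order(qty: int) -> None:
    """Stand-in for an automated purchase-order API call."""
    print(f"ordering {qty} units")

def agent_loop(cycles: int = 5) -> None:
    for _ in range(cycles):          # in production this runs unattended
        on_hand = read_inventory()   # sense current conditions
        if on_hand < REORDER_POINT:  # decide, per its policy
            place_order(ORDER_QTY)   # act: no human approval step
        time.sleep(1)                # then re-evaluate

if __name__ == "__main__":
    agent_loop()
```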
We don’t always do a great job of linking system requirements ahead of time, and in supply chains we often end up with systems that don’t like talking to each other.
Agentic AI should have some ability to change that since it will involve training prior to release.
However, no human understands how the logic of one agentic AI system will interface and make decisions in concert with another one, two, or many agentic AI systems embedded in your supply chains.
We talk about keeping AI under control by having a person in the loop or on the loop, checking to make sure the AI is running in an approved way.
But a digital conundrum quickly comes into play, especially since agentic AI decisions are made invisibly, behind the scenes, in theory to keep the supply chain optimized and running smoothly no matter what the disturbance of the day may be.
But if AI is brought in precisely because it can gather and analyze more data more quickly than a human, and develop options faster than a human, how can a human be smart enough (or at least fast enough, or even intuitive enough) to understand what that rapid behind-the-scenes decision making means for the business? Each minute brings more behind-the-scenes decisions.
Even if AI decision making stays within given boundaries, we are all familiar with past cascading risks: nothing seems amiss locally, yet at the end of many isolated decisions or events, each of which appeared to be within guidelines, we have massive failure.
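A toy calculation makes this concrete. In the sketch below, every number is invented: ten tiers each trim their local safety buffer by 5%, a cut comfortably inside each tier’s own guideline, yet the chain as a whole quietly loses about 40% of its protection.

```python
# Toy illustration of cascading "locally fine" decisions.
# Each tier trims its buffer by 5% (within its own guideline),
# yet the compounded erosion is far larger than any single cut.

TIERS = 10
LOCAL_TRIM = 0.05             # each agent's cut: individually "within bounds"

buffer = 1.0                  # normalized end-to-end safety buffer
for tier in range(TIERS):
    buffer *= 1 - LOCAL_TRIM  # each agent optimizes only its own node

print(f"remaining system buffer: {buffer:.1%}")  # prints ~59.9%
# No single 5% trim triggers an alert, but together the chain has lost
# roughly 40% of its cushion against disruption.
```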
What methods do you believe should be used to handle and control agentic AI to prevent it from rapidly taking your supply chains and business over a cliff?
And what do you do if you are in the life sciences and need to promise no life-threatening issues will arise from using agentic AI?
We can automate machine-to-machine decision making about inventory levels, inventory positioning, spend, orders, allocations, processes, etc. If agentic AI systems are “learning”, perhaps they’ll alter something because it makes logical sense, but does not make sense for regulations or safety or …? Some businesses have stated that no one is to be hired if they can be replaced by AI. To an extent, for the casual observer, most supply chain functions can be replaced by AI.
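On the question of control methods, one pattern often discussed is a hard policy gate: every action an agent proposes must pass rule checks encoding the regulatory and safety constraints its learned logic might not respect, with large or unusual moves escalated to a human on the loop. The sketch below is only an illustration of the pattern; the action types, supplier IDs, and limits are all hypothetical.

```python
# Sketch of a hard policy gate between an agent's decision and execution.
# The agent may "learn" whatever it likes; its actions still pass these checks.
# All rules, names, and limits here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # e.g. "reorder", "reallocate", "substitute_supplier"
    sku: str
    quantity: int
    supplier: str

APPROVED_SUPPLIERS = {"SUP-001", "SUP-002"}   # e.g. the GMP-qualified set
MAX_AUTO_QTY = 10_000                         # above this, a human decides

def policy_gate(action: Action) -> str:
    """Return 'execute', 'escalate', or 'block' for a proposed action."""
    if action.supplier not in APPROVED_SUPPLIERS:
        return "block"       # hard regulatory/safety constraint
    if action.quantity > MAX_AUTO_QTY:
        return "escalate"    # human-on-the-loop review for large moves
    return "execute"         # routine action proceeds automatically

print(policy_gate(Action("reorder", "SKU-42", 500, "SUP-001")))        # execute
print(policy_gate(Action("reorder", "SKU-42", 50_000, "SUP-001")))     # escalate
print(policy_gate(Action("substitute_supplier", "SKU-42", 500, "X")))  # block
```

A gate like this does not make the agent’s reasoning visible, but it does bound what the agent can do, which matters most in life sciences settings where some actions must simply never happen automatically.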
AI data
In addition to machine learning, a class of AI most people are familiar with is the large language model (LLM).
Currently, LLMs can be very useful as a tool toward achieving a goal in a controlled environment for a specific problem.
It is also well known that LLMs “hallucinate”. Or, in engineering terms, LLMs provide wrong answers, by some estimates about 30% of the time. And those wrong answers are developed based on data pulled from somewhere.
Reportedly, new LLMs are growing so big that there isn’t enough real data to train them on.
So, the solution being proposed is the use of inference-generated synthetic data.
Quick definition: DataCamp states that synthetic data is generated using algorithms that mimic the statistical properties and structures of real data.
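As a minimal sketch of that idea (with invented stand-in numbers): fit simple statistics of a “real” demand history, here just the mean and covariance, then sample new records with the same statistical shape.

```python
# Minimal sketch of synthetic data generation: fit simple statistics of
# a "real" dataset, then sample new records that mimic those statistics.
# The "real" history below is itself a stand-in; any numbers would do.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" history: weekly (demand_units, lead_time_days) pairs.
real = rng.multivariate_normal(mean=[500, 7], cov=[[900, 15], [15, 4]], size=200)

# Fit the statistical properties of the real data...
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and generate synthetic records that mimic them.
synthetic = rng.multivariate_normal(mean=mu, cov=cov, size=1000)

print("real means:     ", np.round(mu, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
# The synthetic rows share the real data's statistical shape, but if the
# world shifts, they can only echo the history they were fit to.
```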
But if we are using LLMs in a changing environment, what data will they be mimicking?
And if we are using agentic AI with machine learning and LLMs integrated together, how do we guarantee that the invisible, behind-the-scenes decision making relying on simulated data across our connected systems isn’t creating business-ending results?
Or worse, life-threatening results?
I’m eager to hear your thoughts on this and to engage in meaningful dialog with you here.
Because with AI, decisions rapidly affect our entire ecosystem and we’re all going to have to deal with this sooner rather than later.
To join the LinkedIn discussion, go to https://www.linkedin.com/pulse/ecosystems-ai-risk-cynthia-kalina-kaminsky-ph-d--fygde
Join Dr. Cynthia Kalina-Kaminsky in June and learn how to effectively build your supply chain ecosystems with governance aspects designed to handle risk using ASCM SCOR. Learn more here