How close are we, really, to AI in PLM?

Aug 31, 2023 | Hot Topics

(Newsletter subscribers: sorry if you’re seeing this twice. The first one went out with weird formatting.)

Did you hear the news yesterday? My local radio station advised me NOT to rely on Siri and Alexa for help if I need to perform CPR (cardiopulmonary resuscitation for someone whose heart has stopped due to cardiac arrest). Apparently, those two sources give a synopsis of a movie of that same name rather than telling you how to do it. [In case it’s helpful: perform CPR to the Bee Gees’ “Stayin’ Alive”; its 104 beats per minute are perfect for pacing you while you dance or do chest compressions. If you’re not into disco, many other songs can work: https://www.firstresponse-ed.com/blog/the-great-cpr-song-list/.]

And did you see this earlier this week? Axios reported that “AI-generated content could account for as much as 90% of information on the internet in just a few years, as ChatGPT, Dall-E and other programs spill torrents of verbiage and images.” That’s bad because it means algorithms will train on content that was itself generated by AI, so errors propagate. This “disease” even has a name: “Model Autophagy Disorder,” or MAD, is what researchers at Rice and Stanford dubbed the result of AI consuming its own products, according to Axios.
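The mechanism behind MAD is easy to show at toy scale. What follows is purely an illustrative sketch, not anything from the Rice/Stanford work: fit a simple model to data, sample from the fit while slightly undersampling the tails (as generative models tend to do), refit on those samples, and repeat. The fitted distribution narrows generation after generation until almost nothing of the original signal survives.

```python
import random
import statistics

# Generation 0: "real" data from a wide distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1200)]

for generation in range(10):
    # "Train" a toy model: fit a Gaussian to whatever data we have.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")

    # The next generation trains only on this model's own output.
    # Like real generative models, it undersamples the tails:
    # sample, sort, and keep only the middle of the distribution.
    samples = sorted(random.gauss(mu, sigma) for _ in range(1200))
    data = samples[100:1100]
```

Run it and the standard deviation collapses toward zero within a few “generations.” The details differ for real image and text models, but the feedback loop is the same shape.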

Why tell you this? Because there’s a general perception that AI will take over, replacing knowledge workers and causing chaos in the classroom as kids use AI to write term papers. That leads to the fear that all of this is imminent and that we’d better start looking for new jobs right now. Yet the indicators that AI is less ready than the public believes don’t get enough attention.

AI isn’t ready for all-purpose, real-world applications. Yes, it can help write a term paper, but only if the student isn’t all that picky about accuracy. And it’s probably not ready for real-world commercial use, either. One example: a newspaper in Ohio had AI “write” sports recaps from box scores rather than actual game reporting. One result I saw was this:

“The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”

The Ohio paper had just laid off many of its sportswriters, perhaps a mistake if it wants to keep covering local sports for its community. There was also a second newsroom failure: why didn’t an editor catch this before publication? A human in the loop would have been helpful. For more on AI in newsrooms, read here.
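We don’t know what that paper’s pipeline actually looked like, but the failure mode is generic: a story template went out with its placeholders unfilled, and nothing checked. Here’s a hypothetical sketch of the guard that was missing; the template, field names, and code are all invented for illustration.

```python
import re

# Hypothetical story template in the style of the published recap.
TEMPLATE = ("The {winning_team} {winning_mascot} defeated the "
            "{losing_team} {losing_mascot} {score} in an Ohio "
            "boys soccer game on {day}.")

# Match both {snake_case} fields and [[SHOUTING]] style placeholders.
PLACEHOLDER = re.compile(r"\[\[[A-Z_]+\]\]|\{[a-z_]+\}")

def render(template: str, fields: dict) -> str:
    story = template
    for key, value in fields.items():
        story = story.replace("{" + key + "}", str(value))
    # Guard: refuse to publish if any placeholder survived the merge.
    leftovers = PLACEHOLDER.findall(story)
    if leftovers:
        raise ValueError(f"unfilled placeholders, route to an editor: {leftovers}")
    return story

box_score = {"winning_team": "Worthington Christian",
             "losing_team": "Westerville North",
             "score": "2-1", "day": "Saturday"}  # mascots missing!

try:
    print(render(TEMPLATE, box_score))
except ValueError as err:
    print("BLOCKED:", err)
```

A dozen lines of validation, or one attentive editor, and the mascot-less recap never ships.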

Most of the hype has been about AI for content creation: writing articles, creating images and movies, producing music. PLMish use cases for AI are much more specific and, so far, typically not about creating new assets but about drawing inferences from giant piles of existing ones. The algorithms are also trained with far more care than “look at all the things on the Internet.” Much of what we’re seeing in PLMish AI is pattern detection, whether those patterns are visual or sit in data streams.

A couple of years ago, at a Bentley Year in Infrastructure event, I was really taken with examples of image processing to detect specific faults. One example looked at cracks in dams and bridges; another looked at vegetation growth on buildings. Some surface flaws on a dam are OK; others are really, really bad and need to be addressed as quickly as possible. A certain amount of ivy on a building is charming, but too much eats into the mortar of the facade and needs to be removed. In both scenarios, we used to send people up to take a look, but that can be dangerous, and the results are only as good as the inspector’s training. Using drones to capture images reduces risk and cost and lets us inspect more frequently, capturing current states and tracking changes. That was a significant step forward on its own, with its own technical challenges: getting the positioning exactly right, registering the images, and so on.

Adding AI to process all of these images takes things to another level: AI can learn to grade the flaws and start building a work-order priority list, sending the most challenging or urgent cases to a human for analysis. This is already changing the role of asset inspectors, but in a positive way: not replacing them, but letting them focus on the critical cases, prioritize work, and strategize mitigations. (Read more about Bentley’s bridge monitoring solutions at https://www.bentley.com/software/bridge-monitoring/ and take a quick look at this YouTube video on dam monitoring: https://www.youtube.com/watch?v=6kvRffaFoIM.)
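Bentley hasn’t published the internals of these systems, so treat this as a shape-of-the-idea sketch with invented names and thresholds: once a model has detected and graded flaws, fairly simple triage logic can turn them into a ranked work order plus a human-review queue.

```python
from dataclasses import dataclass

@dataclass
class Flaw:
    asset: str         # which dam, bridge, or facade
    kind: str          # e.g. "crack" or "vegetation"
    severity: float    # model's grade, 0 (cosmetic) to 1 (critical)
    confidence: float  # model's confidence in that grade

def triage(flaws, severe=0.7, unsure=0.6):
    """Split graded flaws into a human-review queue and a ranked work order."""
    needs_human, work_order = [], []
    for f in flaws:
        # Urgent or low-confidence cases go to an inspector; the rest
        # are queued by severity for routine maintenance planning.
        if f.severity >= severe or f.confidence < unsure:
            needs_human.append(f)
        else:
            work_order.append(f)
    work_order.sort(key=lambda f: f.severity, reverse=True)
    return needs_human, work_order

flaws = [
    Flaw("Dam A, spillway", "crack", severity=0.92, confidence=0.88),
    Flaw("Bridge 7, pier 3", "crack", severity=0.35, confidence=0.95),
    Flaw("Depot, north wall", "vegetation", severity=0.50, confidence=0.45),
]
humans, queue = triage(flaws)
print(f"{len(humans)} for inspector review, {len(queue)} queued for maintenance")
```

The hard part, of course, is the grading model itself; the point here is that the inspector stays in the loop exactly where judgment matters most.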

We see similar projects across the PLMish space, so far aimed at targeted solutions to specific problems: knowledge graphs that help designers analyze bills of material for patterns like most-used and least-used parts; physics-based simulation combined with AI-driven design to surface more and better design alternatives earlier in a project; tools that help construction planners figure out the most logical way to stage a project. The PLMish vendor community is hard at work combining AI technology with its vertical-specific offerings and has so far stayed away from generic AI solutions applied to generic problems.
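To make the first of those concrete: a real knowledge graph carries far richer relationships (suppliers, revisions, where-used links) in a graph database, but the most-used/least-used question reduces to counting part-to-product edges. A deliberately tiny sketch with made-up part numbers:

```python
from collections import Counter

# Toy bills of material: product -> component part numbers.
boms = {
    "pump_A100": ["seal_s2", "motor_m1", "housing_h4", "bolt_b8"],
    "pump_A200": ["seal_s2", "motor_m2", "housing_h4", "bolt_b8"],
    "mixer_X10": ["seal_s2", "motor_m1", "blade_k3", "bolt_b8"],
}

# Count how many products each part appears in.
usage = Counter(part for parts in boms.values() for part in parts)

print("Most used:", usage.most_common(3))
# e.g. [('seal_s2', 3), ('bolt_b8', 3), ('motor_m1', 2)]
print("Single-use parts:", [p for p, n in usage.items() if n == 1])
# Single-use parts are candidates for standardization or rationalization.
```

At enterprise scale, with millions of parts and many revision histories, that simple count becomes a genuine graph-analytics problem, which is exactly where vendors are applying AI.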

What about industrial buyers of these solutions? I haven’t spoken to one who isn’t interested in the potential of AI. (Two caveats: one, they wouldn’t seek me out if they weren’t interested in what technology could do for them, so this is a self-selecting cohort, not a universal sample; and two, they’re interested, not necessarily investing yet.) They see this coming and are working hard to ensure that the data they will use to train algorithms is as accurate and applicable as possible, even going so far as to generate new data to current specifications when they aren’t sure about legacy info. That takes time and isn’t necessarily free.

Taking this level of care with training data matters. We need to know that the warranty data informing our plans for a next-gen product is accurate and current, that the historic CAE cases we rely on to jumpstart an analysis program are relevant to the present case, and that we’re not falling into the Model Autophagy Disorder trap, building an entire knowledge set on faulty foundations.
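In practice, much of that care is unglamorous filtering before anything reaches a training run. A minimal sketch, assuming (hypothetically) that each record carries provenance, spec-revision, and date metadata:

```python
from datetime import date

# Hypothetical training records with provenance metadata.
records = [
    {"id": 1, "source": "field_warranty", "spec_rev": "C", "date": date(2023, 5, 2)},
    {"id": 2, "source": "field_warranty", "spec_rev": "A", "date": date(2015, 3, 9)},
    {"id": 3, "source": "synthetic",      "spec_rev": "C", "date": date(2023, 6, 1)},
    {"id": 4, "source": "ai_generated",   "spec_rev": "C", "date": date(2023, 7, 4)},
]

CURRENT_SPEC = "C"
CUTOFF = date(2018, 1, 1)
TRUSTED = {"field_warranty", "synthetic"}  # data generated to current spec is
                                           # fine; untraceable AI output is not

def usable(rec):
    # Keep only records that match the current spec, are recent enough
    # to be relevant, and come from a source we can vouch for.
    return (rec["spec_rev"] == CURRENT_SPEC
            and rec["date"] >= CUTOFF
            and rec["source"] in TRUSTED)

training_set = [r for r in records if usable(r)]
print([r["id"] for r in training_set])  # [1, 3]
```

The thresholds and trust rules are business decisions, not technical ones, which is why the buyers doing this well are spending their time on data governance before they spend it on models.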

PLMish enterprises so far seem to be following the typical pattern of adopting any new technology: start small with a pilot project that seeks to solve a specific problem. See how it goes. Adjust. Repeat. Grow. So far, so good. But it’s very early days.

