Let’s talk about AI
I’m seeing two very different strategies as engineering/design/manufacturing/construction companies think about artificial intelligence: On the one hand, many bigger companies have teams (usually in IT) who stay on top of machine learning, large language models, and the rest of the big bucket of technologies that make up “AI.” They’re thinking about applications they can build using today’s AI toolkits to create a competitive advantage in a specific part of the business. AI agents (aka agentic AI) are already responding to many HR, IT, and sales inquiries, predicting demand to tune supply chains, and doing other tasks with clearly defined rule sets. They haven’t made it into pure engineering/design and manufacturing yet; we mostly see them in the business-adjacent functions, like tracing requirements, vetting components and suppliers, and tracking deliverable status.
Many companies, though, are in serious denial: they don’t think the technology is ready, they don’t think it will apply to them and their business needs, or they simply don’t have the time to think about it. The progressive ones in this cohort think, “We’ll let our vendors figure it out, sell it to us, and then we’ll be ‘AI,’ too.” That’s better than ignoring it altogether, but not sufficient to be successful in the long run. If you’re buying a solution from a vendor, your competitors are, too; your advantage will come from how you use it, so start thinking about this now.
At a human level, though, we are massively unready to deal with the changes AI will bring to almost everyone’s job — mine, yours, your lawyer’s, and your accountant’s. Perhaps not the craftspeople on the shop floor or construction site since they still need to install that switch. However, AI could change where they get the installation instructions and how they show that they’ve tested it. AI will help us know what task to do first, provide us with the information we need to do that job, organize what we create, and report that we’ve successfully done today’s tasks.
When will AI affect me or you? That’s the many, many dollar question. One person I spoke with said that some entry-level jobs (customer-facing support, marketing tasks like building prospect lists for sales campaigns, some coding) may disappear within the next 12 months. Disappear, as in, not coming back. Others don’t see that quick a timeline, partly because of concerns that AI might serve out incorrect or incomplete information — the trust hasn’t been built yet.
But sooner or later, agentic AI will be here. If you listen to its developers, it will be faster and cheaper than human workers and won’t require meal breaks or days off. It will learn from its mistakes and know when to hand off to another agent because it’s been asked to do something outside its scope. We’ll have to see exactly how far, and how quickly, commercial AI gets.
What can or should we do to get ready? I’ve collected a few ideas as I speak with people about this:
- Don’t ignore it. Putting your head in the sand may be comforting for now, but it won’t help you individually or your enterprise as your competitors race toward AI.
- Learn everything you can about AI. There are many sources of information online, including classes from Google, Microsoft, Amazon, OpenAI, Coursera, MIT, and others. (Make sure you verify the creator; there’s also a lot of misinformation online.) Unless it interests you, focus on applications of AI, not the nuts and bolts of how it works — you’re redefining your job, not designing the next large language model.
- Dedicate part of every day or week to figuring out what AI (agentic or not) can do to help you do your job better. Early successes show that modest, targeted solutions improve productivity; trying to go too big takes longer and is less certain to succeed.
- Focus on real-world business value, not the gee-whizz-ness of the technology. The tech by itself isn’t the answer; we need to rethink how we do what we do, taking advantage of this new technology. An analogy is word processing: we went from secretaries typing everything to having the creators write and type simultaneously. AI is that kind of change, times a bazillion.
- If you are a boss, look at your team. Who is most receptive to new technologies? Are they experienced at their job? If not, pair a tech-savvy new employee with an experienced one; we need to know how a task is done and why it’s done the way it is before we can change it to take advantage of AI.
- Recognize that not everyone will be on board with this whole AI thing. Try to help people understand that their jobs will change and encourage them to help define the new normal. Be realistic about how your workforce might change, given tech and economic upheaval.
- Work with your vendor ecosystem. They are ALL working on how they can add AI to their solutions, so define what you need and tell them! In turn, you’ll benefit from their broader view across many customer companies.
Bottom line: Assume you’ll be the boss of a bunch of AI agents by this time next year. Or the year after; who knows? What do you want them to do? How can they assist you? If it helps, think of them as an infinite number of really smart interns who have no HR issues and never need to rest. How can they make you more successful? What do you have to teach them to get to that point? What’s the best possible outcome when you’re paired with those agents/interns?
For example: Do you design widgets? Can AI help you find old designs to repurpose, speed up your simulation program, discover new parts or vendors to ease supply chain issues, or track compliance to ensure that you meet marketing’s design objectives? Can it automatically generate drawings or other deliverables (to what specs, using what as inputs, with what guardrails)? At a minimum, learn to work with the AI assists/co-pilots built into the PLMish products you use to do your job more effectively.
Start thinking about all of this now so that you’re ready when someone asks you how you see AI reshaping your job. “We want to use AI for business benefit” isn’t enough; define a specific problem you want to solve and metrics to tell if you’re getting closer to that goal. Go!
—
That gorgeous cover image is from NASA and has absolutely nothing to do with AI. But it’s a small, perfect planet, no?
Great advice. I played with ChatGPT a few years ago, got it to do some silly stuff, then ignored it until this year. Now it’s a core tool in my bat belt.
I asked it to design a short course this year to upskill me based on how I’d been using it. Something like 20 minutes a day for a few weeks. It delivered and guided me each morning with hands-on examples.
That’s just the beginning. But you have to start somewhere.
The next step is creating agents that work on my behalf. I think I’m a couple months away from that.
Andrew Yang saw this coming back in 2020. Jobs disappearing, the nature of work changing. This isn’t theoretical anymore. The “when” is now. And, people aren’t ready.
Yes, past industrial revolutions killed jobs and created new ones. But this time the pace is different. It’s not happening over decades. We’re talking months and years. That’s a very different kind of disruption.
Excellent suggestions, Jeff! I really like your idea of using ChatGPT to teach you ChatGPT; I’m going to try that next. Thanks for reading and commenting.
Entirely as a thought experiment: if we’d had (magical) access to AI agents like ChatGPT (wayyyyy) back when we worked on CADDS 4X, what uses could we have made of it?
Things I might have used it for: help creating ATTs for new features, or maybe helping document those new features. I’m not sure a coding agent would have done particularly well with the eclectic mix of programming languages and styles that made up that CAD application; or, having worked on a couple of CAD applications since C4X, with any CAD program for that matter. It might have been useful for writing or starting CVMAC macros, but only if an LLM could find enough training data to be useful.
I’m personally taking a trust-but-verify approach to AI, having watched the Metaverse and blockchain/cryptocurrency hype cycles inflate and then fizzle. I think AI has more utility than those examples (not a high bar to clear, IMHO), so I’m trying to keep an open mind.
Some of the AI boosterism feels over-hyped, tinged with a certain degree of desperation, coming from the CEOs and VCs who have invested billions of dollars so far in building and training these tools. They may be feeling pressure to justify that investment and, to be cynical, to reach the “too big to fail” stage before the willful copyright infringement some of the models rely on for training data catches up with them. Or, at the very least, to turn running AI systems into a money-making rather than money-incinerating proposition by getting people to rely on them and then raising subscription fees to something sustainable enough for OpenAI, Google, Microsoft, et al. to keep training and running these AI systems.
I just had an up-close-and-personal experience of “Please use AI because we want you to”: after an update, Google sneakily remapped the power button on my phone to launch Gemini instead. Absolutely not what I wanted, and I know I wasn’t asked whether it was OK to mess about with *my* phone that way. The intrusive way AI is being forced into every crevice of the digital world at the moment really does feel desperate, and honestly I hate it.
Having said that, I can see that there are potential use cases for AI. I just used CoPilot to start a project in JavaScript, as I wasn’t familiar with it and felt it would be interesting to use JS rather than my default choice for arbitrary scripting, which is Perl.
Thinking up the prompt to get started took around 30 seconds, but rather than vibe-code my way to the final version, I switched to reading the JS documentation and modifying the generated script “framework” the old-fashioned way, in vi from a command line. In about 5 minutes I had completed and debugged the script, and it’s a handy tool for me.
I think I prefer doing that sort of creative(ish) work myself, as I got to learn something about the JS scripting language and I didn’t have to spend about the same amount of time negotiating with a chatbot to get to where I wanted to be.
I also worry a bit about the long-term health of a lot of companies with big legacy code bases who are all-in on AI, as it seems like the phrase “AI agent paired with an experienced coder” gets used quite a bit.
If these AI agents take most of the entry-level coding jobs, then where will these experienced coders come from a few years from now?
From my own experience, it can take a year or more for a “fresh from college” junior software engineer to get familiar enough with some of these multi-million-line applications to no longer be actively dangerous when making changes.
If the pipeline of junior programmers learning from the current cadre of greybeards to become the next experienced, grizzled veterans dries up, where will we find people knowledgeable enough about how the application works to verify that an AI agent didn’t just hallucinate a solution, generate a unit test that doesn’t actually test what it’s supposed to, or introduce severe functional or performance regressions that will tarnish the company’s reputation?
Sorry for the record setting screed, maybe I should have asked ChatGPT to make it shorter/tighter … nahhh :)