ASSESSing the future of CAE

Feb 24, 2016 | Hot Topics

I’ve been thinking a lot about simulation: who should be doing it, where it fits in a design process, and how we can involve more people in using and creating these tools … And there are no easy answers. The quick, throw-away reaction is that everyone should be simulating, all the time, as part of every phase of the design. But does that really make sense? Is it cost-effective? Will it really speed up the parts of the process that need speed, or will it cause wasteful iterations when inexperienced users find errors that aren’t really there?

At the ASSESS 2016 conference a couple of weeks ago, a lot of simulation advocates participated in formal sessions and working groups to hash out some of these issues. To be clear, everyone there had a vested interest in growing the use of simulation (software and hardware vendors, champions of this technology in industry and government, educators), but there’s no consensus on how to make that growth a reality.

Brad Holtz and Joe Walsh organized the sessions so that, after a brief keynote, we all split into smaller groups. The first round of breakouts was an opportunity to learn: about simulation in the world of medical devices, new paradigms for systems engineering, large-scale simulation, cloud, new user-interaction ideas, and more (go here for the full agenda). The second set of sessions were working groups, where we were asked to come up with suggestions and action plans around a specific topic. Karlheinz Peters and I led a working group on democratizing CAE; we spent three hours identifying issues and goals and, I have to say, it was fascinating and frustrating in equal measure.

Our group of around 15 represented the full spectrum of attendees: software and hardware vendors, end users, academics, and consultants. We periodically bogged down, but it was a remarkably frank discussion: the vendors were open about potential and real shortcomings, and the users were clear in describing their needs and irritations. A lot of what we came up with was no surprise. Our vision for simulation was that it should be accessible to anyone who could benefit from a deeper exploration of a product: how it works, and where it will fail. Simulation should complement current processes to add to overall product knowledge; it can’t replace every bit of physical testing or the other ways engineers currently explore their designs. It may change job functions and roles, but it shouldn’t disenfranchise anyone. (This was, surprisingly, one of the places we got stuck; it was great having customers on hand to describe how adding CAE to a job description shakes up the status quo.)

Our group got caught up in the moment, I think, and ended our vision statement with: every new system developed should be fully virtually tested (versus over-engineered, as today). This is where I disagree. Some things simply are easier to over-engineer. A long time ago, at a Solid Edge event, I wound up chatting with a guy whose company makes sturdy plastic boxes in which to carry cameras and other fragile gear. The raw materials aren’t expensive, but the cargo they protect can be very costly, so the company has always over-engineered to protect itself from possible failures. For him, simulation simply didn’t make sense. And I think that’s the point: simulation, where it makes sense, is incredibly useful in predicting behavior and failure. But to say it needs to be used everywhere doesn’t ring true. Our group was probably pondering the truly tough problems out there, where simulation is the only way to figure out early in the design process whether something like a car or a satellite will work. But that’s the tip of the pyramid of products made in the world: the biggest dollar value but the smallest volume. Many of the objects made today require only simple structural analysis, if that.

Back to the democratization group at ASSESS: we identified a lot of barriers to this wider adoption that were, again, no surprise. Too expensive, too hard to use, results too hard to interpret, no clear path to simulation data management, questionable reliability of results when generated by a non-expert, and pure, old-fashioned inertia …

Two issues that came up over and over again, however, added a great deal to this conversation. First, we need agnostic CAE models that can be moved between codes. We’ve got some sort of a start at this with the Functional Mock-up Interface (FMI) and its Functional Mock-up Units (FMUs), but that applies to only a subset of all the CAE out there. To be truly effective, our group reasoned, we need to be able to easily exchange structural, CFD, multibody dynamics, and other models. There are issues galore: vendors want to keep their users on a single solution; how do you create a model that is more than a lowest common denominator; how do you protect IP at the right level; and so on. But it’s a great topic to explore further.

Also, our group was 50-something, white, and overwhelmingly male. We recognized this (it couldn’t have been more obvious) and discussed the need to make CAE a “sexier” profession in order to draw in people who might think differently and not be as predisposed to doing things the way they’ve always been done, simply because that’s how it’s always been. One person in the group said that simulation is already sexy: heck, there’s a whole video game industry built around the idea of simulating things, and we need to figure out how to capitalize on that. True. Another approach is to work with STEM (science, technology, engineering, and math) educators to focus on simulation. We need to learn what would interest new entrants, while also making it a valued, remunerative career path. I’m especially interested in bringing more women into this discussion; whether you’re female or not, leave a comment below and let’s start a conversation on this. I think it’s perhaps the most important thing we can do to move this industry forward.

Finally, our goals. We were asked to come up with actionable desired outcomes as a way of moving people and organizations toward our vision: in this case, growing the CAE user base. Our group set audacious targets:

  • Grow the use of simulation by one order of magnitude in 5 years
  • Only some of this growth should come from the Fortune 100; far more must come from small and mid-sized businesses (SMBs)

Getting there is going to be hard. We identified a number of next steps, including:

  • Creating a vendor-neutral place (perhaps on the ASSESS site?) to publicize cases where CAE has been successfully implemented.
  • Looking at how casual users will fit into the long-term vision for CAE; it’s a multi-tasking world, and the dedicated CAE expert may be sustainable in only a subset of the organizations that benefit from CAE.
  • Diving deeper into the “too expensive” question: compared to what? Certainly not to the failure of an expensive asset, yet the objection comes up all the time. Is the barrier to adoption the total installed cost or just the license? What alternative business models could reduce these barriers?
  • Exploring one potential cost-reducer: chunking CAE into apps. What might that look like? We need to look at how apps could proliferate: templates vs. standards vs. local/global optimization.
  • Finally, figuring out how to change today’s view of simulation from a cost (bad) to a benefit and revenue generator (good).

None of this is easy, and ours was only one of many such sessions at ASSESS 2016. The formal conversation continues at COFES in April, but don’t wait until then. If you’ve got something to say, comment here, get on Twitter, write your own blog post … but let’s talk (respectfully) to move things forward. Sitting still won’t work.

A fun side note: ASSESS took place at a lovely venue outside Washington DC just as a snowstorm was targeting the mid-Atlantic US East Coast. Weather forecasters were beside themselves, trying to estimate the snowfall and what it would do to the region’s travel infrastructure. The question wasn’t if it would hit, but when and how much. Almost all ASSESS attendees had flown to DC from somewhere else (some had flown internationally), so the storm’s impact on our flights home consumed much of the hallway conversation. Are the models correct? How is the data being presented? Are the interpretations correct? What’s converging? We did an awful lot of theorizing on where the models were flawed, given that I don’t think any of us knew a darned thing about it. It was very fitting, given the CAE context of ASSESS. Unfortunately, some colleagues did get to experience the storm in DC for longer than they had planned, but I’m told everyone stayed warm and fed and eventually made it home. Phew.