Recently, I stumbled upon the article “What’s the Point of AI without Design and Systems Thinking?”, highlighted by the UX Collective publication in their newsletter.
I can’t help but notice that Systems Thinking (ST) in design circles has passed the hype curve and has now entered a form of “ST-washing” stage, where even surface-level understanding passes for brilliance, to the point of being published by a now-established publication in the field without critical scrutiny.
What’s the Point of AI without Design and Systems Thinking?
Perhaps the question is a non sequitur?
I mean, there could be many reasons for using AI that do not require Design or systems thinking to make sense. Even the article’s conclusion is about the “potential of interdisciplinary collaboration” for solving complex challenges, not the lack of inherent purpose of AI.
The article takes considerable effort to frame things in an unambiguous way so as to put systems thinking on a pedestal. Beyond “interdisciplinarity”, systems thinking is presented here as the solution.
1/ False equivalence, poor understanding
In this article, “design” (lowercase “d”) is used frequently as a general term but is confused with “Design Thinking”, which itself is confused with “Human-centred design”, both of which are defined as existing solely at the “personal level” of human experiences:
a) In the section about “design” titled “Design Thinking”:
At the heart of design thinking (also widely known as human-centered design) […]
b) Later, in one of the examples given about healthcare, design is put in contrast to systems thinking:
Enter design. It focuses on streamlining patient interactions, ensuring that the journey — from diagnosis to recovery — is patient-centric.
Beyond the individual, the healthcare ecosystem’s holistic health is paramount.
c) In the example given about education, design is once again put in contrast to systems thinking:
But in this transformation, design’s role in curating experiences becomes pivotal. […]
Systems thinking ensures that while individual learning experiences are enriched, the overarching goals of education […] are not forgotten.
2/ Holistic who?
Systems Thinking is presented as “holistic” and therefore, as is often the case, misunderstood as being able to foresee unintended consequences.
In the section about ST:
Instead of isolating problems or solutions, it encourages a holistic exploration, ensuring we recognize and respect the myriad interrelationships at play.
Picture this: a company introduces a cutting-edge technology to enhance user experience. While the immediate goal is achieved, this change may inadvertently shift user behaviors, influence related markets, or even reshape societal norms. These cascading effects exemplify why a linear mindset — where A leads to B — can sometimes be limiting. In contrast, systems thinking prompts us to ask: If A leads to B, how might that impact C, D, or even Z?
Please, this is linear thinking disguised as holistic bullshit. If cascading effects are nonlinear, how could one even ask this question (you know, unknown unknowns)?
Extending the reasoning from “A leads to B” to “A leads to B and C, which lead to Z” does not let you escape the linearity of following causal relationships; you just end up with more things to follow (yes, I’m looking at you, wheel of externalities).
3/ Yellowstone, we've got a problem
The Yellowstone Wolves case is probably a favourite for showing “the power of systems thinking”.
Instead of looking for quick fixes, park officials and ecologists turned to systems thinking.
The Yellowstone Wolves case (see the very well-documented Wikipedia page) is an instance of post-rationalisation by systems thinking proponents: looking at past, well-known events that confirm their prior beliefs.
No, no one “turned to systems thinking” for answers. Ecologists and biologists did what they do. Yellowstone’s ecology had been very well studied for years before the reintroduction of wolves, and the reintroduction was one of the many options they sought to explore. The boundaries of “the problem” were rather clear at the time of the decision, yet the consequences of the reintroduction were far from certain.
Stating that systems thinking is necessary to solve complex problems and then presenting the Yellowstone Wolves case as an instance of a good application of systems thinking is begging the question: circular reasoning.
Yet the author fails to recognise two issues here:
- First, if this instance of wild reintroduction/preservation is that strong a case for systems thinking, then any similar case is too, even the failed ones, right? Yet we are extremely rarely presented with failed systems thinking instances, which tends to support the post-rationalisation reading of the case selection;
- Second, in the particular context of this article, it seems rather strange to argue both for “nature conservationism” as a relevant case for systems thinking and for “AI as opening new horizons”, a techno-solutionist position, as a relevant future application of it. The two are contradictory in approach and frankly irreconcilable in their philosophies.
Berger-Tal and his co-authors looked at the International Union for Conservation of Nature’s six editions of Global Reintroduction Perspectives, a series of volumes published periodically, each containing dozens of case studies of reintroduction projects around the world.
They found that only about 3% of all reported cases were actually declared failures. But most of the time the authors themselves are dictating the terms of potential failure or success. – The Wildlife Society, “Could past mistakes guide future reintroductions?”
↳ You don’t know what you don’t know, sure, but ignorance is not a good justification for “blind advocacy”.
4/ A categorisation issue
In the section “Interweaving AI, Design, and Systems Thinking”, we are led to believe that all these terms, namely “design”, “AI” and “Systems Thinking”, previously poorly defined, are clear, independent fields with clearly identifiable people or roles that need to work together.
If the argument can be somewhat loosely supported for AI and design –and yet, is everyone who designs a “designer”? (Or worse) is everyone who designs a “design thinker”?– it becomes even more ridiculous when it comes to systems thinking. Do “systems thinkers” really exist outside of vanity LinkedIn titles? Is it a distinct discipline? Or a distinct practice? What about Systemic Design, then?
5/ Holistically meaningless
The contemporary world, with its intricate networks, doesn’t operate in isolation. Problems aren’t stand-alone; they’re part of a larger web of interrelated issues.
When a conclusion derived from a premise requires you to accept that premise in order to become true, then it is tautological. The right answer here is “it depends”. And “it depends” is rarely a satisfying answer for those coming to systems thinking looking for a solution.
Holism itself, often invoked ad nauseam by some, is never an answer, a solution, or a value judgment. It is merely an observation of the world. Accepting holism as a fact means you cannot categorise observable things as non-holistic: a solution is holistic by the very nature of the context of observation, whether or not its creators were conscious of this fact. I repeat: holism means everything is part of a larger context, even the things you despise.
But this says very little about what you should do, and asserting that you must look at everything as interconnected in order to act is, to put it mildly, first a misunderstanding and second, missing the point. Indeed, most systems thinking frameworks acknowledge the importance of boundaries, i.e. that you must set a limit to your inquiry.
We can add two useful principles to that:
- The map isn't the territory
- A map is only as useful as it enables you to act
My problem here is precisely that this kind of statement is misleading and vacuous outside of any context.
6/ ST has a "problem" problem
Every fucking example presented in systems thinking popularisation is a city/urbanism, traffic/transportation, or immigration/poverty problem. These are already “de facto interdisciplinary collaboration” situations, in which designers are rarely invited to participate (and it would be interesting to ask why).
But even more so, who are the systems thinkers? Perhaps the (missing) designers? Perhaps the urbanists/city planners? Or the policymakers?
Take, for example, the Smart City initiatives getting traction globally. In Singapore, an interdisciplinary team — comprising data scientists, urban planners, and designers — joined forces. AI experts wrangled data to understand urban mobility patterns, designers prototyped user-friendly public spaces, while systems thinkers ensured every solution fit within the larger urban narrative, from waste management to energy consumption.
The outcome? A city where technology serves its residents, not the other way around. Streets are more pedestrian-friendly, public transport is efficient, and green spaces are abundant — all tailored to the unique rhythms and needs of its inhabitants.
Smart City initiatives are widely criticised (to the point that there is a dedicated section on Wikipedia: https://en.wikipedia.org/wiki/Smart_city#Criticism), and the “successful implementations” are, to date, more the exception than the norm.
Note that none of the criticisms against AI, even those raised by the article itself, let alone those against smart cities, are addressed here. Wouldn’t these be relevant challenges for systems thinking? Should we also mention the projects without a “smart city” approach (and probably without systems thinking) that achieved similar results?
Now, this raises the question: what’s the point of using systems thinking to solve issues you generated with a Smart City approach in the first place? Would systems thinking be just a means to an end? Really?
Finally, there is an underlying question to ask: is systems thinking elitist?
Because, when you look at who holds the kind of positions required to actually perform any form of systemic intervention, from beginning to end, and at the type of challenges often presented as good systems thinking cases, I’m quite sure an argument could be made that it displays (at least some form of) elitism.
And that corroborates a point I often make to designers who see systems thinking as the “next big thing”: it’s not for everyone. It is, in practice, for a select few.
7/ AI ubiquity
I really do not understand the conclusion. At no point is AI itself questioned as a solution to any of the problems raised. The assumption that shines through is that “AI is relevant; it is good; we need it in some capacity”.
The path forward is clear: it’s not about individual mastery, but embracing the potential of interdisciplinary collaboration so that we don’t create AI solutions that merely “throw technology at a problem,” expecting all complexities to be resolved.
Sure, we should not throw technology at the problem, but apparently there is no problem doing exactly that with systems thinking.
Thanks for reading!