Hi everyone, Kevin here.

It's been quite some time since my last post here. 😓

In the past few months, I've been discussing how to approach (so-called) "Artificial Intelligence" (AI) from a design perspective with the Design & Critical Thinking community. This translated into a series of articles on the subject, the broader context in which this discussion happens, and the kinds of design principles that can come out of it – along with a critique of what has been done so far:

- Turbulences, loss of senses and disillusion
- Turbulences and AI: from a third perspective to design principles
- Turbulences and AI: clarifications and more principles

Beyond the obvious: the need for better metaphors and principles

Interestingly, I participated in IxDA's Interaction23 event, during which some speakers approached this topic in very different ways. Although not directly related to AI, Kate Darling's talk on the relationship between humans and robots, and on the need for new metaphors to make sense of their "intelligence", aligned closely with some of my explorations.

My take on this at the time:

Designing interactions between humans and artificial agents (robots, AI, etc.) is not about the object's features but rather about our relationship to their differences – one we should treat as akin to our relationship with animals, as Kate Darling of the MIT Media Lab suggests.

In my series of articles, I take a similar approach and propose principles for using LLM-based tools like ChatGPT as an extension of a collective understanding, not as agents that could replace individuals – and I explain why calling these systems "Artificial Intelligence" doesn't elicit the kind of metaphors we need to discuss them and make sense of them.

I also touch upon how a Deleuzian philosophy, combined with a third-perspective approach, can help us make sense of a liminal situation and arrive at design principles without resolving the inherent ambiguity.

I elaborated on these elements in a recent interview with the amazing people behind the Unfuc*d By Design podcast.

A huge thanks to them for giving me the opportunity to share some of my experiences and thoughts about design, design processes, meta-design, complexity and human systems. 🙏

AI for Player Experience and Human Autonomy

Anyway, if you're interested in the topic and are in Switzerland, don't hesitate to join me on Tuesday 13 June 2023 at 6 PM CET in the beautiful city of Lausanne.

I will be one of the speakers at the AI for Player Experience event organised by CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe).


EDIT – Here are the slides I used during the event (unfortunately not recorded):

Designing ↔ AI: Human autonomy and third perspective

Thanks for reading!

Cheers,
Kevin