I was reading Import AI recently, and Jack Clark's notes on AI needing AI designers really resonated with me. The newsletter links to Josh Lovejoy's article on how to design with intentionality and purpose. There are, he argues, different levels of AI-driven automation, and we need to understand each level.
I won't give too much away there, but it reminds me of Don Norman's The Design of Future Things. While Norman didn't focus on AI specifically, he does discuss how products that "do work" for humans need to be good communicators. In other words, they need to explicitly and clearly outline what they are about to do, how they will do it, and (at least initially) why they are doing it.
For example, suppose you have a self-flying plane that expects turbulence 10 km ahead. Rather than simply changing altitude or direction, it first needs to announce its intentions so the pilots aren't surprised.
The above also depends on what Amber Foucault calls the "minimum acceptable performance" of an algorithm. The pilots would only entrust (key word: 'trust') the algorithm with such a decision if they already believed it was good enough to delegate to.
AI + design is such a beautiful, embryonic field.