Paul Daugherty, H. James Wilson, and Nicola Morini Bianzino wrote in 2017 about jobs that, in their opinion, artificial intelligence would create. MIT Sloan Management Review recently interviewed two of the three co-authors to find out what they have learned since that article was published two years ago. For those of us in education, the interview is a worthwhile read, as we consider how learning (itself) is a core competence.
In many ways, parsing arguments around job displacement through intelligent automation is not a worthwhile exercise; certain trends are already in motion, whilst others remain conjecture. This interview explores what is happening in the world of work, as evidenced in the creation of new positions over the past two years. In other words, the co-authors are conducting research on real jobs that exist, jobs that are being refined as more is learned about what each requires. These jobs, they assert, currently fall into three categories: trainers, explainers, and sustainers. Let’s define our terms.
Trainers. The people who actually build the AI systems: they do the data science and the engineering around machine learning.
Explainers. The people who explain (within the organisation, and to the public) how AI works and the kinds of outcomes generated.
Sustainers. The people who ensure that AI systems work (‘behave’) properly at the outset of a process, as well as produce desired outcomes over time, considering any unintended consequences.
Following are some highlights from the interview, with regard to each role type.
(1) Trainer types do include the people who are actively building the AI systems (e.g., deep learning scientists, robotics engineers), but the researchers are learning that “it’s important to have functional experts as trainers, as well. You might have someone with a marketing background or an operations background on your team. They help identify problems that the technical experts will then go in and solve.”
(2) “Companies are recognising that AI becomes the brand.” This is a statement that merits our reflection. The co-authors cite one specific non-technical job that they see in this category: the AI Personality Trainer. This is the person who trains the behaviour of chatbots (and the like), ensuring that the early-stage AI behaves “in the right way, to have the right answers, the right tone, and so on.”
(1) “Last year […] there were about 75,000 new explainer roles related to the right to transparency mandated by GDPR.”
(2) “In health care, we’re seeing explainers working with physicians to help them understand why an AI system is making a particular recommendation and whether the doctor can make a medical recommendation to a patient as a result.”
(1) “Sustainers spend a good deal of their day thinking about unintended consequences and how they may affect the public. [For instance], how do you come up with a pricing model [if you’re a company] that’s algorithm driven but also workable in terms of public acceptance?” They go on to state, “The risks of bias in algorithms, discriminatory facial recognition systems: these are things that the first wave of trainers didn’t necessarily give enough consideration to. Sustainers address the question of whether these unanticipated and unintended consequences can be managed and how. They might even recommend that an AI system be taken out of operation until the company figures out how to get it right.”
To my mind, these role types sound rather familiar…they sound like teacher roles. I still maintain that, no matter the advancements and developments, teaching and learning still represent the life-blood of the human experience. Indeed, the interviewer inevitably comes to the teaching and learning question in a way with which we’ve become familiar over the past ten years, asking a question that is often posed by educators, or sometimes posed *at* educators: “One challenge organisations face is that many of the jobs created by AI have no established path for training and development because they’re brand-new. How do they solve for that?” Some of the co-authors’ responses are things that we’ve all heard before (e.g., more experiential learning is needed), but their comment around learning was particularly valuable: “Learning and training as organisational capabilities will be differentiating for companies that experiment effectively, find the optimal mix of approaches for themselves, and then scale up.” They go on: “A lot of our findings have been surprising to us. For instance, you might think that STEM skills are the be-all and end-all for the age of AI. But our research is showing that four distinctively softer skills are becoming much more valuable as people begin collaborating with smart machines: these are complex reasoning, creativity, social/emotional intelligence, and certain forms of sensory perception.”
Paul Daugherty offers the final comment of the interview, sharing that “We’ve also launched a research project on responsible AI. How do organisations make sure they get good, ethical outcomes? We’re looking at issues like transparency and explainability […]. We are looking at bias. We are looking at accountability and trustworthiness with AI systems.” That this commentary appears last in the interview is unfortunate, as it is on the minds of many. These are the items with which we struggle; these are the items that deserve more attention (and funding) than they’ve been getting.
Questions for Reflection
(1) Should these trends and developments result in fundamental changes to ‘how we do school’?
(2) Is the focus on (and funding for) STEM helpful, a premature distraction, or something else?
(3) How often are we questioning taken-as-truth assertions such as “many of the jobs created by AI have no established path for training and development because they’re brand-new”? I challenge us to dissect this phrase and consider whether we truly believe what the phrase purports.
(4) We know that we need diverse teams to focus on the issues around ‘responsible AI.’ How are we (yes, in education) trying to influence representation of diverse groups? If the work around responsible AI is being done by homogeneous groups, how might we challenge their assertions regarding how ‘responsible’ it is?