Pearson and the Knowledge Lab (University College London) recently published Intelligence Unleashed: An Argument for AI in Education, a report that explores the parameters around artificial intelligence (AI) in education. The authors provide a useful definition of artificial intelligence as “computer systems that have been designed to interact with the world through capabilities (for example, visual perception and speech recognition) and intelligent behaviours (for example, assessing the available information and then taking the most sensible action to achieve a stated goal) that we would think of as essentially human.”
As they state up front in the report, “We start from the belief that new technologies, when we fully exploit their possibilities, will change not only the ways we learn, but what we learn, as well as how we work, how we collaborate, and how we communicate”. The italicised phrase is not something to be taken lightly. I would argue that, in many ways, schools the world over have barely begun to exploit the possibilities of new technologies. I visit a number of schools and have a great many conversations with senior administrators and leaders, and I often hear something along the following lines: “We are at the cutting edge of technology — we’re using SmartBoards and we have laptops and tablets for the students. They learn so much better this way.” When I ask a more probing question, such as, “What evidence do you have that the technologies are helping students to learn better?”, I am usually met with a blank stare…or receive a rather incoherent response.
I’m simply looking for evidence that supports the typical assertion about why we use technology in schools, about why we spend so much of our budgets on costly information services and infrastructure. When it comes down to it, I’m sceptical of how we are doing in terms of exploiting the possibilities that technology brings us; too often, we are merely replacing pencil-and-paper tasks with a tech tool. There is certainly nothing wrong with using these new (and often fabulous) tools, but I would caution against deluding ourselves that, somehow, because the students are doing the same tasks they’ve done for years except with a tech tool, we are being disruptive, or that we are using technology in ways that will transform education.
Happily, the report shows an orientation toward research and concomitant evidence, in terms of how to consider pedagogy and technology as a productive intersection for teaching and learning. However, having slogged through all sixty pages, I am concerned that this putative orientation loses itself by the end of the report. There is enough there there, nevertheless, that gives me some hope that the technology conversation is about to move forward in some exciting ways, provided that we can proceed with even a modicum of evidence.
Case in point: what is often severely underestimated is that what we’re really talking about is not technology…but culture change. If the AI promises outlined and hypothesised by the authors are to come to be, they will be realised only through culture change. The authors offer us the nut to crack when they state on p. 11 that artificial intelligence in education (AIEd) is an “essentially human endeavour.”
Where my cautionary note begins is on the same page, however, when it becomes clear that the authors are merely poking around the edges of true culture change when they write that “we will lessen achievement gaps, address teacher retention and development, and equip parents to better support their children’s (and their own) learning” (emphasis mine). When anyone begins to talk about achievement gaps, I would suggest that we all pause and consider what we’re really talking about: using technology to produce better scores on existing achievement tests (of all kinds). Too often, what is not talked about is using technology as a mechanism to create entirely different assessments that have purpose and meaning beyond stratifying socio-economic factors and the like.

If we are going to have a serious discussion about AI in education, I submit that we simultaneously need to be looking at the entire system of assessment as well. I am not anti-assessment; I am for a holistic picture of each student, and that picture can include a mixture of traditional assessments (which I believe we’ll never get rid of in our lifetimes) and thoughtful assessments that are more performance-task-orientated in nature. After all, if the students cannot apply their knowledge (existing and the bit of information they just read or looked up online), what is the point?

On a slightly larger scale, the authors’ assertion (p. 13) that “our education systems will need to achieve at levels that none have managed to date” is another harbinger (to my mind) of an inability to view technological advancements as separate from existing understandings of achievement. To be fair to the authors, they do handle the notion of assessment later in the report (p. 35), stating, of course, that AIEd will miraculously force a renaissance in assessment (this sounds suspiciously familiar), but the reader will only spend time on their ideas if s/he is able to make it that far.
If you believe that, somehow, AI will bring a newly standardised way of “doing technology” to schools, think again. I agree with the authors that the most likely scenario in an AIEd world is one that mimics what we are familiar with right now: the world of apps. It is probable that the AIEd world will be one of myriad approaches, what might feel like a scattergun approach to many. I am curious to see how well-ensconced assessment companies will respond to this fragmentation.
A nugget of wisdom makes its way to the surface, though, when we read that “AIEd is hampered by a system that encourages siloed research, and that shies away from dealing with the essential messiness of educational contexts.” Not only is the research siloed in academic environments; when we look at funding models from venture capital firms, we see a similar approach, driven by an effort to protect intellectual property…and, therefore, investment. Arguably, a start-up accelerator may provide fertile ground for the sharing of ideas, but I know of scant evidence that would support such conjecture.
One thing that we (in schools) must take note of, though, is the impact that automation in general (related to AI in many ways) is having on the work force. A 2013 study by economists C.B. Frey and M.A. Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) examined jobs in the US, proffering that approximately 47% are at high risk of being carried out by computers in the next ten to twenty years. If the tasks associated with so many jobs can be computerised, we will have to adjust in some way to a life of work that blends technologies and strictly human capacities, but how are we preparing our students for this in schools across the world? Do we ban Wolfram Alpha because it infringes on traditional beliefs associated with instructional autonomy and problem-solving, or ought we, rather, to consider the nature of how we assess our students’ ability to engage with complex tasks, using all means at their disposal?
I commend to you the pages associated with an overview of AIEd (pp. 17–21), insofar as three models of AIEd are discussed directly: (1) the pedagogical model, (2) the domain model, and (3) the learner model. Importantly, the authors discuss how these three models are meant to inform and support each other. It is a worthwhile study in how we might think about these areas as we have curricular discussions, whether at the department, divisional, or whole-school level. Without going into great depth on these models at this writing, suffice it to say that a sine qua non of this ‘operating system’ for the educational experience is the skill set of the instructor when it comes to data analysis and interpretation. That domain will not be the purview of the director of teaching and learning only; it must be a shared skill set. Given all the other hoops through which teachers and administrators must currently jump, how teacher preparation programmes and schools themselves will upskill new and tenured teachers remains a mystery, and the authors do not take on this central question. And let us not forget that, in addition to the three aforementioned models, AIEd is also experimenting with models that represent the social, emotional, and meta-cognitive aspects of learning. The AIEd movement is nothing if not ambitious, but the flotsam and jetsam created by all these efforts will produce not only an app-like world but also a variety of reactions from educational leaders, both aspiring and sitting, when it comes to how our schools will learn about AIEd and implement any associated pieces.
Perhaps the most interesting proposal in this lengthy document is the notion of every learner having an ‘intelligent personal tutor.’ As a concept, it has a certain degree of logic to it, and it could even be argued to have a degree of elegance; however, I return to the earlier upskilling comment. In order for a teacher to manage this intelligent tutor effectively, in ways that take advantage of the technology’s capabilities, we must look at changing a number of variables in the system that we call school. An inherent weakness in this document resurfaces with this tutor notion, in that the authors appear to believe that it is something we can simply graft onto the existing system. I take the contrary position. I submit that, in addition to upskilling, schools would need to change teaching loads (the number of courses taught) to accommodate what is effectively a graduate teaching assistant who runs a recitation session one day a week (the idea is not new, in other words). My comments are not meant to uphold the existing system (I remain critical of it on many levels), but to suggest that an intelligent personal tutor could be offered right now is to see AIEd and extant education systems through rose-tinted spectacles.
It is when the authors venture into the territory of intelligent support for collaborative learning that this report loses its constructive balance of perspicacity and provocation. They state that AIEd can be very helpful in this area, and cite four specific approaches: (1) adaptive group formation, (2) expert facilitation, (3) virtual agents, and (4) intelligent moderation. Although the concepts make sense, if one has taken time to reflect on the earlier assertions and suggestions, one should note here that the system would have to change fundamentally for these ideas to be implemented at any kind of scale beyond one specific school. Even in one school, it would be challenging. Public money (i.e. the taxes that undergird a national education system) is simply not ready to fund this in a way that would be useful. To be sure, legislation could force the funding into existence, but any such funding would remain wedded to existing system structures, making any kind of transformation fraught at best. The authors do take a stab at AI’s role in system reform far later in the document, but it would have been beneficial to plant a few thought-provoking questions in the earlier pages of the report, so as to set the stage.
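For the curious, ‘adaptive group formation’ can be imagined along these lines. The pairing heuristic below (mixed-ability pairing of strongest with weakest, using an estimated skill score) is my own invention, offered only as a sketch of the kind of decision an AIEd system would automate, not as anything the report specifies.

```python
def form_pairs(students):
    """students: dict of name -> estimated skill in [0, 1].
    Returns mixed-ability pairs: strongest with weakest, and so on."""
    ranked = sorted(students, key=students.get)  # weakest first
    pairs = []
    while len(ranked) >= 2:
        pairs.append((ranked.pop(0), ranked.pop(-1)))  # weakest + strongest
    return pairs  # any leftover student would need a triad in practice

roster = {"Asha": 0.9, "Ben": 0.3, "Chen": 0.7, "Dara": 0.5}
print(form_pairs(roster))  # [('Ben', 'Asha'), ('Dara', 'Chen')]
```

Even this toy example surfaces the real questions: where do the skill estimates come from, who validates them, and what happens in the classroom when the algorithm’s groupings collide with what the teacher knows about the students?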
Rather than stop there, though, the authors continue, jumping into the next item: virtual reality. Truthfully, virtual reality merits its own white paper, as its pedagogical benefits are tangible and implementable, and the technology can make us more cognisant of the human experience. Sadly, it is not explored here as well as it could be.
Almost in last place (not surprisingly), the authors take on ‘teachers and AIEd.’ In one page. It would have been helpful to weave their commentary throughout the document rather than relegating it to the late stages of the report, but it does provide a somewhat useful synopsis of what would be needed in the teaching cadre in order for AIEd to flourish. Yet it doesn’t go far enough. What’s more, I would argue that it shows far too much ignorance of how educational systems work. To suggest, for example, that AI-assisted professional development would be a great means of helping teachers thrive in this AI-enhanced world of teaching and learning is to underestimate (greatly) the kinds of training and development required. The associated cottage industry for this kind of professional development would be expensive and massive, two things that, in theory, AIEd is supposed to address and relieve. The great irony, of course, is that it will require funding and scale. Not to be outdone, the authors address teacher recruitment and retention, as well as continuous professional development, later in the report, suggesting that somehow AI is the cure in an age when we are facing a shortage of teachers around the world.
Just as this latter section of the report began to lose its potency, the authors toss in a variety of items that arguably comprise the best reading in the entire paper, from potentially ending stop-and-test assessment to the ethics of AI. However, the mental links required of the reader are tenuous at best — not because impossible intellectual demands are being made of the reader, but because the logic and flow of the piece are lacking. These points seem almost entirely unrelated to the earlier coverage of ideas.
The most insightful comment is buried on p. 46, where the authors share that the debate on artificial intelligence in education has to date been the exclusive province of economists, not educators. If one views conference programming and online CPD courses as evidence, it would appear that educators are doing little research in this area. Their most useful suggestion, I would argue, is the formation of a DARPA for education (pp. 51 and 53). There is a role here for ECIS: for instance, to provide funding for experimentation with technology, to author position papers, and to advocate for the intersection of research and classroom evidence, in an effort to inform the conversation in schools as well as among professional development providers. I underscore that evidence would have to be a key part of what we might do.
All in all, this is a seminal report, even though it suffers from information overload and a lack of idea scaffolding, such that it could be quite challenging to walk away with anything other than a bewildered mindset. There is much value contained in its pages; five to ten subsequent reports could begin to tease out its ideas in more coherent and potentially impactful ways. I agree with a late-stage assertion in the paper that, to advance the needed conversation and experimentation around AI, we must conceive of alliances and partnerships that bring together multifaceted talent. I also appreciate the authors’ statement that they do not underestimate the complexities of engaging AI and the future of education. Indeed, I am grateful that they have identified any number of things that we (in schools) should be discussing and debating, complemented by action in classrooms and by professional development that aligns with schools’ objectives.
It is time for educators to play a critical role in the AI discussion, instead of leaving conjecture and fear-mongering to the economists. We are positioned to do so, I believe.