The siren song of artificial intelligence (AI) is mesmerising. Witness a recent posting in Chief Learning Officer magazine by the managing director of learning science platforms at a large publishing house, who, in a passionate plea for the place of AI in learning and development, states the following: “There has never been so much knowledge in the world. […] Content is proliferating at an astronomical rate” (23 Oct 2017, “Artificial Intelligence Comes to Learning”). At such words, heads typically nod, and everyone allows for how wise this person must be to see things so clearly.
Have we considered that content does not equal knowledge? Re-read the two sentences quoted above. Their concatenation creates the illusion that the terms are synonymous. They are not. Recall the expression about not being able to see the forest for the trees: in the case of AI, the trees are ‘content,’ and the forest is ‘knowledge.’ In other words, we struggle to see what knowledge is because there is so much content in the way. ‘How might we discern knowledge in the midst of so much content?’ is probably the better question to ask. The author of the article tries to get at this by positing that “[i]ncreasingly, learners need artificial intelligence to help them navigate knowledge in a way best-suited for them.” I’m not sure that I see the same need.
Instead, I would posit that we (humans) would do far better to ensure that our means of learning and development are structured in such a way that we can ‘navigate knowledge’ without the help of artificial intelligence. There is nothing wrong with using a content-sifting tool (true artificial intelligence doesn’t exist yet, but human-programmed algorithms do) to cut through drivel, but it is a fundamental human need to be able to navigate knowledge using our own faculties. The fallacy of a personalised learning universe becomes all the more blatant when one considers that an adaptive learning programme (the author’s grail) is still a programme of study. In other words, there is a set of parameters (it could be a single discipline, such as mathematics) within which the adaptive learning programme must fit. Were I a learner in such a scenario, the adaptive learning software would not know what to do if I wanted to leave theorems and learn, say, something about the environment in which bamboo grows best. Of course, it wouldn’t allow me to do so, since that topic, as far as the human-programmed algorithm is concerned, doesn’t fit within the programme of study. In reality, such a system is as prescriptive as it is (so-called) personalised.
How could we possibly ensure that these algorithms deliver what the author calls “true knowledge acquisition”? He praises applications such as Spotify and Pandora, but if you use one of them (or something similar), do its recommendations reflect “true knowledge” of what music you want to hear the moment you open the application, regardless of the time, your mental state, and so on? My own experience with Spotify, and a favourite mix of 75 songs that I play frequently, has taught me that, despite my pressing “shuffle”, the algorithm behind ‘shuffle’ produces the same sequence of songs every time, without fail. Yes, the songs are not in the order listed in my playlist, but the supposedly randomised order is in fact fixed. Trust that algorithm for “true knowledge acquisition” when it comes to my own learning? I think not.
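Spotify’s actual implementation is not public, so this is only a hedged illustration: the behaviour described above is exactly what any pseudo-random shuffle produces when its seed never changes. A minimal Python sketch (all names hypothetical):

```python
import random

def seeded_shuffle(tracks, seed):
    """Return a 'shuffled' copy of tracks using a fixed seed.

    Illustration only, not Spotify's code: with an unchanging seed,
    the 'random' order comes out identical on every call.
    """
    rng = random.Random(seed)   # fixed seed => deterministic pseudo-randomness
    shuffled = list(tracks)
    rng.shuffle(shuffled)       # Fisher-Yates shuffle driven by that seed
    return shuffled

playlist = [f"song_{i}" for i in range(75)]   # stand-in for my 75-song mix

first = seeded_shuffle(playlist, seed=42)
second = seeded_shuffle(playlist, seed=42)
print(first == second)    # the same "shuffled" sequence both times
print(first == playlist)  # not the listed order, yet not random either
```

The order differs from the playlist, so it feels shuffled, yet pressing “shuffle” again reproduces it exactly: deterministic, merely dressed up as random.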
Yet the siren song of “the artificial intelligence revolution” continues to attract more ships. We’ve a long way to go; the journey is just beginning.