One of the things I appreciate deeply about impactful school communities is how we treat knowledge and knowledge structures.
Roger C. Schank once wrote that “[k]nowing what particular knowledge structure we are in while processing can help us determine how much we want to know about a given event; that is, contexts help narrow the inference process” (AI Magazine 8.4). Continuing to focus on how we, as humans, engage in the intelligent process we call inference, he adds that “[m]any possible ways exist to control the combinatorics of the inference process: deciding among them and implementing them is [serious]” (63).
As we continue to ponder the relevance of “outdated school structures” [a term bandied about with some frequency in pseudo-educational opining as well as genuine educational research] in the age of Artificial Intelligence (AI), we would do well to consider Schank’s message. The quotations in the previous paragraph are taken from his article “What is AI, Anyway?”, which he penned in…wait for it…1987. In this must-read work for anyone contemplating the future of AI, Schank considers what ‘real’ AI is, or would be. Later in the article, he states that “[a] program is not an AI program because it uses some form of logic or if-then rules. Expert systems are only AI programs if they attack some AI issue.” He goes on to identify some ten enduring issues (also termed “problems” in the philosophical sense) that a [software] programme would have to attack for us to consider it, truly, to be AI. One of those ten items is “control of combinatorial explosion,” and it bears directly on the human process of inference.
He proffers that “[o]nce you allow a program to make assumptions beyond what it has been told about what may be true, the possibility that it could go on forever doing this assuming becomes quite real. At what point do you turn off your mind and decide that you have thought enough about a problem? Arbitrary limits are just that, arbitrary. […] Many possible ways exist to control the combinatorics of the inference process: deciding among them and implementing them is a serious AI problem if the combinatorial explosion is first started by an AI process” (63).
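To make the combinatorial point concrete, here is a minimal sketch in Python; the rule that every derived fact licenses exactly two further assumptions is invented purely for illustration and is not drawn from Schank.

```python
from itertools import count

# Invented toy rule: every derived "fact" licenses two further assumptions,
# so the frontier of things to think about doubles at every step.
def assumptions_for(fact: str) -> list[str]:
    return [f"{fact} -> a", f"{fact} -> b"]

def naive_inference(seed: str, max_depth: int) -> int:
    """Forward-chain from a seed fact, counting every assumption generated.

    Without max_depth -- the 'arbitrary limit' Schank mentions -- this loop
    would simply never stop assuming.
    """
    frontier = [seed]
    total = 0
    for depth in count():
        if depth == max_depth:
            break
        frontier = [new for fact in frontier for new in assumptions_for(fact)]
        total += len(frontier)
    return total

if __name__ == "__main__":
    for depth in (5, 10, 15):
        print(depth, naive_inference("it is raining", depth))
    # 5 -> 62, 10 -> 2046, 15 -> 65534: exponential growth in derived assumptions
```

The numbers matter less than the shape of the curve: whatever depth we stop at is a policy decision imposed from outside the rules, which is precisely what makes the arbitrary limit arbitrary.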
Schank, whose article formed part of the dissemination of findings from the Yale Artificial Intelligence Project at the time, and who continues to point out the failings of what we are calling AI in 2018 (‘this isn’t AI,’ he would say), arrives at what many educationalists already know, despite all the hype about how AI is going to solve humanity’s greatest challenges: “The ability to wonder why, to generate a good question about what is going on, and the ability to invent an answer, to explain what has gone on to oneself, is at the heart of intelligence. We would accept no human who failed to wonder or explain as very intelligent. In the end, we will have to judge AI programs by the same criteria.”
We’re not there yet. Decades from now, we will look back at the present and recognise that we were subject to marketing and sales hype around machine learning rather than true AI. We are still in the earliest days (even 31 years after Schank’s article), and ‘intelligence’ is exactly why “why school?” remains a relevant question and a goal worth pursuing. We are seeking ‘the heart of intelligence.’
Remember this when you hear about the ‘next great thing’ that is ostensibly some form of AI. If this be-all and end-all of programmes is based on logic and if-then rules, it isn’t AI. It might perform a function (such as calculation) better than humans, but performing a function better than humans doesn’t enter into the definition of intelligence. After all, we require calculators in trigonometry classes; do we consider students (or ourselves) less intelligent because the calculator was introduced?
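And for readers who have never looked inside one, here is a minimal, hypothetical sketch (again in Python, with rules and facts invented purely for illustration) of the kind of if-then rule system Schank is dismissing: it fires whatever rules it has been given, and nothing more.

```python
# Hypothetical, minimal "expert system" fragment: nothing but if-then rules.
# Each rule pairs a set of required facts with a single conclusion.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "is_winter"}, "recommend_rest"),
]

def apply_rules(facts: set[str]) -> set[str]:
    """Fire every rule whose conditions are met until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(apply_rules({"has_fever", "has_cough", "is_winter"})))
# ['has_cough', 'has_fever', 'is_winter', 'possible_flu', 'recommend_rest']
```

Useful, perhaps, in the way a calculator is useful; but it never wonders why, never asks a new question, and never explains itself, and those are exactly the criteria Schank asks us to apply.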