How can we use AI in education, and how should we use AI in education? These are very different questions. The former concerns our sheer ability to use it (we already employ software, after all), while the latter encourages us to pause and contemplate the ethical decisions that accompany AI applications and products. The distinction should not be taken lightly.
At the heart of the matter is everything from the lives of staff and students to a school’s brand and reputation. How indeed should we use it? With more than 2,000 AI start-ups already in existence, and with ever more funding being channelled toward such efforts, the deployment of these applications raises real questions about their possible effects.
MIT Sloan Management Review (6th December 2018) features an intriguing article on the subject, “Every Leader’s Guide to the Ethics of AI.” Co-authors Thomas H. Davenport and Vivek Katyal share that many executives are beginning to struggle with the ethical dimensions of AI: “A 2018 survey by Deloitte of 1,400 US executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organisations don’t yet have specific approaches to deal with AI ethics.”
Davenport and Katyal posit seven actions that leaders would do well to consider, as they contemplate how they ‘walk the line’ between can and should:
- Make AI ethics a board-level issue
- Promote fairness by avoiding bias in AI applications
- Lean toward disclosure of AI use
- Tread lightly on privacy
- Help alleviate employee anxiety
- Recognise that AI often works best with — not without — humans
- See the big picture
Item 1: Make AI ethics a board-level issue
The co-authors underscore the point that, “since an AI ethical mishap can have a significant impact on a [school’s] reputation and value, we contend that AI ethics is a board-level issue.” They cite documented racial bias in algorithms used for parole recommendations. One wonders, were a school to adopt AI software to guide student advancement from one grade or year level to the next, whether some algorithmic bias might be directing that software’s recommendations. Surely such a question is worthy of generative discussion at board level, with senior management’s participation? Can we afford not to have that discussion, or to forgo whatever ethical framework it might engender?
Item 2: Promote fairness by avoiding bias in AI applications
Algorithmic bias has been identified in areas ranging from credit scoring to hiring decisions; why not curriculum design as well? Are we asking ourselves whether the AI applications we utilise are treating all groups equally? If advertisements for high-paying jobs end up going more often to men, or if names from under-represented groups appear more frequently in searches of criminal databases, how do we address the bias inherent in software that was, after all, developed by humans? How might we reduce algorithmic bias? What would risk management guidelines look like in such scenarios? These questions are all important, yet we must ask ourselves whether we are investing any time in them.
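To make the question of “treating all groups equally” concrete, here is a minimal sketch of one way a school might begin to audit an AI tool’s recommendations: compare positive-recommendation rates across groups and flag large disparities for human review. The data, group labels, and 0.8 threshold (the ‘four-fifths rule’ borrowed from US employment practice) are illustrative assumptions, not part of the article or any real system.

```python
# Hypothetical audit sketch: compare an AI tool's positive-recommendation
# rates across groups and flag disparities for human review.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, recommended) pairs -> recommendation rate per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 (the 'four-fifths rule') are a common rough
    flag that the tool's outputs deserve closer human scrutiny."""
    return min(rates.values()) / max(rates.values())

# Illustrative (made-up) advancement recommendations for two groups
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)          # A: 0.75, B: 0.25
ratio = disparate_impact_ratio(rates)       # 0.25 / 0.75 ≈ 0.33
print(ratio < 0.8)                          # True -> flag for human review
```

A check like this does not prove bias, let alone explain it; it is simply a cheap first signal that the kind of human investigation discussed above is warranted.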
Item 3: Lean toward disclosure of AI use
As the co-authors suggest, “a recommended ethical approach to AI usage is to disclose to customers or affected parties that it is being used and provide at least some information about how it works.” They go on to say that automated decision systems that affect customers “should reveal that they are automated, and list the key factors used in decision-making.” One wonders, were a school to employ AI software in admissions, whether disclosing that the decision-maker is automated would prove beneficial, harmful, or neither. For those of us interested in GDPR applicability, the co-authors state that “every customer should have the ‘right to an explanation’ – not just those affected by the GDPR in Europe, which already requires it.” What might this mean for admissions, for example? To this end, it would make sense to disclose “the types and sources of data used by the AI application.” As they further elucidate, “While regulations requiring disclosure of data use are not yet widespread outside of Europe, we expect that requirements will expand, most likely affecting all industries. Forward-thinking companies will get out ahead of regulation and begin to disclose AI usage in situations that involve customers or other external stakeholders.”
Item 4: Tread lightly on privacy
In the quest to improve our school security systems, are we creating potential privacy concerns? How far does facial imaging go, for instance? Will we be conducting facial imaging on our campuses, and what will we do with that information? How will we deal with ‘false positives’, i.e. when our AI software flags someone entirely benign as a person who shouldn’t be around children? Will we have a team and a protocol in place to investigate any such concern, and how quickly could we deploy them? The co-authors suggest that, “at least in the short run, AI used in this context may actually increase the need for human curators and investigators.”
Item 5: Help alleviate employee anxiety
Thankfully, the notion that AI will somehow result in massive unemployment in short order has been re-thought, since much early-stage AI takes the form of applications that assist with particular jobs. The revised thinking on AI-induced unemployment is that the disruption will come at a slower rate, allowing employees time to gain other (needed) skills. The ethical thing to do, the co-authors suggest, is for companies (and schools, I would add) to “advise employees of how AI may affect their jobs in the future […]; the time for retraining is now.”
Item 6: Recognise that AI often works best with — not without — humans
“Many AI-related problems are the result of machines working without adequate human supervision or collaboration,” the authors proffer. One has only to look at Facebook: the company has had to add 10,000 people to augment its AI capabilities in addressing problems such as fake news and the inappropriate images its software fails to pick up on. The authors’ advice is not to eliminate the human-centric jobs that solve customer or employee problems; “instead […], introduce new capabilities as ‘beta’ or ‘trainee’ offerings and encourage users to provide feedback on their experience. Over time, as AI capabilities improve, communications with users may become more confident.”
Item 7: See the big picture
The authors stress that, ultimately, the thing to keep foremost in our minds is to “build AI systems that respect dignity and autonomy, and reflect societal values.” Principally, they underscore the notion of ‘first, do no harm.’ Even in small-scale learning opportunities with AI, the moment something goes wrong we must act, and act with an eye to learning how to improve, not merely to resolve the issue.
Bias, privacy, and security issues are challenges that we already face in our very human lives, and they are mirrored in the development of AI. Strangely, it might give us a small degree of comfort to know that, although we don’t know exactly what AI will bring, it exhibits risk characteristics with which we are already familiar and which we already strive to stay ahead of. Such risks will need to become part of risk management in schools, at the governance level as well as within the senior management team.
Lastly, having just visited a secondary school where students can take two terms of AI courses, I would stress that we need to ensure that young people are learning about these risk characteristics of AI as they themselves learn to code and implement that code, whether in a project involving autonomous vehicles or something else. As we build these skills in young people, who in turn will become the next generation of AI developers, let us exercise our duty of care to ensure that ‘being human’ remains at the core of the citizens we help to equip.