Picture this: Eva, six years old and just starting Grade 1, is fascinated with the human body and how it works. Why, she wonders, do some people have a different eye colour from their parents? Why do the tiny hairs on her arm stand up when it gets cold, and why does skin swell and itch after a tiny mosquito bite? Why do her parents love the taste of seafood, when she hates it?
What is Eva’s teacher to do with all these questions? Many are simply too complicated to explain to a young child at a level she can comprehend (like recessive genes resulting in different eye colours). And as a recent immigrant to Canada, Eva still finds explanations in English tricky to understand. This makes personalized learning even more of a challenge for her teacher.
We, as formal or informal educators, have all faced obstacles in our teaching, from having the knowledge to accurately explain a wide range of topics, to having the patience required to continuously respond to those “yes, but why?” questions, and perhaps most challenging, to creatively illustrate meaning at an individual’s level of understanding.
Technologies – such as artificial intelligence and virtual reality – that have for decades been described in science fiction are now emerging in a way that may soon make this kind of individualized and infinite learning possible at any grade level and indeed throughout life.
Artificial Intelligence (AI) / Machine Learning (ML) and Virtual Reality (VR) / Augmented Reality (AR) are two sets of buzzwords that often seem to be used interchangeably. However, AI is not the same as ML, and similarly, VR is not the same as AR; it is worth clarifying their differences before we imagine their possible impact on the future of learning.
Artificial Intelligence (AI) is the broader concept of machines being able to carry out tasks in a way that we would consider “smart.” An example is the way your smartphone keyboard predicts the word you are typing from its first letter, and from how frequently that word appears alongside the other words you have previously typed.
Machine Learning (ML) is an application of AI based around the idea that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves, by giving them access to large data sets and letting them identify patterns on their own. So rather than creating a rule that tells the computer that the letter “t” in a word at the end of a message is most commonly used for “thank you,” you tell the computer to find patterns in an individual’s typing habits. This might indicate the person is actually more likely to write “thanx.” With the advent of the internet and the vast increase in digital information being generated and stored, computers are now able to delve into, extract and analyze (aka “mine”) this data and come up with structures and patterns that we, “smart humans,” may not even see.
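The “thanx” idea above can be sketched in a few lines of code. This is a toy illustration only – the messages, function names and the simple frequency rule are invented for this sketch, and real keyboard prediction models are far more sophisticated – but it shows the key shift: no hand-written rule says what “t” means; the user’s own history decides.

```python
from collections import Counter

# Invented sample of one user's past messages (illustrative data only).
past_messages = [
    "running late, see you soon",
    "got it, thanx",
    "sounds good, thanx",
    "meeting moved to three",
    "ok thanx",
]

# Mine the data: count each word that ends a message and starts with "t".
closings = Counter(
    msg.split()[-1] for msg in past_messages if msg.split()[-1].startswith("t")
)

def predict_closing_word(first_letter="t"):
    """Suggest this user's most frequent message-ending word for a letter."""
    candidates = {w: n for w, n in closings.items() if w.startswith(first_letter)}
    return max(candidates, key=candidates.get) if candidates else None

print(predict_closing_word())  # this user's habit turns out to be "thanx"
```

For this user the program suggests “thanx,” not “thank you” – not because anyone told it to, but because that is the pattern in the data.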
Both AI and ML are methodologies that computers use to analyze data.
Augmented Reality (AR) and Virtual Reality (VR), by contrast, are means by which we can convey information represented by a digital reality (a sensory experience that mimics physical reality).
Mixed reality is the blending of the physical and digital worlds, and it spans a spectrum. At one end, which we currently refer to as Augmented Reality, visualizations are overlaid on top of the physical world – think Pokémon Go. At the other end, Virtual Reality presents a digital environment that completely occludes your view of the real (physical) world and transports you to a different virtual (digital) world.
In their most recent incarnation, AR/VR are presented on head-mounted displays: wearable devices that make users feel as though they are truly present in the virtual world. Head-mounted displays seamlessly replace the surrounding real environment with the rich sights and sounds of a simulated three-dimensional world. Coupled with auditory stimuli and haptic feedback, VR experiences are truly immersive and elicit perceptions and behaviours similar to those one would observe in real life. Users view and engage with content that has been created using software and special cameras to create a graphically rendered virtual world.
In their infancy, AR/VR were used primarily for military training (flight simulators), entertainment and gaming, and more recently in the media sector. As VR equipment has become increasingly affordable and available, there has been an incredible explosion of interest around the development of VR technologies and content, and now these are being implemented across various sectors, including healthcare (for things like phobia treatments), and also in education.
Virtual Reality in clinical education
Imagine that as Eva grows older, she excels in biology, takes an interest in the health sciences, and decides to pursue a degree in Nursing. In order to graduate, Eva must complete a clinical placement – but local opportunities are limited and highly competitive. Travel is difficult as she also works part-time and provides care for her elderly grandmother. In the past it would have been very hard for Eva (and many others like her) to balance her responsibilities and complete her degree. In response to these growing challenges, and to give emerging healthcare professionals the opportunity to “practice before they practice,” post-secondary healthcare programs have started to invest in simulation as a part of their curricula.
Professors from the School of Nursing at York University have applied for funds to develop a Virtual Reality simulated Intensive Care Unit (ICU) environment, to enable health-care professionals to practice and gain in-situ experience. VR technologies are of special interest to clinical education as they can effectively simulate experiences and afford controlled manipulation, which allows users to engage realistically yet under safe conditions. VR also overcomes some limitations of more traditional simulation methods (such as live actors), which are more costly and time consuming. With VR, one can create a wider range of clinical scenarios (e.g. hospital ICU, out-patient clinic, long-term care setting) to which a greater number of students can be exposed simultaneously. Furthermore, VR simulations can be repeated as many times as required to create the desired level of familiarity and appreciation of the different roles, skills and scenarios.
In another simulation project, the professors are working on a VR training simulation platform called “ScrubXchange” that helps build empathy and understanding for the different clinical roles and responsibilities in healthcare. It’s intended to help nursing students “live a day in the scrubs” of another professional or in another setting – perhaps in Eva’s case as a nurse practicing in a clinic in Botswana.
Imagine now that Eva dreams of working for Doctors Without Borders. It would be good for her to have the opportunity to understand how her education in Toronto may differ from her future work environment: how the tools at her disposal may differ and how best to use them, and how the cultural and professional norms of another country may shape how she works and interacts with others. Through VR, she can be transported into a virtual but realistic clinical setting in Botswana, immersed in a clinic, with its staff and equipment, on the other side of the world.
The potential of virtual learning
In the last ten years, education has benefited from a real revolution – most schools and universities now have a functioning virtual learning environment like Moodle, Sakai, WebCT or Blackboard, and their benefits have already been well documented. In short, in addition to helping students (and educators) develop a skill set that is needed in the current marketplace, virtual learning environments can improve equity of access by providing greater curriculum choice, flexibility, breadth of experiences, and opportunities for every student to excel, including the geographically isolated, the disengaged and vulnerable, the gifted and talented and those with special needs.
Machine learning brings additional benefits and furthers those already afforded by virtual learning environments. However, the greatest impact ML would bring to education is one-on-one personalization: the ability to customize and adapt curriculum to the current knowledge, learning abilities and preferred pedagogical style of individual students, and do so time and time again so that students have continuity.
At York University, educators are looking to combine an existing e-learning platform, Daagu, with the power of machine learning. In its current form, one of the aspects that makes Daagu unique is that it encourages students to tag moments, elements, emotions, or conversations that have created a shift in their understanding, leading to an “aha” moment. With a large enough data pool, machine learning could build off Daagu’s embedded tags and pair up students who have similar or complementary learning styles. The long-term goal of the initiative is to better understand how, and in what order, content and experiences should be presented for optimal learning, and to do so on an individual level. In other words, to begin to customize and deliver content to students in a way that provokes personal reflection and pushes them towards their own “aha” moments.
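To make the pairing idea concrete, here is a minimal sketch – not Daagu’s actual algorithm; the student names, tags and similarity rule are all assumptions for illustration. Each student’s “aha”-moment tags are treated as a set, and two students are matched when their tag sets overlap most (a measure known as Jaccard similarity).

```python
# Hypothetical tag profiles: what each student has tagged as triggering
# an "aha" moment (illustrative data only).
student_tags = {
    "Eva":   {"visual", "story", "hands-on"},
    "Ben":   {"visual", "story", "debate"},
    "Aisha": {"lecture", "reading", "debate"},
    "Tom":   {"reading", "lecture", "hands-on"},
}

def jaccard(a, b):
    """Overlap between two tag sets: 0.0 (nothing shared) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

def best_partner(name):
    """Pair a student with whoever has the most similar tag profile."""
    others = (s for s in student_tags if s != name)
    return max(others, key=lambda s: jaccard(student_tags[name], student_tags[s]))

print(best_partner("Eva"))  # → Ben, who shares the "visual" and "story" tags
```

With real data, a system could just as easily search for complementary rather than similar profiles, or cluster whole cohorts – but the principle is the same: the students’ own tags drive the matching.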
For example, let’s suppose Eva is learning about stitches. To help her learn which types of sutures and seams are ideal for different types of wounds, the program could first present Eva with a visualization of a quilt she made with her grandmother when she was a child. Showing how different thread and patterns are ideal for different materials, depending on their elasticity and the desired strength or flexibility, the program could then draw parallels to different surgery incisions and wounds, and which areas of the body need greater flexibility to account for increased movement. Finally, if it appears that Eva has understood the basic idea, but is best able to cement a concept through emotional experience, the program would generate an interactive movie in which her grandmother trips in the kitchen and requires stitching around her knee. Eva is challenged to describe the motion of the knee, the type and size of wound, and to suggest the most appropriate suture and seam pattern. For Eva, this approach is meaningful and memorable. Another student might be better taught in an entirely different way. This ability to learn from users and provide a personalized curriculum is the true power of AI.
Risks of AI education
For all its potential benefits, AI also creates opportunities for new kinds of misuse, and so we should proceed with caution. Where there is ubiquitous technology and a captive, perhaps naïve, audience, there is the threat of abuse. One obvious risk is the potential for privacy and security breaches, and of user data being mined and mishandled.
A big risk is for any country, system, organization or company to wield too much control over people’s education and learning techniques. Even subtle ways in which history is taught, what is included or omitted, can have grave impacts on society and politics. The fact that virtual education is easily scalable means that misinformation can scale just as easily: with machine learning, a handful of content creators can have immense impact on many people. The more we learn about how the brain works and understand how people form biases, the more we realize how vulnerable we are to targeted presentations of inaccurate or biased views.
Finally, there is a valid concern that individuals will no longer know how to effectively communicate in person, or be empathetic towards the needs of other (real) people. Some argue that society is changing, the need for in-person interactions is decreasing and therefore the ability to foster what we traditionally recognized as deep relationships is no longer as important. However, if we collectively believe that there is something valuable in building face-to-face connections, then we have full control to design future tools to help improve the skills that are on a downward spiral. Much like the shift to improve the bedside manner of physicians, we need to make the teaching of communication skills a priority, alongside programming, math and sciences. We should thoughtfully design the next set of technology-based teaching tools so that they encourage rather than dilute our abilities to have meaningful conversations in person. If we focus on building AI and AR tools that encourage longer and more complex communication, incorporate visual, auditory, and sensory interaction (what AR/VR actually contribute), and provoke self-driven exploration and experimentation (what AI is able to generate), we have the ability to reverse the current trend.
Despite the risks, it is undeniable that we are entering an age of revolutionized education. With little imagination, one can easily see a future similar to that described in Neal Stephenson’s science fiction novel, The Diamond Age. The story features a young protagonist, Nell, who at the age of four acquires an interactive AI “book” whose sole purpose is to steer its reader (with whom it bonds) intellectually towards a more interesting life and to become an effective member of society. The AI book is designed to react to its reader/owner’s environment and teach them everything they need to know to survive and develop, personalizing every interaction to reflect their life, preferred interests, and learning style, and it does so without bias and with infinite patience and support.
We can look forward to the day when students have a truly personalized education experience that helps to drive both their professional education and personal development.
Photo: Valentin Russanov (iStock)
First published in Education Canada, March 2018