By: Sara Jakubowicz (LTXD)
Join me for a day at EdTech Week 2023! I’ll be giving you the full tour of the particularly mesmerizing, jaw-dropping events at The Relay Graduate School of Education. Spilling through the halls, EdTech founders and salespeople spread the word about their innovative products through the technological advancements of pens and canvas bags. (Yes, I did let them practice their sales pitches on me. Who can say no to freebies?) A zoo of people in their business-best attire and tables featuring technology solving problems that may or may not actually exist. (Not sure if everyone here completed their user needs assessment!) Wherever I went, one thing was for sure- there was a persistent buzzing and bumbling about AI. Me being the curious (no- not nosy, never nosy) person that I am, I decided to investigate. Follow along, fellow detective, as I make you privy to the world of EdTech Artificial Intelligence.
Key Ideas:
VR as a tool for admissions in medical programs
AI used to support medical screenings and patient-friendly communication
AI used to complete mundane, but ancillary, tasks of physicians
The importance of educating people about biases in AI algorithms
Walking into a panel full of professionals from top institutions like the Mayo Clinic and Johns Hopkins, I was unsure of how physicians fit into this conference; however, I was pleasantly surprised. Educational innovations can come from places where you least expect them (though, for some, this is totally expected, as top-notch research hospitals tend to have the money and the resources). They dove into how virtual reality provides affordances for doctors and medical students practicing procedures that they will later have to perform on living, breathing patients. Not only were these doctors able to virtually collaborate on these medical operations, but they also channeled their efforts into a bit of friendly competition. An up-and-coming tradition each month- whoever performs the most VR operations outside of their area of specialty wins. The extrinsic rewards of bragging rights and the possibility of a trophy thrown in are ever so motivating, as my graceful failures in Xbox NFL games against my brother would suggest.
They also explained the potential for virtual reality as an assessment for admission into medical programs. VR could assess people’s potential for future learning in medical programs, testing their ability to learn and their problem-solving skills as opposed to inert, rote knowledge. Though VR could provide a certain level of equity, especially to those abroad, how would test-takers access these materials in the first place? VR testing centers? I wonder if high-school-me would be enthused by the idea of taking a test in VR or would maintain my aversion to test-taking. The world may never know.
Transitioning from virtual reality’s applications in the MEdTech space (yes, I make puns; no, I won’t stop), the panelists talked about how Artificial Intelligence can help professionals in the field. Currently, the panelists’ medical programs are dabbling with AI for screenings and for providing patients with easily readable information on the detection and diagnosis of common diseases. Doctors use commercially available products like the Apple Watch, which already tracks biometrics like ECG and heart rate. However, doctors themselves are still better able to detect abnormalities. While weighing pros and cons, as one so often does when deciding the future of lives, an important consideration in the AI-use debate within Med-ucation is Generative AI’s potential to reduce physician burnout. It can do mundane tasks like charting, though physicians still need to learn and understand the basics through their own experiential learning.
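The panelists didn’t share their actual tooling, but the patient-friendly-communication idea is easy to picture: hand a (de-identified) chart note to an LLM and ask for a plain-language rewrite. Here’s a minimal sketch, assuming the OpenAI Python client and a made-up note; nothing below reflects any panelist’s real workflow:

```python
# Minimal sketch of the "patient-friendly communication" idea: ask an LLM to
# rewrite a clinical note in plain language. The note, prompt, and model choice
# are all illustrative; real deployments would need de-identification and
# clinician review before anything reaches a patient.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

chart_note = (
    "Pt presents w/ intermittent palpitations. ECG from wearable shows "
    "occasional PVCs, no sustained arrhythmia. Recommend Holter monitor."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Rewrite clinical notes in plain, patient-friendly language "
                    "at roughly an 8th-grade reading level. Do not add new facts."},
        {"role": "user", "content": chart_note},
    ],
)
print(response.choices[0].message.content)
```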
With the rise to prominence of Generative AI and Large Language Models (LLMs), people need to be educated about how these models are created. It is important to validate the information given by these models and use these technologies responsibly. Cheers to the middle school librarians who taught me how to dive into the world of credible sources! Okay, I do occasionally search Google and Wikipedia to seek insightful information on “Michael Parenti” or “Barbie” or “Constructionism”, but I really do use other websites to fact-check. I swear! So I was excited to find that EdTech Week further addressed how to use technology to fact-check technology!
Key Ideas:
Copyleaks Platform: Using AI to detect AI-produced content
AI Model Collapse Theory: After generations of being trained on its own content, AI will produce increasingly inaccurate information
Standing in his blue blazer and black t-shirt at the front of the dimly-lit room, Chief Operating Officer of Copyleaks, Shouvik Paul, made professors salivate as if they were the reincarnation of Pavlov’s dog. You see, Copyleaks is a platform whose Artificial Intelligence model uses a unique algorithm to detect AI-produced content. Very meta. Kind of like someone tattling to the teacher that their classmate is actually at Coachella and is not, in fact, sick. Its accuracy comes from its training on data sets- related content selected from the web based on the Generative AI’s / LLM’s goal. These data sets are then used to enhance the model’s accuracy in providing outputs (the response sent by the AI model) that more correctly align with the inputs (the user-submitted prompt). With Copyleaks specifically, their models are trained on the intentions and stylistic approaches AI uses when generating outputs, as well as on outputs produced by people for comparison. Because of its stylistic approach to essay analysis, Copyleaks acts as an effective policer of writing samples. Hmmm… maybe I shouldn’t be ratting my fellow students out. Sorry comrades!
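Copyleaks’ actual algorithm is proprietary, but the general recipe described (learn the stylistic fingerprints of AI text versus human text from labeled examples) can be sketched with off-the-shelf tools. A toy version in Python, with entirely made-up training data:

```python
# Toy sketch of the general idea behind AI-content detection: train a
# classifier on labeled examples of human-written and AI-generated text,
# then score new writing samples. Copyleaks' real algorithm is proprietary;
# the data and features here are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (text, label) pairs, 1 = AI-generated, 0 = human.
texts = [
    "In conclusion, it is important to note that collaboration fosters synergy.",
    "honestly the cafeteria pizza slaps way harder than it has any right to",
    "Furthermore, the aforementioned factors contribute significantly to outcomes.",
    "my brother beat me at Madden again and I will never recover",
]
labels = [1, 0, 1, 0]

# TF-IDF over character n-grams captures some of the stylistic regularities
# (phrasing, punctuation habits) that distinguish the two classes.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(texts, labels)

# Score a new writing sample: probability that it was AI-generated.
sample = "It is worth noting that these findings have significant implications."
print(detector.predict_proba([sample])[0][1])
```

A real detector is, of course, trained on orders of magnitude more data and far richer features than this; the point is just the shape of the pipeline.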
Training AI algorithms brings up the concern of AI Model Collapse theory. If, as some predict, AI is increasingly used for content generation, including in professional news and media, then AI-produced material will be folded into the very content that LLMs and Generative AIs are trained on. Based on recent research, such as Ilia Shumailov’s “The Curse of Recursion: Training on Generated Data Makes Models Forget” (2023), after roughly 10 generations of training a Generative AI, releasing the product, and repeating the cycle, the AI will start to falter, degenerate, and eventually collapse. Trained on its own content, the AI will produce more and more incorrect outputs and eventually gibberish, if the presiding companies aren’t careful. Paul did concede, however, that this is just a theory and that similar end-of-the-world worries were raised about computers and Y2K. Ah, don’t you love end-of-the-world rumors? 2012 was never more fun. Sprinkles a little spice into your life. (Hunger Games, where you at?)
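Shumailov’s experiments were run on real language models, but you can see the basic mechanism with a toy simulation: fit a simple distribution to data, sample from the fit, re-fit on those samples, and repeat. Each generation only ever sees what the last one produced, and the rare “tail” behavior slowly disappears:

```python
# Toy illustration of AI Model Collapse (not Shumailov et al.'s actual LLM
# experiments): each "generation" fits a Gaussian to samples drawn from the
# previous generation's fit, then the next generation trains on that output.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()    # "train" a model on the current data
    data = rng.normal(mu, sigma, size=50)  # next generation sees only generated data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# On average, the estimated spread (std) shrinks generation over generation:
# with finite samples the tails get under-represented, the next model never
# re-learns them, and the distribution slowly collapses toward a narrow blob.
```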
Key Ideas:
Impact of high-risk AI vs. low-risk AI on policy decision-making
Challenges for adopting a global policy for AI regulation
Another room, another panel- this one hosted a similar crew to the VR in Healthcare panel, but it was about the use of AI in Healthcare. A topic of discussion was building public trust in AI through peer-reviewed research and evidence. People are generally afraid of change, something Don Norman loves to talk about in his book “The Design of Everyday Things.” People like using the products they’re used to and like to stay in a zone of comfort; in order to drag them off of their mental couches, you have to make the product something for which they’ll want to overcome that fear. People need to be reassured that their input data and their general information won’t be leaked and will be de-identified. Discussing further, they delved into the idea of AI regulation and how, in the health profession, AI should be regulated in the same way as other medical innovations. Is something burning? I smell a hot take.
As I write this, having just watched the IBM Policy Lab panel on AI policy and how best to regulate AI (not part of EdTech Week), it’s fun to write this and know that AI regulation is being heavily debated. It was interesting to hear from Kai Zenner, the digital policy advisor for the European Parliament and someone involved in the political negotiations for the EU AI Act, about the current struggles to reach a consensus on AI regulation. He, as well as the other panelists, discussed the differences between high-risk AI and low-risk AI and how those differences affect policy decisions. Christina Montgomery, IBM’s chief privacy and trust officer, who directs IBM’s AI ethics policy and practices, gave examples of the two: high-risk would be AI used to determine jail sentences, while low-risk would be a book recommendation service. She doesn’t think regulations for high-risk AI should hamper the innovations of low-risk AI.
Later, Zenner went on to say that an OECD working paper is being used to inform the EU’s AI Act, providing a user-reach threshold (45 million users) to indicate the kind of significant impact that would classify an AI as high-risk. This is particularly relevant for LLMs and Generative AIs like GPT-4, which are what the panel refers to as foundation models or systemic models. Discriminatory data sets can result in systemic risks- a technological reenactment of real life!
A sentiment commonly echoed in the AI space, including by Montgomery, is that AI is software that is not inherently good or bad, but a tool to be used; it is the usage that needs to be regulated. Though it gets a bit more complicated when you take into account that algorithms and their code have intended and/or unintended biases, which can affect large populations and how they see the world- just like humans. They continued to talk about whether or not they thought AI policy should be globalized. Two of the panelists, Zenner and Montgomery, advocated that AI policy should be similar worldwide, with stipulations for cultural and experimental reasons, protecting individuals everywhere while standardizing regulations for compatibility. However, Max Katz, a fellow working on science, technology, and energy policy for Senator Heinrich of New Mexico, mentioned, “It’s hard enough to pass a law in your own country, let alone organize with the world.” The US’s strong sense of individualism is alive and well!
Interestingly enough, Katz mentioned that his senator, along with New York’s Senator Schumer and another senator, has launched a bipartisan effort to encourage thought on AI legislation and to educate the members of the Senate on AI- namely, they started an insight forum in September where different leaders in the field talk to senators about AI topics like current innovations and appropriate guardrails.
AI is the latest technological advancement that will change the future of human-computer interaction and its place in society. When you open ChatGPT, it does tell you that your information may be used for future iterations. Screenshots of people’s interactions with GPT clearly show bias in the algorithm. It’s a high-risk foundation model that can heavily influence people’s knowledge and their understanding of the world. But it’s pretty good if you ever need to add to your dad-joke repository… “Why don’t scientists trust atoms? Because they make up everything!”
AI is a tool to be used, but we must consider ethics and use it to aid our understanding rather than be the basis of it. In the Tobey Maguire Spider-Man movies, Spider-Man’s uncle says, “With great power comes great responsibility.” It is on us and the people with whom we share the world to use AI to make the world better for wear, rather than worse. Though GenAI can aid you in your cognitive processes and schema accretion (I know; Cognitive Science is holding a coup d’état over my mind), take Generative AI’s outputs with a grain of salt. Think about the biases that may be meshed into the algorithm or the data sets on which the AI model is trained. Kindly remember Uncle Ben’s words as you type your next prompt into ChatGPT.