OpEd: 5 questions schools and universities should ask before they purchase AI tech products

I wrote the op-ed below for The Conversation and am republishing it here for posterity under their Creative Commons license. Here’s the original article.

 

5 questions schools and universities should ask before they purchase AI tech products

Every few years, an emerging technology shows up at the doorstep of schools and universities promising to transform education. The most recent? Technologies and apps that include or are powered by generative artificial intelligence, also known as GenAI.

These technologies are sold on the potential they hold for education. For example, Khan Academy’s founder opened his 2023 TED Talk by arguing that “we’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

‘How AI Could Save (Not Destroy) Education’

As optimistic as these visions of the future may be, the realities of educational technology over the past few decades have not lived up to their promises. Rigorous investigations of technology after technology – from mechanical machines to computers, from mobile devices to massive open online courses, or MOOCs – have identified the ongoing failures of technology to transform education.

Yet educational technology evangelists forget, remain unaware, or simply do not care. Or they may be overly optimistic that the next new technology will be different from those that came before.

When vendors and startups pitch their AI-powered products to schools and universities, educators, administrators, parents, taxpayers and others ought to be asking questions guided by past lessons before making purchasing decisions.

As a longtime researcher who examines new technology in education, I believe the following five questions should be answered before school officials purchase any technology, app or platform that relies on AI.

1. Which educational problem does the product solve?

One of the most important questions that educators ought to be asking is whether the technology makes a real difference in the lives of learners and teachers. Is the technology a solution to a specific problem or is it a solution in search of a problem?

To make this concrete, consider the following: Imagine procuring a product that uses GenAI to answer course-related questions. Is this product solving an identified need, or is it being introduced to the environment simply because it can now provide this function? To answer such questions, schools and universities ought to conduct needs analyses, which can help them identify their most pressing concerns.

2. Is there evidence that a product works?

Compelling evidence of the effect of GenAI products on educational outcomes does not yet exist. This leads some researchers to encourage education policymakers to put off buying products until such evidence arises. Others suggest relying on whether the product’s design is grounded in foundational research.

Unfortunately, a central source for product information and evaluation does not exist, which means that the onus of assessing products falls on the consumer. My recommendation is to follow advice that predates GenAI: Ask vendors to provide independent, third-party studies of their products, but use multiple means of assessing a product’s effectiveness, including reports from peers and primary evidence.

Do not settle for reports that describe the potential benefits of GenAI – what you’re really after is what actually happens when the specific app or tool is used by teachers and students on the ground. Be on the lookout for unsubstantiated claims.

3. Did educators and students help develop the product?

Oftentimes, there is a “divide between what entrepreneurs build and educators need.” This leads to products divorced from the realities of teaching and learning.

For example, one shortcoming of the One Laptop Per Child program – an ambitious program that sought to put small, cheap but sturdy laptops in the hands of children from families of lesser means – is that the laptops were designed for idealized younger versions of the developers themselves, not so much the children who were actually using them.

Some researchers have recognized this divide and have developed initiatives in which entrepreneurs and educators work together to improve educational technology products.

Questions to ask vendors might be: In what ways were educators and learners included? How did their input influence the final product? What were their major concerns and how were those concerns addressed? Were they representative of the various groups of students who might use these tools, including in terms of age, gender, race, ethnicity and socioeconomic background?

4. What educational beliefs shape this product?

Educational technology is rarely neutral. It is designed by people, and people have beliefs, experiences, ideologies and biases that shape the technologies they develop.

It is important for educational technology products to support the kinds of learning environments that educators aspire for their students. Questions to ask include: What pedagogical principles guide this product? What particular kinds of learning does it support or discourage? Do not settle for generalities, such as merely naming a theory of learning or cognition.

5. Does the product level the playing field?

Finally, people ought to ask how a product addresses educational inequities. Is this technology going to help reduce the learning gaps between different groups of learners? Or is it one that aids some learners – often those who are already successful or privileged – but not others? Is it adopting an asset-based or a deficit-based approach to addressing inequities?

Educational technology vendors and startups may not have answers to all of these questions. But they should still be asked and considered. Answers could lead to improved products.

 

More on erasure and edtech

Last week I wrote a post on erasure and edtech, and this morning I saw that Stephen Downes has replied.

He writes that he “can’t verify whether Audrey Watters ever wrote this, because a Google search doesn’t turn it up.” Fair. I added a link to the original post, but here it is, as well.

Stephen also writes that the Woolf whitepaper I discussed didn’t actually vanish, as he can find a copy through the Internet Archive. My original post included a link to a copy (second paragraph here, linked from the original), so as to be clear that the whitepaper isn’t gone as in “no one can ever find it.” It vanished as in: “it’s no longer prominent, visible, accessible, and readily available.” And certainly, Internet sleuthing, given time, effort, skill, and some knowledge about the thing you’re looking for, may yield evidence of it, though your mileage might vary.

One way to read “vanish” is to do what Stephen does, which is to zoom in and ask a literal question: Is the paper available somewhere? Another way is to zoom out, and ask: Have there been attempts to erase, rewrite, and reframe histories of edtech (e.g., through practices like removing references and ignoring critiques)? That’s how I understand erasure to work.

New paper: Treating AI as a real psychological other

In a recent talk, Punya Mishra claimed that “whether we like it or not we will start treating these bots as if they are a psychological real other.” There’s quite a lot of evidence from social psychology, going back to the 1990s, that humans consistently (and unconsciously) treat computers as social actors. This paradigm has been refined further in recent years, but overall there’s evidence that we do treat technologies in social ways (e.g., by being polite).

In a paper that we published this month, we show that learners imagine their interactions with an AI as abiding by social processes, encompassing issues such as respect, honesty, and trust. Equally importantly, this finding isn’t uniform: at times learners imagine AI as a tool/object that can be used in the service of learning, while at other times they imagine AI as a subject, one who has agency and possibly some kind of internal subjectivity.

Here’s the paper

Veletsianos, G., Houlden, S., & Johnson, N. (in press). Is Artificial Intelligence in education an object or a subject? Evidence from a story completion exercise on learner-AI interactions. TechTrends. The final version is available at https://doi.org/10.1007/s11528-024-00942-5, but here is a public preprint version.

Abstract

Much of the literature on artificial intelligence (AI) in education imagines AI as a tool in the service of teaching and learning. Is such a one-way relationship all that exists between AI and learners? In this paper we report on a thematic analysis of 92 participant responses to a story completion exercise which asked them to describe a classroom agreement between an AI instructor and a learner twenty years into the future. Using a relational theoretical framework, we find that the classroom agreements between AI and learners that participants produced encompassed elements of education, boundaries, affordances, and social conventions. These findings suggest that the ways learners relate to AI vary. Some learners relate to AI as an object, others relate to AI as a subject, and some relate to AI both as an object and a subject. These results invite a deeper engagement with the ways in which learners might relate to AI and the kinds of ethics and social protocols that such relations suggest.

 

Edtech history, erasure, udacity, and blockchain

This thought in Audrey’s newsletter (update: link added March 30th) caught my attention, and encouraged me to share a related story.

 [Rose Eveleth] notes how hard it can be to tell a history when you try to trace a story to its primary sources and you simply cannot find the origin, the source. (I have been thinking a lot about this in light of last week’s Udacity news. So much of “the digital” has already been scrubbed from the web. The Wired story where Sebastian Thrun claimed that his startup would be one of ten universities left in the world? It’s gone. Many of the interviews he did where he said other ridiculous things about ed-tech – gone. What does this mean for those who will try to write future histories of ed-tech? Or, no doubt, of tech in general?) Erasure.

 

Remember how blockchain was going to revolutionize education? OK, let’s get into the weeds of a related idea and how almost everything that happened around it has also disappeared from the web.

One way blockchain was going to revolutionize education was through the development of education apps and software running on the blockchain. Around 2017, Initial Coin Offerings (ICOs) were the means of raising money to build those apps. An ICO was the cryptocurrency equivalent of an initial public offering: a company would offer people a new cryptocurrency token in exchange for funds to launch the company. The token would then provide some utility for ICO holders relating to the app/software (e.g., you could exchange it for courses or study sessions, or hold on to it hoping that its value would increase and resell it, etc.). The basic idea here was crowdfunding, and a paper published in the Harvard International Law Journal estimates that contributions to ICOs exceeded $50bn by 2019. The Wikipedia ICO page includes more background.
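To make the token mechanics concrete, here is a minimal, illustrative sketch of the crowdfunding logic described above: contributors send funds, receive newly issued tokens at a fixed rate, and can later redeem those tokens for the product’s services (e.g., courses). The class, rates, and prices are hypothetical; real ICOs implemented this logic as smart contracts on a blockchain.

```python
# Toy simulation of the basic ICO crowdfunding mechanic described above:
# contributors exchange funds for newly issued tokens, which can later be
# redeemed for the app's services (e.g., courses). Purely illustrative;
# names and numbers are hypothetical.

class EdTechICO:
    def __init__(self, tokens_per_dollar: float, course_price_in_tokens: float):
        self.tokens_per_dollar = tokens_per_dollar
        self.course_price_in_tokens = course_price_in_tokens
        self.balances = {}          # contributor -> token balance
        self.funds_raised = 0.0     # dollars raised by the company

    def contribute(self, contributor: str, dollars: float) -> float:
        """Issue tokens to a contributor in exchange for funds."""
        tokens = dollars * self.tokens_per_dollar
        self.balances[contributor] = self.balances.get(contributor, 0.0) + tokens
        self.funds_raised += dollars
        return tokens

    def redeem_for_course(self, contributor: str) -> bool:
        """Spend tokens on a course, if the contributor holds enough."""
        if self.balances.get(contributor, 0.0) >= self.course_price_in_tokens:
            self.balances[contributor] -= self.course_price_in_tokens
            return True
        return False

# Example: a contributor buys tokens during the ICO and later redeems a course.
ico = EdTechICO(tokens_per_dollar=100, course_price_in_tokens=500)
ico.contribute("alice", 10)            # alice receives 1,000 tokens
print(ico.redeem_for_course("alice"))  # True
print(ico.balances["alice"])           # 500.0 tokens remaining
```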

A number of these ICOs focused on education. Companies/individuals/friends* would create a website and produce a whitepaper describing their product. Whitepapers varied, but they typically described the problem to be solved, the blockchain-grounded edtech solution they offered, use cases, the team behind the project, a roadmap, and the token sale/model.

To give you a sense of the edtech claims included in one of those whitepapers:

“The vision is the groundbreaking disruption of the old education industry and all of its branches. The following points are initial use cases which [coin] can provide … Users pay with [coins] on every major e-learning platform for courses and other content they have passed or consumed… Institutions can get rid of their old and heavy documented certification process by having it all digitalized, organized, governed and issued by the [coin] technology.”

I was entertaining an ethnographic project at the time and collected a few whitepapers. For a qualitative researcher, those whitepapers were a treasure trove of information. But, looking online, they’re largely scrubbed, gone, erased. In some cases, ICO founders’ LinkedIn profiles were scrubbed and the online communities surrounding the projects disappeared, sometimes as soon as it became clear that the ICOs wouldn’t raise the millions they were hoping for.

Some of you following this space might remember Woolf, the “world’s first blockchain university” launched by Oxford academics. And you might also remember that, like other edtech projects, it “pivoted.” See Martin Weller’s writing and David Gerard’s writing on this. Like so many others, the whitepaper describing the vision, the impending disruption of higher ed through a particular form of edtech, is gone. David kept a copy of that whitepaper, and I have copies of a couple of whitepapers from other ventures. But, by and large, that evidence is gone. I get it. Scammers scam, honest companies pivot, the two aren’t the same, and reputation management is a thing. But, I hope that this short post serves as a small reminder to someone in the future that grandiose claims around educational technology aren’t new. And perhaps, just perhaps, at a time of grandiose claims around AI in education, there are some lessons here.

 

 

Open Access fees are exorbitant

After signing another publishing agreement, I was, once again, taken aback by the exorbitant OA fees that publishers charge.

Publishing open access with us (gold OA) lets you share and re-use your article immediately after publication.

The article processing charge (APC) to publish an article open access in Educational Technology Research and Development is:

Article processing charge (excluding local taxes)
£2,290.00 / $3,290.00 / €2,590.00

Some organisations will pay some or all of your APC.

If you want to publish subscription, instead of open access, there will be an option to do that in the following steps.

I know, I know, we probably shouldn’t have submitted to a journal that isn’t gold and free OA by default, *but* the system is structured in such a way that my junior co-authors would benefit from being published in this journal.

While not a solution to this problem, it’s worth noting the terms in the publishing agreement around sharing the article. This is in the terms:

The Assignee grants to the Author (i) the right to make the Accepted Manuscript available on their own personal, self-maintained website immediately on acceptance.

This is the approach that I use for nearly all my papers, but it’s worth remembering that what this really does is suggest an individual solution to a systemic problem, which will do little to solve the broader problem of lack of access to research.

There are other statements in the terms about placing one’s article in an institutional repository, but author self-archiving is generally the first and immediate option available to individuals. And perhaps Google Scholar will index the author’s personal website, making the article available. Google Scholar’s approach of identifying articles and surfacing publicly available versions in search results is a systemic solution to the problem. Unpaywall is similar in that respect.
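For readers who want to check whether an open copy of a given article exists, here is a minimal sketch that queries Unpaywall’s public v2 REST API for a DOI; the endpoint and response fields follow Unpaywall’s documented API as I understand it, and the email address is a placeholder you would replace with your own.

```python
# Minimal sketch: ask Unpaywall whether a legal open-access copy of an
# article exists for a given DOI. Uses Unpaywall's public v2 REST API;
# the email parameter identifies the caller, per the API's documentation.
import json
import urllib.request

def find_oa_copy(doi: str, email: str) -> str | None:
    """Return a URL to an open-access copy of the article, if Unpaywall knows of one."""
    url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    best = record.get("best_oa_location")
    if best:
        # Prefer a direct PDF link when available, otherwise the landing page.
        return best.get("url_for_pdf") or best.get("url")
    return None

if __name__ == "__main__":
    # Example: the TechTrends paper mentioned above ("you@example.org" is a placeholder).
    print(find_oa_copy("10.1007/s11528-024-00942-5", "you@example.org"))
```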

 

[To be clear: this post isn’t about ETR&D. It’s about the publishers & the publishing system]

Reflections: 100 Year EdTech Project design summit

Last week, I was at the 100 Year EdTech Project Design Summit in Phoenix, AZ, and I thought it might be worthwhile to post some raw reflections here, captured throughout the days and unedited. At the event:

“Leaders, educators, futurists, designers, students, lifelong learners, visionaries –  all will be invited to explore the last 50 years of technology’s impact on education, observe where we stand today, and imagine the future of education for generations to come.”

I appreciated that the event started with a student keynote panel. Some ideas I heard included attention to equity, excitement about the role of AI (I have lots and lots of thoughts on this, including how much space these conversations are taking up and concern about the kinds of conversations they’re displacing), the inevitability of technology, the limits of our imagination (e.g., a comment about how X years ago “my concern was about walking to the library and spending hours looking for an article, and so no, I couldn’t have imagined the progress of tech”), emphasizing community, expanding access to tech (e.g., broadband for all), and sharing wealth and resources.

And because I’m obviously reflecting on these from my own perspective: These conversations are somewhat similar in Canada, but there’s a stark difference: The starting point there, or at least in the conversations I was part of, was typically decolonization, conciliation, indigenization, equity, and inclusion. The starting point here, or at least at this event, is technology in the service of similar ends… in other words, there’s a more pronounced technosolutionist stance at this event. Granted, that’s the focus of the event, which makes solutionism a difficult pattern of thought to escape/resist.

The slide below from Kiran Budhrani recentered some ideas around the broader issues that don’t have to do with tech but shape education nonetheless.

This was punctuated by Bryan Alexander highlighting climate adaptation, and especially climate migrants and the impacts of that impending reality. I say reality because I’ve reached the conclusion that climate collapse is more likely than not. I hope I am proven wrong.

A great question from a medical professional was the following: What do medical professionals need to know when patients (all of us) come to them with some basic knowledge of their ailments? The focus here was on skills and knowledge relating to empathy and communication on the medical professional’s part, as well as the ethical issues around AI systems that will invariably support patients (e.g., what data were they trained on? How trustworthy are they?). I also think this area relates to patients navigating the flood of information/misinformation circulating online and their use of various technologies to make sense of their ongoing and new ailments. This reminds me of Dave Cormier’s book, which argues that we ought to be preparing people to navigate uncertainty at a time of information abundance.

Much of the event focused on small group discussions around approaches that might address certain challenges. I thought that framing the role of edtech in the future in terms of scenarios was grounding and valuable. The discussions in my group were rich, and there were lots of thoughts and ideas about our topic.

Finally, it was great to catch up with Philippos Savvides, a fellow Cypriot at ASU, who partners with and supports edtech startups around the world. I also appreciated a short tour of EdPlus (ASU’s internal-focused OPM) and learning more about their work. Rather than outsourcing online program management, as so many other institutions do, EdPlus focuses on innovating and scaling ASU offerings. I believe that operations integral to the institution (and OPM is one of them) ought to stay within the institution and be cultivated. I like what ASU is doing here. And through luck or foresight, it’s perhaps avoiding the entanglements of an OPM market in turbulence.

 

Update #1: The paper “Climate imaginaries as praxis” showed up in my inbox a few hours after posting this, and I wish I had read it prior to the summit. The abstract reads: As communities around the world grapple with the impacts of climate change on the basic support systems of life, their future climate imaginaries both shape and are shaped by actions and material realities. This paper argues that the three globally dominant imaginaries of a climate changed future, which we call ‘business as usual’, ‘techno-fix’ and ‘apocalypse’ – fail to encourage actions that fundamentally challenge or transform the arrangements that underpin systemic injustices and extractive forms of life. And yet, to meet the challenges associated with food production, energy needs, and the destruction of ecosystems, people are coming together, not only to take transformative action, but in doing so, to create and nurture alternative imaginaries. This paper presents empirical findings about how communities in north and south India and south-east Australia are pre-figuring alternative futures, locally and in most cases in the absence of broader state support. An analysis of communities’ actions and reflections indicates that their praxes are altering their future imaginaries, and we consider how these local shifts might contribute to broader changes in climate imaginaries. At the heart of the emerging imaginaries are a set of transformations in the relational fabric within which communities are embedded and how they attend to those relations: relations within community, with the more-than-human, and with time.

Open for Public Comment: Minnesota’s Computer Science Strategic Plan

My colleague Cassie Scharber shared this with me and I am passing it along for broader input. Please share widely and submit comments!
The draft of the Minnesota state plan for K-12 computer science education is now open for public review and feedback (Feb 1-Feb 16). This plan contains recommendations for teacher licensure, academic student standards, and professional learning. More information can be found on MDE’s website.

How to Provide Comments on the Plan

1. Review the CS State Plan Draft.

2. Share Your Thoughts: We encourage you to share your thoughts, suggestions, and concerns through the online comment form.

3. Attend Virtual Feedback Sessions: Join our virtual feedback sessions where you can engage directly with members of the CS Working Group and share your insights. Sessions will be held via Zoom for one hour each. Register for one of the sessions using the following links:

4. Help Us Spread the Word: Help us reach more stakeholders by sharing this information with your colleagues, friends, and community members. The greater the variety of voices we hear, the stronger and more inclusive our strategic plan will be.
