
New paper: faculty members’ hopes and anxieties

In a new paper for the Canadian Journal of Learning & Technology (paper and citation below) we highlight the ways in which faculty members’ hopes and anxieties about the future are shaped by both personal and environmental factors. Importantly, we note that imagining and working towards more hopeful futures (a concept we examined in earlier work) may be a fruitful approach to addressing the challenges that the higher education sector (and our societies) are facing.

Abstract

Higher education worldwide is facing several challenges spanning economic, social, technological, demographic, environmental, and political tensions. Calls to rethink, reimagine, and reform higher education to respond to such challenges are ongoing, and need to be informed by a wide variety of stakeholders. To inform such efforts, we interviewed thirty-seven faculty members at Canadian colleges and universities to develop a greater understanding of their hopes and anxieties about the future of higher education as they considered what higher education may look like five years into the future. Results centred on four themes: (1) anxieties and hopes are shaped by supports and resources from various sources, (2) faculty members face anxiety over matters that negatively impact them but are beyond their control, (3) faculty members hope that “good” comes from the COVID-19 pandemic, and (4) faculty members hope for a well-rounded education that will enable students to succeed both within and beyond their careers. Implications for these findings suggest a need to direct research efforts and practices toward more hopeful futures for higher education, especially in the context of online and blended learning.

Veletsianos, G., & Johnson, N. (2023). Canadian faculty members’ hopes and anxieties about the near-future of higher education. Canadian Journal of Learning and Technology, 48(3), 1-23. https://cjlt.ca/index.php/cjlt/article/view/28319

4th annual Speculative Education Colloquium

This is a wonderful event that pushes the boundaries of imagination and possibility. Consider joining.

The 4th annual Speculative Education Colloquium is focused on Intergenerational Speculation. The colloquium will take place on Saturday, May 13, 12-3pm ET.

This year’s event features the incredible children’s authors/illustrators Jarrett and Jerome Pumphrey and the inimitable Professor of Teaching and Learning at the Ohio State University, Detra Price-Dennis.

As always, this will be a free, virtual event. We are encouraging attendees to invite children and families to join in our collective dialogue and storytelling. Register here: https://tinyurl.com/speced2023


BC’s guidelines for digital learning strategy: inadvertent effects?

As you may or may not be aware, BC has developed a digital learning strategy. Here’s an earlier draft, and some earlier thoughts. With its release coming soon, I thought I would post a final set of thoughts that apply not only to this policy but to other policies as well. I am only posting this because I saw that the University of California recently closed a loophole that allowed learners to complete their degrees fully online. That decision reminds me once again that decisions which are laser-focused on modality miss the bigger picture. Which then reminded me of the BC digital learning policy.

My reading and analysis of the guidelines coming to BC is that they raise quality standards for online and hybrid learning. This is a good thing. But they are silent on quality standards for in-person learning, and might therefore have inadvertent effects.

Because of the focus on a specific modality, the strategy creates a de facto standard for digital learning courses/programs/efforts that is higher than the standard for in-person courses/programs/efforts. While the document focuses on guidelines for “technology-enhanced learning,” it’s not explicit that these guidelines ought to apply to ALL courses.

In other words, the policy presumes that guidelines are unnecessary for in-person courses, or at the very least outside the purview of the policy. As one example, note how the following important guideline focuses specifically on the digital but not the in-person context:

“Digital PSE in BC must achieve true, meaningful, and lasting reconciliation with Indigenous Peoples. It should advance and implement decolonial practices, promote Indigenization, and recognize Indigenous knowledge, pedagogies, and learning. To achieve these goals, technology-enhanced learning should…”

What I’d rather see is this:

PSE in BC must achieve true, meaningful, and lasting reconciliation with Indigenous Peoples. It should advance and implement decolonial practices, promote Indigenization, and recognize Indigenous knowledge, pedagogies, and learning. To achieve these goals, post-secondary education should…

Or this:

Digital PSE in BC must achieve true, meaningful, and lasting reconciliation with Indigenous Peoples. It should advance and implement decolonial practices, promote Indigenization, and recognize Indigenous knowledge, pedagogies, and learning. Technology-enhanced learning provides risks and opportunities towards these goals, and in this context, institutions should…

Setting a higher standard for digital learning compared to in-person learning is a problem for two reasons.

First, different standards produce the very real possibility of stifling innovation in digital learning and prioritizing in-person learning. Institutions that are considering digital learning will need to contend with these guidelines, especially if they need to highlight how they are meeting them in QA and new degree approval processes. Yet it’s unclear whether in-person offerings need to account for them. By raising the bar for one kind of approach, we might inadvertently be guiding institutions toward the alternative modality.

Second, a different standard will impact the sector unevenly, and will disproportionately impact institutions and disciplines which are predominantly digital/online. The impacts that the strategy will have on in-person trades programs are limited compared to the impacts it will have on education programs, which are typically blended. Its impacts on smaller institutions which are exploring expanding their digital learning offerings are greater than its impacts on predominantly in-person institutions.

What is a possible solution?

This is a difficult one. One approach might be to state clearly and explicitly that these guidelines apply to all courses/offerings regardless of modality. Designing education with ethics, equity, and decolonization in mind ought not to depend on whether a course takes place in person, online, or in blended fashion. Further, any change in QA and course approval policies at the Ministry level will need to apply to all programs – not just “digital” ones.

Special Issue on Trauma-Informed Instructional Design Practices in JAID

JAID recently published a special issue on trauma-informed instructional design practices, which you can read here. The abstract of the introduction to the special issue summarizes the effort well:

This special issue of JAID begins to bridge the gap between the theories of social emotional learning/trauma informed learning with instructional design offering specific cases of design and development projects that illustrate the confluence of these two broad areas. We share these articles with our ID community in the hopes of creating principles for “compassionate instructional design” (Thomas et al., 2019) through a collection of practitioner cases and research articles on applied instructional design practices that are responsive to trauma-affected learners, and which highlight the complexities of the learning context of the learners being served.

ChatGPT is the tree, not the forest.

“Not seeing the forest for the trees” is a North American idiom that warns against focusing on details at the expense of the larger issue. ChatGPT is the tree. Perhaps it’s the tallest or the leafiest tree, or the one that blossomed rapidly right in front of your eyes… sort of like a Japanese flowering cherry. What does this mean for you?

If you’re exploring ChatGPT – as a student, instructor, administrator, perhaps as a community – don’t focus solely on ChatGPT. Certainly, this particular tool can serve as one illustration of the possibilities, pitfalls, and challenges of Generative AI, but making decisions about Generative AI by focusing solely on ChatGPT may lead you to make decisions that are grounded in the idiosyncrasies of this particular technology at this particular point in time.

What does this mean in practice? Your syllabus policies should be broader than ChatGPT. Your taskforce and working groups should look beyond this particular tool. Your classroom conversations should highlight additional technologies.

I was asked recently to lead a taskforce to explore implications and put forward recommendations for our teaching and learning community. ChatGPT was the impetus. But our focus is Generative AI. It needs to be. And there’s a long AIED (AI in education) history here, which includes some of my earlier work on pedagogical agents.


AI’s dedication to truth, justice, or equity

In response to my post from yesterday, Stephen Downes focuses on an important and difficult issue. He says:

…George Veletsianos focuses on the question, “What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?” Using the existing state of chatGPT as a guide, he suggests that “a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they might receive.” At a certain point, I think we might find ourselves uncomfortable with the idea that an individual designer’s values can outweigh the combined insights of the thousands or millions of voices that feed into an AI. True, today’s AIs are not very good examples of dedication to truth, justice or equity. But that, I’m sure, is a very temporary state of affairs.

Good point: we might find ourselves uncomfortable with that idea. But here are the two assumptions that I am making:

1. That the individual has developed a dedication to truth, justice, equity, and decolonization that they are able to apply to their work. Yes, I am hopeful about this.

2. For an AI to reflect values aligned with justice, equity, and decolonization, we (aka society) likely need to regulate and re-imagine its design. I am less hopeful about this.

I guess that where Stephen and I disagree is on the future commitments of AI. I would like to be as hopeful as he is, but I am not yet convinced. I would like to be wrong.

AI, ChatGPT, instructional design, and prompt crafting

David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says, “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate mostly accurate rough drafts that humans can then “review, augment, and polish,” potentially shifting the work of instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

To focus on a single item here: much has been written about prompt engineering, or prompt crafting, which is the practice of wording the text entered into an LLM like ChatGPT so as to guide it toward a desired response. For example, a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they receive from a prompt. Below is an example that illustrates how assessment items vary as the desired assessment changes from good, to efficient, to equitable. Each of my prompts appears first, followed by ChatGPT’s output.


assume that a learning outcome of a course is “understand opportunity cost.” What is a good assessment item for this learning outcome?

A good assessment item for the learning outcome of “understand opportunity cost” would be a case study or scenario-based question that requires the student to analyze and apply their understanding of opportunity cost to make a decision.

For example:

Scenario: A small business owner is trying to decide whether to invest in new equipment or to expand the business by opening a second location. The owner has $100,000 to spend and estimates that the equipment will cost $50,000 and the second location will cost $80,000.

Question: What is the opportunity cost for the business owner in each case and which option would you recommend and why?

This question requires the student to understand the concept of opportunity cost, which is the cost of the next best alternative, in order to make an informed decision about which option would be the best investment for the business. It also allows the student to demonstrate their understanding of the concept by applying it to a real-world scenario.

assume that a learning outcome of a course is “understand opportunity cost.” What is an efficient assessment item for this learning outcome?

An efficient assessment item for the learning outcome of “understand opportunity cost” would be a multiple-choice question that allows the student to quickly and easily demonstrate their understanding of the concept.

For example:

Question: Which of the following is the correct definition of opportunity cost? A) The cost of a good or service B) The cost of the next best alternative C) The total cost of production D) The profit made from a good or service

The correct answer is B) The cost of the next best alternative. This multiple-choice question is efficient as it allows students to quickly demonstrate their understanding of the concept, and it is easy to grade, as there is only one correct answer. This multiple-choice question could be used as a formative assessment to check student’s understanding, and it could be also used as a summative assessment to evaluate their final understanding of the course’s outcome.

assume that a learning outcome of a course is “understand opportunity cost.” What is an equitable assessment item for this learning outcome?

An equitable assessment item for the learning outcome of “understand opportunity cost” would be a problem-solving or open-ended question that allows students to demonstrate their understanding of the concept in a way that is not limited by their background or language abilities.

For example:

Question: Suppose you have $1000 to spend and you want to buy a new computer and a new phone. A computer costs $800 and a phone costs $200. Provide a written explanation of how you would make your choice and what the opportunity cost of your choice is.

This open-ended question allows students to demonstrate their understanding of opportunity cost in their own words and allows them to apply their understanding to a real-world scenario. It also allows for different students to approach the problem differently and does not rely on language or background knowledge. It also allows for the teacher to evaluate student’s understanding of the concept in a more holistic way, and not just based on a single correct answer.

This type of assessment is equitable because it allows all students to demonstrate their understanding of the concept and does not disadvantage students who may struggle with multiple-choice questions or who may have a different background or language abilities.
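
If you want to experiment with this kind of prompt variation programmatically, below is a minimal sketch in Python. It assumes the OpenAI Python client (openai) and an OPENAI_API_KEY environment variable; the model name and the exact prompt wording are illustrative assumptions on my part, and any chat-capable model could be substituted.

from openai import OpenAI

# Minimal sketch: draft assessment items while varying only the value
# framing in the prompt ("a good", "an efficient", "an equitable").
# Assumes OPENAI_API_KEY is set in the environment; the model name below
# is an illustrative choice, not a recommendation.
client = OpenAI()

OUTCOME = "understand opportunity cost"

def draft_assessment(framing: str) -> str:
    # Mirror the prompt pattern used in the examples above.
    prompt = (
        f'Assume that a learning outcome of a course is "{OUTCOME}". '
        f"What is {framing} assessment item for this learning outcome?"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for framing in ("a good", "an efficient", "an equitable"):
    print(f"--- {framing} assessment item ---")
    print(draft_assessment(framing))

Holding the rest of the prompt constant and varying only the framing makes the comparison explicit, and it keeps the instructional designer in the editor’s seat: the drafts still need to be reviewed, augmented, and polished.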

