Interesting perspective here:
For a long time, I’ve been wondering aloud how higher education is going to deal with the sophistication of ChatGPT, Copilot, and other AI tools. Well, wonder no longer, because this is where I live now.
“Dave, you’re making assumptions. Can you prove any of this?” I can, actually, since some submissions that required screenshots also included ChatGPT browser tabs, which helpfully showed the initial text of the prompt. Apparently, it’s not even something students feel they need to hide.
I love GPT, and I’ve been programming with OpenAI’s GPT-3 API for a long time. It’s been breaking my understanding of what is and isn’t possible, and forcing me to think about new ways to solve technical problems. I get why everyone is excited to use it.
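For readers who haven’t tried it, part of what makes this so accessible is how little code a GPT-3 call takes. Here’s a minimal sketch, assuming the pre-v1 `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative, not from the post:

```python
import os
import openai

# Assumes an OpenAI API key is set in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# One call to the GPT-3 Completions endpoint: send a prompt,
# get back generated text.
response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 completion model
    prompt="Explain recursion to a first-year CS student.",
    max_tokens=150,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

A dozen lines like these will happily draft an answer to almost any assignment prompt you paste in, which is exactly the problem being described.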
Here’s more context:
Think of all the time saved! But what if I rephrase this slightly: “an average of 46% of students’ online tests and assignments [are] being built using GitHub Copilot.” If we add ChatGPT to the mix, we could substitute words like “essays” and “reports” for “code.” Is that good or bad? Innovative or regressive? Helping or hurting our students?
A lot of people who don’t have to deal with the practical implications of this in their workaday lives have given me all kinds of helpful ideas: use oral or paper-based testing with no internet; “just” stop using assessments that people can cheat on; or have students write about the AI’s responses, discussing why they are good or bad.
This is critical. How will this impact higher education (and medical education)?