"c." is for graduate students, mostly the last section. Too complicated.
If it were me, I would have an assignment built upon AI and AT THE END OF THE COURSE. Let them know you want to work with "their" ideas/language/whathaveyou and then experiment. Don't give in just yet and introduce the dragon at the beginning.
I build a little bit of an AI option into most assignments. The inoculation method, I guess? So far, they think it's not worth it. But that doesn't mean that some aren't using it in dishonest ways...just because they can and it is a choice that has its upsides. I figure if I can harness it as responsible assistive technology now, there will be no reason to cheat.
I dunno. I'm experimenting. Maybe too radical. And not at the cost of undergraduate learning, I hope.
While I do see AI as inevitable, I don't see any of this as giving in or capitulating to an otherwise pure creative world.
The last part of "c." is too complicated. The first part is explanatory of the project. But, I agree with Amy. Wait until it's a problem (or put this on the back page).
For me the first question, for them, is "do you want to read an AI-generated story?" So far, everyone I've asked says no. So then I ask them why. The answers (so far) tend to be a little tortured :) but I think it comes down to the idea that stories are supposed to be communications among humans, based in our shared humanity -- and an amped up autocomplete lacks that. Personally I'd rather read dictionary entries (which at least were written by people) than AI-generated stories, IF I know they're AI generated. *Knowing* they are AI-generated makes me very impatient with them.
I did have a student last semester who experimented by putting a GPT-generated paragraph in his essay to see if anyone would notice, and while no one realized he hadn't written it, almost everyone suggested he cut it because it was dull, repetitive, etc.
I definitely want to know how things go with this approach! :)
Thank you, Amy. You know how much I value your responses. So, I agree that most students, thus far, don't want to read or write an AI-generated story. Eventually, though, we won't be able to tell the difference between human-produced and AI-generated writing. The question reverberates for me: if you can't tell the difference, how much does it matter? Besides, as writing is changing, how we define human is also changing. What's wrong with assistive technology when we've long been cyborg anyway?
Working linearly, and I am just stopping at a moment here, I would suggest separating "This means you may not use AI for other work, such as preparing. . . " from that paragraph. Needs to be punctuated as its own point. Am continuing the reading now
"c." is for graduate students, mostly the last section. Too complicated.
If it were me, I would have an assignment built upon AI and AT THE END OF THE COURSE. Let them know you want to work with "their" ideas/language/whathaveyou and then experiment. Don't give in just yet and introduce the dragon at the beginning.
Thank you, about "c". Will rethink this!
Your "c." needs to become "a." Still reading. . . haha