Can AI be used ethically for school work? Here’s what teachers say

Can AI be used ethically for school work? It depends upon who you ask — quite literally.

That’s because less than two years after ChatGPT’s release in November 2022, attitudes toward AI in the classroom still vary widely. High schools tend to view AI as a crutch at best and, at worst, as a tool for cheating. Many universities, meanwhile, leave generative AI use entirely to the discretion of the person teaching the course.

In general, however, the answer boils down to a single golden rule: Students should develop their own answers, correct or not. Any AI assistance — from editing to research to actual writing — may be treated as a violation of the academic honor codes that some schools enforce. If AI is used, the emerging consensus is that it should be cited, like any other source.

‘AI’ usage depends on how you define it

AI doesn’t refer to just one thing. Generative AI “chatbots” include Anthropic’s Claude, Google Gemini, Microsoft Copilot, and OpenAI’s ChatGPT. Then there are supplemental AI services like Khan Academy’s Khanmigo ($4/mo for families and individuals), which acts as a tutor and shies away from handing “the answers” directly to the student. AI art generators play a role as well, though a smaller one. The Los Angeles Unified School District spent over $3 million commissioning its own scheduling and tutoring AI assistant, “Ed,” but then shut it down after major layoffs at the tool’s developer. Some schools are even using AI to develop lesson plans.

A demonstration of how Khanmigo works, by Khan Academy.

All of these tools have their own strengths and weaknesses in areas such as writing code and summarizing complex topics. Some pull “live” answers from the internet, which helps with research. But not all AI tools cite their sources, and any of them can still “hallucinate” answers. All of this plays into whether a given AI tool should be used for education.

Finally, governments and institutions are placing their own restrictions on what students can and cannot do with AI. Uploading personal information, test scores, or texts to an AI service may be outright prohibited by data-privacy laws. As of February 2024, for example, only Microsoft Copilot is officially approved at Ohio State University, according to its resource center.

In high school, AI is often seen as a cheating tool

High schools and universities treat AI differently, with high-school teachers taking a much more hands-on, supervisory role. Even so, high schools tend to have less comprehensive AI policies than major universities. In California, Los Angeles schools banked on “Ed,” but with that tool shut down, AI use there is in limbo. Neither the San Jose nor the San Francisco Unified School District has a formal policy on generative AI, though San Francisco acknowledges that “blocking AI tools will not prevent their use” and “these tools will be prevalent in our students’ future.”

That’s not the case at Mohonasen Central School District in New York, where AI services like ChatGPT have been blocked from school computers because of the temptation to cheat.

“ChatGPT, we view as the most basic of AI for education,” said Bill Vacca, director of instructional technology there. “ChatGPT, Gemini, Copilot: phenomenal for real-world use cases, like if you want to learn how to write a business letter, or make a pitch for your company. But in terms of education, they can be the most dangerous in terms of having students cheat.”

Vacca said his district was “very out front” in encouraging AI, with full-day lessons on AI’s importance and how to use it. But teachers were scared of the implications and wanted nothing to do with the technology, he said. “Our biggest teacher pushback was, how do we prevent students from cheating?”

Over time, Vacca said, teachers have become more familiar with AI and have approached him specifically about using it in the classroom. So have students, though he said they’re not a majority. This year, the Mohonasen Central district will pilot Khanmigo with a small group of teachers, with an emphasis on math, using Khanmigo’s tools to help assess how students have progressed. A key Khanmigo selling point is that the tool doesn’t give “the answers,” but helps the student work toward the correct solution.

So can AI be ethically used in high schools? Vacca said he sees both sides.

“I was interested in when the calculator came out in classrooms,” Vacca said. “I was thinking about that, and it’s the exact same thing. You had people debating where calculators were going, and if we even needed to teach how to do basic multiplication. And we’re seeing that now. But [AI] is only a couple of years old, and the calculator has been out for decades.”

The goal is to bring AI in to help students learn, and that’s what teachers at all levels appear to be wrestling with. “In a third-grade classroom, you’re not bringing a calculator out to do your math,” Vacca said. “You’re going to learn it. But when you get into the higher grades, you can use it to help create something even more extraordinary.”

Colleges have a much more freewheeling approach

Colleges are taking a more hands-off approach to AI. Still, Vacca’s feelings were echoed by John Behrens, director of the office of digital strategy in the College of Arts and Letters at the University of Notre Dame, particularly when it comes to students transitioning from the last years of high school to the first years of college.

“The first reaction from a lot of people, especially people that have been using computers for a long time, people later in their career, their first reaction was oh, this is a cheating tool,” Behrens said. But it’s not that simple, he added.

“If you’re teaching a beginning language class, ChatGPT is going to be better than the students,” Behrens said. “It might not be appropriate for users learning to translate, because then it’s just replacing the students’ learning process. But if you have really advanced students, then they’re probably going to be as good as ChatGPT. And there you want to use ChatGPT to make work and activities and interactions that are expanding what you can do with the students.”

“It’s a complex environment, because most instructors have very little idea how it works and how best to use it right now,” Behrens added.

Most U.S. universities haven’t set any hard-and-fast rules on students’ use of AI in the classroom. Instead, virtually all of the AI policies reviewed by PCWorld leave the issue up to the professor teaching the course, whether that means banning AI entirely or encouraging its use. Of the several university academic policies PCWorld examined — Harvard, Notre Dame, Ohio State, UC Berkeley, and Stanford — only Stanford issued an umbrella statement saying that when there is any doubt about students using AI, the answer is no.

“Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person,” states Stanford’s Generative AI Policy Guidance, dated February 2023. “In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted.”

Some professors have used their academic freedom to seize upon the opportunities AI can offer.

“I received criticism when I first started incorporating AI into my classroom setting, with other professors letting me know that I was doing a disservice to my students by teaching them about how to use AI in higher education,” said Carol-Lynn Perez, a senior lecturer in communication studies at San Jose State. “What they failed to realize early on is that AI is here to stay, and it is only going to get more advanced as the months roll ahead. In just the couple of years AI has been around, our students have been far ahead of the game, and if we don’t make friends with it, we will be left behind.”

There’s also the concern about keeping up with the real world, said Nitesh Chawla, a professor of computer science and engineering at Notre Dame. In class, computer science students can use AI to generate 60 percent of the code, freeing the professor to move on to a higher-level topic the class would otherwise never have had time to cover, since the students would have spent that time writing the code themselves. And an employer would almost certainly demand the use of AI if it could save the company time and money, he said.

“So is a student graduating from [a university] where AI is banned…going to be at a detriment in the workforce when the student joins? We don’t have an answer to that yet,” Chawla said.

How and when to use AI in school work

Relying on AI isn’t always the best choice. In one recent paper primarily authored by researchers at the University of Pennsylvania’s Wharton School, about 1,000 high-school students were divided into groups and asked to take a math test. One group was allowed to take the test assisted by what was essentially ChatGPT, based on GPT-4. The tool provided incorrect answers about half the time, yet students who were polled felt they did no worse using the tool than they would have otherwise.

AI, put simply, can be a crutch, the study concluded. “These results suggest that while access to generative AI can improve performance, it can substantially inhibit learning,” it said.

Perez noted that AI also has several flaws: It hallucinates, can be racist or sexist, will make up citations, and can easily give wrong information, all while lacking depth and human nuance. Even so, students still prefer information in short, manageable chunks.

“The creative process can begin with AI, but what students fail to realize a lot of times is that it cannot interpret problems or offer creative decisions like a human can,” Perez said.

Educators seem to agree that you shouldn’t use AI to create work that you pass off as your own. Instead, if you are going to use AI in class, treat it like any other source of information.

“As an early adopter, I routinely tell my students that AI should be cited like a peer-reviewed journal article,” Perez said. “When something is not written in your own words, it must be cited. Higher education is still figuring out this piece of the puzzle, but my motto is, ‘When in doubt, cite it out.’”

Miriam Scott, a secondary education teacher in Australia, provides guidelines to students on how to incorporate generative AI into their schoolwork.

Style guides are now beginning to accommodate the new reality by including footnote rules for citing AI-generated content, such as in this summary from the State University of New York at Canton.
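For illustration, a Chicago-style footnote citing AI-generated text might read something like: “Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/chat.” The tool, its provider, the date of the session, and the URL stand in for the usual author, publisher, and publication date. The specifics here are illustrative; check whichever style guide your school prefers.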

The problem, though, is the middle ground. Nathaniel Myers, an associate teaching professor at Notre Dame who will be teaching a course titled “Advanced Writing and Rhetoric: Writing in the Age of AI” this fall, said he worries about using AI even as an editing tool.

“The example that I’ve experienced myself is, if I’ve asked it to fix grammar, it will instead affect things that are more than just sort of simply grammatical choices,” Myers said.

Myers referred to work done by Anna Mills, an instructor at Cañada College, who has raised concerns about a writer’s voice becoming “blurry” when AI is applied to improve a draft. AI can replace a specific point of view or style with a much more generic one, Myers said, citing Mills. That can be a problem when, for example, a writing style tied to a particular perspective, such as that of a minority group, becomes generalized by AI.

To Chawla, the time is right to have these discussions.

“Kids know [AI] in middle school, kids know it in high school,” Chawla said. “Kids know it in elementary school. And in school, they are being taught not to use it, at least in my children’s schools. They’re being told not to use it at the school — and at the same time, they’re figuring their own way out of what is and what is not appropriate use of these technologies.

“AI is as mainstream as it gets.”

Further reading: The AI PC revolution: 18 essential terms you need to know
