According to an article published in the International Journal of Artificial Intelligence in Education, the majority of American students have read and written below grade level for the past decade. In response to these rising needs, educators are looking for technology-enhanced strategies to address the problem. Teachers now use AI-powered services such as voice assistants and apps that draft essays, complete sentences, and correct grammar. Educators see the potential of leveraging AI-powered features to improve the assistance offered to students.
The nonprofit Complete College America recently unveiled a 78-page document enumerating more than 170 use cases for generative AI in higher education, including predictive maintenance, data analytics and tutoring. https://t.co/0VImtS29Rp
— Center for Digital Education (CDE) (@centerdigitaled) November 21, 2023
Today, the US retains its status as one of the powerhouses of educational technology, housing companies like Grammarly, Coursera, and OpenAI, the maker of ChatGPT. However, adopting this new form of teaching carries known hazards. Let’s look at how AI could be dangerous in the educational sector.
Security and data privacy
The most immediate risk is the security and data privacy threat that accompanies these tools. As with everything digital, there is a privacy concern when teachers or students use generative AI technologies. Conversations and private data may be recorded and analyzed. Attackers attempt to steal student or teacher data to commit fraud, undermine security systems, or sell the data on the black market.
Plagiarism and overreliance
There’s a temptation to depend on these tools to answer exam questions, write paragraphs, or summarize content. Plagiarism has always been a problem inside and outside the classroom, but the addition of AI-generated content makes cross-checking even harder. Students can readily pass off machine-generated work as their own.
Overworked teachers, for their part, might succumb to simply accepting AI-generated content rather than spending time assessing and improving it for maximum instructional value. Both scenarios undermine students’ ability to learn authentically and connect with their educators.
Possibility of unemployment
Making instruction more efficient could reduce the need for teachers. Class size is no longer as important a determinant of educational quality, thanks to massive open online courses, and AI could easily build an entire course with assessments that hundreds of students use simultaneously. This doesn’t mean experts or professors can be totally replaced, but it may raise the bar for becoming an educator.
Wrongful use of data
There are also likely data misuses that result in prejudice against certain people or groups. Since AI depends on data, it is right to question the fairness of suggestions made by an algorithm. For instance, if the training data contains student performance information skewed toward a certain racial, gender, or socioeconomic group, the AI system may come to favor students from that group.
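To make this concrete, here is a minimal sketch, using entirely hypothetical data, of how a skewed training set can bias a naive recommendation model. The group labels, pass rates, and the `recommend_advanced_track` function are all invented for illustration; real systems are far more complex, but the failure mode is the same: a model that leans on a group-level statistic reproduces whatever skew the historical data contains.

```python
# Hypothetical historical records: (group, outcome).
# Suppose group "A" was graded more generously in the past,
# so its recorded pass rate is inflated relative to group "B".
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),  # group A: 75% recorded pass rate
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),  # group B: 25% recorded pass rate
]

def group_pass_rate(data, group):
    """Fraction of recorded passes for a given group."""
    outcomes = [o for g, o in data if g == group]
    return sum(outcomes) / len(outcomes)

def recommend_advanced_track(group, data, cutoff=0.5):
    """Naive model: recommend students whose group historically passed.
    It never looks at the individual student, only the group statistic,
    so it mechanically reproduces the skew baked into the data."""
    return group_pass_rate(data, group) >= cutoff

print(recommend_advanced_track("A", training_data))  # True
print(recommend_advanced_track("B", training_data))  # False
```

Two equally capable students receive different recommendations purely because of their group label, which is exactly the kind of unfairness the paragraph above warns about.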
Looking at this list, educators have plenty to worry about. However, many of these dangers are neither new nor exclusive to AI: schools once banned calculators and phones when those technologies were first introduced. AI must always be accompanied by humans who are literate in its use. This is where we, the consumers of this newfound technology, come in. Our responsibility, whether as students or educators, is to be aware of the risks and take measures to guard against them.
Photo credit: The feature image is symbolic and has been done by Kraken Images.
Sources: Springer Link / Loeb & Loeb / Daniel Buck (Thomas B. Fordham Institute) / The Knowledge Review / Benjamin Herold (Education Week)