Ethics of AI in Writing: Plagiarism, Bias, and Future of Academic Integrity

In universities all over Kazakhstan, a revolution is under way, driven not by politics or economic transformation but by artificial intelligence.
In writing centres, classrooms, and dorms, students are increasingly employing AI resources like ChatGPT, Grammarly, and QuillBot to help with writing. Whilst some use them for basic editing or idea generation, others depend upon them to generate entire essays. With these resources becoming increasingly available, there is no longer a debate to be had over whether AI is appropriate in an educational environment.
The conversation is shifting to how it should properly be applied.

AI holds promise for enriching academic life in Kazakhstan. It can assist multilingual learners dealing with writing requirements in Kazakh, Russian, and English by providing instant, customized feedback.
Still, for all that it offers, unexamined and uncritical adoption of AI in writing is fraught with alarming ethical issues concerning plagiarism and bias. These are not abstractions. They impinge on the most basic assumptions about what education should promote: originality, critical thinking, and equity.
Plagiarism redefined in an age of AI

The problem of plagiarism has existed in education for a long time, but AI is fundamentally shifting our conception of it. Traditionally, plagiarism has been understood as copying another person's material without giving credit. With AI, the boundaries are more ambiguous.
When a student instructs an AI model to "Write 500 words on what caused World War I" and then submits the output unchanged, is that plagiarism? What if the output is revised slightly? What if AI is used only for structure and transitions?

This is not only an academic integrity problem, but a pedagogical one. Students who use AI to do their intellectual work never learn what writing is supposed to teach them: to think, to synthesise, to analyse. Universities in Kazakhstan, like everywhere else, need to adapt their academic integrity policies.
But these policies need to appreciate nuance. Not everything that involves AI is cheating. It is a matter of transparency (when and how students disclose their use of AI) and of intent.
Most students are aware that copying and pasting AI-generated content is classed as academic misconduct. The problem is not a lack of knowledge, but uncertainty. Many are not sure what their university's particular policy is, especially since institutional policy on AI use is still emerging.
Uncertainty is compounded by inconsistency, as one professor may invite modest use of AI tools for idea generation or language assistance, while another prohibits their use. Since there is no unified institutional policy, each student must navigate their own way through these shades of grey. In response, universities in Kazakhstan should take their cue from international institutions that are now creating transparent, nuanced guidelines and even citation practices for AI-generated content.
However, a penalty-based approach alone is not going to succeed. What is required is a change of academic culture. Students need to learn not just how to avoid plagiarism, but also why originality and authorship matter.
Faculty need to teach in environments in which writing is understood not as a product to submit but as a process of thought. AI may be used to facilitate that process, but it should not substitute for it.

The hidden biases of neutral technology

Another significant ethical issue, and one that tends to be overlooked, is bias.
Many assume that because AI is powered by algorithms, it is neutral. In reality, AI models are trained on massive datasets that are mostly in English and largely drawn from Western sources. Even OpenAI, the maker of ChatGPT, acknowledges this on its website.
What this means is that the models reflect the Western cultural, linguistic, and ideological assumptions embedded in that data. For students, this creates two major interrelated challenges.

First, there is a genuine risk that AI-assisted writing reinforces Anglo-American scholarly practices to the detriment of local systems of knowledge.
Writing produced by such models tends to privilege linear, thesis-driven argument structures, citation practices, and critical styles that may not fit the patterns of local or multilingual scholarly traditions. If Kazakh students use AI tools to support their writing, they may habitually adopt these practices and be deprived of the chance to develop an academic voice that reflects local or regional scholarly contexts.

Equally troubling is how AI can reinforce and exacerbate existing inequalities.
Students from rural areas, or those more comfortable in Kazakh or Russian, may find that AI tools favour English-language content and Western examples. This creates an uneven playing field in which linguistic ability and access to global discourse determine the quality of AI assistance a student receives. Such disparities risk deepening educational inequalities, privileging those already fluent in the dominant discourse of global academia.
Universities need to take these issues seriously by integrating discussions of these biases into their courses, helping students become more critical users of AI technology. Assessments can be built around local interpretations of regional or global issues in a way that counteracts the potential cultural biases introduced by AI.

Moving forward: a call for ethical leadership

Universities in Kazakhstan have the potential to be at the forefront of this issue.
With a unique multilingual and multicultural environment, coupled with investment in education, the country has the ability to create AI policy attuned to local realities. This will involve reforming academic integrity policy to address AI-generated content and investing in widespread training of faculty, staff, and students. Institutions may also hold regular workshops on the ethical use of AI, develop standardized institutional guidelines for the disclosure and citation of AI assistance, and embed discussions of digital ethics and algorithmic bias into the curriculum.
There is no need to ban AI from classrooms; that would be practically impossible and would only hinder students and educators alike. Instead, we must openly address how it is reshaping the way students learn and think. This includes reinforcing the importance of originality and critical inquiry, encouraging educators to treat writing as a thinking process rather than a final product, and fostering assignments that prioritize individual voice and reflection.
AI should be a tool that supports learning, not a substitute for it. Fairness, equity, and intellectual honesty must remain at the heart of education.

The author is Michael Jones, a writing and communications instructor at the School of Social Science and Humanities, Nazarbayev University (Astana).
The post Ethics of AI in Writing: Plagiarism, Bias, and Future of Academic Integrity appeared first on The Astana Times.