A review that captures a transition point in academic life
A new scoping review by University of Phoenix scholars Patricia Akojie, Marlene Blake and Louise Underdahl argues that generative artificial intelligence is no longer a marginal tool in higher education, but an increasingly embedded part of academic practice. Published in the International Journal of Digital Society, the study examines how tools such as ChatGPT are being used across doctoral research, academic writing, literature reviews and broader knowledge development. What emerges is not a simple story of enthusiasm or alarm, but a clearer picture of a sector moving from curiosity into a more consequential phase of adoption.
That shift matters because the academic environment is unusually sensitive to questions of method, authorship and intellectual responsibility. Universities are not merely testing whether AI can save time. They are confronting whether these systems can be incorporated without weakening the norms that make scholarly work credible in the first place. The core issue is not whether generative AI is useful, but under what conditions its use remains academically legitimate.
Efficiency gains are real, but they do not settle the question
The review identifies a growing role for generative AI in practical scholarly workflows. Researchers are using these systems to support literature review processes, stimulate brainstorming, and assist with forms of academic writing that require organisation, synthesis and early-stage framing. In that sense, the appeal is easy to understand: AI can help scholars navigate large bodies of material and accelerate tasks that are often cognitively demanding and time-intensive.
For doctoral education in particular, that promise is significant. Doctoral work often combines conceptual complexity with a heavy burden of reading, comparison and refinement, making AI assistance especially attractive. But the review is careful not to confuse acceleration with scholarship itself. Improving efficiency in research is valuable, yet efficiency alone cannot substitute for judgment, interpretation and original analysis, which remain the substance of serious academic work.
Academic integrity is becoming the central line of debate
That is why ethical concerns sit at the centre of the study’s conclusions. The authors highlight transparency, academic integrity and responsible use as the conditions under which AI can be incorporated into academic life without distorting it. This is an important distinction. The problem is not simply that AI can generate text, but that it can blur the boundary between support and authorship unless institutions and researchers define that boundary more clearly.
The review therefore points toward a more mature conversation about AI in universities. Rather than treating generative systems as either a threat to ban or a productivity tool to celebrate, the authors frame them as technologies that require rules, disclosure and intellectual discipline. Scholarly rigour depends not only on what is produced, but on how it is produced, and that principle becomes more important, not less, when AI enters the workflow.
Institutions will need policy, not just enthusiasm
One of the study’s most useful conclusions is that higher education may need to invest in AI literacy as much as in AI access. Doctoral students and researchers will need to understand what these systems can do, where they are limited, and how to use them without surrendering the analytical independence expected of academic inquiry. In this sense, training becomes a safeguard for quality, not merely a technical upgrade.
The institutional implication is equally clear. Universities will need clearer policies and better guidance if they want AI adoption to remain compatible with research standards and teaching goals. The next phase of AI in higher education will be shaped less by novelty than by governance, as institutions decide how to preserve originality, accountability and critical thought while adapting to tools that are rapidly becoming part of everyday academic work.
The real challenge is preserving the meaning of scholarship
What makes this review timely is that it captures a moment when generative AI is beginning to alter academic behaviour before academic culture has fully caught up. The technology is already influencing how scholars search, draft and synthesise, but the norms governing those activities are still being defined. That makes the present moment unusually important. Once AI becomes routine, the difficult questions about authorship and rigour do not disappear; they become structural.
The authors’ broader contribution is to insist that responsible adoption must remain tied to the values that define higher education at its best. If generative AI is to have a lasting place in scholarship, it will have to operate in service of critical inquiry rather than in place of it. That is the standard universities will ultimately be judged by.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency