Research Article
Open Access Peer-reviewed

Acceptability of AI Tools in the Conduct of Research

Sutapa Garai , Dr. Sarita Anand, Shubha Sarkar
American Journal of Educational Research. 2026, 14(3), 90-95. DOI: 10.12691/education-14-3-2
Received January 21, 2026; Revised February 23, 2026; Accepted March 02, 2026

Abstract

The rapid growth of Artificial Intelligence (AI) has influenced academic research practices, transforming the ways in which research activities are conducted. The present study examined research scholars’ perceptions of AI tools to understand their effective and ethical integration into academic research. A mixed-method research design was employed, and data were collected from research scholars across different universities in West Bengal using a self-constructed questionnaire administered online. Quantitative data were analyzed descriptively, while qualitative responses provided deeper insights into perceived benefits and concerns related to AI use. The findings indicated that research scholars demonstrated a generally positive perception toward the acceptance and application of AI tools in research, acknowledging their potential to enhance research quality, efficiency, and productivity. However, concerns related to ethics, plagiarism, over-reliance on technology, accuracy of AI-generated content, and data privacy were also reported. The study concluded that although research scholars exhibited favorable attitudes toward AI tools, institutional support, structured training, and clear ethical guidelines were essential to ensure the responsible and effective integration of AI in academic research.

1. Introduction

The rapid advancement of artificial intelligence (AI), machine learning (ML), and automation technologies is increasingly transforming the landscape of scientific research across disciplines. Alongside the benefits these technologies bring, the automation of science raises critical epistemic, ethical, and practical concerns. Science is widely understood to pursue both epistemic goals (advancing human knowledge and understanding) and practical goals (enabling prediction, control, and the application of knowledge) [1, 2]. The increasing reliance on AI-driven systems, particularly those that operate as opaque “black boxes,” challenges these goals by potentially distancing researchers from the processes through which scientific knowledge is generated. Scholars have cautioned that excessive automation may reduce scientific understanding, increase susceptibility to error, limit critical scrutiny of results, and constrain human creativity in discovery-oriented research [2, 3]. These concerns have intensified discussions around explainable artificial intelligence (XAI), accountability, and trust, emphasizing that researchers must be able to understand, evaluate, and justify AI-generated outcomes [4, 5].

Importantly, the implications of AI and automation vary across research contexts. In laboratory automation, AI-enabled systems typically execute procedures explicitly designed and supervised by human researchers, allowing actions to be observed, recorded, and verified with minimal loss of epistemic control [6]. In contrast, when AI/ML systems are used for prediction, optimization, or decision support, such as in conservation decision-making, computational design, or autonomous experimentation, the reasoning behind outputs may be less transparent, making explainability and human oversight central to scientific trust and validity [7, 8]. As a result, scholars increasingly emphasize the need to maintain a meaningful “human-in-the-loop,” ensuring that researchers retain interpretive authority, ethical responsibility, and creative agency within automated research processes [7, 9].

Against this backdrop, understanding researchers’ own perceptions of automating scientific research is crucial. Researchers are not passive recipients of automation technologies but active agents who shape how these systems are adopted, evaluated, and governed in practice. Empirical insights into how researchers across diverse scientific domains perceive the benefits, risks, trade-offs, and responsibilities associated with AI-driven automation are therefore essential for informing responsible innovation, institutional policy, and the sustainable integration of automation into scientific knowledge production [8].

2. Objectives of the Study

The objectives of this study are:

2.1. To study the perception of research scholars about Artificial Intelligence (AI) in academics.

2.2. To identify the main concerns of researchers about integrating AI into their research work.

3. Review of the Related Literature

Andersen et al. [10] conducted a study on Generative Artificial Intelligence (GenAI) in the research process: researchers’ practices and perceptions. The study investigates the extent, purposes, and research integrity perceptions of GenAI use among researchers across Danish universities. Employing a nationwide survey design, data were collected from 2,534 researchers, including PhD students, between January and February 2024, examining 32 GenAI use cases across five stages of the research process: idea generation, research design, data collection, data analysis, and writing/reporting. Using descriptive statistics, exploratory factor analysis, cluster analysis, and qualitative thematic analysis, the study found that GenAI use was most positively perceived for language editing, transcription, and data analysis, whereas applications in experiment design, peer review, image manipulation, and synthetic data generation raised greater research integrity concerns. Three dominant perception clusters emerged: GenAI as a workhorse, GenAI as a language assistant only, and GenAI as a research accelerator. The findings further revealed higher GenAI adoption among junior researchers, clear disciplinary variations favoring technical and quantitative fields, and no significant gender differences, highlighting the need for flexible, discipline-sensitive guidelines to ensure responsible GenAI integration in academic research.

Cukurova, Luckin, and Kent [11], in their study entitled “Impact of an Artificial Intelligence Research Frame on the Perceived Credibility of Educational Research Evidence,” examined how framing educational research evidence within different disciplinary contexts influences its perceived credibility among the general public and educators. The study was experimental: a total of 605 participants from the United Kingdom and the United States were randomly assigned to one of three conditions in which identical educational research findings were framed as originating from artificial intelligence, neuroscience, or educational psychology. Using Likert-scale measures and multivariate statistical analyses while controlling for participants’ familiarity with the subject, the study found that educational research evidence framed within AI was perceived as significantly less credible than when framed within neuroscience or educational psychology, a phenomenon termed the “in-credible AI effect.” Participants also viewed AI as less helpful for understanding how children learn, less adherent to scientific methods, and less prestigious compared to the other two disciplines, and this pattern persisted even among educators. The findings suggest that negative public perceptions and media-driven misconceptions surrounding AI may undermine trust in AI-based educational research, highlighting the need for stronger engagement between the AI in Education community and key stakeholders to improve public understanding, scientific communication, and acceptance of AI-supported educational research.

Douglas [8] studied “Researchers’ Perceptions of Automating Scientific Research.” This qualitative study explored how researchers across multiple scientific domains perceive the increasing use of artificial intelligence (AI), machine learning (ML), and automation in scientific research, with a focus on benefits, risks, and implications for scientific practice. Using a purposive and snowball sampling strategy, the study conducted 18 semi-structured interviews with researchers working in automation engineering, computational design, conservation decision-making, materials science, and synthetic biology within a national research organization. Data were collected between November 2022 and May 2023 and analyzed through reflexive thematic analysis using NVivo software. The findings indicate that researchers largely value automation for its practical benefits, including increased efficiency, scalability, reproducibility, safety, and the reduction of repetitive labor; however, they also express caution regarding over-automation, particularly in data analysis and decision-making tasks. Perceptions of explainability varied by context: explainability was considered less critical when automation followed researcher-defined laboratory protocols, but essential when AI/ML systems generated predictions or recommendations that directly influenced scientific conclusions. Across all domains, participants emphasized the importance of maintaining a “human-in-the-loop” to preserve scientific understanding, creativity, accountability, and trust in research outcomes. Overall, the study concludes that while automation and AI can significantly enhance scientific practice, their responsible integration requires context-sensitive use, human oversight, and careful consideration of epistemic and ethical implications to ensure the continued growth and trustworthiness of scientific knowledge.

Jan et al. [12] conducted a study on “The Digital Transformation of Teacher Education: A Qualitative Analysis of AI Integration across Regional Institutes of Education in India.” The study examined the integration of AI into teacher education, primarily through qualitative and mixed-method approaches: in-depth interviews with faculty members, focus group discussions with student-teachers, document analysis of curriculum reforms, and case studies from various Regional Institutes of Education (RIEs). Quantitative methods, such as surveys assessing digital readiness, infrastructure evaluations, and AI literacy levels, were also employed. The findings indicate that AI tools such as content tutoring systems, automated grading, and adaptive learning platforms have significant potential to elevate personalized instruction, real-time feedback, and data-driven decision-making in teacher training. However, several challenges were identified, including infrastructural limitations, faculty resistance due to low digital literacy, ethical concerns related to data privacy, and disparities in access between urban and rural regions. To resolve these issues, the study emphasizes the importance of developing policy frameworks, investing in technological infrastructure, and implementing faculty capacity-building initiatives to ensure effective and equitable AI integration in teacher education programs.

Pereira et al. [13] conducted a study on “Generative artificial intelligence and academic writing: An analysis of the perceptions of researchers in training.” This exploratory and descriptive qualitative study examined the relationship between generative artificial intelligence (GAI), particularly ChatGPT, and academic writing by analyzing the perceptions of researchers in training across multiple higher education institutions in Brazil, Ecuador, Portugal, and Spain. Using a narrative literature review with a systematic search, followed by asynchronous online interviews administered through a questionnaire, data were collected from 147 professors, researchers, and postgraduate students. The findings indicate that generative AI is predominantly perceived as a complementary tool that supports academic writing by assisting with grammar correction, translation, text structuring, organization of ideas, and preliminary literature exploration, thereby enhancing efficiency and accessibility, especially for non-native English speakers. However, participants strongly emphasized that GAI cannot replace human creativity, critical thinking, or the researcher’s intellectual responsibility. Significant concerns emerged regarding ethical and social implications, including authorship ambiguity, technology-facilitated plagiarism, academic dishonesty, loss of originality, lack of critical depth, inaccuracies in referencing, and limitations in contextual understanding. While GAI was also viewed as having inclusive potential by reducing linguistic and accessibility barriers, the study concludes that its responsible use in academic writing must be guided by clear ethical standards, transparency, academic integrity, and institutional regulations, reinforcing AI as a human-augmentation tool rather than a substitute for scholarly thinking.

Verboom et al. [14] carried out a study on “Perceptions of Artificial Intelligence in Academic Teaching and Research: A Qualitative Study of Ethical and Decent Work Dimensions.” This qualitative study aimed to examine how artificial intelligence (AI) is currently used and perceived in higher education teaching and research, to identify its opportunities and risks, and to analyze its implications for ethical principles and decent work dimensions through a socio-technical systems (STS) perspective. Semi-structured interviews were conducted with 28 participants from Portugal, the Netherlands, and the United States, including AI experts, AI experts who are professors, and professors without AI expertise, recruited using a snowball sampling technique. Data were analyzed through hybrid coding, sentiment analysis, and cluster analysis grounded in ethical principles (transparency, accountability, fairness, and privacy) and decent work dimensions. The findings revealed diverse perceptions of AI, ranging from optimism about efficiency, automation, and pedagogical enhancement to concerns regarding data privacy, academic integrity, workload intensification, and job security. Seven perception-based clusters emerged, reflecting varying levels of acceptance, readiness, and critical awareness. Overall, the study concludes that while AI has significant potential to enhance teaching and research productivity, its sustainable and responsible integration in higher education requires strong ethical guidelines, institutional support, continuous professional development, and alignment between technological innovation and human-centered values.

4. Research Gap

From the above literature review, it can be seen that despite the growing body of literature on artificial intelligence in research, several critical gaps remain, particularly concerning research scholars’ perceptions. First, existing studies predominantly focus on faculty members, AI experts, or institutional perspectives, with comparatively limited empirical attention to research scholars, especially doctoral and early-career researchers, who are among the most frequent users of AI tools [10, 13]. Second, much of the current research emphasizes technical capabilities, ethical principles, or policy frameworks, while offering insufficient insight into how research scholars subjectively experience AI’s impact on research skills, critical thinking, authorship, and scholarly identity [8]. Third, there is a lack of context-sensitive studies examining how perceptions of AI vary across disciplines, stages of research training, and socio-academic environments, particularly in developing and Global South contexts [15]. Finally, although concepts such as explainable AI and human–AI collaboration are widely discussed, there is limited empirical evidence on how research scholars negotiate responsibility, trust, and accountability when integrating AI into different phases of the research process. Addressing these gaps, the present study seeks to systematically explore research scholars’ perceptions of AI in research, with a focus on its perceived benefits, ethical concerns, and implications for research integrity and scholarly practice.

5. Methodology of the Study

Method:

The descriptive survey method was used in this study to examine the acceptability of AI tools in the conduct of research among research scholars. In order to strengthen the findings and verify the quantitative results, a triangulation approach was incorporated by adding a qualitative component to the study.

Population:

The population for the study consisted of research scholars from different streams and universities in West Bengal.

Sample and Sampling Technique:

A total sample of 57 research scholars was selected using the simple random sampling technique.

Research Tool Used:

A self-constructed questionnaire titled “AI Perception Questionnaire (AIPQ)”, consisting of three items, was used to collect quantitative data related to the acceptability of AI tools in research.

To achieve triangulation and gain deeper insights into the phenomenon, open-ended questions were also included to collect qualitative data regarding the positive aspects and downsides of using AI tools in research.

Data Collection:

The data were collected from the research scholars through Google Forms. The quantitative responses were used to measure the level of acceptability of AI tools, while the qualitative responses helped in understanding researchers’ experiences, concerns, and expectations. This triangulation enabled the researcher to validate the survey findings and develop meaningful recommendations for the responsible use of AI tools in research.
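The descriptive analysis of the quantitative responses amounts to tallying each Likert category and expressing it as a percentage of respondents. A minimal Python sketch of that computation follows; the category counts below are reconstructed from the percentages reported for Item 1 (n = 57) and are assumptions for illustration, not the study’s raw data:

```python
from collections import Counter

# Hypothetical Likert responses reconstructed from the reported Item 1
# percentages (n = 57); these counts are an assumption, not the raw data.
responses = (
    ["Strongly Agree"] * 6 + ["Agree"] * 30 + ["Neutral"] * 9
    + ["Disagree"] * 5 + ["Strongly Disagree"] * 7
)

counts = Counter(responses)
total = len(responses)  # 57 respondents

# Percentage of respondents in each category, rounded to one decimal place
percentages = {cat: round(100 * n / total, 1) for cat, n in counts.items()}
print(percentages)
```

With these assumed counts, the dictionary reproduces the figures reported in Item 1 below (e.g., 52.6% for “Agree” and 10.5% for “Strongly Agree”).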

6. Results and Findings

Item-wise Analysis

Item 1

The pie chart presents responses from 57 research scholars to the question: “Do you believe AI can enhance the quality of research?” The results indicate an overall positive perception of AI among respondents. A majority of participants (52.6%) agreed and an additional 10.5% strongly agreed that AI can enhance research quality, together accounting for 63.1% of the total responses. This strong affirmative trend suggests that most research scholars recognize AI’s potential to improve research efficiency, accuracy, and overall quality through tasks such as literature review, data analysis, and academic writing support.

However, a notable proportion of respondents expressed reservations. About 15.8% remained neutral, indicating uncertainty or a cautious stance, possibly due to limited experience with AI tools or concerns regarding ethical and methodological implications. Meanwhile, 12.3% strongly disagreed and 8.8% disagreed, together comprising 21.1% of the sample. This dissenting group reflects apprehensions about overreliance on AI, risks to originality, data bias, or threats to research integrity.

Overall, the findings suggest that while research scholars largely view AI as a valuable enhancer of research quality, a significant minority remains skeptical or undecided. This highlights the need for capacity-building, ethical guidelines, and institutional support to promote informed and responsible use of AI in research, thereby addressing concerns and fostering wider acceptance.

Item 2

The responses of 57 research scholars to the question: “Would you consider using AI tools in your future research work?” indicate a generally positive outlook toward the adoption of artificial intelligence in research. A majority of the respondents (56.1%) expressed a clear willingness to use AI tools in their future research activities, reflecting growing acceptance of AI as a useful support for tasks such as literature review, data analysis, and research writing. At the same time, 35.1% of the respondents selected “Maybe,” suggesting a cautious or conditional openness toward AI use, possibly influenced by concerns related to ethics, academic integrity, lack of training, or uncertainty about institutional guidelines. Only a small proportion of respondents (8.8%) reported that they would not consider using AI tools, indicating limited resistance to AI adoption. Overall, the findings suggest that while most research scholars are inclined toward integrating AI into future research, targeted training, awareness, and clear ethical frameworks are necessary to address hesitation and promote responsible use of AI in academic research.

Item 3

The bar chart illustrates the concerns expressed by 57 research scholars regarding the use of artificial intelligence in research. The results show that ethical concerns constitute the most significant issue, reported by 66.7 percent of the respondents, indicating apprehensions related to responsible use, academic integrity, and ethical accountability in AI-assisted research. Plagiarism is another major concern, cited by 56.1 percent of the respondents, reflecting fears of unintentional academic misconduct and unclear authorship arising from AI-generated content. Additionally, over-reliance on technology was reported by 50.9 percent of the respondents, suggesting concerns that excessive dependence on AI may weaken critical thinking, originality, and independent research skills.

Concerns regarding the accuracy of AI-generated content were expressed by 47.4 percent of the respondents, highlighting doubts about the reliability and factual correctness of AI outputs. Data privacy issues were identified by 43.9 percent of the respondents, pointing to fears related to confidentiality, data security, and potential misuse of sensitive research information. A very small proportion of respondents, 1.8 percent each, mentioned other concerns such as AI hampering the thought process, indicating that these issues are comparatively less prominent.

Overall, the findings suggest that while research scholars are increasingly open to using AI in research, their primary concerns revolve around ethical integrity, plagiarism, over-dependence on technology, accuracy of outputs, and data privacy. These concerns emphasize the need for clear ethical guidelines, institutional regulations, and capacity-building initiatives to ensure the responsible and effective use of artificial intelligence in academic research.
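Because Item 3 was a multi-select question, each percentage is computed against the number of respondents rather than the number of selections, which is why the figures above sum to more than 100 percent. The sketch below illustrates this; the raw counts are reconstructed from the reported percentages (n = 57) and are assumptions, not the study’s data:

```python
# Multi-select tally: each of the 57 respondents could pick several concerns.
n_respondents = 57

# Counts reconstructed from the reported percentages (assumed, not raw data)
concern_counts = {
    "Ethical concerns": 38,
    "Plagiarism": 32,
    "Over-reliance on technology": 29,
    "Accuracy of AI-generated content": 27,
    "Data privacy": 25,
    "Other (AI hampering the thought process)": 1,
}

# Percentages are taken over respondents, not over total selections,
# so for a multi-select item they can legitimately exceed 100% in sum.
percentages = {c: round(100 * n / n_respondents, 1)
               for c, n in concern_counts.items()}
print(percentages)
```

With these assumed counts, the computed values match the bar-chart figures (66.7% for ethical concerns, 56.1% for plagiarism, and so on).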

7. Conclusion

The present study examined research scholars’ perceptions of artificial intelligence in the research process, focusing on its perceived usefulness, future adoption, and associated concerns. The findings reveal that research scholars generally hold a positive and forward-looking attitude toward the use of AI in research, recognizing its potential to enhance research quality, efficiency, and productivity. A majority of respondents believe that AI can improve the quality of research and express willingness to use AI tools in their future research work, indicating growing acceptance of AI as a supportive research aid.

At the same time, the study highlights significant concerns related to the use of AI in research. Ethical issues, plagiarism, over-reliance on technology, accuracy of AI-generated content, and data privacy emerged as the most prominent apprehensions among research scholars. These concerns suggest that while AI is valued for its functional benefits, scholars remain cautious about its implications for academic integrity, originality, and critical thinking. The presence of a considerable proportion of neutral or uncertain responses further indicates the need for clarity, guidance, and capacity-building in the use of AI for research purposes.

Overall, the study concludes that artificial intelligence has substantial potential to support and enhance academic research when used responsibly and ethically. To maximize its benefits and mitigate associated risks, it is essential for higher education institutions and research bodies to develop clear ethical guidelines, provide adequate training, and promote a balanced, human-centered approach to AI integration. By addressing scholars’ concerns and fostering informed usage, AI can be effectively positioned as a complementary tool that strengthens research practices without compromising scholarly values and research integrity.

8. Recommendations and Educational Implications

Based on the findings of the study, it is recommended that higher education institutions develop clear and standardized policies governing the use of AI tools in academic research. These policies should explicitly define acceptable and unacceptable practices, address issues related to authorship, plagiarism, transparency, and accountability, and provide ethical guidance for AI-assisted research. Additionally, institutions should introduce structured AI literacy and ethics training for research scholars to enhance informed and responsible use. Emphasis should be placed on maintaining human oversight and critical thinking to prevent over-reliance on AI tools. Ensuring data privacy, promoting transparency in AI use, and strengthening faculty mentorship are also essential. Continuous evaluation and further research are recommended to monitor the long-term impact of AI on research quality, integrity, and scholarly development.

The findings of the present study have important implications for higher education and research training. First, the generally positive perception of artificial intelligence among research scholars indicates the need to integrate AI literacy into higher education curricula, particularly at the postgraduate and doctoral levels. Universities should introduce structured modules, workshops, and short-term courses that familiarize research scholars with AI tools used in literature review, data analysis, academic writing, and research management, while clearly delineating their appropriate and ethical use.

Second, the concerns expressed by research scholars regarding ethics, plagiarism, data privacy, and over-reliance on technology highlight the necessity for institutional policies and ethical guidelines on AI use in research. Educational institutions and research bodies should develop clear frameworks that define acceptable and unacceptable uses of AI, promote transparency in AI-assisted research, and ensure accountability. Incorporating discussions on academic integrity, responsible AI use, and research ethics into research methodology courses can help scholars make informed and responsible decisions.

Third, the findings underscore the importance of promoting critical thinking and human oversight in AI-supported research. Educators and supervisors should emphasize that AI is a supportive tool rather than a substitute for human judgment, creativity, and scholarly reasoning. Training programs should encourage research scholars to critically evaluate AI-generated outputs, verify sources, and reflect on the methodological and epistemic implications of AI use.

Finally, the study suggests the need for capacity-building and faculty mentorship to support effective AI adoption. Supervisors and teacher educators should be equipped to guide research scholars in the responsible use of AI tools, fostering a balanced, human-centered approach to research innovation. By addressing both the opportunities and challenges associated with AI, higher education institutions can ensure that AI contributes positively to research quality, integrity, and the overall development of competent and ethically responsible researchers.

9. Future Recommendations

Future efforts should focus on developing standardized institutional policies and ethical frameworks to guide the responsible use of artificial intelligence in academic research. Mandatory training programs on AI literacy and research ethics should be introduced for research scholars to ensure informed and transparent use of AI tools. Greater emphasis should be placed on maintaining human oversight and accountability in AI-assisted research to preserve originality, critical thinking, and research integrity. Institutions should also invest in secure and privacy-preserving AI tools and provide continuous hands-on training to keep scholars updated with emerging technologies. Further interdisciplinary and longitudinal studies are recommended to examine discipline-specific practices and the long-term impact of AI on research skills, scholarly identity, and academic integrity, thereby supporting sustainable and ethical integration of AI in research practices.

References

[1] Resnik, D. B. (1998). The ethics of science: An introduction. Routledge.

[2] Humphreys, P. (2020). Why automated science should be cautiously welcomed. In M. Bertolaso & F. Sterpetti (Eds.), A critical reflection on automated science (pp. 11–26). Springer.

[3] Krenn, M., Pollice, R., Guo, S. Y., Aldeghi, M., Cervera-Lierta, A., Friederich, P., … Aspuru-Guzik, A. (2022). On scientific understanding with artificial intelligence. Nature Reviews Physics, 4, 761–769.

[4] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.

[5] Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT ’19) (pp. 279–288). ACM.

[6] Holland, I., & Davies, J. A. (2020). Automation in the life science research laboratory. Frontiers in Bioengineering and Biotechnology, 8, 571777.

[7] Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: Exploring the use of artificial intelligence in the research process. AI & Society, 37(4), 1439–1457.

[8] Douglas, D. M. (2025). Researchers’ perceptions of automating scientific research. AI & Society, 40, 4131–4144.

[9] Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767.

[10] Andersen, J. P., Degn, L., Fishberg, R., Graversen, E. K., Horbach, S. P. J. M., Kalpazidou Schmidt, E., Schneider, J. W., & Sørensen, M. P. (2025). Generative artificial intelligence (GenAI) in the research process – A survey of researchers’ practices and perceptions. Technology in Society, 81, 102813. https://www.sciencedirect.com/science/article/pii/S0160791X2500003X

[11] Cukurova, M., Luckin, R., & Kent, C. (2020). Impact of an artificial intelligence research frame on the perceived credibility of educational research evidence. International Journal of Artificial Intelligence in Education, 30(2), 205–235.

[12] Jan, R., Rehman, A., & Lone, M. A. (2025). The digital transformation of teacher education: A qualitative analysis of AI integration across regional institutes of education in India. International Journal of Emerging Knowledge Studies, 4(4). https://www.researchgate.net/publication/398403215_The_Digital_Transformation_of_Teacher_Education_A_Qualitative_Analysis_of_AI_Integration_Across_Regional_Institutes_of_Education_in_India#fullTextFileContent

[13] Pereira, R., Reis, I. W., Ulbricht, V., & dos Santos, N. (2024). Generative artificial intelligence and academic writing: An analysis of the perceptions of researchers in training. Management Research: Journal of the Iberoamerican Academy of Management, 22(4), 429–450. https://www.researchgate.net/publication/383059739_Generative_artificial_intelligence_and_academic_writing_an_analysis_of_the_perceptions_of_researchers_in_training#fullTextFileContent

[14] Verboom, A. D. P. R., Pais, L., Zijlstra, F. R. H., Oswald, F. L., & dos Santos, N. R. (2025). Perceptions of artificial intelligence in academic teaching and research: A qualitative study from AI experts and professors’ perspectives. International Journal of Educational Technology in Higher Education, 22, Article 46.

[15] OECD. (2023). Artificial intelligence in science: Challenges, opportunities and the future of research. OECD Publishing.
 

Published with license by Science and Education Publishing, Copyright © 2026 Sutapa Garai, Dr. Sarita Anand and Shubha Sarkar

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

[1]  Resnik, D. B. (1998). The ethics of science: An introduction. Routledge.
[2]  Humphreys, P. (2020). Why automated science should be cautiously welcomed. In M. Bertolaso & F. Sterpetti (Eds.), A critical reflection on automated science (pp. 11–26). Springer.
[3]  Krenn, M., Pollice, R., Guo, S. Y., Aldeghi, M., Cervera-Lierta, A., Friederich, P., … Aspuru-Guzik, A. (2022). On scientific understanding with artificial intelligence. Nature Reviews Physics, 4, 761–769.
[4]  Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
[5]  Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19) (pp. 279–288). ACM.
[6]  Holland, I., & Davies, J. A. (2020). Automation in the life science research laboratory. Frontiers in Bioengineering and Biotechnology, 8, 571777.
[7]  Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: Exploring the use of artificial intelligence in the research process. AI & Society, 37(4), 1439–1457.
[8]  Douglas, D. M. (2025). Researchers’ perceptions of automating scientific research. AI & Society, 40, 4131–4144.
[9]  Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767.
[10]  Andersen, J. P., Degn, L., Fishberg, R., Graversen, E. K., Horbach, S. P. J. M., Kalpazidou Schmidt, E., Schneider, J. W., & Sørensen, M. P. (2025). Generative artificial intelligence (GenAI) in the research process – A survey of researchers’ practices and perceptions. Technology in Society, 81, 102813. https://www.sciencedirect.com/science/article/pii/S0160791X2500003X
[11]  Cukurova, M., Luckin, R., & Kent, C. (2020). Impact of an artificial intelligence research frame on the perceived credibility of educational research evidence. International Journal of Artificial Intelligence in Education, 30(2), 205–235.
[12]  Jan, R., Rehman, A., & Lone, M. A. (2025). The digital transformation of teacher education: A qualitative analysis of AI integration across regional institutes of education in India. International Journal of Emerging Knowledge Studies, 4(4). https://www.researchgate.net/publication/398403215_The_Digital_Transformation_of_Teacher_Education_A_Qualitative_Analysis_of_AI_Integration_Across_Regional_Institutes_of_Education_in_India#fullTextFileContent
[13]  Pereira, R., Reis, I. W., Ulbricht, V., & dos Santos, N. (2024). Generative artificial intelligence and academic writing: An analysis of the perceptions of researchers in training. Management Research: Journal of the Iberoamerican Academy of Management, 22(4), 429–450. https://www.researchgate.net/publication/383059739_Generative_artificial_intelligence_and_academic_writing_an_analysis_of_the_perceptions_of_researchers_in_training#fullTextFileContent
[14]  Verboom, A. D. P. R., Pais, L., Zijlstra, F. R. H., Oswald, F. L., & dos Santos, N. R. (2025). Perceptions of artificial intelligence in academic teaching and research: A qualitative study from AI experts and professors’ perspectives. International Journal of Educational Technology in Higher Education, 22, Article 46.
[15]  OECD. (2023). Artificial intelligence in science: Challenges, opportunities and the future of research. OECD Publishing.