Generative AI programs can be powerful research tools that may help with tasks such as visualizing data, writing software, performing computation, paraphrasing sources, formatting papers, brainstorming, and more. However, if used incorrectly or inappropriately, they can have major ramifications for your research. Before using generative AI programs in your research, it can be helpful to first review resources on how these tools work and develop a basic understanding of prompt engineering. It is likewise useful to consider your goals for using generative AI and reflect on any possible pitfalls or ethical issues you may encounter, such as questions of academic integrity and plagiarism, authorship, bias, hallucinations, factual inaccuracies, and misinformation.
When using generative AI, a good rule of thumb is to reserve these tools for tasks where any plausible answer is useful (for example, brainstorming) or where the answer can easily be checked for correctness. As generative AI continues to develop, disciplinary norms around its use will likewise develop and change, so it is important to stay up to date on trends and guidance that may affect your work.
Moving your research forward with generative AI
When and how can I use generative AI in my research?
The Academic Affairs Guidance for Artificial Intelligence Tools establishes some basic guidelines for AI use, including the following standards:
- Follow guidelines provided by deans or any other individuals who oversee your work.
- Disclose AI use appropriately when disclosure is expected.
- Comply with all laws and university policies (including those related to privacy and confidentiality).
- Take responsibility for the accuracy and impact of AI use.
Given the rapid developments in generative AI capabilities, along with differences in disciplinary values and priorities, you must decide whether and how you will use generative AI in your research. Before proceeding, however, it is important to consider your situational context carefully, along with any applicable policies or regulations that may affect your work. Attitudes toward generative AI use vary substantially across fields. In some disciplines, for example, generative AI is likely to be integrated into many software platforms and to become a major part of standard interfaces; in these disciplines, where generative AI use is the norm, there may be fewer expectations that you will cite and disclose your use of generative AI. In other fields, however, you may need to cite and disclose every instance of generative AI use, even when it occurs during brainstorming or as part of your research methodology. When writing and publishing your research, be especially careful: some journals have instituted a blanket ban on any text composed by generative AI, while others permit text authored by generative AI with appropriate citation or disclosure.
Before using generative AI in your research or writing, carefully investigate any policies or norms that influence how you may use this technology. When investigating, try to answer the following questions:
- What (if any) standards or norms exist in my field relating to the use of generative AI for writing, research or publication?
The Association for Computing Machinery, the Associated Press, the Committee on Publication Ethics, the International Committee of Medical Journal Editors, the National Institutes of Health, the National Science Foundation, and UNESCO have all published guidance on the use of generative AI in writing, research, peer review, and/or grant proposals. Before using generative AI, investigate whether there are any published guidelines or requirements that could apply to your research.
- Does the journal I plan to publish with have standards related to the use of generative AI?
For example, Nature and Science have recently published guidance about when and how generative AI may be used. When researching journal policies, consider investigating the policies of every journal where you might publish. If a journal does not have published policies and standards, you might reach out to its editor to inquire.
- Who is my audience, and how might they respond to my use of generative AI?
If you know a journal editor or reviewer is likely to be resistant to the use of generative AI, you may choose not to use it at all. Or you may consider reaching out to the journal for guidance before you proceed. Alternatively, you might include a clear, thorough, and convincing explanation and justification of your generative AI use in your submission. Regardless of how you proceed, understanding your audience will help you make a more thoughtful and informed decision.
What are general best practices for using generative AI in my research?
As generative AI use becomes more common, best practices will continue to change and develop. As you navigate integrating generative AI into your research, ask yourself the following questions:
- How familiar am I with techniques for using generative AI tools effectively, such as prompt engineering? How will I develop skills and competencies in this arena?
- For example, you might check out our Tips for Using Generative AI page, our Prompt Patterns page, and Vanderbilt University Professor of Computer Science Jules White’s free, self-paced course on prompt engineering. A brief, illustrative sketch of what a more structured prompt can look like follows this list.
- Where will these tools allow me to work more productively or effectively? When am I better off avoiding the use of generative AI?
- For example, using generative AI like an internet search engine, with the expectation that it will return a set of verified facts, is unlikely to yield reliable results and may be an inefficient use of these tools.
- What ethical concerns are on my radar and how will I mitigate any potential issues?
- For example, because these tools were trained on human-generated data, they may reproduce human biases in the media and content they produce.
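To make the prompt engineering techniques mentioned above a little more concrete, the sketch below contrasts a vague prompt with a more structured one that specifies a persona, a task, and an output format. It is only an illustration under assumed names: `ask_model` is a hypothetical placeholder for whichever generative AI tool or API you actually use, and the survey questions are invented for the example.

```python
# Illustrative sketch only: ask_model is a hypothetical stand-in for a real
# generative AI tool or API call; the point here is the structure of the prompt.
def ask_model(prompt: str) -> str:
    """Hypothetical helper that would send `prompt` to a generative AI tool."""
    raise NotImplementedError("Replace with a call to your chosen tool.")

# A vague prompt leaves the model to guess at scope, audience, and format.
vague_prompt = "Tell me about survey design."

# A structured prompt (persona + task + constraints + format) gives the model
# far more to work with and produces output that is easier to check.
structured_prompt = (
    "You are an experienced survey methodologist. "                 # persona
    "Review the three draft questions below for leading language "  # task
    "and double-barreled phrasing. "
    "Respond as a numbered list, with one suggested revision "      # format
    "and a one-sentence rationale per question.\n\n"
    "1. Don't you agree campus parking is inadequate?\n"
    "2. How satisfied are you with dining and housing?\n"
    "3. How often do you study in the library?"
)

# response = ask_model(structured_prompt)
```

The persona-plus-task-plus-format pattern shown here is just one of many; the resources linked above cover other patterns in more depth.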
The suggestions on this page can help you make informed decisions about how you will use generative AI. Keep in mind that journals, grant funding agencies, and other stakeholders in your work may have policies related to generative AI use that can influence both your writing and, in some cases, your research practices. These suggestions are based on emerging trends; they do not supersede any applicable university or disciplinary guidelines, so you should always consult and adhere to any specific policies that apply to you.
What are risks associated with using generative AI in my research, and how can I mitigate those risks?
Given the vast capabilities of generative AI tools, there are a variety of potential pitfalls. Below, we share some guidance on how to use generative AI in a way that mitigates the risks most relevant to your research. When using these tools, keep in mind that they are designed to identify patterns in their training data and to use those patterns to generate probable outputs, not verified facts. For additional guidance on how and when to use generative AI, visit our Tips for Using Generative AI page.
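Because this point underlies most of the risks discussed on this page, the deliberately tiny sketch below may help illustrate it. It is not how any real generative AI system works internally; it only shows that a system that extends text with its statistically likeliest continuation produces fluent output without verifying facts.

```python
# A toy illustration (not a real language model): it learns which word tends to
# follow which in a tiny "training text" and then extends a prompt with the
# most probable continuation.
from collections import Counter

training_text = (
    "the study found a significant effect . "
    "the study found no significant effect . "
    "the study found a significant effect ."
)

tokens = training_text.split()

# Count which word tends to follow each word in the training text.
following = {}
for current, nxt in zip(tokens, tokens[1:]):
    following.setdefault(current, Counter())[nxt] += 1

def most_probable_continuation(prompt_word, length=6):
    """Greedily extend the prompt with the most frequent next word."""
    output = [prompt_word]
    for _ in range(length):
        candidates = following.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(most_probable_continuation("the"))
# Prints "the study found a significant effect ." because that wording is the
# most frequent pattern in the training text, even though "no significant
# effect" also appeared; the output is the likeliest pattern, not a checked fact.
```

Real models are vastly more sophisticated, but the underlying lesson holds: a confident, fluent answer is a probable continuation of a pattern, which is why outputs still need to be checked against sources you trust.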
How can generative AI help my research?
When used correctly, generative AI tools can help improve your productivity and enhance the quality of your research. Below, we share some examples of how you might use generative AI in your research. Keep in mind that you should consult all policies and guidelines specific to your field of research before using any of these strategies.
The strategies below represent just a few examples of how you might use generative AI in your research. When brainstorming strategies for how generative AI may aid your research, we recommend asking yourself, “How do the strengths of generative AI intersect with my research needs?”
Additional Resources
We consulted the following works in the development of this webpage and encourage you to review them for additional perspectives on these topics.