Artificial Intelligence (AI) describes computer systems that mimic aspects of human reasoning, decision-making, and creativity. Researchers wanting to use AI can choose from many different types of systems and tools. This page is intended as a hub for AI-related resources at UNL. Because this area of research is growing rapidly, the page will be updated as more resources become available.
- The University of Nebraska (NU) system policy and resources can be found here. It includes an overview of the types of generative AI (AI that produces original work), system tools, and guidance for using AI.
- Information Technology Services has developed a training, accessed via Bridge, that is required before using NU AI resources.
- The Office of Research and Innovation (R&I) has put together some guiding principles and best practices for AI in research and creative activities at UNL. The best practices are listed below - please see the full document for context and additional details.
Best Practices for AI in Research and Creative Activities at UNL
- Accuracy and integrity of outputs from AI tools used in research and creative activities are the responsibilities of UNL employees and students. AI-generated content often paraphrases information from other sources and may generate inaccurate information. UNL employees and students must accept responsibility and accountability for the content produced by AI technology. AI tools cannot be responsible or accountable for the content that is generated. To avoid concerns regarding bias, inaccuracy, plagiarism, and potential misappropriation of intellectual property in AI-generated content, UNL employees and students must:
- Validate results and decisions produced by AI systems against their own research or other evidence.
- Actively seek and mitigate bias in the data used to train AI systems by using diverse datasets and techniques such as debiasing algorithms to address potential unfairness.
- Regularly audit AI outputs for bias and discriminatory outcomes by implementing monitoring systems and feedback loops to identify and correct biased results.
- Ensure inclusivity in research and creative design involving AI by engaging diverse stakeholders and incorporating user perspectives to avoid perpetuating or amplifying existing biases.
- Communicate the limitations and capabilities of AI tools clearly and honestly. Don't overstate the accuracy or intelligence of AI and be transparent about its potential for errors and misinterpretations.
- Data placed into externally available AI tools may become publicly accessible. UNL employees and students are expected to follow federal guidelines for making data generated with federal funding available to the public; however, they should be cautious about placing preliminary data or sensitive information into publicly accessible AI tools prior to publication or prior to seeking and filing patents. UNL employees and students are expected to:
- Implement robust data security measures to protect data with Personally Identifiable Information (PII) used in AI research and development. Utilize appropriate access controls, encryption, and anonymization techniques to minimize privacy risks.
- Be mindful of data ownership and consent when using data with PII for AI projects. Obtain informed consent from individuals and comply with relevant privacy regulations.
- Consider privacy implications when deploying AI systems in research or creative applications. Assess potential risks of data misuse or unintended surveillance and implement safeguards to protect individual privacy.
- Follow the University of Nebraska’s Policy for Responsible Use of University Computers and Information Systems, Policy on Research Data and Security, and Policy on Risk Classification and Minimum Security Standards.
- Follow applicable privacy laws and regulations (e.g., HIPAA, FERPA, EU GDPR, PIPL, and CCPA) and best practices when using data that includes PII.
- Before initiating future agreements with vendors, subcontractors, or collaborators, inquiries should be made regarding any potential use of AI. Additional terms and conditions may be needed in current and future agreements to ensure responsible and ethical use of AI that aligns with these guiding principles and best practices.
- Federal funding agencies prohibit the use of AI tools during the peer-review process. The National Institutes of Health (NIH) prohibits “scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.” Using AI in the peer review process is a breach of confidentiality because these tools “have no guarantee of where data are being sent, saved, viewed or used in the future.” Using AI tools to help draft a critique or to assist with improving the grammar and syntax of a critique draft are both considered breaches of confidentiality. The National Science Foundation (NSF) has similar guidelines for the use of AI in proposals and the NSF merit review process. The USDA National Institute of Food and Agriculture (NIFA) Peer Review Process for Competitive Grant Applications also prohibits the use of generative AI tools during the proposal evaluation process.
- Research personnel are accountable for any plagiarized, falsified, or fabricated material that was generated by AI, regardless of funding source. The UNL Research Misconduct Policy and federal funding agencies specify the definitions and processes involved if material has been plagiarized, falsified, or fabricated.
- Research personnel must keep up with evolving AI technologies and best practices for ethical and responsible use of AI in research and creative activities by regularly reviewing university policies and relevant guidelines from federal funding agencies.
- Be mindful of the environmental impact of AI training and deployment. Choose energy-efficient algorithms and infrastructure and explore sustainable computing practices.
- Guidelines for content co-authored with an AI tool:
- The published content must be attributed to UNL and the UNL employee(s) and/or student(s).
- The role of AI in generating and/or revising content must be disclosed clearly and prominently, in a way a typical reader can easily understand.
- In general, the use of AI tools to edit a few sentences does not require disclosure; however, the use of AI tools to improve the content of an entire manuscript does require disclosure, and the UNL employee(s) and/or student(s) must ensure that the AI-generated or AI-revised content is accurate and has not been plagiarized.
- Use of AI tools to insert or impart knowledge or creative activities must be disclosed.
- Topics of the content do not violate the AI tool company’s content policy or terms of use (e.g., are not related to adult content, spam, hateful content, content that incites violence, or other uses that may cause social harm).
- The following language may be used for this purpose: “The author(s) generated this text in part with [insert name of AI tool and company or reference for AI tool], a large-scale language-generation model. Upon generating draft language, the author(s) reviewed, edited, and revised the language to their own liking and the author(s) take(s) ultimate responsibility for the content of this publication.”
- Additional information is available from the American Chemical Society, “Best Practices for Using AI When Writing Scientific Manuscripts” (ACS Nano 2023, 17, 4091–4093; https://doi.org/10.1021/acsnano.3c01544)
Additional Resources
- How to Cite AI-Generated Content (Purdue)
- Holland Computing Center offers trainings on AI and machine learning (ML)
Additional Information
- How Pew Research Center is using LLMs (blog post)
- For information on using AI in teaching, see the Center for Transformative Teaching resources
- Please and Thank You in ChatGPT (People article)