Artificial Intelligence

Artificial Intelligence (AI) describes computer systems that can mimic human reasoning, decision-making, creativity, and similar capabilities. There are many different types of systems and options for researchers wanting to use AI. This page is intended to be a hub for resources related to AI at UNL. This is a rapidly growing area of research, and this page will be updated as more resources become available. 

  • The University of Nebraska (NU) system policy and resources can be found here. It includes an overview of the types of generative AI (AI that produces original work), system tools, and guidance for using AI. 
  • Information Technology Services has built a training that is required before using NU AI resources. It is accessed via Bridge.
  • The Office of Research and Innovation (R&I) has put together some guiding principles and best practices for AI in research and creative activities at UNL. The best practices are listed below - please see the full document for context and additional details. 
Best Practices for AI in Research and Creative Activities at UNL
  1. Accuracy and integrity of outputs from AI tools used in research and creative activities are the responsibilities of UNL employees and students. AI-generated content often paraphrases information from other sources and may generate inaccurate information. UNL employees and students must accept responsibility and accountability for the content produced by AI technology. AI tools cannot be responsible or accountable for the content that is generated. To avoid concerns regarding bias, inaccuracy, plagiarism, and potential misappropriation of intellectual property in AI-generated content, UNL employees and students must:
    1. Validate results from decisions made by AI systems based on their research or other evidence.
    2. Actively seek and mitigate bias in the data used to train AI systems by using diverse datasets and techniques such as debiasing algorithms to address potential unfairness.
    3. Regularly audit AI outputs for bias and discriminatory outcomes by implementing monitoring systems and feedback loops to identify and correct biased results.
    4. Ensure inclusivity in research and creative design involving AI by engaging diverse stakeholders and incorporating user perspectives to avoid perpetuating or amplifying existing biases. 
    5. Communicate the limitations and capabilities of AI tools clearly and honestly. Don't overstate the accuracy or intelligence of AI and be transparent about its potential for errors and misinterpretations.
  2. Data placed into externally available AI tools become open source and available to the public. UNL employees and students are expected to follow federal guidelines for making data generated with federal funding available to the public; however, they should be cautious about placing preliminary data or sensitive information into open-source AI tools prior to publication or prior to seeking and filing patents. UNL employees and students are expected to:
    1. Implement robust data security measures to protect data with Personally Identifiable Information (PII) used in AI research and development. Utilize appropriate access controls, encryption, and anonymization techniques to minimize privacy risks.
    2. Be mindful of data ownership and consent when using data with PII for AI projects. Obtain informed consent from individuals and comply with relevant privacy regulations. 
    3. Consider privacy implications when deploying AI systems in research or creative applications. Assess potential risks of data misuse or unintended surveillance and implement safeguards to protect individual privacy. 
    4. Follow the University of Nebraska’s Policy for Responsible Use of University Computers and Information Systems, Policy on Research Data and Security, and Policy on Risk Classification and Minimum Security Standards. 
    5. Follow applicable privacy laws and regulations (e.g., HIPAA, FERPA, EU GDPR, PIPL, and CCPA) and best practices when using data that includes PII.
  3. Before initiating future agreements with vendors, subcontractors, or collaborators, inquiries should be made regarding any potential use of AI. Additional terms and conditions may be needed in current and future agreements to ensure the responsible and ethical use of AI that aligns with these guiding principles and best practices.
  4. Federal funding agencies prohibit the use of AI tools during the peer-review process. The National Institutes of Health (NIH) prohibits “scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.” Using AI in the peer review process is a breach of confidentiality because these tools “have no guarantee of where data are being sent, saved, viewed or used in the future.” Using AI tools to help draft a critique or to assist with improving the grammar and syntax of a critique draft are both considered breaches of confidentiality. The National Science Foundation (NSF) has similar guidelines for the use of AI in proposals and the NSF merit review process. The USDA National Institute of Food and Agriculture (NIFA) Peer Review Process for Competitive Grant Applications also prohibits the use of generative AI tools during the proposal evaluation process.
  5. Research personnel are accountable for any plagiarized, falsified, or fabricated material that was generated by AI, regardless of funding. The UNL Research Misconduct Policy and federal funding agencies specify the definitions and processes involved if material has been plagiarized, falsified, or fabricated. 
  6. Research personnel must keep up with evolving AI technologies and best practices for ethical and responsible use of AI in research and creative activities by regularly reviewing university policies and relevant guidelines from federal funding agencies. 
  7. Be mindful of the environmental impact of AI training and deployment. Choose energy-efficient algorithms and infrastructure and explore sustainable computing practices. 
  8. Guidelines for content co-authored with an AI tool:
    1. The published content must be attributed to UNL and the UNL employee(s) and/or student(s).
    2. The role of AI in generating and/or revising content must be disclosed clearly and prominently, in a way that a typical reader will notice and easily understand.
      1. In general, the use of AI tools to edit a few sentences does not require disclosure; however, the use of AI tools to improve the content of an entire manuscript requires disclosure, and the UNL employee(s) and/or student(s) must ensure that the AI-generated or -revised content is accurate and has not been plagiarized. 
      2. Use of AI tools to insert or impart knowledge or creative activities must be disclosed.
    3. The content must not violate the AI tool company’s content policy or terms of use (e.g., it must not involve adult content, spam, hateful content, content that incites violence, or other uses that may cause social harm). 
    4. The following language may be used for this purpose: “The author(s) generated this text in part with [insert name of AI tool and company or reference for AI tool], a large-scale language-generation model. Upon generating draft language, the author(s) reviewed, edited, and revised the language to their own liking and the author(s) take(s) ultimate responsibility for the content of this publication.”
    5. Additional information is available from the American Chemical Society “Best Practices for Using AI When Writing Scientific Manuscripts” (ACS Nano 2023, 17, 4091−4093; https://doi.org/10.1021/acsnano.3c01544).

Frequently Asked Questions

What is generative AI (GenAI)?

According to NASA, “artificial intelligence refers to computer systems that can perform complex tasks normally done by humans: reasoning, decision making, creating, etc.” Generative AI can create original content (text, diagrams, figures, etc.) and includes tools like Copilot, ChatGPT, DALL·E, and Gemini.

Which GenAI platforms is UNL supporting?

The new Microsoft tenant includes the free version of Copilot, Copilot Chat, for all NU users. This version does not use entered information for model training purposes and data entered remains private to NU. Users can also purchase Microsoft 365 Copilot for a monthly fee of $30, which has significantly more capabilities and features than the free version, including Copilot Studio, while retaining the same data security features. Zoom AI Companion is enabled within NU Zoom tenants. ChatGPT Enterprise is being utilized through the OpenAI Impact program. 

GenAI is also integrated into common tools that are available to the campus, like Grammarly, and we expect to see more integration of GenAI into common tools. In each case, please go to your settings for that program and opt out of having your information/data used for training. You may also be able to turn off the GenAI features entirely, if you do not want to have access to them.

All of these programs must be accessed within the NU computing systems (such as via SSO login) to ensure their use is governed by NU contracts and complies with NU security and privacy controls. All guidelines for use are based on this expectation.

Can I use other GenAI tools?

We understand the excitement and interest in using GenAI tools but do not recommend the use of tools with NU data without a compliance review by NU ITS.

What do I need to do to prepare for using GenAI?

If you want to learn more about using GenAI before you start, there are a variety of resources available. The UNL Office of Research and Innovation (R&I) “Guiding Principles and Best Practices for AI in Research and Creative Activities at UNL” is a good place to start. NU ITS also strongly encourages individuals to complete NU’s AI Training. Enroll here.

Microsoft Copilot Users Guide

What ethical concerns are there when using GenAI?

There are a variety of ethical concerns that may be more or less relevant to a given user. Below are well-known concerns. For a more in-depth discussion, see UNESCO’s Recommendation on the Ethics of Artificial Intelligence.

  • GenAI reflects the information used to train the platform’s underlying algorithm. When these data involve human subjects, the data may be biased, reinforce inequalities, or leave out some populations entirely. When such data are used for decision making, they may reinforce existing (or historical) patterns of unfairness.
  • The data centers needed for GenAI are resource-intensive: they use high amounts of electricity, release significant amounts of carbon dioxide, and require a great deal of water for cooling systems. GenAI is estimated to use 7-8 times more energy than typical computing, and a single chatbot query consumes about five times more electricity than a simple web search (MIT). In addition, data centers are often located in economically depressed areas, further reducing the resources available to those who live there and putting additional stress on the local electrical grid. 
  • Some GenAI models were trained on data covered by copyright or other intellectual property protections, without any compensation or credit. The argument has been made that this use is protected under the fair use doctrine, which provides exceptions under certain circumstances; a counterargument is that the resulting models are used commercially. As of 2025, court cases on these questions are still working through the legal system. 
  • Replication and transparency are key components of the scientific method, and both are difficult to achieve with GenAI, whose underlying datasets are not easily explained or accessed. In addition, the results from GenAI are often inconsistent from one user to another, even when the same query is used. 

Ultimately, it is the responsibility of the user to make sure that the information obtained from GenAI is accurate, fair, unbiased, and not plagiarized, falsified, or fabricated. The user must also be mindful of how they use GenAI so as not to pose a risk to individual rights or violate data privacy laws and expectations. Finally, use of GenAI should be declared and referenced in completed works (see Guiding Principles and Best Practices for AI in Research and Creative Activities at UNL for more on when disclosure is required). Any submitted or published content must be attributed solely to the authors.

How much does it cost to use GenAI?

The NU Microsoft suite provides a free version of Copilot, Copilot Chat, with limited capabilities to the campus community. Copilot Chat is an AI companion that includes Enterprise Data Protection but will not have access to your Microsoft Graph data (Outlook, Teams, OneDrive, SharePoint, etc.) and does not add capabilities within Microsoft Office applications.

Microsoft 365 Copilot is $30/mo (as of August 2025) and adds access to your Microsoft Graph data, enables prebuilt AI Agents such as Researcher and Analyst, and enables Copilot Studio where custom AI Agents can be built.

Custom AI services within Microsoft Azure and Amazon Web Services (AWS) are also available. Costs are variable upon services consumed (direct passthrough with no markup). Contact NU ITS for more information.

Additional on-premises AI services are available through the Holland Computing Center (HCC), including no cost options. Contact HCC for more information.

How do I know I am using the “right” version of GenAI?

If you are accessing a GenAI tool through the services.unl.edu portal or within the NU Microsoft 365 tenant using your TrueYou Username, you are using an enterprise GenAI tool that is under an NU contract.

Within Microsoft Copilot, you can confirm that you are covered by looking for a green shield icon in the upper right corner of the Copilot interface. When you hover over that icon, the message “Enterprise data protection applies to this chat” will appear.

How can I protect my work when using GenAI?

Built-in protections in the NU licenses include encrypted prompts and answers, secure disposal of prompts and answers when the session ends, and a guarantee that your data are not used for training purposes.

A great deal of the remaining risk comes from how users access systems, not from the systems themselves. Actions you can take to further protect your work include deleting your session once you are done and practicing good digital hygiene around passwords, site access, following links, etc.

What is a GenAI prompt?

A prompt is the set of instructions given to a GenAI system. The prompt is conversational: you can ask questions or give direction in the same style as natural speech. Prompts can be simple or very complex. You can also follow up on past prompts, as long as you are in the same session. Microsoft has published example prompts to try when getting started.

Prompts are more effective when they include detailed and specific information, are written in full sentences with instructions, and give enough context to structure the answer (such as the preferred format, purpose, and tone). Additional resources for prompting can be found from the Massachusetts Institute of Technology (MIT) and the Center for Transformative Teaching at UNL.
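
The elements above can be sketched as a small helper that assembles a structured prompt. This is an illustrative sketch only: the function and field names (task, context, output format, tone) are our own labels for the elements described above, not part of any GenAI tool's API.

```python
# Illustrative sketch: assembling a structured prompt from the elements above.
# The field names are invented labels for this example, not a tool's API.

def build_prompt(task: str, context: str, output_format: str, tone: str) -> str:
    """Combine the recommended prompt elements into one full-sentence request."""
    return (
        f"{task} "
        f"Context: {context} "
        f"Please respond in {output_format}, using a {tone} tone."
    )

prompt = build_prompt(
    task="Summarize the attached meeting notes for a project update.",
    context="The audience is a research team that missed the meeting.",
    output_format="a bulleted list of no more than five items",
    tone="concise, professional",
)
print(prompt)
```

The resulting string can be pasted into any GenAI chat interface; the value comes from stating the task, context, format, and tone explicitly rather than leaving the tool to guess.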

What kind of data can be used in GenAI?

Many different document types are supported by both Copilot and ChatGPT. Only data that can be shared publicly without harm should be entered into GenAI. Data entered into public, consumer versions of GenAI should not be considered secure.

Will Copilot or ChatGPT Enterprise use my data for training?

No. When accessed within the NU computing systems (such as with SSO login), your data are not used for training, and your use is governed by NU contracts and compliant with NU security and privacy controls. All guidelines for use are based on this expectation. 

What does it mean that GenAI uses data for training?

The underlying basis of artificial intelligence and GenAI is the processing of a large amount of data that it “learns” from. This is called training data, and the programs are designed to learn the structures and patterns in the data, with the goal of simulating human decision-making and language patterns. Many consumer versions of GenAI use the information entered while interacting with their product to further train their model. In some cases, you can tell the program not to use your data for training (typically in settings). Our contracts with NU supported GenAI do not allow training from our data.
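
As a toy illustration of what “training on data” means, the sketch below builds a minimal bigram model: it counts which word follows which in a small text sample, then predicts the most common successor. Real GenAI models learn far richer patterns from vastly larger datasets, but the basic principle of extracting structure from training data is the same. The sample text and names here are invented for illustration.

```python
# Toy illustration of "training": learn which word tends to follow another
# in a small text sample, then predict the most frequent successor.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat slept on the mat"
words = training_text.split()

# Count, for each word, which words follow it in the training data.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` during training."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # prints "on" -- the only word seen after "sat"
```

Consumer GenAI tools that train on user input are, in effect, folding your prompts into a (much larger) version of `training_text`; opting out, or using the NU-contracted tools, keeps your data out of that corpus.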

These articles from IBM and Oracle explain more about how GenAI works and how the models are trained. 

Is there a limit to how much or how often I can use GenAI?

There are limits within all GenAI tools as to how many prompts you can do per chat conversation and per day. If you reach those limits, you will be notified within the tool. These limits vary by tool and change often as models improve.

Who can see the information I enter into GenAI?

GenAI tools licensed under “consumer terms” may use your data for training models and marketing purposes. GenAI tools contracted for by the NU System include GenAI enterprise data security and privacy protections so that your data and university data stay safe and are not shared with GenAI vendors or other users. All prompts and data used in GenAI tools are subject to NU records retention schedules and Freedom of Information Act requests.

Can GenAI see other information (like my files or browser history) when I use it?

The paid version of Copilot, Microsoft 365 Copilot, includes access to your Microsoft Graph data (Outlook, OneDrive, Teams, SharePoint, etc.). This capability can be disabled when using the tool if desired.

Microsoft Copilot Studio can use “Connectors” that grant access to specific data sources, but no access is available by default.

Custom GPTs within ChatGPT can use “Connectors” that grant access to specific data sources, but no access is available by default.

How are the data entered into GenAI secured?

Details on Microsoft Copilot’s Enterprise Data Protection can be found here.

How is intellectual property secured in GenAI?

Details on Microsoft Copilot Chat can be found here.

Details on Microsoft 365 Copilot can be found here.

Can I purchase access to a more secure version of GenAI?

Not at this time.

Are there best practices for using GenAI with collaborators?

At the time of this writing, the only GenAI tools that support co-creation within the same session by multiple users are the Agents available within apps that offer that feature. In general, your prompts and answers are your own, but they can be shared in other coworking spaces. GenAI tools can support collaboration in various ways, such as providing meeting summaries, increasing accessibility via closed captioning, and helping automate routine tasks. Microsoft Teams has a built-in interface with Copilot for such purposes. 

What policies are most relevant to GenAI use?

Guidance specific to GenAI can be found here, and the University of Nebraska system Executive Memos (EM) 41 and 42 describe the responsibilities around data and the risk levels associated with different types of data. The Office of Research and Innovation has provided guidelines for using AI in research and creative activities. 

Do I have to declare use of GenAI in my research?

Yes. If you use GenAI to edit more than a few sentences of a larger document, you must disclose the specific GenAI system you used. Using GenAI to improve an entire manuscript, or to insert or impart knowledge or creative content, must also be disclosed. 

How do I cite GenAI?

Purdue University provides examples of how to cite AI use in different formats, such as APA, Chicago, and MLA. You also have the option of including a statement like this one: “The author(s) generated this text in part with [insert name of AI tool and company or reference for AI tool], a large-scale language-generation model. Upon generating draft language, the author(s) reviewed, edited, and revised the language to their own liking and the author(s) take(s) ultimate responsibility for the content of this publication.” The University of Newcastle provides examples of acknowledgement statements for different tools and tasks to support transparency.

Are there uses of GenAI that are not allowed?

You are not allowed to use GenAI in any way that violates NU’s policies or terms of use (e.g., uses involving adult content, spam, hateful content, content that incites violence, or other uses that may cause social harm). You are also not allowed to upload data classified above low risk without explicit approval from NU ITS for your specific need. (See EM 42 for more on data risk levels.)

How can I be sure that the GenAI results are valid?

Copilot lists the sources of information for its content at the end of the output. The user is responsible for verifying the references and for ensuring accuracy of content from GenAI.

Can I clear my history?

Copilot does not save your chat history (prompts and responses) past the session it is in. If you close your browser, select a new topic, or leave the chat open for a long time, the history will be deleted.

How do I save the results of my prompts?

You will likely need to save your prompt and the answers separately. In Copilot, you can save the prompt using the bookmark icon that appears when you hover over the prompt; this saves it to a prompt gallery that is only visible to you. To save the Copilot response, you can either copy or export the answer (again accessed by hovering over the answer). The export options include Word, PDF, and text. Each exported response will have its own file, so if you want multiple answers in one place, copying will be the more useful option.

How do I know if the results from a GenAI search are accurate?

You can ask GenAI for answers to questions you already know the answer to; more complex requests will tell you more than simple ones. You can also ask GenAI questions like how it decides whether information is accurate, how it identifies fake information, or how it was trained. You should also verify the results you are given: many GenAI programs include links to their sources, but those links are not always authentic or valid. GenAI is notorious for “hallucinating” answers, so make sure any sources or citations actually exist before using the information; a quick web search can verify a source or citation. The user is always responsible for verifying references and for ensuring the accuracy of content from GenAI.
