Security shares helpful guidelines and resources for AI use

Bailey Troutman & Patrick McGee, CCIT Communications & Office of Information Security
February 8, 2024

Clemson University students, researchers, employees and community members may be using the generative artificial intelligence (AI) tools readily available today. ChatGPT and other AI resources can offer exciting new opportunities for efficiency in data processing, idea generation for projects, and even programming. As we continue to examine privacy and information security in the use of AI tools, the Office of Information Security urges the Clemson community to visit the University’s AI Guidelines page and to follow best practices. Now more than ever, we must stay vigilant about the security of our information.

Here are some helpful reminders: 

AI Risks

  • Because data entered into an AI system may be retained and used to train models, submitting it can be the equivalent of disclosing that data to the public, which could constitute a breach under FERPA, HIPAA, PCI, GLBA or other federal or state statutes.
  • Generative AI tools may produce erroneous responses that seem credible, sometimes referred to as “hallucinations.”
  • Generative AI systems could be trained on copyrighted, proprietary, or sensitive data, without the owner’s or subject’s knowledge or consent.

AI Best Practices

  • Enter only public data into an AI system. Opt out of sharing data for AI learning whenever possible.
  • Verify any results through authoritative sources.
  • Consider legal, regulatory, and ethical obligations before using AI.
  • Be transparent in disclosing and citing the use of AI tools.

The AI Guidelines page also features a list of specific policies, guidelines and directions related to technology use and sensitive information, along with resources for getting help. This page will continue to be updated as more policies and guidelines are created, so we encourage you to visit it often.
