Guiding Principles

Generative AI is a type of artificial intelligence that creates new content, such as text, images, music, or code, by learning patterns from existing data. Unlike rule-based systems or models that only classify or predict, generative AI produces original outputs that can resemble human-created work. This technology has the potential to transform many fields, from education and healthcare to art and business, by enhancing creativity and helping solve complex problems in new ways. As it continues to evolve, generative AI could significantly change how we work, learn, and communicate.

Privacy and Data Protection 

  • Respect Privacy - Avoid inputting sensitive, confidential, or personal data into AI systems, especially publicly available models. 
  • Use Institution-Approved Tools - Choose AI platforms that meet security, compliance, and ethical standards set by your organization or institution.  
  • Comply with Laws and Policies - Ensure AI use complies with applicable laws, regulations, and institutional policies (e.g., FERPA, HIPAA).

Accountability and Academic Integrity 

  • Be Transparent - Clearly disclose when AI tools are used to generate content, make decisions, or assist in communication. 
  • Ensure Accountability - Take full responsibility for any AI-assisted work; do not blame the tool for errors or harmful outcomes. 
  • Fact-Check AI Outputs - Always verify the accuracy and reliability of information produced by AI, especially in research, education, and public communication. 
  • More to come

Ethical Use 

  • Prevent Bias and Discrimination - Use AI in ways that promote fairness and inclusivity. Actively check for and mitigate biases in AI-generated content. 
  • Maintain Academic and Professional Integrity - Do not use AI to cheat, plagiarize, or misrepresent knowledge or work. Follow institutional policies on AI use. 
  • Credit and Cite Appropriately - Acknowledge use of AI tools when relevant, and cite sources if AI outputs include or are based on third-party content.  
  • Promote Accessibility and Inclusion - Ensure AI tools and practices are accessible to users of all backgrounds and abilities. 
  • Avoid Harmful or Misleading Use - Do not use AI to deceive, manipulate, spread misinformation, or create harmful content. 

Limitations

  • Hallucinations and Fabricated Information - Generative AI tools may produce content that sounds plausible but is factually incorrect or entirely made up.  
  • Lack of True Understanding - AI tools do not comprehend meaning or context the way humans do; they predict text based on patterns in their training data.
  • Bias in Outputs - AI can reflect or amplify social and cultural biases present in its training data.
  • No Moral or Ethical Judgment - AI cannot assess whether something is right or wrong; it lacks values, ethics, or emotional intelligence.  
  • Not a Substitute for Expert Judgment - AI should support—not replace—critical thinking, professional expertise, or human decision-making. 

Frequently Asked Questions