In the era of digital transformation, artificial intelligence (AI) offers transformative opportunities across mission-driven sectors, including education and healthcare.

Large universities, community foundations, and healthcare institutions are increasingly leveraging AI to enhance efficiency, improve decision-making, and deliver better services to their stakeholders. However, these opportunities also bring increased responsibility, particularly when handling sensitive data governed by regulatory frameworks.

In this article, we focus on the standards set by the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act (HIPAA), and on the questions you should be asking as part of your overall Responsible AI strategy. Responsible AI entails not only maximizing the benefits of AI technology but also ensuring the protection of individuals’ privacy, security, and rights. With a focus on FERPA and HIPAA compliance, we delve into the considerations institutions must prioritize when deploying AI-powered tools.

Understanding FERPA and HIPAA Relevance to AI Development  

FERPA safeguards the privacy of student education records. It regulates how educational institutions collect, use, and disclose students’ personally identifiable information (PII). Similarly, HIPAA sets standards for the protection of individuals’ health information, ensuring its confidentiality and integrity in healthcare settings. Both frameworks mandate strict controls on data access, sharing, and storage to prevent unauthorized disclosure or misuse. Of course, these two frameworks offer a wide range of standards across key data privacy areas, but we will address the standards that are most relevant to organizations seeking to ramp up their AI utilization as part of their rhythm of business: 

FERPA Compliance Standards with AI Relevance: 

  • Privacy of Student Records 
  • Right to Access and Amendment 
  • Directory Information 
  • Disclosure Limitations 

HIPAA Compliance Standards with AI Relevance: 

  • Protected Health Information (PHI) 
  • Privacy Rule 
  • Security Rule 
  • Breach Notification Rule 
  • Minimum Necessary Standard 

Considerations for Deploying AI Tools

When evaluating AI tools for FERPA and HIPAA compliance, institutions must conduct thorough due diligence to assess vendors’ adherence to regulatory requirements. Below, each of the standards listed above is recapped with its key privacy and technology requirements. Each recap is paired with a short list of questions you can use to 1) vet any third-party provider selling you an AI-powered product, or 2) flag key privacy elements to consider if your organization is interested in building AI products in-house.

FERPA 

Privacy of Student Records: FERPA mandates that educational institutions protect the privacy of student education records, including grades, transcripts, and disciplinary records. Institutions must ensure that AI tools do not compromise the privacy of student information through generative or predictive outputs.

Related due-diligence questions:  

  1. Which foundation models are being leveraged for your AI tools? Are they built in-house or by a third party? 
  2. What, if any, public data is used to train, fine-tune, or prompt your models? 
  3. How does your AI model architecture ensure that sensitive student education records are encrypted both in transit and at rest to prevent unauthorized access? 
  4. Can you explain how your AI algorithms anonymize or pseudonymize student data to protect privacy while maintaining utility for analysis? 
  5. What measures does your AI system employ to ensure data minimization, limiting the collection and processing of only necessary student information to fulfill its intended purpose? 
  6. How does your AI tool handle data access control and authentication mechanisms to prevent unauthorized users from accessing student records? 
  7. Can you provide insights into the auditability of your AI model’s data processing pipeline, including logging mechanisms and traceability of data usage?  
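To make questions 3 and 4 above concrete, here is a minimal sketch of keyed pseudonymization in Python. The key handling and field names are hypothetical; a real deployment would pull the key from a secrets manager and pair this with encryption in transit (TLS) and at rest.

    import hashlib
    import hmac

    # Hypothetical secret; in production, load from a secrets manager,
    # never hard-code it in source.
    PSEUDONYM_KEY = b"replace-with-managed-secret"

    def pseudonymize(student_id: str) -> str:
        """Derive a stable pseudonym so records can still be joined for
        analysis without exposing the real student ID."""
        return hmac.new(PSEUDONYM_KEY, student_id.encode(), hashlib.sha256).hexdigest()

    record = {"student_id": "S1234567", "gpa": 3.8}
    safe = {"student_id": pseudonymize(record["student_id"]), "gpa": record["gpa"]}

Because the pseudonym is keyed rather than a bare hash, an attacker who obtains the output cannot reverse it by hashing guessed IDs without also obtaining the key.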

Right to Access and Amendment: This standard grants eligible students the right to access and request amendments to their education records. Responsible AI implementations should include mechanisms for individuals to access and control their data generated and/or processed by AI systems. 

Related due-diligence questions:  

  1. How does your AI solution facilitate secure access for students and parents to review and request corrections to their education records while maintaining compliance with FERPA requirements?
  2. Can you describe the mechanisms within your AI platform that enable transparent tracking and logging of data access and modifications to support auditability and accountability?
  3. What provisions does your AI architecture include to ensure the integrity and accuracy of student records during data modification or correction processes?
  4. How does your AI model handle requests for access and amendment of student records in real time while ensuring data security and privacy?
  5. Can you demonstrate how your AI platform integrates consent management features to capture and manage student preferences regarding data access and modification?
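Questions 2 and 4 above lend themselves to a simple illustration: every access or amendment to an education record is written as a structured, timestamped audit entry. The event fields and log destination below are hypothetical; production systems would ship entries to tamper-evident, centrally retained storage.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="record_access.log", level=logging.INFO)
    audit = logging.getLogger("ferpa_audit")

    def log_record_event(actor: str, student: str, action: str, field: str = "") -> None:
        """Append one structured audit entry per access or amendment."""
        audit.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,      # who touched the record
            "student": student,  # ideally a pseudonym, not the raw ID
            "action": action,    # e.g. "read", "amend_requested", "amended"
            "field": field,
        }))

    log_record_event("registrar_42", "S1234567", "amend_requested", "mailing_address")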

Directory Information: While institutions may disclose certain student information as directory information without consent, they must provide students with the opportunity to opt out. This highlights the importance of transparency and giving individuals control over their data, even in AI applications. 

Related due-diligence questions:

  1. How does your AI model differentiate and handle directory information separately from sensitive student records to comply with FERPA guidelines? 
  2. Can you explain the mechanisms within your AI architecture that allow students to easily opt out of disclosures of directory information and manage their privacy preferences? 
  3. What safeguards does your AI platform incorporate to prevent unauthorized access or misuse of directory information, particularly by third-party entities? 
  4. How does your AI system ensure that disclosures of directory information are accurately categorized and processed in compliance with FERPA regulations? 
  5. Can you provide examples of how your AI model dynamically adapts to changes in students’ opt-out preferences regarding directory information disclosures? 
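As a sketch of how an opt-out might be enforced in code (question 2 above), the helper below returns only directory fields, and nothing at all for students who have opted out. The registry and field list are hypothetical; in practice both would be policy-driven and re-checked at every disclosure.

    # Hypothetical opt-out registry; in practice this lives in a database
    # and is consulted at disclosure time, never cached long-term.
    DIRECTORY_OPT_OUTS = {"S1234567"}
    DIRECTORY_FIELDS = {"name", "major", "enrollment_status"}

    def disclosable_directory_info(record: dict) -> dict:
        """Return directory fields only, or nothing if the student opted out."""
        if record["student_id"] in DIRECTORY_OPT_OUTS:
            return {}
        return {k: v for k, v in record.items() if k in DIRECTORY_FIELDS}

    print(disclosable_directory_info(
        {"student_id": "S7654321", "name": "J. Doe", "major": "Biology", "gpa": 3.2}
    ))  # -> {'name': 'J. Doe', 'major': 'Biology'}  (GPA is never directory info)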

Disclosure Limitations: FERPA imposes restrictions on the disclosure of student records to third parties, emphasizing the need for institutions to carefully manage data sharing and ensure that AI systems adhere to these limitations to prevent unauthorized access or disclosure. 

Related due-diligence questions:  

  1. How does your AI architecture enforce granular access controls and permissions to restrict data sharing and ensure compliance with FERPA regulations on disclosure limitations? 
  2. Can you describe the mechanisms within your AI platform that monitor and enforce policies governing the sharing of student records with authorized parties? 
  3. What strategies does your AI model employ to detect and prevent unauthorized attempts to access or disclose student records beyond permissible boundaries? 
  4. How does your AI system ensure that disclosures of student records comply with FERPA exceptions and permissible use cases? 
  5. Can you provide insights into the scalability and performance of your AI platform in enforcing disclosure limitations across large volumes of student data while maintaining responsiveness and efficiency?
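One way to picture the deny-by-default posture behind these questions is a small authorization gate: a disclosure proceeds only with written consent or a documented FERPA exception. The exception list here is illustrative, not a complete reading of the regulation, and a real system would also log the basis for every decision.

    # Illustrative subset of FERPA exceptions this system recognizes.
    PERMITTED_EXCEPTIONS = {"school_official", "health_safety_emergency", "judicial_order"}

    def may_disclose(has_written_consent: bool, exception: str = "") -> bool:
        """Deny by default; permit only with consent or a documented exception."""
        return has_written_consent or exception in PERMITTED_EXCEPTIONS

    assert may_disclose(True)
    assert may_disclose(False, "health_safety_emergency")
    assert not may_disclose(False)  # no basis, no disclosure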

HIPAA

Protected Health Information: Given the sensitive nature of health information, protecting PHI is paramount in responsible AI implementations within healthcare settings. AI systems must adhere to strict security measures to safeguard PHI from unauthorized access or disclosure. 

Related due-diligence questions:  

  1. How does your AI model architecture enforce end-to-end encryption and secure transmission protocols to protect electronic PHI (ePHI) from interception or unauthorized access? 
  2. Can you describe the mechanisms within your AI platform that ensure the confidentiality and integrity of PHI stored in databases or processed by AI algorithms? 
  3. What measures does your AI system implement to detect and mitigate potential vulnerabilities or security threats to ePHI, including intrusion detection and prevention systems? 
  4. How does your AI architecture facilitate secure data sharing and collaboration while preventing unauthorized access to PHI by unauthorized users or entities? 
  5. Can you provide details on the data residency and sovereignty policies implemented within your AI platform to comply with regulatory requirements on PHI storage and processing? 
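As one concrete pattern behind questions 1 and 2 above, the sketch below encrypts a PHI payload at rest using the cryptography library’s Fernet recipe (AES-based authenticated encryption). The key handling is illustrative only; production keys belong in a key-management service or HSM, with rotation.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Illustrative only: real keys come from a KMS, not from code.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    phi = b'{"patient_id": "P-001", "diagnosis": "..."}'
    token = cipher.encrypt(phi)       # authenticated ciphertext, safe to store
    restored = cipher.decrypt(token)  # raises InvalidToken if tampered with
    assert restored == phi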

Privacy Rule: The HIPAA Privacy Rule outlines standards for the use and disclosure of PHI, including patient rights to access and control their health information. Responsible AI requires adherence to these standards to protect patient privacy and maintain trust in healthcare AI applications. 

Related due-diligence questions:  

  1. How does your AI solution ensure that patients’ rights to access their health information and control its disclosure are upheld in accordance with HIPAA privacy standards? 
  2. Can you explain how your AI platform incorporates privacy-enhancing technologies, such as differential privacy or federated learning, to protect patient privacy while enabling valuable insights from healthcare data? 
  3. What mechanisms does your AI architecture include to facilitate secure patient authentication and authorization for accessing their health records? 
  4. How does your AI model handle patient consent management, including capturing and honoring preferences for data sharing and use in compliance with HIPAA regulations? 
  5. Can you demonstrate the interoperability of your AI platform with existing healthcare systems and electronic health record (EHR) platforms while maintaining patient privacy and confidentiality? 
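Question 2 above mentions differential privacy; the sketch below shows the core idea on a single counting query, adding Laplace noise scaled to the query’s sensitivity and a privacy budget epsilon. The epsilon value and query are hypothetical, and real systems track cumulative budget across queries.

    import numpy as np  # pip install numpy

    def dp_count(true_count: int, epsilon: float = 0.5) -> float:
        """Laplace mechanism for a count: one patient joining or leaving
        changes the result by at most 1, so the noise scale is 1/epsilon."""
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # e.g., "how many patients had condition X this month"
    print(dp_count(128))

Smaller epsilon means more noise and stronger privacy: the analyst sees an approximately correct count while no single patient’s presence is revealed.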

Security Rule: The HIPAA Security Rule mandates safeguards to protect electronic PHI from cybersecurity threats. Responsible AI implementations must prioritize security measures to ensure the confidentiality, integrity, and availability of ePHI processed by AI systems. 

Related due-diligence questions:  

  1. What specific security controls and measures does your AI model employ to comply with the HIPAA Security Rule, including access controls, data encryption, and intrusion detection? 
  2. Can you provide insights into the resilience and robustness of your AI platform against common security threats and vulnerabilities, such as malware or denial-of-service attacks? 
  3. How does your AI architecture ensure the isolation and segregation of healthcare data to prevent unauthorized access or tampering by malicious actors? 
  4. What procedures does your AI system have in place for conducting regular security assessments and audits to identify and remediate potential security gaps or compliance issues? 
  5. Can you describe the disaster recovery and business continuity measures implemented within your AI platform to ensure the availability and integrity of healthcare data in the event of system failures or emergencies? 
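One slice of the integrity requirement can be made concrete: store a keyed digest alongside each ePHI record and verify it on every read, so silent tampering is detected rather than trusted. The key and record layout below are hypothetical.

    import hashlib
    import hmac
    import json

    INTEGRITY_KEY = b"illustrative-key-use-a-kms"  # hypothetical; manage via KMS

    def seal(record: dict) -> dict:
        """Attach an HMAC over a canonical serialization of the record."""
        payload = json.dumps(record, sort_keys=True).encode()
        mac = hmac.new(INTEGRITY_KEY, payload, hashlib.sha256).hexdigest()
        return {"record": record, "mac": mac}

    def verify(sealed: dict) -> bool:
        """Recompute the HMAC and compare in constant time."""
        payload = json.dumps(sealed["record"], sort_keys=True).encode()
        expected = hmac.new(INTEGRITY_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sealed["mac"])

    s = seal({"patient_id": "P-001", "allergy": "penicillin"})
    assert verify(s)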

Breach Notification Rule: In the event of a data breach involving PHI, covered entities must promptly notify affected individuals and regulatory authorities. This highlights the importance of proactive risk management and incident response planning in responsible AI implementations to mitigate the impact of potential breaches. 

Related due-diligence questions:  

  1. How does your AI solution facilitate timely detection and notification of data breaches involving PHI, including mechanisms for identifying anomalous activities or unauthorized access attempts? 
  2. Can you explain the procedures and protocols within your AI platform for escalating and reporting security incidents to regulatory authorities and affected individuals in compliance with HIPAA breach notification requirements? 
  3. What measures does your AI architecture incorporate to support forensic analysis and investigation of security incidents to determine the scope and impact of potential breaches? 
  4. How does your AI model ensure transparency and accountability in breach notification processes, including providing clear and informative communications to affected individuals and stakeholders? 
  5. Can you demonstrate the effectiveness of your AI platform in facilitating incident response and remediation activities to mitigate the consequences of data breaches and prevent further unauthorized access to PHI? 
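Question 1 above asks about detecting anomalous access. A deliberately crude baseline check is sketched below: flag any user whose record-access count today far exceeds their historical norm. The threshold, data shapes, and example figures are hypothetical; real detection layers in many more signals.

    from statistics import mean, stdev

    def flag_anomalous_users(history: dict, today: dict, k: float = 3.0) -> list:
        """Flag users whose count today exceeds mean + k * stdev of their history."""
        flagged = []
        for user, counts in history.items():
            if len(counts) < 2:
                continue  # not enough baseline to judge
            if today.get(user, 0) > mean(counts) + k * stdev(counts):
                flagged.append(user)
        return flagged

    print(flag_anomalous_users(
        {"nurse_a": [10, 12, 9, 11], "clerk_b": [5, 4, 6, 5]},
        {"nurse_a": 11, "clerk_b": 250},
    ))  # -> ['clerk_b']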

Minimum Necessary Standard: Adhering to the Minimum Necessary Standard ensures that AI systems access and use only the minimum amount of PHI required to fulfill their intended purpose. Responsible AI implementations should incorporate this principle to minimize privacy risks and enhance data protection. 

Related due-diligence questions:  

  1. How does your AI architecture enforce the principle of least privilege to ensure that only the minimum necessary amount of PHI is accessed and processed to fulfill its intended purpose? 
  2. Can you describe the mechanisms within your AI platform for implementing data segmentation and access controls to restrict the use and disclosure of PHI to authorized users and purposes?
  3. What strategies does your AI system employ to minimize the exposure of PHI during data processing and analysis, including techniques for data masking or de-identification? 
  4. How does your AI model support granular data access policies and permissions to align with the minimum necessary standard and prevent overexposure of sensitive healthcare information? 
  5. Can you provide examples of how your AI platform dynamically adjusts data access and usage based on contextual factors and user roles to optimize compliance with the minimum necessary standard while enabling effective healthcare operations? 
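The minimum necessary standard maps naturally to a field-level projection: each role sees only the fields its purpose requires, as sketched below. The role-to-field map is hypothetical and would normally be policy-driven and centrally administered.

    # Hypothetical purpose-based field allowlists.
    ROLE_FIELDS = {
        "billing":    {"patient_id", "insurance_id", "charges"},
        "scheduling": {"patient_id", "name", "next_appointment"},
    }

    def minimum_necessary_view(record: dict, role: str) -> dict:
        """Project the record down to the fields this role's purpose requires."""
        allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
        return {k: v for k, v in record.items() if k in allowed}

    full = {"patient_id": "P-001", "name": "J. Doe", "diagnosis": "...",
            "insurance_id": "INS-9", "charges": 120.0}
    print(minimum_necessary_view(full, "billing"))
    # -> {'patient_id': 'P-001', 'insurance_id': 'INS-9', 'charges': 120.0}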

Conclusion

Responsible AI deployment in universities, community foundations, and healthcare institutions requires diligent adherence to FERPA and HIPAA compliance frameworks. Institutions must prioritize privacy, security, and regulatory compliance to foster trust and ensure the ethical use of AI technology. Whether you are looking to onboard new AI-powered software or thinking about building your own, the questions posed above can serve as a starting point for understanding what responsible practices to expect. By aligning AI initiatives with regulatory standards, institutions can harness the transformative potential of AI while safeguarding individuals’ rights and data privacy.

About the Authors

Alejandro is the co-founder and chief product and technology officer at FundMiner. He is a proven product-driven leader with experience building and shipping enterprise software from zero to millions of users worldwide while upholding industry-leading data privacy and compliant experimentation standards. A Computer Engineering graduate of UT Austin specializing in machine learning and data science, he previously served as a senior product manager in Microsoft’s Substrate Intelligence division, where he helped ship intelligent experiences across Microsoft’s most widely used business productivity apps, including Smart Reply, M365 Copilot, and Viva Topics. At FundMiner, he is on a mission to empower large fundraising organizations such as universities and research institutions with cutting-edge solutions that help them maximize their social impact while honoring donor intent.

Joshua LaBorde, senior vice president, product innovation, is passionate about applying his diverse operational experience and technical skills at mission-driven organizations. Drawing on nearly 25 years of experience in operations, information, and technology, Joshua leads market research and industry analysis efforts that help BWF identify emerging trends, unmet needs, and opportunities for innovative products in the nonprofit sector. He oversees end-to-end development of new products and service offerings that align with BWF’s strategic plan and client needs. Before joining BWF, he led advancement services and philanthropy operations teams for a decade; his earlier roles included product design, development, and management in telecommunications and technology startups.