Your organization needs an AI policy
If your organization exists to serve the public—whether you’re a non-profit, foundation, school, healthcare provider, or local government—AI without a policy is a risk, full stop.
It can be valuable to let people experiment with the tools to find out what works, but without clear guidelines, people can develop habits and practices that put your organization’s data at risk. A clear AI use policy can still leverage individual exploration within boundaries, and use what staff learn to develop clear, consistent, auditable, mission-aligned practice.
Why Policy Matters
AI is already woven into the tools we use every day—email filters, document editors, social platforms—even if your staff never open a chatbot. Without a clear organizational policy, individual employees are left to decide: What’s safe to automate? What data can I share? How much do I trust the output? Some will avoid AI completely. Others will experiment freely. Each worker will be relying on their own understanding of the technology, relevant regulations, organizational strategy, and personal interests. The result of an individual-led AI use strategy is inconsistency, risk, and a widening gap between individual decisions and your organization’s mission.
A thoughtful AI policy ensures that everyone works from the same playbook. It protects your values, your people, and the communities you serve.
What’s different in mission-driven contexts
Non-profits & Foundations: Donor trust, beneficiary privacy, and grant compliance. You’ll need clear policies around consent to data collection, data minimization, and transparent reporting on AI-assisted work.
Education: Student privacy (and relevant laws), accommodations, and fairness in evaluation or advising. Favor human-in-the-loop and explainable uses.
Healthcare & Human Services: Heightened privacy and safety expectations and regulations. Keep protected data out of general-purpose tools; use vetted, enterprise options with strict Data Processing Addendums (DPAs) and Business Associate Agreements (BAAs).
Governments (including special districts): Transparency, privacy, and accountability obligations in the form of the Public Records Act (PRA)/Freedom of Information Act (FOIA), the Health Insurance Portability and Accountability Act (HIPAA), open meeting rules, or public audits, to name a few. Prompts, outputs, settings, logs, training data, and vendor-held copies could be considered “records.” Contracts and storage must support retrieval, retention, and deletion.
Quick note: Nothing here is legal advice. Loop in counsel and records officers when you tailor your policy.
What a Good AI Policy Does
Protects privacy and security
Staff know what kinds of data can and cannot be shared with AI systems, reducing the risk of leaked data or inappropriate reuse.
Supports compliance
Policies clarify obligations under laws like PRA, HIPAA, the Family Educational Rights and Privacy Act (FERPA), the General Data Protection Regulation (GDPR), or other frameworks that may apply in your jurisdiction and domain.
Builds trust
Transparent rules and clear communication demonstrate to donors, residents, volunteers, partners, and staff that your use of AI is intentional, ethical, and aligned with your mission.
Empowers staff
With training and guidance, staff can leverage their deep knowledge of their particular work to innovate responsibly, and they gain clear grounds to refuse unsafe or too-risky uses of AI. Workers who are confident about the capabilities and limitations of the tools can create new efficiencies and find new ways to serve your communities.
Getting Started
Anchor in your mission and values. Use your existing strategy documents as a guide for deciding where AI adds value and where human judgment is essential. Identify business processes and values that are critical to mission advancement and trust. Protect them with policy.
Involve the right people. Don’t leave this to IT alone, but don’t set technology policy without them either! Bring in legal, compliance, and program leaders to ensure the policy reflects your real-world obligations, values, and future plans.
Communicate clearly. Make sure staff not only know the rules but understand why they exist. Clear communication about the purpose of policy can increase compliance, improve trust, and support morale.
Plan for transparency. Assume your AI use may become public. Document your reasoning and practice continuous improvement.