Safer On-Ramps to AI: A Values-First Start for Mission-Driven Work

You hear it in board meetings, staff town halls, RFPs, and of course all over LinkedIn: “How are you using AI?” Headlines hype 10× productivity gains, organizations show off elaborate automation sequences, and board members are asking how you’ll innovate.

You’d be forgiven for feeling conflicted. It’s easy to feel behind and also worry about intellectual property, sustainability, or whether jumping in too quickly could put your data, staff, and mission at risk.

Unlike a start-up that can pivot or "fail fast," your organization makes decisions that affect vulnerable communities, tight budgets, and hard-won, mission-critical trust. How can you start using AI ethically and effectively to amplify your good work?

If you are concerned about AI, this post is for you.

Risks & Values

There are a lot of legitimate fears that keep reasonable leaders cautious:

  • Safety & Privacy: Will staff accidentally leak sensitive client data into a public model? What are these models doing with our information? 

  • Bias & Equity: If historical data reflect systemic inequities, and AI is trained on that data, will it replicate or amplify them? 

  • Reputational Risk: It only takes one misstep that becomes a headline to erode donor and community trust.

  • Resource Diversion: Every hour spent experimenting with AI is an hour not spent on direct service, and every new tool competes with core program budgets.

  • Environmental Impact: Cloud-based models run in data centers that consume real electricity and water, so heavy use carries an environmental footprint.

  • Staff Morale: Will staff worry they will be replaced with AI? Will we be pressured to replace staff with AI? 

  • Much more: What about the digital divide? Illegal scraping of others’ IP for training data? Deepfakes?

First, I share your concerns. In fact, I worry so much about AI that I went and got a whole PhD about it.

But we don’t live in a world where we can all conscientiously abstain from using AI. It is increasingly integrated into the software we rely on every day, and we face organizational and market pressures to use it.

Further, I argue that everyone should at least learn about AI. For one thing, our job applicants, funding applicants, funders, competitors for funding, colleagues, supervisors, and direct reports may be using it: we need to understand what it does (and where it fails). Not to mention management mandates or the surprise “smart,” “intelligent,” or “magic” integrations appearing in software you’ve been using for decades!

So, given that it’s worth learning more about AI and that you are concerned about using it well, safely, and ethically, let’s talk about how to carefully begin using AI in your work. 

Public vs. Private Platforms

Not all AI platforms are alike:

  • Public platforms (like the free versions of ChatGPT, Claude, or Gemini) may reuse your data for training or personalization unless you adjust settings. That means prompts could, in some cases, influence outputs for other users.

  • Private or enterprise platforms (like custom deployments, or tools hosted securely inside your organization) are closed off to outsiders. These are generally safer for sensitive or confidential data.

If you are in government, healthcare, or a nonprofit handling donor or client information, this distinction not only protects the people behind that data and their trust in you, but also limits your organization’s legal exposure. Meeting summaries or draft donor letters may sound low-risk, but if they include sensitive details, public tools could put that data at risk. Start with generic text or anonymized examples instead, and work with IT to identify or implement a safe tool. For a deeper dive, see my post on AI, privacy, and security.

Four Safer On-Ramps

Here are starting points that build internal literacy while keeping stakes low:

1.     Lightweight guardrails first
Draft a short “safe use” memo: for example, no personally identifiable information (PII) like names, addresses, or Social Security numbers in prompts, human review of outputs, and escalation paths for questionable cases. If you have legal constraints (e.g., HIPAA in healthcare or FERPA in education) or existing organizational policies around data sharing, make sure those are reflected and cited in your starter policy. Once you’ve tested, expand to tailored policies.

2.     Choose truly low-risk tasks
Focus on work where errors are recoverable and sensitive data isn’t involved: reformatting boilerplate text, creating first drafts of generic outreach, or translating public-facing material into plain language. Avoid confidential meeting notes or donor data until you are familiar with your organizational policy and any legal constraints and/or have a private, secure tool.

3.     Foster safe experimentation
Provide a shared space where staff can experiment, share results, and flag risks. This turns informal learning into collective knowledge for policy refinement.

4.     Build a best practices library
If you are using AI-driven chatbots, collect effective, safe prompts for common tasks, like simplifying policy language, formatting reports, or generating alternative phrasing (a sample entry follows below). More on prompts to come, but you can also check out these resources to learn more! If you are using other AI tools, logging successes and failures can help you and your team share best practices. Over time, this becomes institutional memory and a training resource.
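For instance, a single entry in that library (purely hypothetical, adapt it to your own work) might look like this:

  • Task: Simplify policy language for a public-facing page.

  • Prompt: “Rewrite the following excerpt from our publicly posted volunteer policy in plain language, at roughly an eighth-grade reading level. Keep every date, deadline, and dollar amount exactly as written.”

  • Notes: Works best on short excerpts; always check that no requirements were dropped or softened.

Your entries will look different, but capturing the task, the prompt, and what to verify is what turns one person’s experiment into shared institutional knowledge.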

Bottom Line

By starting with small, safe experiments, understanding your legal and organizational constraints, and grounding your approach in your mission and values, you can harness AI without putting trust or sensitive data at risk. Stay tuned for more actionable advice.

AI disclosure: I gave ChatGPT 5 Thinking a series of unstructured notes and worked in a project that had the manuscript of a related book I am working on attached to it. I had it create a draft, then I edited it and added content about hesitance to use AI and why you should learn about it anyway. I pasted the edited post into a new chat to get feedback, got feedback from Sarah, and then integrated suggested changes (all of Sarah’s and some of Chat’s!) into this final post. It took 3 prompts to get an interesting promo image, and I asked it for 10 alternate titles.
