Trust: How AI Earns Its Place in Mission-Driven Work
Mission-driven organizations are built on relationships with communities, constituents, clients, and partners. These relationships rest on transparency, reliable delivery of programs and services, and sustained trust. Any change that touches service delivery must strengthen, not strain, that foundation.
With any new technology that affects how services are provided, maintaining trust is critical. People deserve a plain explanation of what you are doing and why, how a tool will be used, what it will not do, and how to be heard along the way.
Internally and externally, the rule is the same. If staff do not trust the tool, if the community does not understand how it works, or if clients are unsure how their information will be protected or how service will improve, the tool will not succeed. Trust is the precondition for adoption.
Trust begins inside the organization with staff and labor partners, then extends outward to residents and community partners. When people understand what you are doing, why you are doing it, and how they can participate, they give you the room to improve.
Earn Internal Trust with Staff and Union Partners
Bring staff and union partners in at the exploration stage, before procurement or scoping a pilot. Ask them what should be on the table and why. Frontline staff live the work each day. They know the pain points and the duplicate steps that slow service. Map the current process together, looking for time sinks, error points, and duplication such as double entry, forms touched multiple times, or transfers that add no value. Start improvements there and be candid about what you are considering.
Explain the tool in plain language. Commit that AI will assist, not replace, staff. Show where a tool can produce a first draft or suggestions for a deliverable, how staff will review it, and which decisions the tool will never make. Document how to escalate concerns.
Formalize the partnership with an AI and workforce memorandum of understanding and a labor seat on the organization’s AI oversight group. Keep the guardrails simple and visible, with people in the loop: a trained staff member reviews the tool’s output before any action is taken, can change or reject it, and makes the final decision. Offer office hours and a clear channel for raising concerns, with response timelines.
Set rules for early testing. If the tool is not helping or creates risk, pause or pull back: stop the test, fix the issue, or abandon the tool. Give staff a way to step out of the test while issues are resolved.
Build skills with practical training throughout the organization. Cover what AI is and is not, the use cases your organization is considering, how human judgment is safeguarded, and the basics of privacy, bias, and data security. Create a prompt library so staff can share effective prompts and practices. Keep a record of tools and ideas that did not lead to improvements so the lessons carry across the organization. Explain how staff feedback will shape decisions, testing, and implementation, and when you will walk away from a tool that does not meet your values and standards.
Report results and show reinvestment. Share time saved, improvements in quality of services and programs, reductions in duplication, and increases in efficiency. Be transparent about how operational savings will flow back to staff by investing in training and upskilling, wage progression tied to new skills, and hiring in frontline and understaffed areas.
The best tools do not replace people; they elevate them. Done well, AI reduces burnout, improves programs and services, boosts retention, and returns time to the work that matters most. That only happens when the workforce is included from the start, unions are genuine partners, and leaders are explicit that AI is a tool for good jobs, not fewer jobs.
Extend Trust Outward to Clients and Community
Mission-driven organizations have a long tradition of community input, advisory boards, and public participation. Build on that strength. The people who come to you for programs and services can point to what needs fixing, like duplicate forms, long waits, and repetitive steps. Bring community partners in at the exploration stage and keep them engaged through development, testing, and implementation. Ask them to help map the process, name bottlenecks they experience, and co-test on a small scale with staff before enterprise rollout. Do the tests, learn from staff and community feedback, then refine or expand.
Ask directly how the proposed tool could affect access to services and support. Surface the barriers people foresee and the places where duplication burdens them, including duplicate forms, having to repeat their story, and slow responses or outcomes. Name the risks you are watching, including the digital divide, mistrust of public institutions, and automated bias.
Explain the tool in plain language. Be specific about what data the tool will and will not use, how privacy is protected, and how staff will monitor and override outputs when needed. For important steps, keep people in the loop: a person reviews the tool’s output and makes the final call. State the steps you will take if the tool raises concerns, including pausing, fixing the issue, or abandoning the tool. Provide materials in accessible formats and key languages.
Offer brief explainers or community briefings so people understand how the tool supports staff and service, and what it will not do.
Build feedback loops that work. Provide clear ways for community members to flag concerns or errors, such as a web form, a hotline, or an in-person option. Set and meet response timelines. Say who monitors patterns or gaps in results and how findings are reviewed. Commit to changing course based on what you learn, and publish short updates that show what changed and why.
Report what matters. Share time saved, clearer communication, improvements in program or service delivery, reductions in duplication that the public feels, and any changes made based on community feedback.
Thoughts to Carry Forward
Trust is earned in practice. Bring staff and union partners in first, then the community, from exploration through rollout. Start small. Test with a single workflow, learn with staff and community at the table, publish what you learned, adjust, then expand. Be explicit that AI assists, not replaces, people, and that it works best when it supports human judgment.
State the problem you are trying to solve and the service outcome you aim to strengthen. Keep safeguards and transparency simple and visible, including human review, boundaries for privacy and data use, and real feedback channels. Measure against a baseline. Reinvest savings in people through training, wage progression for new skills, and hiring in frontline and understaffed areas. Scale what meets your outcomes without compromising privacy or service quality, and report what you changed.
AI Disclosure: I knew what I wanted to write about for this post, starting with a detailed set of thoughts that I wanted to incorporate. I asked ChatGPT-5 Thinking to turn those thoughts into a draft post based on the tone and outcomes I was looking for, then GPT-5 Thinking and I worked through a few more iterations. I shared the new version with Karen and incorporated her feedback. I then worked another draft with GPT-5 Thinking and made additional edits to make sure it stayed true to my voice.
Note: This post offers general guidance to help organizations plan AI work. It is not legal advice. Your approach should reflect your organization’s mission, values, applicable policies and laws, labor agreements, procurement rules, privacy and security standards, accessibility needs, and public records obligations. Please work with your counsel and internal teams to tailor what’s right for your organization.