What is AI, Really?

AI technology is moving forward at lightning speed. For those of you in mission-driven organizations, it’s understandable to hesitate, to wait and see, or to wonder, “Where do I even start?” You serve real people, impact lives, manage sensitive information, and operate with public trust.

At the same time, working in service of the public, you are grappling with increased demand for services; limited staff capacity; rising expectations for responsiveness, transparency and equity; and the need to measure and demonstrate impact.

AI can definitely help. It can save staff time on repetitive tasks, find insights from complex data, help you communicate more effectively, and focus human effort where it’s most needed.

However, AI isn’t neutral. AI tools are trained on data, and that data may carry biases, gaps, and assumptions. At the end of the day, a tool is only as good as the information used to train it and the judgment of those using it. That’s why team members from across the organization, not just IT, need to understand how these tools work and be trained to use them responsibly.

And the values and mission of your organization must guide the implementation and use of AI, not the reverse. Guiding values also mean protecting confidentiality. Many AI tools work by processing large amounts of data, and your data may contain sensitive information. Before adopting any tool, ask: what information is being shared, where is it going, and how is it being stored? Safeguarding confidentiality isn’t optional; it’s a core part of earning and keeping public trust.

What’s the Difference Between Automation & AI?

Automation: Think pre-set rules or workflows. For example, a system that automatically sends a reminder to renew a permit or membership.

AI: A tool that makes decisions based on data or patterns. For example, the junk mail sorting in your email.

You can automate work in your organization with or without AI integration. Automation saves time. AI supports decisions but must be used thoughtfully.
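If it helps to see the distinction concretely, here’s a toy sketch in Python. The 30-day rule and the “often late” threshold are purely illustrative inventions; a real AI tool would use a model trained on data rather than a hand-set cutoff.

```python
from datetime import date, timedelta

# Automation: a fixed, pre-set rule -- remind when a permit or
# membership expires within 30 days. It behaves the same way forever.
def automation_reminder(expiry: date, today: date) -> bool:
    return (expiry - today) <= timedelta(days=30)

# AI-style decision: a toy stand-in for a model that learned from data.
# Here it flags members whose past renewals were, on average, late.
def learned_reminder(days_late_history: list[int]) -> bool:
    avg_late = sum(days_late_history) / len(days_late_history)
    return avg_late > 7  # hand-set threshold standing in for a trained model

today = date(2025, 6, 1)
print(automation_reminder(date(2025, 6, 20), today))  # True: within 30 days
print(learned_reminder([0, 2, 15, 20]))               # True: often renews late
```

The rule-based function never changes; the data-driven one changes its answer as the underlying data changes, which is both the power and the risk of AI.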

What Is AI – In a Nutshell?

In the big picture, artificial intelligence is technology that allows computers to do tasks that usually require human thinking, such as understanding language, spotting patterns, or making decisions.

Here are some examples of how AI shows up:

Chatbots: Computer programs that simulate a conversation, usually to answer questions or guide users through a process.

Think of a chatbot as a digital front desk assistant. You ask a question, it responds with helpful information, no waiting in line or on hold. It’s not a person, but it can handle routine questions or tasks so people can focus on more complex needs.

Examples of What They Can Do:

  • Answer simple questions on a website (“what’s the status of my application?”)

  • Guide someone through filling out a form

  • Direct employees to the right department or policy

  • Offer 24/7 help for common issues without needing live staff

Note of Caution: Chatbots can frustrate users if they don’t recognize plain language or if they give incomplete answers. And while AI-powered chatbots can sound smooth, they sometimes make things up or give answers that aren’t accurate (often called “hallucinating” in AI terms). It’s a bit like calling an automated phone system that sounds helpful but can’t quite understand what you need. You end up stuck in circles, wishing you could talk to a real person. That’s why it’s important to set clear limits on what your chatbot handles and leave a door open to human support.

Predictive Analytics: Tools that use past data to make informed, data-driven guesses about what might happen next.

Think of it as looking at traffic patterns. By studying where congestion has happened before, you can plan the best time and route for your next trip. Predictive analytics works the same way, spotting patterns so your organization can prepare for what’s likely ahead. It isn’t a crystal ball, but it gives you a data-informed way to guide decisions. 

Examples of What They Can Do:

  • Flag projects that are likely to fall behind schedule based on past delays, staffing, or seasonal cycles

  • Forecast a reduction in volunteers during holiday or back-to-school seasons

  • Predict which neighborhoods will see increased demand for cooling centers during an upcoming heatwave

  • Analyze past submissions to predict which grants are most likely to be awarded based on alignment, timing, and funder focus

Note of Caution: Predictions are only as good as the data. If the data leaves out certain groups or circumstances, the forecasts may miss the mark, or worse, reinforce inequities. It’s like your favorite map app that keeps sending you down a closed road because it didn’t get the update: annoying on a trip, but far more serious when decisions about resources and people are at stake.
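For the curious, the core idea fits in a few lines of Python. The volunteer counts below are entirely made up, and real predictive analytics would use far richer data and a trained statistical model rather than a simple average.

```python
# Hypothetical volunteer counts by (year, month)
history = {
    (2022, 12): 40, (2023, 12): 35, (2024, 12): 38,  # holiday-season dip
    (2022, 6): 70,  (2023, 6): 68,  (2024, 6): 72,
}

def forecast(month: int) -> float:
    """Naive forecast: average of the same month across past years."""
    values = [v for (_, m), v in history.items() if m == month]
    return sum(values) / len(values)

print(forecast(12))  # about 37.7 -- plan ahead for the seasonal dip
print(forecast(6))   # 70.0
```

Notice how the caution applies even here: if one December were missing or miscounted in the history, the forecast would quietly inherit that gap.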

Recommendation Systems: AI that suggests options tailored to someone’s needs. Think of it as a trusted guide who looks at your situation and points you toward what’s likely to be most helpful right now, drawing on what has worked for others in similar circumstances.

Examples of What They Can Do:

  • Suggest training or certification programs to job seekers based on their skills and career goals

  • Recommend nearby resources like food programs, housing support, or mental health services based on location and demographics

  • Help a library suggest books, classes, or events to patrons based on their past interests

  • Propose volunteer opportunities that align with someone’s availability and past engagement

Note of Caution: Recommendations depend on the patterns in the data. If certain groups are underrepresented, the system may unintentionally steer opportunities toward those already well-served. It’s like when a streaming service keeps recommending the same type of shows you’ve already watched, even though your interests are broader. Mildly irritating for weekend viewing, but potentially harmful if people in need aren’t being guided toward a full range of available services.
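To make the pattern-matching idea tangible, here’s a minimal sketch with invented patrons and interests. Real recommendation systems use far more sophisticated similarity measures, but the underlying logic is the same.

```python
# Hypothetical patrons and the programs they've engaged with
patrons = {
    "ana":  {"job training", "resume help"},
    "ben":  {"job training", "resume help", "interview prep"},
    "cole": {"cooking class"},
}

def recommend(name: str) -> set[str]:
    """Suggest programs used by patrons with overlapping interests."""
    mine = patrons[name]
    suggestions = set()
    for other, theirs in patrons.items():
        if other != name and mine & theirs:   # any shared interest?
            suggestions |= theirs - mine      # suggest what they have extra
    return suggestions

print(recommend("ana"))   # {'interview prep'} -- via overlap with ben
print(recommend("cole"))  # set() -- no one shares cole's interests
```

The caution above plays out even in this toy: because no one else shares cole’s interests, the system has nothing to offer, and whoever the data underrepresents gets the weakest recommendations.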

Text Generation Tools: AI tools that, based on prompts you provide, can draft content such as emails, reports, or summaries. They’re like a writing partner who gives you a strong first draft. You provide the tool with a goal or some notes, and it creates something to get you started. From there, you refine and shape it into your own voice.

Examples of What They Can Do:

  • Draft a press release, meeting summary, or community update

  • Turn bullet points into a polished email

  • Summarize long reports into digestible takeaways

  • Help staff write policy drafts in plain language

Note of Caution: Generated text can sound confident but still contain errors, outdated information, or bias. Always review drafts carefully and never let the tool become the final editor of your organization’s voice. Think of it as a first draft from a new team member that’s a good starting point, but you’d always want to give it a careful review and make any needed edits before sharing it with the public.

Image Recognition: AI that can identify objects, people, or patterns in images or video.  Think of it as giving your computer a pair of eyes that can scan quickly and highlight what you might want to notice. These tools don’t replace human judgment, but they can help surface issues faster so team members can take a closer look.

Examples of What They Can Do:

  • Flag possible safety issues in building photos (like water damage, blocked exits, or missing signage)

  • Scan inspection photos to spot compliance issues that need review

  • Sort photos or videos by topic (“find all images of potholes”)

  • Auto-generate image descriptions or captions to improve accessibility for people with visual impairments

Note of Caution: These systems aren’t perfect. Misidentifications happen, especially in poor lighting or cluttered images, and mistakes can have serious consequences if decisions rely on AI alone. Human verification is essential, especially for safety-related tasks. It’s a bit like autocorrect on your phone: it usually gets the word right, but sometimes it swaps in the wrong one, amusing in a text to a friend but much riskier when decisions about safety and resources are on the line.

Agentic AI: AI that doesn’t just respond but can carry out tasks step by step across different systems. Think of it as a dependable teammate who doesn’t just give advice but rolls up their sleeves to get things done by drafting, scheduling, and/or pulling together information while checking in with you.

Examples of What They Can Do:

  • Automatically gather rental market data, analyze affordability thresholds, and prepare a briefing

  • Monitor grant opportunities, draft sections of applications, and alert the team of deadlines

  • Guide residents through permit applications, pre-checking forms for missing items before staff review

Note of Caution: Agents can act across systems, which means mistakes can scale quickly. Without proper oversight, they might send incomplete emails, delete records, or take steps that should involve human judgment. It’s like using an automatic mail sorter: it can process thousands of envelopes in minutes, but if the settings are wrong, it could send 5,000 letters to the wrong address before anyone notices.

Speech and Audio AI: AI that can turn speech into text (speech-to-text) or text into speech (text-to-speech). Think of it as a translator who can capture spoken words and turn them into written records you can search by keyword, or give written information a human voice in different languages.

Examples of What They Can Do:

  • Transcribe public meetings so staff and community members can quickly find where certain topics (like “housing” or “public safety”) were discussed

  • Provide voice navigation for websites, improving accessibility

  • Offer multilingual audio options for community announcements or hotlines

Note of Caution: Speech tools are improving, but they still stumble with accents, background noise, or technical terms. Without review, transcripts may introduce errors, and generated speech may sound mechanical or miss the intended tone. It’s like the friend who confidently mishears lyrics: fun at karaoke, less so in an official record of a public meeting.

Questions to Ask Before Using AI at Work

  • What problem are we trying to solve and is AI the right tool? A cross-section of staff, labor representatives, and community stakeholders should be at the table when answering this.

  • What data is being used to train or power the tool?

  • Does this tool protect the confidentiality of the people and information we are responsible for?

  • Who is accountable if it makes mistakes?

  • Could this tool create barriers for our staff in doing their work or our community in accessing programs and services?

  • How will we measure its impact on trust, both internally and with the public?

Thoughts to Carry Forward

AI can be a powerful ally for mission-driven organizations, but only when used with care. Each type of tool, from chatbots to predictive analytics to agentic AI, brings possibilities to save time, uncover insights, and serve communities more effectively. At the same time, every tool comes with limits and risks. For organizations like ours, the stakes are high: we are not just managing data or processes; we are impacting people’s lives, safety, security, and their confidential information. An inaccurate prediction, a misstep by an agent, or an unchecked bias in a recommendation isn’t just inconvenient; it can cause real harm. That’s why the responsibility is ours to ensure AI supports, not substitutes, human judgment; that it reflects, not distorts, our values; and that it strengthens, not weakens, public trust.

Sign up for news and tools to help mission-driven teams use AI with care. We’ll send the AI Reference, Responsibility & Readiness Workbook as your first resource. We protect your privacy.

I knew what I wanted to write about for this post, starting with a detailed outline and the tone I wanted. I asked ChatGPT (GPT-5 Thinking) to turn that into a draft, then we worked through a few versions and I made the final edits, refining and shaping along the way to make sure it was true to my voice.

Over the next few weeks, we will be talking about AI in mission-driven work and how to approach it thoughtfully. Please let us know what topics you’d like us to cover in the comments, on LinkedIn, or by filling out our Contact Form.
