Introduction
Nonprofit organizations are constantly exploring ways to streamline operations and amplify their impact. Artificial intelligence (AI), especially Generative AI, can be a valuable, low- or no-cost assistant in reaching these goals. However, when deploying AI, it is important to consider how to use it responsibly. This help center article will guide your nonprofit in responsibly integrating AI into its workstreams.
Understanding AI
Advancements in AI technology are reshaping the possibilities of daily nonprofit work. To prepare for using the latest technology effectively, it is important to understand three core concepts:
- Artificial Intelligence (AI): AI is a field of computer science focused on creating smart machines designed to simulate human intelligence, like thinking or learning.
- Generative AI: Generative AI is a specific type of AI that can generate new content such as text, images, or other media. Nonprofits can use Generative AI for tasks like drafting social media posts, synthesizing fundraising materials, combining various data sources, and more.
- Agentic AI: Agentic AI refers to systems built around AI agents. AI agents combine advanced AI models with access to your everyday tools, such as email, calendars, or documents, allowing them to take action on your behalf and under your control. For nonprofits, this might look like an AI agent that monitors incoming volunteer inquiries, checks the organization’s central calendar for orientation dates, and automatically sends invitation links to prospective volunteers while updating a central contact list.
Tools with AI technology act as digital collaborators, empowering staff to tackle complex tasks more efficiently, make data-driven decisions, and unlock new levels of creativity. Imagine having assistance from AI in drafting compelling grant proposals with research-backed insights, generating personalized outreach campaigns that resonate deeply with donors, or analyzing program data to pinpoint areas for improvement.
Responsible AI practices and strategies
While AI offers great potential for improving the efficiency and impact of your nonprofit organization, it's important to establish responsible practices when using this technology. Practices that mitigate bias, ensure accuracy, respect privacy, and disclose AI usage will help your nonprofit navigate AI responsibly.
Mitigate Bias
As with all data-driven systems, it's important to be mindful of potential biases that can influence the outputs from AI tools. Biases might come from the training data used to develop the AI or from the prompts and information that users provide. To mitigate bias and utilize AI tools responsibly, follow the ACT Responsibly framework:
A — Ask
Ask if this task is appropriate for AI. Start with low-stakes, repetitive tasks. Reconsider using AI when the task involves confidential data, high-stakes empathy, or "final" decision-making. AI should assist humans, never replace them.
C — Check
Check AI-generated outputs before using them. Always verify facts when extracting insights. Review the output for:
- Accuracy: Use reliable sources to confirm statistics.
- Bias: Ensure the content is fair and representative of your community.
- Mission Alignment: Ensure the tone matches your organization’s authentic voice.
T — Tell
Tell people when you use AI. Transparency means being open about how a tool was used. Disclose to your stakeholders when content was drafted by AI, and always take responsibility for the final output.
Ensure accuracy
As part of the “C” in the “ACT” framework above, checking for accuracy is key when working with AI tools because they can sometimes “hallucinate”: produce output that sounds plausible and confident but is false or fabricated. These hallucinations can occur for a number of reasons, including gaps in the data the AI tool was trained on.
To help prevent and catch hallucinations, use a human-in-the-loop approach, which combines machine and human intelligence to train, use, verify, and refine AI outputs. No AI tool has the depth of experience, practical knowledge, and empathy that nonprofit professionals possess.
Remember: AI output should always be viewed as a high-quality draft or suggestion that requires your expertise for critique and verification. The following strategies can help ensure accuracy when working with AI tools:
- Provide clear and specific prompts. When writing prompts for AI tools, use natural language in a clear and concise way, and provide plenty of context for your request. Avoid vague or open-ended prompts that could lead to inaccurate results.
- Fact-check outputs. Verify the accuracy of any information generated by AI. Use reliable sources from your own research to confirm the information.
- Be aware of limitations. Understand that AI is still under development and has limitations. For tasks requiring high degrees of accuracy, consider using resources other than AI to support the completion of your task.
Respect privacy
Whether you’re using AI to assist with basic tasks or create new content, consider how this usage may affect the privacy and security of the people in your nonprofit and those you serve. The following strategies can help ensure that your organization is mitigating privacy and security concerns when using AI.
- Review privacy policies. Read through AI tool documentation to learn about privacy safeguards the developers have established, including terms and conditions. Research to stay up-to-date on privacy regulations and best practices for AI usage.
- Limit your data input. Remove confidential, private, or personally identifiable information (PII), and sensitive organization or beneficiary data when interacting with AI tools.
- Use enterprise-grade security. Through Google for Nonprofits, eligible organizations can activate Workspace for Nonprofits, which includes the Gemini app and NotebookLM as core services with enterprise-grade security. Please review the specific compliance certifications for both NotebookLM and Gemini to understand the coverage for your organization. Learn more in the Generative AI in Google Workspace Privacy Hub.
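For organizations with technical staff, the "limit your data input" strategy above can be partially automated. The sketch below is a minimal, illustrative example of scrubbing a few common PII patterns (emails, phone numbers, Social Security numbers) from text before pasting it into an AI tool. The patterns and function name here are assumptions for illustration only; real PII detection requires much broader coverage (names, addresses, case numbers) and human review, and should not be relied on as a complete safeguard.

```python
import re

# Illustrative patterns only. Real PII detection needs broader coverage
# (names, addresses, case numbers, etc.) and a human review step.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before prompting an AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Donor Jane Doe: jane@example.org, 555-123-4567"
print(redact(note))
# The email and phone number are replaced with placeholders; the name is not,
# which is exactly why human review remains necessary.
```

A script like this can serve as a first pass, but staff should still read what they submit: context-dependent details (a beneficiary's story, a rare medical condition) can identify a person even with identifiers removed.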
Disclose AI usage
As part of the “T” in the “ACT” Framework above, disclosing your use of AI fosters trust and promotes ethical practices in your nonprofit organization. The following strategies can help ensure transparency when using AI for your work:
- Be open about usage. Make it clear whenever your nonprofit uses AI. Disclose to your users that you are using AI tools and why.
- Provide details. Explain what type of tool you used and your intention for its use. Offer any other information that could help anyone with access to your work evaluate potential risks.
Resources for Responsible AI usage
Nonprofit organizations of all kinds are navigating how to leverage AI for their work. Guidelines for responsible AI usage can help any organization determine policies and best practices. The following resources provide additional guidance on how to use AI responsibly.
- Fast Forward’s Nonprofit AI Policy Builder, a no-cost tool designed to help nonprofits create their own AI usage policy.
- Google's Responsible AI practices and AI Principles, which offer examples of practices and principles for the design and development of AI systems.