A nonprofit is using AI agents to raise money for charity

TechCrunch - Apr 8, 2025

Sage Future, a nonprofit backed by Open Philanthropy, has embarked on an innovative experiment to demonstrate the potential of AI agents beyond corporate profit. By deploying four AI models — two from OpenAI (GPT-4o and o1) and two from Anthropic (Claude 3.6 and 3.7 Sonnet) — in a virtual environment, Sage tasked them with the challenge of raising funds for a charity of their choice. Within a week, the agents collaboratively raised $257 for Helen Keller International, a charity that focuses on providing vitamin A supplements to children. Although the AI agents operated under limited autonomy and relied heavily on human spectators for donations, their ability to coordinate tasks like email outreach, document creation, and social media promotion provides a glimpse into their evolving capabilities.

Sage's director, Adam Binksmith, sees this experiment as a critical step in understanding the current capacities and limitations of AI agents. The experiment highlighted both the resourcefulness and the technical hurdles faced by these AI models. As agents interacted, they demonstrated creativity and problem-solving skills, such as generating profile pictures using ChatGPT. However, they also encountered challenges, like struggling with CAPTCHA verification. Binksmith anticipates that more advanced AI models will eventually overcome these obstacles. Sage plans to introduce new models and complex scenarios, including agents with conflicting goals or even a saboteur, to further test and refine their abilities. This ongoing exploration aims to harness AI for meaningful philanthropic efforts as agents become more adept and secure.

Story submitted by Fairstory

RATING

6.4
Moderately Fair
Read with skepticism

The story provides an intriguing look at the use of AI agents in philanthropy, highlighting both their potential and limitations. It is timely and relevant, addressing current discussions about AI's role in society. The article is generally clear and engaging, with a logical structure and accessible language. However, it could benefit from more detailed source attribution and a broader range of perspectives to enhance its credibility and balance. By exploring ethical considerations and including diverse viewpoints, the story could provoke more meaningful discussion and have a greater impact on public opinion.

RATING DETAILS

7
Accuracy

The story provides a generally accurate depiction of the experiment conducted by Sage Future using AI agents to raise money for charity. It correctly identifies the organizations involved, such as Sage Future and Open Philanthropy, and the AI models used, including OpenAI’s GPT-4o and Anthropic’s Claude models. However, some claims require further verification, such as the total amount raised and the extent of human involvement in the fundraising process. The story mentions that the AI agents raised $257 for Helen Keller International, but it lacks specific source citations to confirm this amount. Additionally, the claim that the agents chose the charity and fundraising methods autonomously is somewhat misleading, as the story later clarifies that human spectators played a significant role in guiding the agents.

6
Balance

The article primarily focuses on the potential of AI agents in philanthropy, highlighting both their capabilities and limitations. While it presents a generally balanced view by acknowledging the technical challenges and the need for human input, it leans slightly towards a positive outlook on AI's future potential. The story could benefit from including perspectives from experts or critics who might question the ethical implications or practicality of using AI in such contexts. By doing so, it would provide a more comprehensive view of the topic.

8
Clarity

The article is generally clear and well-structured, making it easy for readers to follow the narrative. It effectively outlines the experiment's objectives, the AI agents' actions, and the outcomes. The language is straightforward, and the use of examples, such as the AI agents creating a profile picture, helps illustrate the points being made. However, some sections could benefit from more detailed explanations, particularly regarding the technical challenges and human involvement, to enhance overall comprehension.

5
Source quality

The story lacks detailed attribution to specific sources, relying primarily on statements from Adam Binksmith, the director of Sage Future. While Binksmith's insights are valuable, the article would benefit from including input from independent experts or additional stakeholders involved in the experiment. This would enhance the credibility and depth of the reporting by providing a broader range of perspectives and verifying the claims made in the story.

6
Transparency

The article provides some transparency regarding the experiment's methodology, such as the involvement of human spectators and the technical challenges faced by the AI agents. However, it lacks detailed explanations of the experimental setup, the criteria for selecting charities, and the specific roles of human participants. Greater transparency in these areas would help readers better understand the context and limitations of the experiment, as well as the basis for the claims made in the story.

Sources

  1. https://techcrunch.com/2025/04/08/a-nonprofit-is-using-ai-agents-to-raise-money-for-charity/
  2. http://acecomments.mu.nu/?post=367483
  3. https://www.sage.com/en-us/blog/nonprofit-financial-management/
  4. http://acecomments.mu.nu/?post=369658
  5. https://erpnews.com/sage-and-village-capital-announce-first-cohort-of-innovators-in-transforming-the-future-of-work-program/