Services that rate charitable groups, such as Charity Navigator, GuideStar, Ministry Watch, and the American Institute of Philanthropy, have altered the way some foundations and other donors look at nonprofits. The information they provide, however, must be used with caution.
First of all, a funder must understand what a given rating actually rates, because the services differ significantly. Second, many evaluators base their ratings on financial data. Yet while a grantee’s financial health is important, such data cannot answer some of the most important questions a funder should ask of any nonprofit:
· Is this nonprofit’s mission a wise way to address the social problem(s) it’s confronting?
· Does its mission fit with our donor intent?
· Assuming the group’s mission passes muster, is it effectively achieving that mission?
Some evaluating groups do look at more than just financials, asking questions like, “Does the nonprofit have an independent board of directors?” But these evaluators, too, tend to avoid the overarching questions just listed. In general, veteran funders tell Philanthropy, outside ratings can help donors avoid the worst nonprofits but are less helpful in identifying the very best groups.
With these limitations in mind, let’s examine the three major types of nonprofit evaluation groups.
Ratings Organizations—These groups generate a “grade” for each charity they evaluate, usually by calculating financial ratios that let them quantify which nonprofits are more “efficient” in fundraising, administrative costs, and other monetary areas. Charity Navigator—the most radical proponent of this approach—rebuffs critics who argue that the way a nonprofit carries out its mission is more important than its financial ratios. “People need to get over their mistrust of financials,” Charity Navigator executive director Trent Stamp tells Philanthropy. “It’s the key.”
Seal of Approval/Accreditation—These groups have a set of standards charities must meet to receive a “seal of approval.” They typically do not focus on financial ratios but instead look at audited financial statements and their footnotes, and check that boards, mission statements, staffing, and other areas of a nonprofit meet preset criteria of acceptability. The Maryland Association for Nonprofits’ Standards of Excellence program is one such effort. It grants a seal to organizations that meet 55 preset standards for operational and financial accountability. The program is being replicated in other states and hopes to be nationwide in a few years, according to executive director Peter Berns.
Informational—The biggest name in this world, GuideStar, doesn’t fit neatly into either of the two categories described. Early on, GuideStar was little more than a clearinghouse for information, but today it’s entering the world of evaluations. Charity Check, a program GuideStar launched in 2001, provides information both on a nonprofit’s financial health and how it compares to nonprofits nationwide that share a similar mission. Charity Check is meant to generate revenue, and to offer a balance to Charity Navigator. “Missions and programs are more important than financials,” says GuideStar’s Suzanne Coffman. “The best way to determine if an organization is working is to see how well it’s accomplishing its mission.”
Evaluating Financial Health
There is little agreement among rating services about how to derive even the most basic numbers for understanding a charity’s financial health. Behind the various ratios and rating systems lie complex—and contradictory—formulas, theories, and equations. Donors who want to use them effectively must understand what is actually being measured, and more importantly, what questions are being ignored.
While this topic could fill volumes, four variables are particularly significant: (1) how evaluators figure the percentage of a nonprofit’s budget that goes to mission, (2) how they handle a nonprofit’s cash on hand, (3) how gifts-in-kind are treated, and (4) which financial documents are examined.
The public has been conditioned to ask, “How much of my dollar goes to the charity’s mission?” It’s a valid question, but the answers aren’t clear-cut. There are many good reasons why a nonprofit may put more or less of a donor’s dollar directly to mission. Older charities with an established donor base can generally deliver more donor dollars to mission than newer charities, which must spend more money to promote their cause and recruit new donors. Endowed organizations tend to spend less on fundraising. The list of legitimate differences is extensive, and some specialized groups are especially shortchanged in conventional analysis. For instance, think tanks whose mission is to move the public policy debate must work hard, and spend considerably, to find donors to support their niche, even if they have outstanding records of success in changing the nation’s course in welfare policy, school reform, and the like.
Further complicating the issue, each evaluating group has a different threshold for what constitutes an acceptable level of contributions to mission. (See nearby chart.) The American Institute of Philanthropy (AIP) sets 60 percent as the minimum threshold. The BBB Wise Giving Alliance sets it at 65 percent. Charity Navigator gives its highest score to groups that allocate over 75 percent, and no points to those that allocate less than 50 percent.
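Because the thresholds differ, the very same percent-to-mission figure can draw different verdicts from different raters. The sketch below reduces each rater to the single cutoff described above; the actual rating formulas weigh many more factors, so this is an illustration only.

```python
# Simplified verdicts based only on the percent-to-mission thresholds
# described above. Real rating formulas are far more involved.
def verdicts(program_pct):
    """Each rater's verdict for a given percent-of-budget-to-mission figure."""
    return {
        "AIP": "pass" if program_pct >= 60 else "fail",
        "BBB Wise Giving Alliance": "pass" if program_pct >= 65 else "fail",
        "Charity Navigator": ("top score" if program_pct > 75
                              else "no points" if program_pct < 50
                              else "partial credit"),
    }

# A charity directing 62 percent to mission clears AIP's bar,
# misses the BBB's, and earns only partial credit from Charity Navigator.
print(verdicts(62))
```

The same charity can thus "pass" one service and "fail" another without changing a single line of its books.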
Not only are the thresholds different among charity raters, but the supposedly straightforward calculations used to arrive at a nonprofit’s numbers also differ. Some evaluators use expenses as the denominator to figure the portion of each dollar going to mission. Some use income. Some use contributed income, related income, or total income. Each produces a different result.
The American Bible Society (ABS) is a good case study. An established organization, ABS is analyzed by many charity oversight groups. For 2002, evaluators who use contributed income as the denominator show ABS spending a whopping 33 percent of every dollar raised on fundraising. Those who use expenses as the denominator, however, report a very respectable 11 percent. For still others, which use total income, the number is 14 percent. As these significantly different figures are fed into the various formulas, very different rankings emerge.
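The denominator effect is pure arithmetic. The dollar figures below are invented to reproduce the article's three percentages; they are not ABS's actual financials.

```python
# Hypothetical figures (in $M) chosen to mirror the ABS example above;
# not the charity's actual financials.
fundraising = 33.0          # spent on fundraising
contributed_income = 100.0  # donations received
total_expenses = 300.0      # all spending
total_income = 236.0        # donations plus other revenue

def ratio(numerator, denominator):
    """Fundraising cost as a whole percentage of the chosen denominator."""
    return round(100 * numerator / denominator)

print(ratio(fundraising, contributed_income))  # 33 -> looks alarming
print(ratio(fundraising, total_expenses))      # 11 -> looks respectable
print(ratio(fundraising, total_income))        # 14 -> somewhere in between
```

One set of books, three different "fundraising percentages," depending entirely on which denominator the evaluator prefers.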
A relatively new benchmark for charity measurement is the amount of cash a charity is holding. Some evaluators penalize a charity for sitting on cash reserves. The BBB Wise Giving Alliance, for example, says unrestricted net assets are not to exceed three times the total expenses budgeted for the current year, or spent in the previous year. The American Institute of Philanthropy caps reserves at less than three years’ operating expenses. These criteria focus on the cash ceiling but pay little attention to the cash floor. Yet by keeping cash reserves low, a nonprofit can damage its long-term sustainability. For this reason, Charity Navigator is bucking the philosophical trend by granting better scores to charities that hold more money in reserve, believing that more cash means more sustainability. Here the standards are not just different but contradictory; Charity Navigator rewards an organization for the same condition that others penalize.
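The contradiction can be made concrete. The BBB rule below follows the three-times-expenses ceiling described above; the Charity Navigator function is a simplified stand-in for its reserves-are-good view, not its actual scoring formula.

```python
def bbb_reserves_ok(unrestricted_net_assets, annual_expenses):
    """BBB Wise Giving Alliance rule as described above: unrestricted
    net assets may not exceed three times annual expenses."""
    return unrestricted_net_assets <= 3 * annual_expenses

def navigator_reserve_score(unrestricted_net_assets, annual_expenses):
    """Simplified stand-in for Charity Navigator's view: larger reserves
    relative to spending earn a higher score (capped here at 5).
    This is not its actual formula."""
    years_of_reserve = unrestricted_net_assets / annual_expenses
    return min(years_of_reserve, 5.0)

# A charity holding four years of expenses in reserve ($40M on a $10M budget):
print(bbb_reserves_ok(40, 10))          # False -> penalized by the BBB
print(navigator_reserve_score(40, 10))  # 4.0  -> rewarded by Charity Navigator
```

The identical balance sheet fails one rater's standard and improves its standing with another.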
Virtually all the highest-rated charities receive a significant percentage of their income as gifts-in-kind—materials such as food or medical supplies—and the explanation has to do with fundraising. In general, it costs less, sometimes a lot less, to raise gifts-in-kind than to raise cash. Because accounting records don’t distinguish between the two types of gifts, the charity with a higher percentage of its grants as gifts-in-kind will usually score better than the charity that receives mostly cash contributions. The higher-rated group is not necessarily better or more efficient; it just enjoys lower fundraising costs.
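A hypothetical pair of charities shows the effect. In the sketch below, both pay identical per-dollar costs to raise each kind of support (20 cents per cash dollar, 2 cents per in-kind dollar); all figures are invented for illustration.

```python
# Two hypothetical charities with identical per-stream fundraising costs:
# cash costs 20 cents per dollar to raise, gifts-in-kind only 2 cents.
def fundraising_pct(cash_raised, inkind_raised, cash_cost, inkind_cost):
    """Blended fundraising cost as a whole percentage of all support raised."""
    total_raised = cash_raised + inkind_raised
    total_cost = cash_cost + inkind_cost
    return round(100 * total_cost / total_raised)

# Charity A: mostly cash gifts ($90M cash, $10M in-kind)
print(fundraising_pct(90, 10, 18.0, 0.2))  # 18 percent
# Charity B: mostly gifts-in-kind ($10M cash, $90M in-kind)
print(fundraising_pct(10, 90, 2.0, 1.8))   # 4 percent
```

Charity B looks more than four times as "efficient," even though neither organization raises either kind of gift any more cheaply than the other; the gap comes entirely from the mix of support.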
Charity Navigator and GuideStar try to compensate for this by grouping charities together by type. That makes ratings fairer, but it’s an imperfect solution because some large charities are conglomerates that receive a mix of income types and could be grouped in several categories. Some may receive 30 percent to 40 percent of their income as gifts-in-kind, while others in the same grouping may receive up to 90 percent.
The greatest weakness in the financial information that many groups use to determine a nonprofit’s well-being is that it comes from the nonprofit’s IRS tax return, the Form 990. The General Accounting Office (GAO) believes the reporting error rate on Form 990s is extremely high; for example, in a recent year 64 percent of 990 filers who received public donations either reported zero fundraising costs or left that line blank. These and other unreliable numbers are no doubt largely the result of many charities’ having their Form 990s prepared by non-professionals, or by professionals unfamiliar with nonprofit management.
The GAO also urges caution about relying too heavily on ratios and spending efficiency; it adds that how well a charity accomplishes its mission is an important aspect of its worthiness. Although everyone admits the Form 990 has flaws, some evaluators defend it as the only public document common to most nonprofits. Besides, says GuideStar’s Suzanne Coffman, “It’s the only financial document every nonprofit files with the IRS under penalty of perjury.”
Other groups, such as the Wise Giving Alliance, Ministry Watch, and the Evangelical Council for Financial Accountability (where I work), prefer using audited financial statements. These generally inspire higher confidence in the accuracy of the numbers, since audited statements are prepared by state-licensed practitioners.
For all the enlightenment that financial information provides, it does nothing to help a donor understand how well a nonprofit is organized. For example, it doesn’t reveal anything about how the group’s board functions, whether it has internal conflicts of interest, or how responsible the group’s fundraising habits are. A charity could score high in any rating system, yet have rampant conflict of interest among its board and staff. It could have a low fundraising percentage but have manipulative, even dishonest, content in its fundraising materials. “Sweepstakes” type fundraising that preys on the elderly, for instance, doesn’t show up in any financial ratings calculation.
Over its 25 years, my own organization, the Evangelical Council for Financial Accountability (ECFA), has learned that these areas represent the greatest potential for serious problems. Deficient board governance led to the United Way’s problems a decade ago; its “blue ribbon” board of directors was not aware the CEO was using charity funds for illicit activity and for personal gain. Inadequate board governance allowed the Reverend Jim Bakker to misuse millions of dollars contributed to his PTL ministry. Both men eventually went to prison, but the cost in increased public cynicism was enormous.
ECFA was founded on a “seal of approval” concept. Organizations must meet all seven of our standards to carry the seal. We actively dismiss organizations from membership when they fail to meet the standards, and we have grown in numbers precisely for that reason. Considerable attention is paid to standards compliance and enforcement, including on-site inspections by experienced professionals.
Of course, our approach is not above criticism. Many people argue that ECFA lacks objectivity and independence because we are supported largely by fees from the charities we monitor. I understand the argument but respectfully disagree; the auditing industry operates the same way. That doesn’t, as recent events have shown, make all audits perfect, but even the accounting profession’s harshest critics don’t want to scrap them entirely.
Another seal of approval group is the BBB Wise Giving Alliance, the recently reorganized arm of the Philanthropic Advisory Service of the Council of Better Business Bureaus, made stronger by its merger with the old National Charities Information Bureau. It has recently completed a three-year project to develop new standards for nonprofit accountability. The Wise Giving Alliance standards go beyond financial ratios to address board governance, fundraising practices, disclosure and transparency, and use of funds. Just this year the alliance unveiled its new seal of approval program. Like ECFA, the Wise Giving Alliance has been criticized because it takes fees from the organizations it monitors.
Some “seal of approval” groups are going beyond recognizing excellence, using their accreditation systems to build capacity. The Maryland Association for Nonprofits, on whose standards advisory board I sit, is one example. This group has put together a set of 55 standards in eight areas (mission and program, governing board, conflict of interest, human resources, financial and legal accountability, openness, fundraising, and public policy and public affairs). Currently, just 35 groups have earned the seal. It takes a nonprofit a year on average to complete the application, and two or three months for the Maryland Association to complete its review. Very few pass on the first attempt. Where a group falls short, the association works with it to come up to standard.
Accrediting groups are not without built-in weaknesses. One problem is that no distinction is drawn between a charity’s compliance with major standards and with less significant ones. Say two groups apply for a seal of approval. The first meets all standards save a relatively minor point, such as not providing a statement of functional expenses with its audited financial statement. The second also meets all but one standard, but a much more serious one. The public will learn only that both groups “failed to meet all standards.” In addition, a group that has earned a seal and then slipped in its standards will carry the seal until the next period of assessment—months or even a year or two away. And the larger the number of seals an evaluator has granted, the harder it is to ensure continuing compliance.
The Problem of Funding
The financial stability of charity oversight groups is probably more precarious than that of many charities they monitor. The public may like ratings, but it doesn’t seem to like them enough to pay for them, especially on the Internet. For nonprofit evaluation groups that work primarily over the Web, generating enough revenue is a constant concern.
And so, as we’ve seen, the differing ways evaluators support themselves become a source of criticism. The newer Charity Navigator and Wall Watchers advertise that they are donor-supported. In truth, they are primarily single-source funded and are often perceived as one-man shows, underwritten by the founder/CEO. The older American Institute of Philanthropy is in a similar situation. How long such groups can operate without more reliable income streams is uncertain.
The leading informational service, GuideStar, has since its inception received most of its funds from foundations. But foundations are not inclined to fund projects indefinitely. While GuideStar continues to secure funding, it is branching out into new services to create a revenue stream for the day when foundation support dries up.
No Substitute for Wisdom
In sum, no evaluating group, however helpful, can take the place of a funder’s practical wisdom, nor should monetary evaluations lead anyone to neglect what matters most: a nonprofit’s mission and effectiveness. To the extent that evaluators can prod charities to measure their own effectiveness and communicate it to donors, they will do much good. One example of such an effort is the Acton Institute’s Effective Compassion Program, which highlights charities that measure and achieve their mission of helping the disadvantaged. The program’s database catalogues over 150 top-performing groups around the country and recently received a grant from the W.H. Brady Foundation to expand its size considerably.
The best funders will use every tool they can find to help them guide their work, including rating services. But they will be careful to evaluate the evaluators, and they won’t put their final trust in anyone’s judgment but their own.
Paul D. Nelson is president of the Evangelical Council for Financial Accountability.