The government ministries are very keen on amassing statistics. They collect them, raise them to the nth power, take the cube root, and prepare wonderful diagrams. But you must never forget that every one of these figures comes in the first place from the village watchman, who just puts down what he damn well pleases.
—Attributed to Sir Josiah Stamp, an English economist of the early 20th century
Evaluation is important; everybody agrees on that. But how should foundations think about evaluation? Indeed, what is evaluation? Measurement is complicated because evaluations draw on a range of methods, focus on different subjects, and speak to different audiences. In addition, there are two main targets on which to focus, three broad evaluation strategies, and three different candidates to conduct the evaluation.
When thinking about conducting an evaluation, foundations must first choose the target or focus of their study. The majority of foundation evaluations focus on the nonprofit organizations they fund, sometimes to build accountability into the relationship, sometimes to inform future giving. In the best cases, the choice to focus on recipient organizations is part of a coherent strategy aimed at improving decision making within the foundation. In the worst cases, it is a perfunctory process that merely fulfills a foundation requirement.
To ensure that the evaluation speaks directly to and informs foundation practices, the target of the study sometimes shifts away from the recipient organization to the foundation itself. This second target of evaluation is far more elusive, since it is difficult to build support for turning the focus of the evaluation away from recipient organizations and toward the foundation’s own programs and activities. There are also substantial methodological issues raised by this approach, since it is more difficult to reach global assessments about a foundation’s performance than it is to see whether or not grant recipients have been successful in fulfilling the terms of their grants.
Three Types of Evaluation
Which brings us to the question of methodology. Beyond selecting a target, foundations have to make a decision about what kind of evaluation is needed. While there are myriad names for different types of evaluations, most fall within three main categories: consultative, process, and outcome. Consultative evaluations are the most informal and the most closely aligned with improving the subject under study. Thus, a consultative evaluation of a nonprofit organization will tend to emphasize usable knowledge that can be linked to improved performance. Under this type of evaluation, the subject and evaluator often work closely together with the goal of understanding what has worked well and what has not. Often the product of this kind of evaluation is a strategic plan for improving performance.
Process evaluations have a different goal and logic to them. They document and measure program implementation and seek to assess the capacity of the organization to achieve its stated objectives. Process evaluations will often measure the quantity and quality of services delivered, sometimes through surveys of clients or through on-site observations of service delivery systems. In carrying out a process evaluation, it is not uncommon for the subject matter to be either complex or new, and for the evaluation to be a key first step toward studying the way services are delivered. In this sense, the goal of a process evaluation is to dissect and understand implementation and to provide a window into the program delivery world.
By contrast, outcome evaluations focus on what kind of results a program ultimately achieves. Since process evaluations are sometimes criticized for being too focused on the minutiae of service delivery and for missing the bigger picture, evaluators often include measures of impact and outcome. Whether the chosen outcome is defined in terms of short- or long-range objectives, it must adequately represent the underlying purposes that are being evaluated. The task of defining the desired outcomes for any given project is both difficult and critical, since this decision will set the terms for the measurement to follow.
One of the challenges to defining outcomes is to think through fully what a program is really trying to achieve, beyond what may or may not have been explicitly stated when the program began. Thus, for example, the managers of a new training program may initially believe that the outcome they are trying to achieve is the placement of graduates into full-time jobs in the month following their completion of the program. This short-term outcome may evolve, however, as the organization becomes more familiar with the terrain it is negotiating. Later the organization may redefine the real outcome as stable employment for three consecutive years. The success or failure of outcome evaluations depends heavily on the quality of the defined outcomes.
Who Should Evaluate?
After deciding whether to focus on the recipient organization or the foundation and choosing the appropriate method of evaluation, donors must decide who will conduct the actual evaluation. Here there are really only three choices. The first is to encourage the recipient organization to carry out the evaluation, using its own staff. Nonprofits often see this as the least threatening route, since control over the evaluation is never wrested away from the organization. This approach can be problematic for foundations, as evaluation can blend into a grant report, which in turn mutates into requests for renewed support.
Foundations can also conduct the evaluation with their own staff. While foundation program officers routinely conduct site visits and evaluate program performance, such activity is usually intended to supplement and verify information contained in the grant proposals. The problem with having foundation staff conduct evaluations of programs they have overseen is simple, if easy to overstate: the built-in conflict of interest that such arrangements entail. The growth of philanthropic careers has exacerbated this problem by making foundation staff conscious of the outcomes of their grants. With professional advancement hanging in the balance, it is hardly reasonable to ask foundation managers to evaluate the initiatives for which they have advocated.
This then points to a third option, the use of outside or independent evaluators. Many foundations currently engage professors, consultants, and practitioners to conduct independent evaluations of larger grant initiatives. By looking outside the foundation for evaluative input, donors are able to break through the incentives of recipient organizations and foundation staff that run counter to the need of foundations for full and frank performance assessment.
While much has been written on the use of outside evaluators, and while some significant independent evaluations of foundation programs have indeed been conducted, the findings have rarely been circulated widely or incorporated into the decision making of the field as a whole.
Thus, as foundation managers look at the question of evaluation, they need to answer three essential questions. First, what will be the target or focus of the evaluation? Second, what kind of evaluation is appropriate? And finally, who will implement the evaluation?
In making difficult decisions about evaluations, foundations must weigh the consequences of their choices for the audiences to which their evaluations will eventually speak. No evaluation will meet the needs of all audiences. But by asking the core questions about the evaluation—what it needs to track, who is best positioned to conduct it, and what type is most appropriate—foundations can take a critical step toward doing more than just sending checks into the nonprofit darkness.
The Five Dimensions of Philanthropic Impact
Underlying all evaluation must be an explicit sense of what is substantively important and worth measuring. To be sure, defining and reckoning value is a complex and difficult task. With pressure coming from many sides, few foundation managers are really able to spend much time thinking through what impact they are trying to have through their grants. Nonetheless, there are five main dimensions to philanthropic impact that merit measurement and evaluation.
DIMENSION ONE: Adding to Knowledge
Philanthropic efforts can construct new knowledge. Even when a program fails to improve its clients’ lives, it can still succeed insofar as it creates knowledge through a thorough and appropriate evaluation. Knowledge created by philanthropy can be useful to nonprofits, foundations, and even government, where it can shape public policy. It is shortsighted to limit the definition of valuable knowledge to successful pilot projects with impressive documented results.
DIMENSION TWO: Improving and Enriching the Lives of Clients
In addition to creating new knowledge, philanthropy can change lives. In the vast majority of grants involving services, assessment focuses on whether a given program produced significant changes in the client population. Focusing on the clients of nonprofit organizations leads to an evaluation that is both customer-centered and directed at the end result of a grant. It is the point at which philanthropic funds either translate or do not translate into changed human experiences.
In many ways, this dimension is the easiest for foundations to track and measure. It is also a popular way to think about the role of philanthropy because it involves a relatively simple theory of philanthropic impact. That is, when donors make a grant, change occurs for clients if and when recipient organizations deliver well-managed and well-conceived programs. Foundations also like the idea of focusing on client impact because it draws them closer to the front lines of charitable activity and begins to counter the criticism of grantees that foundations can be self-interested, high-handed, and unresponsive.
DIMENSION THREE: Building Organizational Capacity
Beyond reaching and assisting targeted populations, evaluation can also shape the staff and structure of the organization that delivers services. Organizational capacity includes all of the human and physical resources that organizations require to be effective. Grants can fall short of the promise of creating programmatic impact for clients but succeed in developing the capacity of an organization to act effectively in the future. For nonprofits, this can mean acquiring the staff or technology needed to achieve their missions. For foundations, this can mean building the internal resources needed to make more successful grants in the future and the infrastructure needed to identify and follow through on grantmaking opportunities.
Clearly, building capacity is far less immediately satisfying than producing and measuring programmatic impact on clients. But an exclusive focus on the client can be a false promise. There are times when organizational capacity needs to be increased before services can be delivered at either the quality or quantity that clients require. Though less commonly used as a barometer of philanthropic impact, changing the capacity of nonprofit organizations can often be a crucial intermediary step toward improving clients’ lives.
DIMENSION FOUR: Creating Social Capital
A fourth dimension of philanthropic impact is at the community level. Grants can have effects that cross the boundaries of nonprofit programs and build strong communities. Robert Putnam has used the term “social capital” to describe the array of affiliations and relationships within communities that build trust and norms. Social capital reinforces civic engagement and contributes to strong democratic institutions.
The idea that grants can build social capital and norms of reciprocity within communities is different from either the notion of client impact or capacity building. Instead of looking at the target population or even the nonprofit organization, social capital focuses on the quality and coherence of the community within which donors make a grant. A grant can unite a community to work on a pressing problem or issue and, thus, create enduring ties that transcend the short-term impact of the grant. This fourth dimension of philanthropic impact, then, recognizes that grants can change neighborhoods and communities as significantly as they can change their intended target clientele.
DIMENSION FIVE: Expressing Private Values and Interests
The fifth and final dimension of philanthropic impact is the most critical. In defining what is valuable and what is worth pursuing through philanthropy, foundations should not exclude the personal interests and commitments of the donor, whether alive or deceased. In fact, foundations should evaluate all grants in terms of how well they meet the goals, values, and interests of the donor, who, after all, stands at the start of the grantmaking process. It is only reasonable that the end results of philanthropic giving should reflect the impulse that set the whole process in motion.
There are at least two ways to verify that a given grant aligns with the donor’s interests and values. The first is to focus on any of the other four dimensions and determine whether the outcome fits with the intent and purposes of the donor. The second is to define the values implicit in the development and delivery of the program under study and compare them to those of the donor. The latter approach is more difficult than taking a proxy measure of performance but is often more meaningful. Either way, whether assessing the substantive values of a grantee or using proxy measures of a grantee’s activities, foundations should bring their funded activities into alignment with the ideas, commitments, and values of the donor.
This last dimension of philanthropic impact is the easiest to overlook because instrumental goals appear easier to track and assess than the value content of programs. However, it is the plurality of values and ideas behind philanthropic giving that constitute philanthropy’s reason for being and give it its strength. For this reason, foundation staff and trustees should take care that the values of recipient organizations align with those of the funder.
In the end, donors must decide which types of philanthropic impact are most important to them and then seek out the best and most appropriate measurement strategies. To do less is ultimately a failure to act responsibly. After all, donors, nonprofit managers, and policymakers all stand to benefit from honest and open evaluation. In carrying out their work, foundations need to constantly ask themselves how they can best design a measurement strategy that draws on the right methodology, the right kind of evaluator, and the right target or focus for study.
If successful in these tasks, philanthropy may well begin to define for itself a new and much needed variation on the old Delphic motto, one that will stress the importance of a constant search for a better understanding of the many ways foundations shape society.
Peter Frumkin is assistant professor of public policy at Harvard University’s John F. Kennedy School of Government.