Assessment Matters

Establishing organizational metrics that matter starts with determining what really matters. For example, if governance doesn't require customer-based evaluation data, does it matter that volunteer leaders slip into the infallible posture of knowing exactly what members want and matching it to their personal plans for a legacy? Or that an idea gets launched with little or no assessment of internal readiness, only to discover the readiness wasn't there? What matters may be as straightforward as improving customer satisfaction or monitoring progress on goals. Whatever it is, metrics that matter are a function of what you think is important.

Things that matter most in providing long-term value to members include the ability to address their needs in a satisfactory manner, to demonstrate progress on worthy goals, and to launch strategies that adapt with intelligence. Metrics enhance these functions by calibrating performance against a desirable standard and then informing us of progress in ways that point to improvement opportunities.

The term "metrics" does not simply mean a set of measures. It means a system of measurement, and the difference is important. Organizations that ask managers to measure their progress in the absence of an organizing scheme may wind up generating a babble of numbers and foolery. A system integrates the measures so they relate to each other and show the way.

Several measurement systems have been developed in the private sector, but their migration into association management has been slow for several reasons, one of which is the curse of the republic. Elected members tend to think of themselves in the bipolar sense of being both producers and customers. As producers they don't need feedback because they are customers as well, and they know themselves. Where to begin with this one?

First, active members may represent the customers, but that doesn't make them representative. They have higher levels of appreciation for the mission and a compelling need to believe they are making a difference, which is otherwise known as a bias. Second, associations tend to be weak at segmenting their populations into meaningful need-based segments. To think that a unified community translates into a uniform marketplace is a common mistake. Third, the pricing structure of a single dues payment for a basket of goods hides the value of each item in the basket by distorting normal economic feedback. And finally, although mission relevance outranks profit margin in determining value in an association, that judgment still requires objective assessment. Hence, a system of measurement is required.

Two measurement systems will be addressed here, but there are certainly others to consider. The first began with Procter & Gamble and has evolved in ways that I have adapted to associations. The second began with a Harvard Business Review article and has since become a juggernaut of books, bandwagons, and believers.

The first operates on two levels and begins with an annual governance survey that maintains trend line data on member satisfaction. The value delivered by each program is summarized in a single statement, carefully crafted by objective parties to be free of spin and distortion. These statements typically fall along budgeted program lines, but the point is to capture distinct lines of service as seen by the average member. Collectively they constitute the association's value propositions.

The statements are evaluated on two scales. The first rates the importance of the proposition, and the second rates the association's performance. A third measure is calculated by determining the gap between the two. Findings are charted, and the gap between importance and performance ratings becomes apparent.
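The gap calculation described above is simple arithmetic, and a minimal sketch may make it concrete. The proposition names and ratings below are hypothetical illustrations, not actual survey data; assume mean ratings on a 1-to-5 scale averaged across respondents.

```python
# Importance-performance gap analysis: a minimal sketch with
# hypothetical mean ratings on a 1-5 scale.

# Each value proposition maps to (mean importance, mean performance).
ratings = {
    "Information-based products": (4.6, 3.8),
    "Student-practitioner links": (3.9, 3.5),
    "Fellows program": (2.8, 4.1),
}

# Gap = importance - performance. A positive gap flags an improvement
# opportunity; a negative gap suggests performance exceeds importance.
gaps = {name: round(imp - perf, 2) for name, (imp, perf) in ratings.items()}

# Chart-ready listing, largest gap first.
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:30s} gap = {gap:+.2f}")
```

A board reading this output needs no program-level detail: a large positive gap is a directive to improve, and a negative gap (as with the hypothetical fellows program line) invites a conversation about reallocating resources.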

For example, the American Society of Landscape Architects (ASLA) identified its value propositions in 14 brief but direct statements. A few examples of the association's value statements are:

  • Providing information-based products that address the business practices of member firms, and
  • Helping students and faculty maintain productive links with practitioners for the purposes of mutual education and efficient transition into professional practice.

A sampling of the ASLA membership reacts to the statements annually, rating importance and performance.

This type of assessment is referred to as a governance survey, because a policy-centered board uses it to monitor progress without being pulled into more elaborate evaluations. The importance measure gives an index that affects resource allocations. The gap analysis lets them issue directives regarding the need for improvements without getting into the details or prosecuting anyone. Managers are simply told to narrow the gap by the next annual survey or expect repercussions.

Much more information can be taken from this approach than space allows here, but one quick example reflects the budgeting implications. At ASLA the fellows program consumed considerable resources and was a favorite of the senior officers, who were mostly fellows. When it became clear that rank-and-file members felt performance exceeded importance, the scope was reduced and resources moved to other areas.

Comparative results from a subsequent survey showed that over a two-year period, most programs made significant gains in members' perceptions of performance. This gives a board the assurance needed to delegate management responsibility and remain focused on the big issues that drive adaptation.

The governance survey is one of two levels of measurement in this system. The second is geared to the unique attributes of each program and provides information that helps managers know what they must do to improve customer perception of performance. These measurements vary with each program area, depending on the nature of the transactions and the types of customers. Two types will be illustrated here to show how they vary.

The National Electrical Manufacturers Association (NEMA) has 80 sections that receive standard-setting services from an engineering department. While the customers in this value chain range from the committees that write standards to the people who benefit from compatible equipment, staff focused on the services they provide to the committees. An annual survey much like the governance survey was developed to track committee satisfaction. But rather than testing the department's assumptions about the value it offered, this effort asked the customers first to define the service attributes that drive their satisfaction.

Baseline data showed that one of the attributes regarding "time to develop a new standard" ranked third in importance, but had the greatest gap in performance. Since you can't improve everything at once, you start where you can make the greatest difference. This critical information led the department to focus its energy on expediting the process. The ability to identify satisfaction drivers is a skill that may require outside assistance at first, but over time this should become a staff core competency.
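The point in the paragraph above is that the improvement target is chosen by gap, not by importance rank. A short sketch, using hypothetical attribute names and ratings (only the "time to develop a new standard" attribute comes from the text), shows how the third-most-important attribute can still surface as the top priority:

```python
# Prioritizing satisfaction drivers by importance-performance gap
# rather than by importance alone. Ratings are hypothetical (1-5 scale).

attributes = [
    # (attribute, mean importance, mean performance)
    ("Technical accuracy of standards", 4.8, 4.5),
    ("Responsiveness to committee questions", 4.5, 4.0),
    ("Time to develop a new standard", 4.2, 2.9),
]

# Sort by gap, largest first: the attribute ranked third in importance
# has the widest gap, so it becomes the improvement focus.
by_gap = sorted(attributes, key=lambda a: a[1] - a[2], reverse=True)

print("Top improvement target:", by_gap[0][0])
```

The sort key is the whole design choice here: since you can't improve everything at once, effort goes where the distance between expectation and delivery is greatest.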

The government affairs department at NEMA worked out a set of process metrics with its member committee to essentially measure effort. These are not satisfaction measures, but in this area the ultimate customer is an indirect recipient of the service and therefore not in a position to identify the capabilities that need improvement. The governance survey indicates their overall satisfaction with the outcomes. These measures focus on the processes believed to be most responsible for that perception.

The balanced scorecard can only be touched upon briefly here, but the potential benefits to associations are significant, and a burgeoning body of literature is available. The theory holds that strategy should be thought of as having four linked elements that are measurable in nature and comprehensive in scope.

These elements begin with a statement of the financial impact of the proposed strategy. The second statement defines the value proposition to the customer that will be responsible for the financial outcome. Next comes a statement concerning the internal processes needed to deliver the value proposition. The final statement describes the competencies and technologies needed to drive the processes. In this manner, strategy is expressed as a series of cause-and-effect relationships.

For example, an association representing teachers wanted to increase revenue diversity. Research indicated that of all the materials teachers purchased in mission-related areas, only 17 percent came from the association. This measure of account share became the first element in a balanced scorecard strategy. A strategy map that included learning, internal process, customer value, and financial elements made the cause-and-effect relationships explicit.
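The strategy map described above can be sketched as an ordered chain read from learning up to financial impact. Everything below except the 17 percent account-share baseline is a hypothetical illustration of what such a map might contain:

```python
# A balanced-scorecard strategy map as a cause-and-effect chain.
# Read bottom-up: learning -> internal process -> customer -> financial.
# All measures are hypothetical except the 17% account-share baseline.

strategy_map = [
    ("Learning & growth", "Staff competency in curriculum-product development"),
    ("Internal process", "Cycle time to bring a new teaching product to market"),
    ("Customer value", "Member rating of product relevance to the classroom"),
    ("Financial", "Account share of mission-related purchases (baseline 17%)"),
]

# Print the chain in causal order, with each element driving the next.
for perspective, measure in strategy_map:
    print(f"{perspective}: {measure}")
```

Laying the strategy out this way is what forces a sponsor to think through every link before launch, and it leaves the measures in place when the thinking is done.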

The balanced scorecard has many applications and advantages, but this example captures the concept and highlights one critical advantage. Associations often suffer from the passions of well-intended leaders who feel pressure to make a difference during their short term of office. In the rush, they come up with ideas that get launched before they are thought through. This tool makes people think through all the implications. And when finished, the measures are in place.

Measurement is often thought of as a wasteful pursuit of marginal use. A measurement system should be focused on value and based on trainable competencies. Once a system is in place, its costs should be compared with those of not knowing whether you are getting anywhere, or of asking a board to engage in evaluations of its own making.
