Labor and State Performance Reports Earn Top Rating

MEDIA CLIPPING

Government Executive

The Labor, State and Transportation departments earned top spots in an independent organization's rankings of the quality of agencies' 2005 annual performance reports, and Treasury jumped up significantly.

George Mason University's Mercatus Center released its seventh Annual Performance Report Scorecard on Tuesday, rating and ranking agencies on the transparency, public benefits and leadership evidenced in their performance and accountability reports. The ratings measure the quality of the reports, rather than agencies' success at meeting the goals outlined in them.

David Walker, comptroller general and head of the Government Accountability Office, presented the score card at Tuesday's event. "We owe it to taxpayers to provide transparency for what they get for their tax dollars," he said.

The scores of most agencies changed little from last year. The five top agencies last year -- Labor, State, Transportation, Veterans Affairs and Commerce -- remained at the top of this year's list.

Treasury showed the greatest gain in rank, moving from 16th last year to a tie with Commerce for fifth place this year. Reviewers noted that Treasury's report included detailed baseline and trend data for transparency, successfully listed many program goals in terms of desired outcomes and consistently provided explanations for missed goals.

The Homeland Security Department's report landed next to last in the rankings. Last year Mercatus did not rank DHS at all because its report was not available in time. The Health and Human Services Department fell last on this year's list, with poor scores in all categories. Reviewers noted that 32 of the 50 largest HHS programs were not addressed in the department's 2005 report, and that the document was heavily laden with jargon and acronyms that hurt readability.

The 24 agencies covered by the Chief Financial Officers Act have produced performance and accountability reports each year since 1999, when a reporting requirement of the 1993 Government Performance and Results Act first took effect, and Mercatus has issued its score card on those reports every year since.

The score card awards agencies up to 60 points, spread across 12 criteria designed to evaluate the performance reports for how accessible, readable and usable they are; their delineation of program outcomes and costs; and the extent to which leadership is demonstrated by justifying agency performance and making links between programs, goals and policies.

Agencies earning ratings of "satisfactory" or higher, with at least 36 points, were responsible for only 15 percent of the $2.45 trillion in fiscal 2005 noninterest spending, the report stated.

Report authors Jerry Ellig, Maurice McTigue and Henry Wray emphasized that the score card evaluates the quality of agency reports on programs rather than the results achieved by those programs. "You could get a good mark from us even if you miss a lot of your goals, because part of what we look at is whether you disclose that you missed those goals," McTigue stressed.

In this regard, the Mercatus rankings differ from the Office of Management and Budget's Program Assessment Rating Tool, which offers evaluations of program performance, and from OMB's President's Management Agenda score card, which rates agencies' accomplishments on the administration's key management initiatives.

Like the PMA score card, though, the Mercatus rating criteria become more challenging each year to ensure that agencies must continually improve to score well.

McTigue said his group consults with agencies on their results, and many areas highlighted as needing work in the report do reflect shortcomings in internal management practices.

"Given the paucity of links between outcomes and costs in most reports, it's tempting to conclude that vast swaths of federal spending are essentially 'faith-based' initiatives," said Jerry Ellig, a Mercatus Center senior research fellow and a co-author of the study. "Intentions and values, rather than systematic proof of actual outcomes, drive much of the support for these programs."