By David Gill
Sovereign credit ratings play an important role in the global economy. The potential advantages of a strong rating are widely known: the ability to borrow more money, on better terms. The downsides of a poor one—less credit, higher costs—are equally familiar. Yet the path to a top rating is less clear. Economists and political scientists have spent decades trying to understand how governments can secure better sovereign credit ratings, principally by focusing on a handful of economic indicators, such as a country’s GDP per capita, real GDP growth, and default history. On their own, however, such indicators are incomplete guides. The “big three” credit rating agencies—Fitch Ratings, Standard & Poor’s, and Moody’s Investors Service—rely on more than quantitative factors, which is why their conclusions about the same numbers sometimes differ.
Indeed, that fact, combined with some recent damaging downgrades, has led some experts to conclude that the ratings process is too subjective or ill-conceived, and that political leaders should therefore dismiss the credit rating agencies. But adopting such an approach risks missing a valuable opportunity. Subjectivity, after all, is a two-way street: it can work in a country’s favor as well as to its disadvantage. Governments that understand how ratings are made can take steps to hold or improve their position; those that don’t may end up more vulnerable. And with new rating agencies now emerging alongside the old guard, knowing the rules of the game matters more than ever.
David Gill is an Assistant Professor at the School of Politics and International Relations. His research focuses on the relationship between strategy, economics, and diplomacy. This article first appeared in Foreign Affairs.
For more on this subject, see David James Gill, “Rating the United Kingdom: The British Government’s Sovereign Credit Ratings, 1976–1978,” Economic History Review (in press, winter 2015).
Image credit: Wikimedia Commons