Monetary policy in the United States is associated with interest rates and the availability of credit.
See main article: Monetary policy.
Instruments of monetary policy have included short-term interest rates and bank reserves through the monetary base.[1]
With the creation of the Bank of England in 1694, which acquired the responsibility to print notes and back them with gold, the idea of monetary policy as independent of executive action began to be established.[2] The goal of monetary policy was to maintain the value of the coinage, print notes which would trade at par to specie, and prevent coins from leaving circulation. The establishment of central banks by industrializing nations was then associated with the desire to maintain the nation's peg to the gold standard, and to trade in a narrow band with other gold-backed currencies. To accomplish this end, central banks as part of the gold standard began setting the interest rates they charged, both to their own borrowers and to other banks that required liquidity. Maintaining a gold standard required almost monthly adjustments of interest rates.
During the 1870–1920 period, the industrialized nations set up central banking systems, with one of the last being the Federal Reserve in 1913.[3] By this point the role of the central bank as the "lender of last resort" was understood. It was also increasingly understood that interest rates had an effect on the entire economy, in no small part because of the marginal revolution in economics, which demonstrated how people would change a decision based on a change in the economic trade-offs.
In the first half of the 19th century, many smaller commercial banks within New England were easily chartered because state laws (primarily open franchise laws) permitted it. The rise of commercial banking saw an increase in opportunities for wealthy individuals to become involved in entrepreneurial projects in which they would not involve themselves without a guaranteed return on their investment. These early banks acted as intermediaries for entrepreneurs who did not have enough wealth to fund their own investment projects and for those who did have wealth but did not want to bear the risk of investing in projects. This private banking sector thus witnessed an array of insider lending, owing primarily to low bank leverage and the better information insiders held about borrowers, but many of these banks actually spurred early investment and helped fund many later projects. Despite what some may consider discriminatory practices with insider lending, these banks were actually very sound, and failures remained uncommon, further encouraging the financial evolution of the United States.
In 1781, an act of the Congress of the Confederation established the Bank of North America in Philadelphia, where it superseded the state-chartered Bank of Pennsylvania founded in 1780 to help fund the American Revolutionary War. The Bank of North America was granted a monopoly on the issue of bills of credit as currency at the national level. Prior to the ratification of the Articles of Confederation & Perpetual Union, only the States had sovereign power to charter a bank authorized to issue their own bills of credit. Afterwards, Congress also had that power.
Robert Morris, the first Superintendent of Finance appointed under the Articles of Confederation, proposed the Bank of North America as a commercial bank that would act as the sole fiscal and monetary agent for the government. He has accordingly been called "the father of the system of credit, and paper circulation, in the United States."[4] He saw a national, for-profit, private monopoly following in the footsteps of the Bank of England as necessary, because previous attempts to finance the Revolutionary War, such as continental currency emitted by the Continental Congress, had led to depreciation to such an extent that Alexander Hamilton considered them to be "public embarrassments". After the war, a number of state banks were chartered, including in 1784: the Bank of New York and the Bank of Massachusetts.
In 1791, Congress chartered the First Bank of the United States to succeed the Bank of North America under Article One, Section 8. However, Congress failed to renew the charter for the Bank of the United States, which expired in 1811. Similarly, the Second Bank of the United States was chartered in 1816 and shuttered in 1836.
See main article: Banking in the Jacksonian Era.
The Second Bank of the United States opened in January 1817, six years after the First Bank of the United States lost its charter. The predominant reason the Second Bank of the United States was chartered was that during the War of 1812 the U.S. experienced severe inflation and had difficulty financing military operations. As a result, the credit and borrowing status of the United States was at its lowest level since its founding.
The charter of the Second Bank of the United States (B.U.S.) was for 20 years and therefore up for renewal in 1836. Its role as the depository of the federal government's revenues made it a political target of banks chartered by the individual states who opposed the B.U.S.'s relationship with the central government. Partisan politics came heavily into play in the debate over the renewal of the charter. "The classic statement by Arthur Schlesinger was that the partisan politics during the Jacksonian period was grounded in class conflict. Viewed through the lens of party elite discourse, Schlesinger saw inter-party conflict as a clash between wealthy Whigs and working class Democrats." (Grynaviski) President Andrew Jackson strongly opposed the renewal of its charter, and built his platform for the election of 1832 around doing away with the Second Bank of the United States. Jackson's political target was Nicholas Biddle, financier, politician, and president of the Bank of the United States.
Apart from a general hostility to banking and the belief that specie (gold and silver) was the only true money, Jackson's reasons for opposing the renewal of the charter revolved around his belief that bestowing power and responsibility upon a single bank was the cause of inflation and other perceived evils.
During September 1833, President Jackson issued an executive order that ended the deposit of government funds into the Bank of the United States. After September 1833, these deposits were placed in the state-chartered banks, commonly referred to as Jackson's "pet banks". While it is true that six of the seven initial depositories were controlled by Jacksonian Democrats, the later depositories, such as those in North Carolina, South Carolina, and Michigan, were run by managers who opposed Jacksonian politics. It is probably a misnomer to label all the state-chartered depositories "pet banks".
Prior to 1838 a bank charter could be obtained only by a specific legislative act, but in that year New York adopted the Free Banking Act, which permitted anyone to engage in banking, upon compliance with certain charter conditions. The Michigan Act (1837) allowed the automatic chartering of banks that would fulfill its requirements without special consent of the state legislature. These banks could issue bank notes against specie (gold and silver coins) and the states regulated the reserve requirements, interest rates for loans and deposits, the necessary capital ratio etc. Free banking spread rapidly to other states, and from 1840 to 1863 all banking business was done by state-chartered institutions.[5]
Numerous banks started during this period ultimately proved to be unstable.[6] In many Western states, the banking industry degenerated into "wildcat" banking because of the laxity and abuse of state laws. Bank notes were issued against little or no security, credit was overextended, and depressions brought waves of bank failures. In particular, the multiplicity of state bank notes caused great confusion and loss. The real value of a bank bill was often lower than its face value, and the issuing bank's financial strength generally determined the size of the discount.
See main article: National Bank Act.
To correct such conditions, Congress passed (1863) the National Bank Act, which provided for a system of banks to be chartered by the federal government. The National Banking Acts of 1863 and 1864 were two United States federal laws that established a system of national charters for banks, and created the United States National Banking System. They encouraged development of a national currency backed by bank holdings of U.S. Treasury securities and established the Office of the Comptroller of the Currency as part of the United States Department of the Treasury and authorized the Comptroller to examine and regulate nationally chartered banks.
Congress passed the National Bank Act in an attempt to retire the greenbacks it had issued to finance the North's effort in the American Civil War.[7] This opened up an option for chartering banks nationally. As an additional incentive for banks to submit to federal supervision, in 1865 Congress began taxing state bank notes (also called "bills of credit" or "scrip") at a standard rate of 10%, which encouraged many state banks to become national ones. This tax also gave rise to another response by state banks: the widespread adoption of the demand deposit account, also known as a checking account. By the 1880s, deposit accounts had become the primary source of revenue for many banks. The result of these events is what is known as the "dual banking system": new banks may choose either a state or a national charter (a bank can also convert its charter from one to the other).
See main article: Coinage Act of 1873.
See also: Bimetallism and Gold standard.
Toward the end of the nineteenth century, bimetallism became a center of political conflict. During the Civil War, to finance the war, the U.S. switched from bimetallism to a fiat currency, the greenback. In 1873, the government passed the Fourth Coinage Act and soon resumed specie payments without the free and unlimited coinage of silver. This put the U.S. on a mono-metallic gold standard, angering the proponents of monetary silver, known as the silverites. They referred to this act as "The Crime of '73", judging that it had restricted the money supply and suppressed the inflation they favored.[8]
The Panic of 1893 was a severe nationwide depression that brought the money issue to the fore. The silverites argued that using silver would inflate the money supply and mean more cash for everyone, which they equated with prosperity. The gold advocates countered that silver would permanently depress the economy, but that sound money produced by a gold standard would restore prosperity.
Bimetallism and "Free Silver" were demanded by William Jennings Bryan who took over leadership of the Democratic Party in 1896, as well as the Populist and Silver Republican Parties. The Republican Party nominated William McKinley on a platform supporting the gold standard which was favored by financial interests on the East Coast. A faction of Republicans from silver mining regions in the West known as the Silver Republicans endorsed Bryan.
Bryan gave his famous "Cross of Gold" speech at the National Democratic Convention on July 9, 1896. However, his presidential campaign was ultimately unsuccessful; this can be partially attributed to the discovery of the cyanide process by which gold could be extracted from low grade ore. This increased the world gold supply and caused the inflation that free coinage of silver was supposed to bring. The McKinley campaign was effective at persuading voters that poor economic progress and unemployment would be exacerbated by adoption of the Bryan platform.
See main article: Federal Reserve System.
The Panic of 1907 was headed off by a private consortium of financiers, who set themselves up as "lenders of last resort" to banks in trouble. This effort succeeded in stopping the panic, and led to calls for a federal agency to do the same thing. In response, the Federal Reserve System was created by the Federal Reserve Act of 1913, establishing a new central bank intended to serve as a formal "lender of last resort" to banks in times of liquidity crises: panics in which depositors try to withdraw their money faster than a bank can pay it out.
The legislation provided for a system that included a number of regional Federal Reserve Banks and a seven-member governing board. All national banks were required to join the system and other banks could join. Congress created Federal Reserve notes to provide the nation with an elastic supply of currency. The notes were to be issued to Federal Reserve Banks for subsequent transmittal to banking institutions in accordance with the needs of the public.
The Federal Reserve Act of 1913 established the present day Federal Reserve System and brought all banks in the United States under the authority of the Federal Reserve (a quasi-governmental entity), creating the twelve regional Federal Reserve Banks which are supervised by the Federal Reserve Board.
To deal with the deflation caused by the Great Depression of the 1930s, the nation went off the gold standard. In March and April 1933, in a series of laws and executive orders, the government suspended the gold standard for United States currency.[9] Anyone holding significant amounts of gold coinage was mandated to exchange it for US dollars at the existing fixed price, after which the US would no longer pay gold on demand for the dollar, and gold would no longer be considered valid legal tender for debts in private and public contracts. The dollar was allowed to float freely on foreign exchange markets with no guaranteed price in gold, only to be fixed again at a significantly lower level a year later with the passage of the Gold Reserve Act in January 1934. Markets immediately responded well to the suspension, in the hope that the decline in prices would finally end.[10]
See main article: Bretton Woods system.
The Bretton Woods system of monetary management established the rules for commercial and financial relations among the world's major industrial states in the mid 20th century. The Bretton Woods system was the first example of a fully negotiated monetary order intended to govern monetary relations among independent nation-states.
Setting up a system of rules, institutions, and procedures to regulate the international monetary system, the planners at Bretton Woods established the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development (IBRD), which today is part of the World Bank Group. The chief features of the Bretton Woods system were an obligation for each country to adopt a monetary policy that maintained the exchange rate by tying its currency to the U.S. dollar and the ability of the IMF to bridge temporary imbalances of payments.
See main article: Nixon Shock.
In 1971, President Richard Nixon took a series of economic measures that collectively are known as the Nixon Shock. These measures included unilaterally cancelling the direct convertibility of the United States dollar to gold. This essentially ended the existing Bretton Woods system of international financial exchange.
The Federal Reserve has used the federal funds rate as a primary tool for bringing inflation back down, or pushing it back up, toward its target of 2% annual inflation.[11] To tame inflation, the Fed raises the federal funds rate, causing shorter-term interest rates to rise until they eventually climb above the yields on longer-maturity bonds. The resulting inverted yield curve usually precedes a recession by several months.[12] [13]
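The inversion test described above can be sketched as a simple comparison of short- and long-maturity Treasury yields. The figures below are hypothetical, for illustration only:

```python
# Hypothetical Treasury yields, in percent, keyed by maturity in years.
yields = {2: 4.9, 10: 4.1}

# The widely watched "10-year minus 2-year" spread: a negative value
# means shorter-term rates have climbed above longer-term ones,
# i.e. the yield curve is inverted.
spread = yields[10] - yields[2]
inverted = spread < 0

print(f"10y-2y spread: {spread:.2f} percentage points; inverted: {inverted}")
```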
In August 2020, after undershooting its 2% inflation target for years, the Fed announced it would be allowing inflation to temporarily rise higher, in order to target an average of 2% over the longer term.[14] [15] It is still unclear if this change will make much practical difference in monetary policy anytime soon.[16]
The central bank influences interest rates by expanding or contracting the monetary base, which consists of currency in circulation and banks' reserves on deposit at the central bank. The primary ways the central bank can affect the monetary base are open market operations (purchases and sales of government debt on the secondary market) and changes to reserve requirements. If the central bank wishes to lower interest rates, it purchases government debt, thereby increasing the amount of cash in circulation or crediting banks' reserve accounts. Alternatively, it can lower the interest rate on discounts or overdrafts (loans to banks secured by suitable collateral, specified by the central bank). If the interest rate on such transactions is sufficiently low, commercial banks can borrow from the central bank to meet reserve requirements and use the additional liquidity to expand their balance sheets, increasing the credit available to the economy. Lowering reserve requirements has a similar effect, freeing up funds for banks to increase loans or buy other profitable assets.
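The mechanics above can be illustrated with a stylized balance-sheet calculation. All figures, including the reserve requirement, are hypothetical:

```python
# Stylized effect of an open market purchase on the monetary base.
monetary_base = 1_000.0     # currency in circulation + bank reserves
reserve_requirement = 0.10  # fraction of deposits banks must hold as reserves

# The central bank buys government debt, crediting banks' reserve
# accounts with newly created base money.
open_market_purchase = 50.0
monetary_base += open_market_purchase

# Textbook money-multiplier upper bound: each dollar of new reserves can
# support up to 1/reserve_requirement dollars of additional deposits.
max_new_deposits = open_market_purchase / reserve_requirement

print(f"monetary base after purchase: {monetary_base}")
print(f"deposit expansion (at most): {max_new_deposits:.0f}")
```

The multiplier is an upper bound: in practice banks may hold excess reserves and the public may hold more currency, so actual credit expansion is smaller.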
A central bank can only operate a truly independent monetary policy when the exchange rate is floating. If the exchange rate is pegged or managed in any way, the central bank will have to purchase or sell foreign exchange. These transactions in foreign exchange will have an effect on the monetary base analogous to open market purchases and sales of government debt; if the central bank buys foreign exchange, the monetary base expands, and vice versa. But even in the case of a pure floating exchange rate, central banks and monetary authorities can at best "lean against the wind" in a world where capital is mobile.
Accordingly, the management of the exchange rate will influence domestic monetary conditions. To maintain its monetary policy target, the central bank will have to sterilize or offset its foreign exchange operations. For example, if a central bank buys foreign exchange (to counteract appreciation of the exchange rate), base money will increase. Therefore, to sterilize that increase, the central bank must also sell government debt to contract the monetary base by an equal amount. It follows that turbulent activity in foreign exchange markets can cause a central bank to lose control of domestic monetary policy when it is also managing the exchange rate.
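The sterilization step can be sketched in the same stylized terms (figures hypothetical):

```python
# Stylized sterilized foreign-exchange intervention.
monetary_base = 1_000.0

# Step 1: the central bank buys foreign exchange to counteract
# appreciation, paying with newly created base money.
fx_purchase = 80.0
monetary_base += fx_purchase

# Step 2: to sterilize, it sells an equal amount of government debt,
# draining the base money just created.
bond_sale = fx_purchase
monetary_base -= bond_sale

# Net effect on the monetary base is zero, so the domestic policy
# stance is preserved.
print(monetary_base)  # 1000.0
```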
In the 1980s, many economists began to believe that making a nation's central bank independent of the rest of the executive government is the best way to ensure an optimal monetary policy, and central banks that did not have independence began to gain it. The aim is to avoid overt manipulation of the tools of monetary policy for political goals, such as re-electing the current government. Independence typically means that the members of the committee that conducts monetary policy have long, fixed terms; even so, this is a somewhat limited independence.
In the 1990s, central banks began adopting formal, public inflation targets with the goal of making the outcomes, if not the process, of monetary policy more transparent. In other words, a central bank may have an inflation target of 2% for a given year, and if inflation turns out to be 5%, then the central bank will typically have to submit an explanation.
The Bank of England exemplifies both these trends. It became independent of government through the Bank of England Act 1998 and adopted an inflation target of 2.5% as measured by RPI (now 2% as measured by CPI).
The debate continues over whether monetary policy can smooth business cycles. A central conjecture of Keynesian economics is that the central bank can stimulate aggregate demand in the short run, because a significant number of prices in the economy are fixed in the short run and firms will produce as many goods and services as are demanded (in the long run, however, money is neutral, as in the neoclassical model). The Austrian School of economics, associated with Friedrich von Hayek and Ludwig von Mises, argues against this position,[17] but most economists fall into either the Keynesian or neoclassical camp on this issue.