Central Banks and Movements in Stock Market

The behaviour of central banks towards movements in the stock market has been an interesting issue amongst researchers in recent decades. While there is broad agreement that inflation and the output gap are the major targets of central banks’ monetary policies, the consideration of stock prices in these policies has been viewed with more scepticism. The motivation for examining this issue is that if the actions central banks take to eliminate macroeconomic volatility can be identified, financial panics will subside and the overall economy will improve owing to greater transparency in monetary policy. The importance of responding to stock market movements will also be explored, providing better guidance for monetary policy decisions. The main issue addressed here is whether central banks should target changes in stock prices explicitly in their monetary policies, as suggested by Cecchetti et al. (2000), or implicitly, only when they affect the forecasts of inflation and the output gap, as suggested by Bernanke and Gertler (1999, 2001). The outcome is the proposal of a monetary policy rule that is very close to the actual policy conducted by central banks and, therefore, a better identification of policy reactions that will ultimately contribute to the economy’s welfare. A review of the current literature will help to accentuate the main variables studied in this paper and to construct an initial theoretical framework, which is modified in a later section to reflect the results of the empirical analyses. The most appropriate model for the empirical study of this paper appears to be the Taylor rule. Data from the United States will be used; hence the focus will be on the behaviour of the Federal Reserve. Tests of robustness will be conducted for different specifications of the Taylor rule to confirm that they are well specified.
In addition, subsamples will be created to highlight any breaks in the sample studied. Finally, the endogenous relationship that is apparent between stock prices and monetary policy responses will be examined to give a clearer picture of the mechanism of monetary policy behaviour.

The paper proceeds as follows. Section 2 reviews the literature available on the topic covering the main and most relevant issues addressed in relation to this study. Section 3 is the theoretical framework providing a better visualisation of the relationship between the variables studied. Section 4 describes the data and gives its descriptive statistics. Section 5 presents the different specifications of the empirical model used in the paper. Section 6 presents the results from all the empirical tests and analyses conducted. Section 7 concludes the main points and findings of the research paper.

LITERATURE REVIEW

Justifications and objections to respond to stock market movements

History is full of examples where large swings in the stock market coincided with lasting booms and busts. The ultimate objectives of monetary policy are macroeconomic benefits relating to inflation, economic growth and employment. Swings in asset prices can affect central banks’ goals of low inflation and real growth. Hence, some economists have argued that responding to asset prices directly can improve macroeconomic performance (Lansing, 2003). Cecchetti et al. (2000) reiterate that policymakers can exploit the information about the economy carried by asset prices, and that this will help them improve macroeconomic stability. Additionally, Bernanke and Kuttner (2005) point out that stock markets are seen as a source of macroeconomic volatility that policymakers may wish to respond to. These arguments suggest that by identifying a set of actions that appropriately responds to stock market movements alongside the central bank’s other goals, economies will be stabilised almost immediately and financial panics will disappear. Therefore, identifying a nominal anchor, a basis for central banks’ monetary policy, will contribute to the welfare of economies. From the point of view of market participants, this is also important for making effective investment and risk management decisions. On the other hand, the endogeneity problem [1] that exists between monetary policy and stock market movements makes it difficult to estimate the monetary policy reaction (Rigobon and Sack, 2003). Also, a number of other variables, including news about the economic outlook, are likely to affect stock prices (Rigobon and Sack, 2004). This is in effect the revision in expectations about future monetary policy as a result of news about changing economic conditions.

Endogenous relationship

It is important to highlight that the relationship between monetary policy and stock market movements is endogenous. That is, in any model of monetary policy estimation, the values of the variables are determined by the equilibrium of a system. In other words, the direction of causation might run from monetary policy to stock market movements or from stock market movements to monetary policy. This issue has been addressed in the works of Rigobon and Sack (2003, 2004). In their 2003 paper, they argue that it is difficult to identify the monetary policy response to the stock market due to the simultaneous response of the stock market to policy decisions. They find that when monetary policy reacts to low stock market prices by reducing interest rates, the stock market simultaneously reacts to low interest rates with rising stock prices, and vice versa. In their 2004 paper, they look at the other side of the relationship: how asset prices react to changes in monetary policy. Their findings confirm that stock prices have a significant negative reaction to monetary policy; an increase in the short-term interest rate results in a decrease in stock market prices, with the effect diminishing for longer maturities. So, when the effect runs from stock prices to interest rates, there appears to be a positive reaction, whereas when the effect runs from interest rates to stock prices, there appears to be a negative reaction.

In light of the focus of most of the available literature, I will concentrate on discussing the response of monetary policy to stock market movements, and not the other way round. The main question that needs to be addressed here is: how should central banks respond to stock market movements as part of their monetary policy? Or, what is the appropriate monetary policy to impose so that volatility in stock prices has the least impact on the macroeconomy? There is an extensive literature covering the topic, proposing many views and models. The works of leading researchers on the topic are summarised below.

Taylor rule

Since his breakthrough paper published in 1993, Taylor has attracted vast attention with his simple, yet surprisingly accurate, characterisation of the Federal Reserve’s monetary policy. The rule expresses the federal funds rate as a linear function of current inflation’s deviation from an inflation target and the output gap. This was not only a good description of monetary policy in the U.S. but also a reasonable policy recommendation (Osterholm, 2005). His findings are consistent with the agreement that monetary policy rules should increase short-term interest rates if the price level and real income are above target and decrease them if the price level and real income are below target. This was also his guiding principle behind the rule, disclosed in the Federal Reserve’s Annual Report for 1945 describing the implicit predominant purpose of Federal Reserve policy. Taylor (1993), however, concludes that following his rule mechanically is not practical and that policymakers should exercise discretion in its application. Greenspan (1997) emphasises this by saying: “these types of formulations are at best ‘guideposts’ to help central banks, not inflexible rules that eliminate discretion”. Svensson (2003) opposes the use of the Taylor rule as guidance for the conduct of monetary policy and argues that such a simple rule is not representative of what the world’s most advanced central banks are using to optimise macroeconomic benefits. He argues that variables besides inflation and the output gap might also be important for achieving the central bank’s objectives, including the real exchange rate, the terms of trade, foreign output and the foreign interest rate. Meyer (2002) states: “my experience during the last 5-1/2 years on the Federal Open Market Committee (FOMC) has been that considerations that are not explicit in the Taylor rule have played an important role in policy deliberations”.
Incorporating more relevant variables in the central bank’s reaction function promises better results than adopting the basic Taylor rule. Osterholm (2005) also doubts Taylor’s explanation of how monetary policy is conducted, having tested the parameters in the rule’s regressions and found them inconsistently estimated. Although he concludes that the Taylor rule provides an accurate description of U.S. monetary policy during the 1960s and 1970s, it shows much less consistency in more recent decades. Orphanides (2003) finds similar results and concludes that Taylor’s simple rule has not been a reliable description of monetary policy over the past twenty years.

Inflation targeting

The inflation-targeting approach was set out in detail by Bernanke and Mishkin (1997). In simple terms, the target is the future inflation level that the central bank will strive to hold. More practically, it is defined by Bernanke and Gertler (1999) as an approach which “dictates that central banks should adjust monetary policy actively and pre-emptively to offset incipient inflationary or deflationary pressures”. How, then, is this related to the monetary policy response to changes in stock prices? The idea is that inflationary asset prices will increase interest rates and deflationary asset prices will decrease them, in both cases via their effect on household wealth and, in turn, consumption spending. Bernanke and Gertler therefore extend the definition of inflation targeting to include: “policy should not respond to changes in asset prices, except insofar as they signal changes in expected inflation”. In their 2001 paper, they summarise important findings from their work on the topic. They conclude that responding to changes in asset prices through aggressive targeting of inflation stabilises both inflation and output, and that responding to asset prices directly adds no significant benefit to the policy decision. In other words, the more aggressively central banks raise the nominal interest rate (by more than one percentage point in response to a one-percentage-point increase in expected inflation), the better the reaction and, therefore, the greater the reduction in the economic effects of volatility in asset prices. The final conclusion drawn is that a monetary policy which targets inflation aggressively without considering stock prices, unless they help in forecasting inflationary or deflationary pressures, works best. Fuhrer and Tootell (2008) reach a similar conclusion, finding little evidence that stock prices affect monetary policy directly. Bernanke and Mishkin (1997) support the inflation-targeting approach by presenting a number of its advantages.
First, inflation targeting is not a rule but rather a framework that allows central banks to consider other issues in the economy, such as unemployment and exchange rates, besides inflation. Inflation targeting is therefore not a rigid tool; it leaves room for discretionary policies in the short run and for other concerns of the central bank. Second, the announcement of inflation targets by central banks reduces uncertainty among the general public regarding central banks’ intended actions after stock price movements. This is an important aspect because uncertainty about central banks’ intentions causes volatility in financial markets. Inflation targeting therefore allows for more transparency in monetary policy. Third, the inflation-targeting approach is relatively easy to understand, unlike other policy strategies such as money-growth targeting, because the general public finds it far easier to understand the growth rate of consumer prices than the growth rate of monetary aggregates.

Consideration of asset prices in monetary policies

Contrary to what Bernanke and Gertler (1999, 2001) have argued, Cecchetti et al. (2000) point out the importance of including asset prices in the monetary policy rule. This stems from their finding that asset prices contain important information that policymakers can use to better stabilise the economy. They conclude that a central bank will have superior performance when it targets not only inflation and the output gap (or their forecasts), but also asset prices. This will reduce the volatility of output and the likelihood of asset price bubbles, thereby reducing the risk of booms and busts. In fact, their findings suggest that in the majority of cases, interest rate adjustment to asset prices in the presence of a bubble is necessary. The reason for the different conclusions between the two sides is that Cecchetti et al. (2000) seem to cover a wider range of possible policy responses. Rigobon and Sack (2003) agree with Cecchetti et al. and clarify that stock market movements, through their influence on the macroeconomy, can be useful guidance for monetary policy responses; however, it is difficult to identify these responses due to the simultaneous reaction of stock prices to policy decisions. Christiano et al. (1999) also observe that unlike prices and output, which react to changes in the federal funds rate over more than a quarter, stock prices respond within minutes. Moreover, being highly sensitive to economic conditions and among the most closely monitored asset prices, stock prices are important not only for understanding the conduct of monetary policy but also for assessing the potential economic impact of policy actions and inactions (Ioannidis and Kontonikas, 2008). Lansing (2003) presents the results of Cecchetti et al. (2000) in a simple way by plotting two graphs that show how using stock prices in the Taylor rule increases the fit between actual and proposed monetary policy.
The problem with Cecchetti et al.’s proposal is that misalignments should also be taken into account when reacting to asset prices, which is deemed impractical: asset prices are too volatile to be helpful in determining monetary policy, their misalignments are very difficult to identify, and systematically reacting to them may be destabilising (Cecchetti et al., 2000). Nevertheless, the researchers defend their position by arguing that measuring misalignments is not as difficult as measuring the output gap and that stock prices should therefore not be ignored on this basis. According to Goodhart and Hofmann (2000), disregarding asset prices not only means ignoring the information they contain about future demand conditions, but also introduces empirical biases that may mean monetary policy is based on a mis-specified model of the economy. Bernanke and Gertler (2001) criticise the work of Cecchetti et al. (2000) by saying: “effectively, their procedure yields a truly optimal policy only if the central bank (i) knows with certainty that the stock market boom is driven by non-fundamentals and (ii) knows exactly when the bubble will burst”. Their criticism reflects the fact that Cecchetti et al. (2000) base their tests on a single scenario, namely that asset prices are driven by bubble shocks lasting five years, and nothing else.

Proactive and reactive

According to Kontonikas and Ioannidis (2005), monetary policy can respond to asset price movements in one of two ways: proactively or reactively. A reactive approach is consistent with an inflation-targeting policy that focuses on price stability; under it, the central bank waits to see whether an asset price reversal occurs and, if it does, reacts according to the extent of its influence on inflation and output stability. A proactive approach, on the other hand, is consistent with the views of Cecchetti et al. (2000); under it, the central bank targets inflation, output and asset prices in its policy rule. This is in effect a Taylor rule with an extra variable included. In simple economies, the Taylor rule would be optimal, with the interest rate being a function of current and lagged inflation and the output gap. In open economies, however, the reaction to movements in asset prices is significant (Goodhart and Hofmann, 2000).

THEORETICAL FRAMEWORK

From the literature review it can be concluded that researchers have identified three main variables to consider in monetary policy rules when identifying the behaviour of central banks towards macroeconomic volatility: inflation, output and asset prices. The figure [2] below depicts the relationship between these three variables and the interest rate.

Figure 1: Theoretical framework

[Diagram: the independent variables (inflation, output and asset prices) each feed into the dependent variable (interest rates).]

The figure shows that inflation, output and asset prices have a direct impact on interest rates. According to Stock and Watson (2003), because asset prices are forward-looking, they constitute a class of potentially useful predictors of inflation and output growth. Hence, there appears to be a relationship among the independent variables as well. This relationship is, however, not yet clear and will be tested and illustrated in the coming sections. For now, the mechanism, in simple terms, appears to work as follows: when there are inflationary pressures, wealth, demand and output increase, raising stock prices; as a result, central banks increase interest rates to offset the macroeconomic variability. The opposite applies when deflationary pressures occur. With regard to asset prices, the focus in this research will be on stock prices. The main hypothesis to be tested in this paper is whether monetary policy rules target stock prices explicitly, or implicitly only through their effects on forecasts of inflation and output.

DATA

United States quarterly data for the federal funds rate, the consumer price index (CPI), gross domestic product (GDP) and the Standard & Poor’s (S&P) 500 stock index covering the period from 1990 to 2009 are used. The source of the data is the International Monetary Fund. The federal funds rate is the interest rate banks charge one another for overnight loans; it is a closely watched barometer of the tightness of credit market conditions in the banking system and, therefore, of the stance of monetary policy (Mishkin, 2010). The CPI estimates the average price of a market basket of goods and services purchased by households; the percentage change in the CPI is a measure of inflation. GDP is the market value of all products and services made within a country in a year and is therefore a measure of its output. The data and empirical study are analysed using the EViews software. The table below shows the descriptive statistics of the data.

Table 1: Data descriptive statistics

                 Federal Funds Rate        CPI          GDP       S&P 500
Mean                      4.015         87.955     10728.869     924.034
Median                    4.55          86.6       11028.65      992.895
Maximum                   7            112.3       13415.3      1526.75
Minimum                   0.5           65.6        7950.2       306.05
Std. Dev.                 1.777         13.182      1830.690     378.220
Skewness                 -0.385          0.188        -0.077      -0.189
Kurtosis                  2.175          1.932         1.596       1.656
Jarque-Bera               4.248          4.273         6.652       6.495
Probability               0.120          0.118         0.036       0.039
Observations             80             80            80          80

The federal funds rate and CPI appear closer to normally distributed than GDP and the S&P 500 according to the probability values of the Jarque-Bera statistic. The data will be divided into two subsamples to highlight any change in monetary policy reflecting the change in the serving chairman of the Federal Reserve during the sample period.

EMPIRICAL FRAMEWORK

The statistical model I am going to use is the Taylor rule. There are several reasons for choosing this method of estimating central banks’ behaviour. First, the concept of the Taylor rule was deduced from the implied practice of the Federal Reserve, namely tightening policy during booms and easing policy during busts. Second, the fundamental Taylor rule targets both inflation and output, which is in effect an aggressive inflation-targeting approach. Third, stock prices can easily be added to the Taylor rule as another variable to test whether its effectiveness as an indicator of central banks’ behaviour differs from that of the fundamental rule. The last two justifications obviate the need for another rule to extend the research. Taylor’s (1993) original rule is shown in equation (1):

i_t = r* + π_t + α(π_t − π*) + β(y_t − y*_t)     (1)

where i_t is the federal funds rate (the short-term nominal interest rate), r* is the equilibrium real interest rate, π_t is the observed inflation rate (the yearly percentage change in the CPI), π* is the targeted inflation rate (assumed to be zero [3]), y_t is real GDP (output) and y*_t is potential output. The latter is in effect the smoothed version of real GDP calculated using the Hodrick-Prescott filter, which eliminates short-term business-cycle fluctuations and thereby highlights the long-term trend in the series. The difference between π_t and π* represents the percentage deviation of the inflation rate from its target. The difference between y_t and y*_t is the output gap, denoted x_t below (expressed as a yearly percentage change). α and β are arbitrary parameters, and a good monetary policy implies that they are equal to 0.5 each. This will be tested in the next section.
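As an illustration, equation (1) and the Hodrick-Prescott trend can be sketched in Python with numpy. This is an illustrative translation only; the paper’s actual estimates were produced in EViews, and the function names are my own:

```python
import numpy as np

def hp_trend(y, lamb=1600.0):
    """Hodrick-Prescott trend: solves (I + lamb * D'D) tau = y, where D is the
    second-difference operator (lamb = 1600 is conventional for quarterly data)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)

def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=0.0,
                alpha=0.5, beta=0.5):
    """Equation (1): i = r* + pi + alpha*(pi - pi*) + beta*gap.
    r_star = 2.0 is Taylor's original illustrative value, not the paper's estimate."""
    return r_star + inflation + alpha * (inflation - pi_star) + beta * output_gap
```

For example, with 2% inflation and a zero output gap (and π* = 0), the rule proposes a rate of 2 + 2 + 0.5·2 + 0 = 5%. Note also that the HP trend of an exactly linear series is the series itself, since its second differences vanish.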

The Federal Reserve Board explained in its first Annual Report for 1914: “[A reserve bank’s] duty is not to await emergencies but by anticipation, to do what it can to prevent them”. Therefore, as early as the founding of the system, Federal Reserve officials have always described the formulation of monetary policy as a forward-looking process, and policy rules that fail to incorporate such information into historical analyses of policy decisions could easily prove inadequate (Orphanides, 2003). Having that in mind, a good modification of the Taylor rule is to replace current inflation and output with expected values as follows:

i_t = r* + E_t[π_{t+1}] + α(E_t[π_{t+1}] − π*) + β E_t[x_{t+1}] + ε_t     (2)

where E_t[·] is the expected value conditional on information available at time t and ε_t is an error term. The subscript t+1 added to the variables denotes the forecast period. Fuhrer and Tootell (2008) suggest a good method of creating expected values for inflation and the output gap based on estimating the following formulas, respectively (the coefficient labels a_i and b_i are illustrative):

π_t = a_0 + a_1 π_{t−1} + a_2 x_{t−1} + a_3 Δs_{t−1} + u_t     (3)

x_t = b_0 + b_1 π_{t−1} + b_2 x_{t−1} + b_3 Δs_{t−1} + v_t     (4)

In words, we forecast inflation and the output gap one quarter ahead by including measures of inflation, the output gap and stock prices lagged one quarter. After estimating the equations, the residual series from equation (3) is subtracted from the inflation series to give the inflation forecast. Likewise, the residual series from equation (4) is subtracted from the output gap series to give the output gap forecast.
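This residual-based construction (actual series minus OLS residuals, which is simply the fitted series) can be sketched with numpy. The function name and argument layout are hypothetical; the paper ran these regressions in EViews:

```python
import numpy as np

def one_quarter_forecast(y, regressors):
    """Regress y_t on a constant and the one-quarter lags of the given
    regressors, as in equations (3)-(4), then recover the forecast series
    as the actual series minus the OLS residuals (i.e. the fitted values)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1)] +
                        [np.asarray(r, dtype=float)[:-1] for r in regressors])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    fitted = X @ beta
    residuals = y[1:] - fitted
    return y[1:] - residuals  # identical to `fitted`; mirrors the text's wording
```

Feeding the inflation, output gap and stock-price-change series as `regressors` yields the forecast series entering equation (2).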

Incorporating stock prices into the Taylor rule requires the following formula, also called the ‘augmented Taylor rule’:

i_t = r* + π_t + α(π_t − π*) + β x_t + γ Δs_{t−j}     (5)

where Δs_{t−j} denotes the yearly percentage change in stock prices and j the length of the lag.
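Estimating equation (5) amounts to an OLS regression of the interest rate on a constant, inflation, the output gap and lagged stock-price growth. A minimal numpy sketch, with hypothetical series names (the paper’s estimation was done in EViews):

```python
import numpy as np

def estimate_augmented_rule(rate, inflation, gap, dstock, lag=1):
    """OLS estimates of equation (5): regress i_t on a constant, pi_t, x_t and
    the stock-price change lagged `lag` quarters (lag >= 1 assumed).
    Returns the coefficient vector [const, pi, gap, dstock] and R-squared."""
    i = np.asarray(rate, dtype=float)[lag:]
    X = np.column_stack([
        np.ones(len(i)),
        np.asarray(inflation, dtype=float)[lag:],
        np.asarray(gap, dtype=float)[lag:],
        np.asarray(dstock, dtype=float)[:-lag],
    ])
    coef, *_ = np.linalg.lstsq(X, i, rcond=None)
    fitted = X @ coef
    ss_res = np.sum((i - fitted) ** 2)
    ss_tot = np.sum((i - i.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot
```

Comparing the R-squared of this regression with that of the basic rule is one simple way to gauge what stock prices add to the specification.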

RESULTS

Fundamental Taylor rule

The first step is to plot inflation and the output gap against the federal funds rate to see if they are correlated and move together. The following graph depicts the relationship.

Graph 1: Federal funds rate, inflation and output gap movements

It can be seen that the three variables roughly move together: inflation and the output gap are particularly correlated throughout the whole sample period, while the federal funds rate shows better correlation with the other two variables after 2001. The second step is to estimate equation (1), the basic Taylor rule, using the Ordinary Least Squares (OLS) estimator in order to compare the actual and proposed policy responses. This estimator chooses the regression coefficients so that the estimated regression line is as close as possible to the observed data (Stock and Watson, 2007). Results are shown in the following table and graph.

Table 2: Estimation output of equation (1)

Graph 2: OLS estimation of equation (1)

The graph shows that the Taylor rule (dotted line) is roughly in line with the actual rule (solid line) of the Federal Reserve, with only a few periods where the fitted interest rate over- or under-estimates the actual interest rate. Additionally, the coefficients on inflation and the output gap in Table 2 are significant, which means that they have a considerable effect on interest rates. Nevertheless, the goodness of fit of the regression, measured by R-squared (also reported in Table 2), explains only 49% of the variability in interest rates. Moreover, the errors do not seem independent and identically distributed (i.i.d.), i.e. they do not seem to share the same probability distribution while being mutually independent. Therefore, to ensure that the standard Taylor rule is well specified, the following tests are performed on equation (1). To begin with, the arbitrary parameters in the Taylor rule need to be tested. The Wald coefficient test verifies whether the joint null hypothesis of α = 0.5 and β = 0.5 holds. The results of the test are shown below.

Table 3: Wald test statistics

From the above we can see that both the F-statistic and the Chi-square statistic have p-values of 0, indicating that we can safely reject the null hypothesis that both restrictions hold. However, Wald tests are only valid when the error terms are normally distributed; hence, there is a need to test for this. The Jarque-Bera statistic is used for that purpose: it compares the skewness and kurtosis of the sample series with those of a normal distribution.
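Both statistics are straightforward to compute from an OLS regression. A minimal numpy sketch of the Wald F-statistic for the joint restriction α = β = 0.5 and of the Jarque-Bera statistic (illustrative only; the reported figures come from EViews):

```python
import numpy as np

def wald_f(X, y, R, r):
    """Wald F-statistic for the linear restrictions R @ beta = r in y = X beta + u."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                       # OLS coefficients
    s2 = np.sum((y - X @ b) ** 2) / (n - k)     # residual variance
    d = R @ b - r
    q = len(r)                                  # number of restrictions
    return float(d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / (q * s2))

def jarque_bera(x):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4), with sample skewness S and kurtosis K."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x - x.mean()
    m2 = np.mean(z ** 2)
    S = np.mean(z ** 3) / m2 ** 1.5
    K = np.mean(z ** 4) / m2 ** 2
    return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)
```

For the Taylor rule, `R` selects the inflation and output gap coefficients and `r = [0.5, 0.5]`; the statistic is compared against an F(q, n − k) critical value.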

Figure 2: Normality test statistics

Since the p-value associated with the Jarque-Bera statistic is smaller than 0.05, the null hypothesis of a normal distribution is rejected at the 5% level. This confirms that the restrictions α = 0.5 and β = 0.5 in the Taylor rule do not hold. The next step is to test whether the residuals of the regression are spherical by testing for heteroskedasticity and serial correlation. If there is heteroskedasticity, OLS estimates are still consistent but not efficient, i.e. OLS is not the best linear unbiased estimator (BLUE), and the standard errors are no longer valid. According to Stock and Watson (2007), the error term is homoskedastic if the variance of the conditional distribution of u_t given the regressors is constant for t = 1, …, n and, in particular, does not depend on the regressors; otherwise the error term is heteroskedastic. The White test tests the null hypothesis of homoskedasticity against the alternative of heteroskedasticity. The results of the test are reported in the following table.

Table 4: White test statistics for equation (1)

According to the F-statistic and R-squared measures, there is strong evidence of no heteroskedasticity, i.e. the null hypothesis of homoskedasticity cannot be rejected. To test for serial correlation, the Durbin-Watson (DW) statistic reported in the estimation output of equation (1) is used. The error term u_t is said to be autocorrelated or serially correlated if u_t is correlated with u_s for different values of s and t (Stock and Watson, 2007). The DW statistic tests for first-order serial correlation versus no correlation. If the residuals are autocorrelated, OLS is no longer BLUE and the computed standard errors are not correct. It can be seen from Table 2 that there is strong first-order serial correlation, because the value of the DW statistic is below 1.5 with more than 50 observations in the sample. This statistic, however, is a rule of thumb and has a few limitations: (i) it is not valid in the presence of lagged dependent variables, (ii) it only tests for first-order serial correlation and (iii) its results are not always conclusive. A more general test is the Breusch-Godfrey Lagrange Multiplier (LM) test, which can be used to test for higher-order serial correlation and in the presence of lagged dependent variables. The null hypothesis of the LM test is no serial correlation against the alternative of order-p serial correlation. The results of the test are shown below.

Table 5: LM test statistics for equation (1)

It is clear from both F-statistic and R-squared measures that we can confidently reject the null hypothesis of no serial correlation, i.e. there is first-order serial correlation. Since the residuals from the regression are homoskedastic but serially correlated, OLS is still not BLUE. Re-estimating the regression model by OLS but computing the covariances differently to account for autocorrelation is appropriate. Newey-West [4] covariance estimator is used for this purpose.
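These serial-correlation diagnostics can be sketched directly from the residual series. An illustrative numpy translation (EViews reports the actual values; the DW statistic here follows its textbook formula, and the LM statistic is the auxiliary-regression n·R² version with pre-sample residual lags set to zero):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 suggest no
    first-order serial correlation, near 0 positive, near 4 negative."""
    e = np.asarray(resid, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

def breusch_godfrey_lm(X, resid, p=1):
    """LM = n * R^2 from regressing e_t on the original regressors X
    (assumed to include a constant) and e_{t-1}, ..., e_{t-p}."""
    e = np.asarray(resid, dtype=float)
    n = len(e)
    lags = np.column_stack([np.concatenate([np.zeros(j), e[:n - j]])
                            for j in range(1, p + 1)])
    Z = np.column_stack([np.asarray(X, dtype=float), lags])
    g, *_ = np.linalg.lstsq(Z, e, rcond=None)
    fitted = Z @ g
    r2 = 1.0 - np.sum((e - fitted) ** 2) / np.sum((e - e.mean()) ** 2)
    return n * r2
```

The LM statistic is compared against a chi-square distribution with p degrees of freedom.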

Table 6: Estimation output of equation (1) with Newey-West covariance estimator


Results are very similar to those obtained from the standard estimation of equation (1) reported in Table 2. The variables also still enter the regression significantly. Therefore, it can be concluded that the model is now well specified.
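For reference, the Newey-West (HAC) coefficient covariance follows the textbook formula with Bartlett-kernel weights. A compact, illustrative numpy sketch (not the EViews implementation):

```python
import numpy as np

def newey_west_cov(X, resid, lags=4):
    """HAC covariance of OLS coefficients: (X'X)^-1 S (X'X)^-1, where
    S = Gamma_0 + sum_{j=1..L} w_j (Gamma_j + Gamma_j') with Bartlett
    weights w_j = 1 - j/(L+1), which guarantee a positive semi-definite S."""
    X = np.asarray(X, dtype=float)
    e = np.asarray(resid, dtype=float)
    Xe = X * e[:, None]                 # rows x_t * e_t
    S = Xe.T @ Xe                       # j = 0 term
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)
        Gamma = Xe[j:].T @ Xe[:-j]
        S += w * (Gamma + Gamma.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ S @ XtX_inv
```

The square roots of the diagonal give the HAC standard errors; the coefficient estimates themselves are unchanged from OLS, which is why Table 6 differs from Table 2 only in the standard errors.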

Taylor rule with expected values

As was mentioned in the previous section, a forward-looking rule is preferred to a rule with current values for the variables. Therefore, the next step in the analysis is to estimate equation (2) using OLS. The results are illustrated below.

Table 7: Estimation output of equation (2)

Graph 3: OLS estimation of equation (2)

The graph shows a very slight improvement in the fit between the proposed and actual interest rates, particularly in the period from 1995 to 2000. However slight, the improvement supports the arguments of Orphanides (2003) and Fuhrer and Tootell (2008). Nevertheless, the proposed policy behaviour in this case tends to fluctuate more over the whole period, especially from 2001 to 2009. This might be due to the increased uncertainty stemming from the forecasts. The coefficients of the variables are also highly significant, as Table 7 indicates, although their significance is slightly lower than that obtained from estimating the fundamental Taylor rule, equation (1). The following table summarises the results of the White and LM [5] tests applied to equation (2).

Table 8: White and LM tests statistics for equation (2)

White Heteroskedasticity Test
  F-statistic        2.486938
  Obs*R-squared     11.43993

Breusch-Godfrey Serial Correlation LM Test
  F-statistic      238.3915
  Obs*R-squared     57.20317

The table shows that both heteroskedasticity and serial correlation are present and that OLS is, therefore, not BLUE. Setting the Newey-West covariance estimator in the estimation of equation (2) gives the following results.

Table 9: Estimation output of equation (2) with Newey-West covariance estimator
