This chapter will discuss the features of mergers and acquisitions (M&A) based on the academic literature to date. It will also point out the main factors that affected takeovers during the credit crunch, and how the economy as a whole, and the banking sector in particular, were affected by the recession.
Further, it will examine mergers and acquisitions in the banking industry and the formation of financial conglomerates. The main purpose of this study is to assess the impact of mergers on the value of the newly formed companies.
The M&A literature is largely based on empirical evidence for the period 1970–2006 in deregulated markets (Weston, 2003), leaving a gap in the research for the period 2007 to date and for the impact of government intervention. The paper will provide further evidence on the importance of studying the formation of M&As and their behaviour in the presence of severe market failures, by looking at the impact of some major takeovers during the financial crisis of 2007 on the economy in general, and on stakeholders in particular.
Mergers and acquisitions are generally very sensitive transactions, hence access to primary research and detailed data is limited, mainly due to confidentiality constraints (Harwood, 2005). This results in little coverage of pre-deal negotiations and timing delays, creating a void in the literature. Furthermore, the constantly changing regulatory and legal frameworks associated with rapid changes in the economic environment make the continuous updating of the M&A literature a necessity.
In a changing world shaped by globalization, increased competition, the emergence of new industries and markets, technological change and increasingly favourable economic and financial conditions (Weston, 2003), organizations found that by joining forces they may reduce business risks and better exploit opportunities in their environment.
The favourable grounds of a largely deregulated financial market therefore led to a massive expansion of large multinational organizations, some becoming even “more powerful than some nations”. Through their strength they can both manage and generate risks, with potentially dramatic consequences at national or even global level (Hutter, 2009).
The most recent example is the “disruptive” financial crisis that started in 2007. According to the de Larosière Report of 25 February 2009, it represents the catastrophic international effect of the greed of global financial conglomerates which took on excessive risks, combined with the failure of governments and regulatory bodies to foresee the economic and financial imbalances and to regulate the system accordingly, as well as the involvement of credit rating agencies in major conflicts of interest.
Prior research into Mergers and Acquisitions
An acquisition or merger is the buying of a “target” company by another. However, the following distinction between the two terms should be considered: a “merger” is the result of an amicable negotiation process between the parties, whereas an “acquisition” often implies a hostile takeover. In the present study, however, the two terms will be used interchangeably.
The main focus of the M&A research to date is the relationship between the causes and possible effects of mergers, the motives for their occurrence and how the process takes place (Weston, 2003).
One way of understanding the takeover process is by studying its activity peaks and the determinants of its slow-downs. These fluctuations are known as “merger waves” and, based on their variations, six phases have been identified:
1895–1904. This was a period of consolidating acquisitions described by Stigler (1950) as “merging for monopoly”. The failure of the Sherman Antitrust Act of 1890 to control the intense takeover activity (Gaughan, 2001), coupled with the 1903 economic recession, brought the first wave to an end. In 1914 the Clayton Act, together with the Federal Trade Commission and the Justice Department, strengthened the enforcement of antitrust legislation, which represented the starting point of the second merger movement.
1922–1929. This period consisted of horizontal and vertical transactions, mainly the result of developments in communication and transportation. Business monopolies were replaced by oligopolies (Stigler 1950, cited by Weston, 2003).
1965–1969. This wave featured conglomerate acquisitions, due to the increased antitrust enforcement provided by the Celler-Kefauver Act of 1950, forcing companies to shift their expansion strategy towards acquiring businesses from unrelated industries (Gaughan, 2001).
1984–1989. According to Gaughan (2001), this was a period of megamergers, with highly leveraged transactions and financial innovations such as junk bonds and hostile deals. The first reported charges of “insider trading”, coupled with the economic recession and the war against Iraq, ended the merger activity (Weston, 2003).
1992–2000. The main contributors to this movement were globalization, deregulation, technological and telecommunication developments, new methods of payment, share repurchases and the development of the internet (Mitchell, 2003). The new features of this wave were the creation of M&As through stock-for-stock transactions, the diversification of financing sources, and foreign direct investment in developing countries in Europe, where the privatization process took the form of takeovers (Calderon, Loayza and Serven, 2004). However, this movement came to an end with “the bursting of the Millennium Bubble and great scandals”, which increased the role of corporate governance (Lipton, 2006).
2002–2006. The spectacular increase in M&As from $1.2 trillion in 2002 to $3.4 trillion in 2006 was the result of increased globalization, some governments’ incentives for the creation of national and global “champions”, price fluctuations, the availability of cheap financing, the growth of private equity and the increase in management-led buyouts (Lipton, 2006).
One characteristic of merger waves is that they tend to commence during economic expansions and technological change, and to end when the economy slows down, giving rise to high fluctuations in merger activity. Mitchell and Mulherin (2003) argued that economic, political and financial factors were the main determinants of these variations. Another important aspect to be considered is the direct relationship between deregulation and an increased number of takeovers.
Motives behind M&As
”Mergers are an integral part of market capitalism” (Lipton, 2006), the rationale behind them being that companies seek to improve their performance. However, the various reasons for which companies merge are embodied in several theories, summarised below (Risberg, 2006):
The purpose of mergers is to achieve synergies, which are obtained when the total value of the merged firms is greater than the sum of the values of each firm operating independently.
Operational synergies allow firms to increase their operating income through:
- Economies of scale, by reducing fixed costs through removing duplicate operations, reducing inventory costs and increasing specialisation.
- Higher profit margins from reduced competition and increased market share.
- Economies of scope (Weston, 2003).
Financial synergies are achieved by lowering the cost of capital through portfolio diversification, access to cheaper capital and the efficient allocation of resources across different divisions.
Managerial synergies are achieved when the acquiring company’s managers are more efficient and their abilities are hence transferred to the new entity. This approach is supported by the free cash flow theory, which states that only efficient performers have the free cash flow required for a takeover, whereas target companies are poor performers. However, Jensen (1986) claims that through M&As managers seek to enhance their power by increasing the assets under their control, rather than to ensure the company’s growth.
The monopoly theory assumes that the acquirer will absorb a major competitor for the purpose of achieving monopoly power, increasing its market power by expanding its market share or increasing its profit margins.
The valuation theory rests on the information asymmetry model. It assumes that managers who have access to better information about a target company, which is believed to be undervalued, will acquire it.
The empire-building theory rests on the concept that managers’ behaviour is driven by self-interest. They therefore engage in takeovers to pursue their own goals, other than shareholders’ wealth maximization. Based on empirical evidence, Roll’s (1986) hubris hypothesis states that “takeovers reflect individual decisions”, which sometimes result in decision-makers paying too much for their targets. According to Black’s (1989) overpayment hypothesis, this occurs because managers are overoptimistic and interested in achieving personal goals. This approach is supported by empirical evidence brought forward by Rhodes (1983) and Black (1989) (cited by Risberg, 2006), and it is given the most credit of the M&A theories to date.
According to Risberg (2006), the process theory is built on two models: ”game theory”, which considers that individuals possess ”bounded rationality” and that their decision-making ability is hence limited; and Allison’s ”political decision-making process” between the members of an organisation. Accordingly, takeover decisions are not always rational and based on value-maximisation principles (Jemison and Sitkin, 1986; Duhaime and Schwenk, 1985, cited by Risberg, 2006).
Brealey and Myers (2003) found that mergers may also occur for tax purposes when acquisitions are financed through shares. Another benefit is the gain obtained from the tax reduction that follows from acquiring a loss-maker.
The global financial crisis was first signalled in June 2007, when Bear Stearns announced that two of its hedge funds, worth over US$3 billion and operating in the US mortgage market, were failing. In less than two years, according to the de Larosière report published in February 2009, the write-offs and write-downs by banks and insurance companies worldwide were worth €1 trillion. As a result of the disastrous economic, financial and social consequences of the recession, governments all over the world had to intervene with a series of measures to prevent the further collapse of the economic system and to correct the market failures.
One of the most urgent measures needed to prevent the collapse of global financial corporations from having a domino effect on the whole economic and social system was the implementation of “rescue deals”. One of the most notorious examples of such interventions was the US Federal Reserve’s agreement with JP Morgan for the rescue of the investment bank Bear Stearns, under which the former would fund $30bn of Bear Stearns’ less liquid assets, while the latter would guarantee to meet all payments due to Bear Stearns’ clients (BBC News, 2008).
Following the collapse of Lehman Brothers in September 2008, the crisis expanded rapidly, affecting other sectors. Governments in developed countries therefore intervened by providing support to financial institutions through (i) “capital injections”; (ii) “explicit guarantees on liabilities” to help banks preserve access to funding; and (iii) “purchases or guarantees of impaired legacy assets” to reduce banks’ exposure to massive losses (Panetta, 2009).
Further, the paper will apply M&A theory in the context of the financial crisis between 2007 and 2009, and will evaluate the effect of such state-orchestrated “rescue deals” on stakeholders, and in particular on shareholders’ wealth. It will also discuss the critical aspects of government intervention and the creation of even larger financial conglomerates.
The main research methods used to evaluate the success or failure of takeovers during the financial crisis, to assess the companies’ pre- and post-merger performance, and to measure the creation or destruction of shareholder value, are described below.
Methodology and Data Collection
Data and Samples
The mergers and acquisitions data is obtained mainly from the Datastream and Thomson Financial databases. The sample is restricted to mergers and acquisitions over USD 1bn in the banking sector, from 1 January 2007 to 31 December 2009. However, for comparison purposes, data from previous years will also be included in this study.
The methods that can be used to gather and analyse information in order to assess the impact of mergers and acquisitions on stakeholders can be classified, according to their process, into quantitative and qualitative research.
Quantitative research measures phenomena and analyses numerical data using objective, statistical methods to gain an understanding of the research topic.
The qualitative approach investigates phenomena and analyses data using subjective, interpretive methods (Lecture notes, 2010). Data is gathered through interviews, case studies and observation techniques, and can be supplemented with census and background information about the object studied, which is then analysed and interpreted by the author (Strauss and Corbin, 1998).
Recent developments argue that the two methods should not have a separate status, but should instead interact (Olsen, 2004). For a more reliable interpretation of the results, the triangulation method should be applied: gathering data at different times, in different situations and from different subjects (“data triangulation”); analysing data by applying two or more theoretical approaches (“theoretical triangulation”); and combining different research methods, such as quantitative and qualitative methods (“methodological triangulation”) (Downward, 2006).
Kvale and Brinkmann (2009) defined interviews as a method of collecting information and data from a group of participants with the purpose of investigating what they believe or feel about a certain matter. One of the main benefits of interviews is that they enable the researcher to ask complex and detailed questions and to gain “decisive knowledge” and reliable data compared to other research methods. Moreover, according to McBurney and White (2003), interviews are associated with both positivist and phenomenological methodologies.
However, there are certain limitations to conducting interviews: the interviewing process can be tiresome, time-consuming and expensive. In addition, the researcher has to take into account the element of confidentiality (Collis and Hussey, 2009). Moreover, the questions must be the same and asked in a similar manner and circumstances, otherwise the interview can be ineffective and the findings and conclusions derived from it may be distorted (Kvale and Brinkmann, 2009).
Direct observation is another method of collecting data, associated with either a positivist or a phenomenological methodology. According to Saunders (2009), it can be conducted in two ways: non-participant observation and participant observation. However, there are a number of limitations to this methodology, such as ethics, objectivity and the technology available for recording and analysing the information.
According to Beiske (2007), a case study approach implies the analysis of a company or group of companies, an event, a process or even an individual. It consists of gathering detailed information about the area of study. The main stages of a case study are: selecting a case, investigation, analysis and reporting.
The weaknesses identified with this methodology mainly concern gaining access to the selected organisation for the research. It is also difficult to decide on the delimitation and boundaries of the study.
Event Study Methodology
The event study methodology comprises a thorough investigation of the effect of an event on a specified dependent variable, and it involves two types of variables: independent and dependent. Brockett (1994) states that the most common dependent variable used in corporate finance is a firm’s common share (stock) price. Other variables that can be considered are earnings per share, dividend yield and the price-to-earnings ratio. However, the event study methodology rests on some key assumptions for it to hold true, notably that the market must be efficient, so that the effect of the event is immediately incorporated into the firm’s share price (Binder, 1998).
The primary focus of event study methodology is to observe any abnormal return behaviour around the event date (Larsen, 1999). Abnormal returns represent the difference between the performance of a single stock or portfolio and the average market performance over a set period of time.
The event study has become the “standard method” of measuring security price movements in response to an event, such as an announcement. It studies the flow of information to the market and assesses the extent to which the event affects stock returns. Measures used include abnormal stock returns, cumulative abnormal stock returns and the Sharpe ratio, in order to determine whether or not there is an abnormal stock return following an unanticipated event (Sudarsanam, 2003). The methodology is important because it focuses on stock prices and hence obviates the need to analyse accounting-based performance measures. Other settings in which event studies can be used to measure the impact of an event include dividend policy, corporate control and capital structure changes.
However, a security’s price change can only be considered abnormal with reference to some benchmark. Three models have been broadly used and examined in the literature for measuring abnormal returns: the Mean Adjusted Model, the Market Adjusted Model, and the Market and Risk Adjusted Returns Model (Larsen, 1999). To measure the impact of a particular event, unrelated factors should be controlled for, so the selection of the benchmark model for measuring normal returns is essential to carrying out an event study. The expected returns can be generated in several ways:
Capital asset pricing model. According to Sudarsanam (2003), the expected return for security i in period t is given by E(Rit) = Rft + βi[E(RMt) – Rft], where Rft is the risk-free rate, E(RMt) is the expected market return and βi is the security’s beta.
Mean-adjusted model. The assumption is that the expected return on a security is constant over time, but may differ across securities (Sudarsanam, 2003). According to Larsen (1999), the abnormal return is computed as eit = Rit – Ki, where Rit is the observed return and Ki is the predicted (mean) return.
Market and risk adjusted returns. The model was first developed by Sharpe in 1964 and takes both market and mean-return risks into consideration (Larsen, 1999). In the Sharpe (1964)–Lintner (1965) model, the benchmark return for security i is E(Rit) = Rft + βi[E(RMt) – Rft] = Kit, where Rft is the risk-free rate of return and βi is the security’s beta. The ex-post abnormal return on security i is given by the difference between its actual return and that predicted by the model: eit = Rit – [Rft(1 – βi) + βiE(RMt)].
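To make the three benchmarks concrete, the following Python sketch computes abnormal returns under the mean-adjusted, market-adjusted, and market and risk adjusted models. The return series are simulated, and all variable names and figures are illustrative only, not drawn from the study’s data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative daily returns over a 210-day estimation window
r_i = rng.normal(0.0005, 0.020, 210)   # security i
r_m = rng.normal(0.0004, 0.015, 210)   # market index
r_f = 0.0001                            # daily risk-free rate (assumed)

# Mean-adjusted model: the benchmark K_i is the security's own mean return
ar_mean = r_i - r_i.mean()

# Market-adjusted model: the benchmark is the market return itself
ar_mkt = r_i - r_m

# Market and risk adjusted (Sharpe-Lintner) model: estimate beta,
# then e_it = R_it - [R_ft(1 - beta_i) + beta_i * R_Mt]
beta = np.cov(r_i, r_m)[0, 1] / np.var(r_m, ddof=1)
ar_risk = r_i - (r_f * (1 - beta) + beta * r_m)
```

Note that the mean-adjusted abnormal returns sum to zero by construction over the estimation window, which is why the benchmark matters mainly in the event window.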
The procedure for event study methodology is described below (Larsen, 1999):
- Identification of the event relevant to the research question
- Selection of the sample of companies
- Estimation of the benchmark-model parameters that provide the expected returns (ER) during the event window
- Computation of the abnormal return (AR) by deducting the expected return from the actual return: ARit = Rit – E(Rit), where Rit is the actual return and E(Rit) is the expected return on security i for period t
- Calculation of the cumulative abnormal return (CAR) by summing the abnormal returns over the entire event period T: CARiT = Σt ARit
- Testing the AR and CAR for statistical significance; in this paper, the abnormal returns will be tested for their variation from zero
- Regression of the abnormal returns on relevant features of the stock that affect the event
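As a sketch, the core of this procedure can be implemented in a few lines of Python. The function below fits a market model over the estimation window and then computes AR, CAR and a simple t-statistic over the event window; the function name, window defaults and simulated data are illustrative assumptions, not part of the study’s design:

```python
import numpy as np

def event_study(stock, market, t0, est=(-250, -41), win=(-40, 40)):
    """Market-model event study around announcement day t0.

    Returns the abnormal returns (AR), their cumulative sum (CAR)
    and a t-statistic testing whether the mean AR differs from zero."""
    rel = np.arange(len(stock)) - t0              # days relative to the event
    est_mask = (rel >= est[0]) & (rel <= est[1])  # estimation window
    win_mask = (rel >= win[0]) & (rel <= win[1])  # event window

    # OLS fit of R_it = alpha + beta * R_Mt over the estimation window
    beta, alpha = np.polyfit(market[est_mask], stock[est_mask], 1)

    ar = stock[win_mask] - (alpha + beta * market[win_mask])
    car = np.cumsum(ar)
    t_stat = ar.mean() / (ar.std(ddof=1) / np.sqrt(ar.size))
    return ar, car, t_stat

# Illustrative use with simulated daily returns (announcement at index 260)
rng = np.random.default_rng(0)
market = rng.normal(0.0004, 0.015, 310)
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.01, 310)
ar, car, t_stat = event_study(stock, market, t0=260)
```

In an actual application, `stock` and `market` would be the daily return series of a sample bank and its market index over the collection period.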
This research study will be conducted using the event study methodology. The sample and the event window will be identified in order to obtain the relevant figures and results by calculating the ER, AR and CAR.
Efficient Market Hypothesis
The starting point of any event study is a hypothesis about how a particular event affects the firm’s value. Any change in the value of a company is reflected in the stock price through the occurrence of an abnormal return. The null hypothesis is that the event has no impact on the return-generating process.
This paper will measure the impact of mergers and acquisitions in the banking and insurance sectors during the financial crisis of 2007–2009 on shareholders’ wealth by testing the Efficient Market Hypothesis.
The concept of “market efficiency” was introduced by Eugene Fama, who argued in “Random Walks in Stock-Market Prices” that the actual prices of securities already reflect the effects of information based both on past events and on events which the market expects to take place in the future. The actual price of a security is therefore an estimate of its intrinsic value, which changes over time as a result of new information, such as the success of a new project, a change in management, new regulations imposed by a country, or other actual or expected changes which may affect the firm’s value. Moreover, Fama (1965) argued that in an efficient market, competition will cause the intrinsic value implied by new information to be incorporated “instantaneously” into actual prices. Later, building on Harry Roberts’ distinction between weak and strong forms, Fama (1970) distinguished between the weak form, in which past information is reflected in security prices; the semi-strong form, in which prices adjust instantly to all “publicly available information” (e.g. announcements of annual earnings, annual reports, stock splits); and the strong form, concerned with the effect of “insider” information on stock prices, where some investors have monopolistic access to relevant information.
In this paper, the semi-strong form will be tested, in terms of the relationship between a takeover announcement and any movement in the market value of the stock, and hence in shareholders’ wealth, during the period following the announcement.
An important tool for testing market efficiency is the “event study” methodology, which assesses the economic impact of an event by comparing the expected returns in the absence of the event with the actual returns that follow it (Cable and Holland, 1999).
However, the limitations of this method lie in the assumption that relevant information is freely available to all participants, since processing the information carries costs, such as the opportunity cost of portfolio evaluation and transaction costs; large portfolios, in addition, may be subject to further costs caused by market impact. Michael Jensen (1978) argued that abnormal returns should also take the transaction costs involved into account.
Meir Statman (2010) stated that the limitations of this approach come from the various ways of defining “market efficiency”, arguing that market efficiency is defined either as rational markets, where “the price is always right”, or as unbeatable markets, where “investors are unable to generate consistent positive alphas from securities”, meaning that there is no “systematic way to beat the market”. On this view, security prices reflect only “fundamental” or “utilitarian” characteristics, but not “psychological” or “value-expressive” characteristics, such as market sentiment (Statman, 1999). Another weakness of the Efficient Market Hypothesis is its confusion with free markets, in which governments do not intervene in the economy.
Empirical evidence that returns embody other unusual information which sets a predictable pattern has been brought forward by various researchers. Burton G. Malkiel (2003), in his study “The Efficient Market Hypothesis and Its Critics”, described the high January returns documented by Haugen and Lakonishok (1988) in “The Incredible January Effect”, French’s (1980) evidence of negative Monday returns, and the predictable patterns in returns around holidays (Ariel, 1990).
The first step in event study methodology is defining an event period (Weston, 2003). According to MacKinlay (1997), the windows are classified into the estimation window and the event window. The estimation window is used to estimate the parameters of the benchmark model, whereas the event window is used to measure the abnormal returns. According to Weston (2003), research studies generally apply a range of ±40 business days around the event date, in this case the takeover announcement date, which is considered day zero (t = 0), while the pre-event estimation period runs from -250 to -41 days prior to the announcement.
In this study, an estimation window of [-250, -41] days is used for more accurate results. The share price information for the selected sample will therefore be collected from t = -250 to t = +40 to estimate the effect of the event on shareholders’ wealth.
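Under this window convention, the per-deal data preparation can be sketched as follows. The prices here are simulated placeholders for the Datastream series; simple daily returns are derived from prices indexed so that day 0 is the announcement, and the series is then split into the [-250, -41] estimation window and the [-40, +40] event window:

```python
import numpy as np

# Hypothetical daily closing prices indexed from t = -250 to t = +40,
# with day 0 as the takeover announcement date
days = np.arange(-250, 41)
rng = np.random.default_rng(7)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.02, days.size))

# Simple daily returns; the first price has no prior day, so drop it
returns = prices[1:] / prices[:-1] - 1
ret_days = days[1:]                      # runs from t = -249 to t = +40

estimation = returns[(ret_days >= -250) & (ret_days <= -41)]
event = returns[(ret_days >= -40) & (ret_days <= 40)]
```

This split yields 209 estimation-window observations and 81 event-window observations per security, matching the ±40-day event window described above.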
Word Count: 3,912