Archive for May 2020
HISTORY OF CHAOS THEORY
Posted May 31, 2020
[IMAGE #1: DOUBLE PENDULUM CHAOS]
[IMAGE #2: LORENZ DETERMINISTIC CHAOS BY ROBERT GHRIST]
[IMAGE #3: CHAOTIC BEHAVIOR IN THE REAL WORLD] [LINK]
A HISTORY OF CHAOS THEORY by Christian Oestreicher [LINK]
 THE DETERMINISTIC SCIENCE OF LAPLACE: Determinism is predictability based on scientific causality. Local determinism concerns a finite number of elements, as in ballistics, where the trajectory of a projectile can be precisely predicted from the propulsive force, the angle of shooting, the projectile mass, and air resistance. In contrast, universal determinism is more complicated because it must account for the universe as a whole, or even just the solar system.
 Laplace illustrated it with a whirlwind of dust raised by an elemental force: confused as it appears to our eyes, an intellect that knew exactly the different energies acting on each particle of dust, together with the properties of the particles moved, could demonstrate that, the causes being given, each particle acted precisely as it ought to act and could not have acted otherwise than it did.
 This is the principle of universal determinism assumed in climate science, where it is thought that the complexity of the large number of particles moving in different directions can be captured in a climate model of sufficient complexity. It follows that, given a computer model of the universe of sufficient complexity, we ought to be able to describe exactly the motions of the greatest bodies of the universe and those of the lightest atom, such that nothing would be uncertain and the future, like the past, would be exactly reproduced on the computer model.
 THE PHASE SPACE OF POINCARE: Henri Poincaré developed another point of view: in order to study the evolution of a physical system over time, one has to construct a model based on a choice of laws of physics and to list the necessary and sufficient parameters that characterize the system (differential equations are often used in such models). One can define the state of the system at a given moment, and the set of all these system states is named the phase space.
 This view persisted until the phenomenon of sensitivity to initial conditions was discovered by Poincaré in his study of the n-body problem. Thus, a century after Laplace, Poincaré found that a very small cause, which eludes us, determines a considerable effect that we cannot fail to see, and so we say that this effect is due to chance.
 If we knew exactly the laws of nature and the state of the universe at the initial moment, we could accurately predict the state of the same universe at a subsequent moment. But this is not always so, because small differences in the initial conditions may generate very large differences in the final phenomena. Prediction then becomes impossible, and we have a random phenomenon. This was the discovery of chaos in nature.
 ANDREI KOLMOGOROV: Andrei Kolmogorov is one of the most important mathematicians and statisticians of the 20th century, with foundational contributions to probability theory, turbulence theory, information theory, and topology. In going over the work of Poincaré, he showed further that a quasiperiodic regular motion can persist in an integrable system even when a slight perturbation is introduced into the system. This is known as the KAM theorem, which indicates limits to integrability.
 The theorem describes a progressive transition towards chaos within an integrable system. At zero perturbation all trajectories are regular and quasiperiodic. As levels of perturbation are introduced, the probability of quasiperiodic behavior decreases and an increasing proportion of trajectories becomes chaotic, until completely chaotic behavior is reached. In terms of physics, in complete chaos the only remaining constant of motion is the energy, and the motion is called ergodic.
 In a linear system, the sum of causes produces a corresponding sum of effects, and it suffices to add the behavior of each component to deduce the behavior of the whole system. Phenomena such as a ball trajectory, the growth of a flower, or the efficiency of an engine can be described by linear equations. In such cases, small modifications lead to small effects and large modifications lead to large effects, as in reductionism.
 Nonlinear equations concern specifically discontinuous phenomena such as explosions, sudden breaks in materials, or tornadoes. Although they share some universal characteristics, nonlinear solutions tend to be individual and peculiar. In contrast to the regular curves of linear equations, the graphic representation of nonlinear equations shows breaks, loops, recursions, and turbulence. Using nonlinear models, one can identify critical points in the system at which a minute modification can have a disproportionate effect.
 EDWARD LORENZ: Edward Lorenz, of the Massachusetts Institute of Technology (MIT), is the official discoverer of chaos theory. He first observed the phenomenon as early as 1961 while running numerical weather-forecasting calculations. Lorenz assumed, as did many mathematicians of his time, that a small variation at the start of a calculation would induce a small difference in the result, of the order of magnitude of the initial variation. This turned out not to be the case. The sensitivity to initial conditions was addressed by meteorologist Philip Merilees, who organized the 1972 conference where Lorenz presented his results. The title of Lorenz's presentation, "Predictability: does the flap of a butterfly's wings in Brazil set off a tornado in Texas?", has become famous and a popular way of describing chaos theory.
 Lorenz had rediscovered the chaotic behavior of a nonlinear system, that of the weather, but the term chaos theory was only later given to the phenomenon by the mathematician James Yorke, in 1975. Lorenz also gave a graphic description of his findings using his computer. This graphic was his second discovery: the attractors.
 Strange Attractors: The Belgian physicist David Ruelle studied the Lorenz graphic and coined the term strange attractor in 1971. The clearly recognizable trajectories in the phase space never cut through one another, but they seem to form cycles that are not exactly concentric and not exactly on the same plane. It is also Ruelle who developed the thermodynamic formalism. The strange attractor is a representation of a chaotic system in a specific phase space, but attractors are also found in many nonchaotic dynamical systems. There are four types of attractors: fixed point, limit cycle, limit torus, and strange attractor.
 The four types of attractors are displayed in the graphic above. 1a, Fixed Point: a point that a system evolves towards, such as the final state of a damped pendulum. 1b, Limit Cycle: an isolated periodic orbit of the system, as in the swings of a pendulum clock or the heartbeat at rest. 1c, Limit Torus: there may be more than one frequency in the periodic trajectory of the system through the state of a limit cycle; if two of these frequencies form an irrational ratio, the trajectory is no longer closed and the limit cycle becomes a limit torus. 1d, Strange Attractor: the strange attractor characterizes the behavior of chaotic systems in a phase space. The dynamics of satellites in the solar system is an example.
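The strange attractor can be sketched numerically. The short Python sketch below integrates the Lorenz system with Lorenz's classic parameter values (sigma = 10, rho = 28, beta = 8/3); the simple Euler step and the step size are illustrative choices for brevity, not a production integration scheme:

```python
# Minimal sketch: Euler integration of the Lorenz system, whose
# trajectories settle onto a bounded strange attractor.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0=1.0, y0=1.0, z0=1.0, steps=5000):
    pts = [(x0, y0, z0)]
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        pts.append((x, y, z))
    return pts

pts = trajectory()
# The orbit never diverges to infinity: it stays on a bounded attractor.
assert all(abs(x) < 100 and abs(y) < 100 and abs(z) < 100 for x, y, z in pts)
```

Plotting the (x, z) pairs of such a trajectory reproduces the familiar butterfly-shaped figure of the Lorenz attractor.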
 We can describe future trajectories of our planet with Newton's laws, but how do we know that these laws work at the scale of the universe? These laws concern only the solar system and exclude all other astronomical parameters. Therefore, while the earth is indeed found repeatedly at similar locations in relation to the sun, these locations ultimately describe a strange attractor of the solar system.
 ALEKSANDR LYAPUNOV: Chaos amplifies initial distances in the phase space. Two trajectories that are initially at a distance D will be at a distance of 10 times D after a delay known as the characteristic Lyapunov time. If the characteristic Lyapunov time of a system is short, the system amplifies its changes rapidly and is more chaotic. It is within this amplification of small distances that certain mathematicians, physicists, and philosophers consider that one can find randomness. The characteristic Lyapunov time of the solar system is thought to be on the order of 10 million years.
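This amplification of small initial distances is easy to demonstrate numerically. As a stand-in for a physical system, the sketch below uses the chaotic logistic map x → 4x(1 − x) (an illustrative choice, not the solar system example of the text) and tracks two trajectories started a distance of 1e-10 apart:

```python
# Two trajectories of the chaotic logistic map, started a tiny
# distance apart, are driven to a macroscopic separation.
def step(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10     # initial distance D = 1e-10
max_sep = 0.0
for _ in range(60):
    x, y = step(x), step(y)
    max_sep = max(max_sep, abs(x - y))

# The initial distance of 1e-10 has been amplified to order one.
assert max_sep > 0.1
```

Because the separation grows roughly exponentially until it saturates at the size of the attractor, long-range prediction of the individual trajectory is impossible even though the rule is fully deterministic.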
 NEGATIVE AND POSITIVE FEEDBACK: Negative and positive feedback mechanisms are ubiquitous in living systems, in ecology, in everyday psychology, in climate, and in mathematics. A feedback does not greatly influence a linear system, while it can induce major changes in a nonlinear system. Thus, feedback participates in the frontier between order and chaos.
 FEIGENBAUM AND THE LOGISTIC MAP: Mitchell Jay Feigenbaum proposed the scenario called period doubling to describe the transition between regular dynamics and chaos. His proposal was based on the logistic map introduced by the biologist Robert M. May in 1976. The logistic map is a function of the segment [0,1] into itself defined by x_{n+1} = r·x_n·(1 − x_n), where n = 0, 1, … describes the discrete time, x_n is the single dynamical variable, and 0 ≤ r ≤ 4 is a parameter. The dynamics of this function presents very different behaviors depending on the value of the parameter r.
 For 0 ≤ r ≤ 3, the system has a fixed-point attractor that becomes unstable when r = 3. For 3 < r < 3.57…, the function has a periodic orbit as attractor, with a period of 2^n where n is an integer that tends towards infinity as r tends towards 3.57. When r ≈ 3.57, the function has a Feigenbaum fractal attractor. When r > 4, the function leaves the interval [0,1], as seen in the graphic below.
 This function is of a simple beauty in the eyes of mathematicians and it has numerous applications, for example the calculation of populations taking into account only the initial number of subjects and their growth parameter r (as birth rate). When food is abundant the population increases, but then the quantity of food for each individual decreases and the long-term situation cannot easily be predicted.
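The regimes described above can be verified with a few lines of Python; the r values below are illustrative picks inside each regime:

```python
# Iterate the logistic map x_{n+1} = r x_n (1 - x_n) and return the
# last `keep` values of the orbit after a long transient.
def orbit(r, x0=0.2, n=1000, keep=8):
    x = x0
    out = []
    for i in range(n):
        x = r * x * (1 - x)
        if i >= n - keep:
            out.append(x)
    return out

# r = 2.5: fixed-point attractor at 1 - 1/r = 0.6
assert all(abs(v - 0.6) < 1e-9 for v in orbit(2.5))

# r = 3.2: period-2 attractor -- the tail alternates between two values
t = orbit(3.2)
assert abs(t[0] - t[2]) < 1e-9 and abs(t[0] - t[1]) > 0.1

# r = 3.9: chaotic regime -- the tail never settles into a cycle
assert len(set(orbit(3.9, keep=100))) == 100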

MANDELBROT AND FRACTAL DIMENSIONS
 In 1973, Benoît Mandelbrot, who first worked in economics, wrote an article about new forms of randomness in science. He listed situations where, in contrast to the classical paradigm, incidents do not compensate for each other but are additive, and where statistical predictions become invalid. He described his theory in a book where he presented what is now known as the Mandelbrot set: the fractal defined as the set of points c of the complex plane for which the recurring series z_{n+1} = z_n^2 + c, with z_0 = 0, remains bounded.
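The definition translates directly into a membership test. The sketch below uses the standard escape criterion that once |z| exceeds 2 the orbit is guaranteed to diverge; the iteration cap of 200 is an illustrative choice:

```python
# Membership test for the Mandelbrot set: c belongs to the set if the
# series z_{n+1} = z_n^2 + c, with z_0 = 0, remains bounded.
def in_mandelbrot(c, max_iter=200):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:        # once |z| > 2 the orbit escapes to infinity
            return False
    return True

assert in_mandelbrot(0 + 0j)       # the orbit stays at 0
assert in_mandelbrot(-1 + 0j)      # period-2 cycle: 0, -1, 0, -1, ...
assert not in_mandelbrot(1 + 0j)   # 0, 1, 2, 5, 26, ... escapes
```

Evaluating this test over a grid of complex values of c and coloring the points accordingly yields the familiar image of the Mandelbrot set.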
 A characteristic of fractals is the repetition of similar forms at different levels of observation (theoretically at all levels of observation). Thus, a part of a cloud looks like the complete cloud, or a rock looks like a mountain. Fractal forms in living species are, for example, a cauliflower or the bronchial tree, where the parts are the image of the whole. A simple mathematical example of a fractal is the so-called Koch curve, or Koch snowflake: starting with a segment of a straight line, one replaces the central third of the line with the two other sides of an equilateral triangle erected on it, and this is repeated for each of the smaller segments obtained. At each substitution the total length of the figure is multiplied by 4/3, and within 90 substitutions, from a 1 meter segment, one obtains the distance from the earth to the sun.
 Shown above are the first four iterations of the Koch snowflake. Fractal objects have the following fundamental property: the finite (in the case of the Koch snowflake, a portion of the surface) can be associated with the infinite (the length of the line). A second fundamental property of fractal objects, clearly seen in snowflakes, is self-similarity, meaning that the parts are images of the whole at each scaling step. A few years later, Mandelbrot developed fractal geometry and found that Lorenz's attractor was a fractal figure, as are the majority of strange attractors, and he defined the fractal dimension. As an illustration of this new sort of randomness, Mandelbrot cites the French coast of Brittany: its length depends on the scale at which it is measured, and it has a fractal dimension between 1 and 2; the coast is neither a one-dimensional nor a two-dimensional object. For comparison, the dimension of the Koch snowflake is about 1.26, that of Lorenz's attractor is around 2.06, and that of the bifurcations of Feigenbaum is around 0.45.
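The length claim above is a one-line calculation: each substitution multiplies the curve length by 4/3, so 90 substitutions of a 1 meter segment give (4/3)^90 meters, which indeed exceeds the earth-sun distance of about 1.496e11 meters:

```python
# Each Koch substitution multiplies the curve length by 4/3.
length = 1.0                # starting segment: 1 meter
for _ in range(90):
    length *= 4.0 / 3.0

earth_sun_m = 1.496e11      # one astronomical unit, in meters
assert length > earth_sun_m
```

The result is roughly 1.8e11 meters, a curve longer than the distance to the sun folded into a bounded region of the plane, which is the finite/infinite pairing described above.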
THE PRIVATE SECTOR IN CHINA
Posted May 30, 2020
[LINK TO THE HOME PAGE OF THIS SITE]
The Private Sector in China, 2001
 Funded & published by International Securities Consultancy, Hong Kong, 2001
All rights reserved by International Securities Consultancy
 Prior to socialism, the private sector was the primary economic strength of China. In 1820 it made China the largest economy in the world, and by 1860 Shanghai had become the financial and commercial hub of the East. Private enterprise was universal and vigorous, and the private sector exercised considerable political power. In fact, it was the attempt by the Empress Dowager to nationalize the railroads that brought about the fall of the Qing Dynasty and, ironically, by way of Sun Yat-sen's popular revolt, the eventual ascent of the Communist Party of China (CPC) in 1949. At the time there were about half a million private enterprises in China, active in a variety of sectors including mining, manufacturing, transportation, finance, and munitions.
 The private sector continued to drive the Chinese economy even under socialism. It supplied the People’s Liberation Army during the Korean War. The initial position of the CPC was that private enterprise was complementary to and could coexist with socialism. But after the Korean War the CPC reversed itself and began to expropriate private sector assets citing ideological reasons. Labor is the primary economic asset in socialism just as capital is the primary economic asset in capitalism. It is therefore unacceptable in socialism for an individual to profit from the labor of another.
 During periods of recession and unemployment, socialist doctrine was tempered to engage the private sector in job creation; but by 1966 private enterprise had disappeared from China. An industrial structure consisting of State-owned enterprises (SOEs) with centrally planned capital allocation, production targets, wages, and prices was installed in its place. The system failed.
 The SOEs grew into giant bureaucracies with social service obligations intermingled with production obligations, and with thin and invisible lines separating SOE business decisions from government policy decisions. The SOEs were overstaffed, and they used outmoded and polluting technology to produce an excess of inferior goods. As business enterprises they were grossly inefficient and not competitive in a global context.
 In 1978 a program of reforms was initiated to overhaul these enterprises. Enterprise reform is the centerpiece of China's market reform program. Its premise is that the State-owned enterprises will morph into efficient and globally competitive businesses if they are subjected to market forces. Chinese leaders chose to carry out these reforms within the context of State ownership and without a mass privatization plan.
 It was acknowledged, however, that privatization issues aside, a private sector would be needed in China if only to absorb the excess workers that would be laid off by enterprise reform. Accordingly, the Party formalized the recognition of private enterprise with constitutional amendments in 1988 and again in 1993, in which the private sector is described once again as an acceptable "complement" to socialism. A third amendment in 1999 elevated the status of the private sector to a "component" of socialism. Although entrepreneurs were still barred from Party membership, it became clear that the CPC wanted to grow a grassroots entrepreneurial sector.
 Private enterprise responded enthusiastically and grew rapidly from zero to a significant economic force, though mostly as self-employed tradesmen and small family businesses such as restaurants, laundries, repair shops, retail stores, manual trades, transport, and light industry. There are only a handful of private businesses with assets of more than 100 million yuan; these are restricted to specific sectors and are owned for the most part by insiders (2001). Even so, the private sector soon took over as the primary engine of China's economic growth, disproportionately to its relative size.
 Yet ambiguities remain with regard to the intended role of the private sector (2001) in the reform era. The State's constitutional responsibility as an ally of the private sector is compromised not just by ideological conflicts but by its own vested interest in the State sector, which includes obligations to workers, retirees, and an entrenched bureaucracy. The private sector finds itself operating in an economy where the necessary industrial and economic infrastructure is captured by an inefficient State sector and where the interests of the State sector are the primary concern of the State's regulators and economic planners.
 Besides, in the absence of the rule of law and without clearly defined property rights, the relatively higher profitability of the private sector provides the incentive and the opportunity for State predation. So the relationship between the private sector and the State is characterized by contradictions. The State simultaneously encourages the private sector and persecutes it.
 Discrimination against the private sector is both formal and informal. The tools of formal persecution include entry barriers, excessive fees and permit requirements, sectoral restrictions, and restricted access to capital, raw materials, product markets, and industrial infrastructure. The most severe is the capital constraint. The nation's banks and securities markets are owned by the State and are reserved for the State itself. At stake is a 7 trillion yuan household savings pool held hostage by the State's banking monopoly.
 Informal persecution arises because the reform process functions in a legal vacuum with respect to property rights. The institutional mismatch endows bureaucrats with discretionary power that victimizes both the State and the private sector. In dealing with government officials, entrepreneurs must resort to bribery or rely on guanxi, a method of developing a personal relationship with bureaucrats by virtue of mutual friends and relatives.
 The capture of the formal capital market by the State squeezes the entrepreneurial sector into niche and informal capital arrangements and a higher cost of capital. Venture capital has flowed into China sourced mostly in Hong Kong, Singapore, and Taiwan but the venture capital market is still relatively small because of limited exit strategy options. Domestic and foreign venture capital could serve a bigger role but venture capitalists need an IPO exit, an option not available to the private sector in China.
 The State’s predatory policy towards capital fragments capital markets in China. Asset allocation is distorted, scarce resources are wasted, and the ability of the private sector to generate new wealth and to provide employment is curtailed. Escape from this trap is possible because the savings pool is a zero sum game only in the short term. Given access to capital, the private sector will create new wealth that will increase and not decrease domestic savings in the medium term. There are even benefits in the short term because increased competition for scarce capital will enhance capital allocation efficiency in the State sector too.
 It is extremely difficult for private enterprise to operate in the same product market as State enterprises. For example, a private sector domestic airline would need an operating license, approval of its pricing structure, and airport access from the CAAC, a government agency and regulator of the airline industry that also owns ten airlines. It would purchase jet fuel from a State-owned monopoly that also supplies the State's airlines. Its business travelers would be largely State personnel. It would also face capital market constraints. To raise equity capital in the stock market it would retain a State-owned investment banker and a State-owned accounting firm to submit a request to the CSRC, a State agency, and a listing application to a State-run stock exchange. To obtain bank credit it would approach one of the four State bank monopolies. These banks give preference to State firms and lend to achieve certain policy targets. The loan officer who reviews the application is held personally liable if a private sector debtor defaults, although she faces no such liability on loans to State-owned enterprises.
 More than three quarters of the banking industry is controlled directly by the Big Four State-owned banks, in a country with the world's highest propensity to save. The balance of deposits and loans is accounted for mostly by thousands of small, locally run urban credit cooperatives and rural credit collectives and by the so-called Trust and Investment Companies, or TICs. The TICs are allowed to enter speculative markets where banks may not go but, in an apparent contradiction, are able to pass their bad loans to the banks. All of these financial institutions are public sector enterprises controlled by the Party-State.
 All of the State's financial entities are charged with providing a steady flow of capital for enterprise reform and for a bank rescue program. In China, separation of the State sector from the private sector is complicated by institutional weaknesses and by the terminology itself. The "State" refers to the central government in Beijing. Yet the government hierarchy includes, in addition to the State, three levels of local government that are both geographically dispersed and functionally distributed, with devolution of administrative and fiscal powers. Provincial, township, and village governments, in addition to discharging their administrative functions, also own and operate factories, railways, fishing fleets, mines, and other productive assets. These so-called "township and village enterprises", or TVEs, are normally smaller in scale than SOEs. In some cases, Beijing has turned over small and medium-sized SOEs to the province in which they are located, and their status has been changed to that of township enterprises.
 TVEs are not included in the statistics for “state owned enterprises” and are not the concern of the State’s enterprise reform project. Their functional category as private or public is often uncertain because of poorly defined property rights. Not all local government enterprises are clearly in the public sector. Some enterprises are leased to private sector operators for fixed rents and they function more like private than public enterprise. Others are actually sold outright to a shareholding corporation that normally consists of managers and workers and these firms would appear to have been “privatized”. However, the lessees and new owners in both cases may choose to pay a fee to local officials and register their business as a TVE to gain favorable access to government controlled markets and infrastructure. Of the local government assets that are still theoretically in government hands, many are actually operated as if they were private sector enterprises. An informal “privatization” takes place and local officials act more like owners than government agents. Legal institutions have not evolved sufficiently in China to distinguish between ownership structures of these various enterprise forms.
 It is for this reason that analysts often lump the entire TVE sector with the private sector as simply "nonstate". Significant questions remain, however. Thousands of State-owned SMEs whose control is ceded to provincial governments, but whose exact number is not reported, appear in the data as "growth" of the private sector and as an apparent decline of the State sector. There are also complications in the data for large SOEs. Some of these firms have undergone an equity carve-out procedure in which a shareholding corporation is formed that takes over the productive assets of an SOE in exchange for shares issued to the SOE. State statistics treat these corporations as SOEs if the government owns more than a given portion of the shares but as "nonstate" if government ownership is less. The discriminating level of ownership is set at an apparently arbitrary value between 26% and 51% and is revised from year to year. These revisions confound trends in the data.
 The SOEs are organized in a complex network of pyramidal and cross-holding patterns in which the State and State-owned firms may own 100% of the shares even when "State ownership" per se is minor. In cases where the corporation is listed and 30% or so of the shares have been issued to the public, the State uses pyramiding, cross holdings, and an autocratic corporate governance system to shut out minority shareholders. These complexities render the statistics on State ownership impossible to interpret with a reasonable degree of accuracy.
 It might appear that we could estimate the real size of the private sector by adjusting the State's numbers downwards to account for these effects, but it is not that simple. First, the incomplete corporate governance system victimizes not only minority shareholders but the State itself. Managers take advantage of institutional weaknesses to seize de facto control of State assets. The methods employed normally involve a shell game with equity in a complex cross-ownership network of assets. In some instances funds are removed by managers in hard currency to a shell company in Hong Kong, and these funds often reappear in the mainland as joint venture foreign direct investment.
 In all of these cases except those involving pure capital flight, State sector assets do in fact become transformed into private sector assets. In other words, State assets are privatized. No estimate of the size of this transformation is available or even possible. Chaos underlies the superficial Confucian orderliness of Chinese corporate governance and presents daunting measurement problems for researchers. Finally, the reported GDP of China refers only to the formal economy and does not account for the production of the informal sector, which lies entirely in private hands. Analysts and senior executives of international investment banks who have observed private sector business activity first hand in China feel certain that the informal sector is at least as large as the formal sector. They cite the extraordinary official savings rate, more than 40% of GDP, as evidence that China's actual GDP is much larger.
 Many researchers have found a middle ground by accepting the State's estimate of the SOE sector and by defining the private sector as the nonstate sector. Defined in this way, the private sector appears to have made a remarkable resurgence. In a single decade, from 1989 to 1999, the State and private sector shares of industrial production flip-flopped: the SOE share fell from three quarters to one quarter, with a corresponding rise of the private sector from one quarter to three quarters. The pure entrepreneurial sector is smaller but still shows dramatic growth; its share of industrial production went from 2% to 16% of GDP in the same period.
 The pattern of private sector development in China flows mostly from the State's policy decisions. The leaders and bureaucrats in charge of China's transition are its former central planners, so it is not surprising that the target economic structure and the transition process itself contain elements of central planning. A system of geographical preferences is used to set aside "special economic zones" where private and foreign joint venture projects are allowed. Special consideration is given to coastal regions for private ventures that are most likely to result in exports. There is also a hierarchy of sector designations for private enterprise that runs the spectrum from those most encouraged to those that are forbidden. Geographical and sectoral considerations are sometimes combined to encourage private sector development in certain regions only in a predetermined sector. For example, Hainan Province and Shenzhen in Guangdong Province are expected to develop a private sector in the information technology industry. Finally, a timetable is used to set target production rates for private enterprise in each sector.
 It may turn out that the important characteristic of State-owned enterprises that distinguishes them from privatized firms is not the extent of State ownership per se but the extent to which the State is willing and able to exercise control. It is widely recognized that decentralization and devolution of powers in the reform era have fostered regional autonomy for local governments and TVEs and have created a "weak center". The same process has devolved power from the State to SOE managers, particularly in the "unprotected" sectors where the productive assets of SOEs have been carved out into corporations, some of which have even issued shares to the general public and engaged in joint ventures with foreign firms. These managers enjoy expanded authority to make both cost center and investment center decisions, with a system of incentives and profit sharing within the experimental, innovative, and evolving "responsibility system". The system was initiated at the outset of reforms to overcome limitations imposed by Confucianism. The Confucian ideal is a sense of order even at the expense of truth. For example, convicting a suitable scapegoat to regain order after a high profile crime, something often observed in China, is a perfectly legitimate course of action for Confucians because social harmony can be achieved without a downside if the scapegoat is either dead or known to be a criminal who deserves punishment anyway.
 The State no longer dictates production levels and prices to these firms. Instead, it subjects them to market forces. In other words, State control is yielding to market control. The SOEs compete in the commodity and capital markets, and it is expected that market forces will determine prices, production, and the cost of capital, so that management expertise and worker productivity will determine efficiency and survival. The role of the State in SOE management is greatly diminished, consistent with the "weak center" model of regional decentralization. Whether these efficiency increases have been achieved is a topic of debate among scholars. Initial empirical data do not seem to contain any evidence of increases in SOE efficiency attributable to these organizational enhancements. However, conventional wisdom and anecdotal evidence indicate otherwise. It may be that sufficient time has not elapsed for these efficiency increases to show up in cross-sectional statistics.
 An efficiency time lag of this nature has been observed by Villalonga in Eastern Europe. In China, the mechanism is one of attrition and consolidation. Inefficient firms will not survive, and their assets will either be mothballed or acquired by efficient managers. Until this process advances sufficiently, the efficiency losses of the losers will match the efficiency gains of the winners and the cross-sectional average efficiency will still be "average". We ought to see an increase in the variability of efficiency across the sample and not necessarily an increase in the mean. A statistically detectable efficiency improvement may occur after enough of the bad firms have dropped out of the population.
 The dynamics of the household electrical appliance industry is illustrative. The pre-reform emphasis on regional self-sufficiency, in conjunction with the government's barriers to interregional trade, bred a large multiplicity of inefficient producers of household appliances. The reform program removed these barriers, discontinued production quotas and centrally planned pricing policies, and set off an intense competition among these producers in the nationwide market for household electrical appliances. Many firms sought to leverage their competitive position with joint venture agreements with established international firms such as Maytag, Siemens, Electrolux, and Sanyo, and by raising new capital in the IPO market. The industry went from a period of rapid growth to one of overproduction. Price wars broke out and profit margins stabilized. Winners and losers became apparent. Regional specializations took root.
 Washing machine maker Rongshida Group of Hefei in Anhui Province partnered with Maytag and became an industry leader. Among its acquisitions was Chongqing Washing Machine, a money losing SOE that was returned to profitability by Rongshida’s management. The Hefei region emerged organically as a national center for the industry. The city is home to several other industry winners including Meiling Group and Philco. Meiling raised capital by issuing shares in the Shenzhen Stock Exchange and became an industry leader in refrigerators while Philco, like Rongshida, chose the foreign joint venture route to capture market share in air conditioners. Over this period economic growth in Hefei outpaced the national average. Within a decade average household income in Hefei grew from 500 RMB/month by an order of magnitude in nominal terms.
 With thanks to Richard Margolis, Bill Overholt, Steven Xu, Xiaonian Xu, Yan Wang, and Christopher Lingle. See also: Hongbin Li and Scott Rozelle, “Saving or Stripping Rural Industry: An Analysis of Privatization and Efficiency in China”, Agricultural Economics, Vol. 23, No. 3, September 2000; and Belen Villalonga, “Privatization and Efficiency: Differentiating Ownership Effects from Political, Organizational, and Dynamic Effects”, Journal of Economic Behavior and Organization, Vol. 42, No. 1, May 2000.
HOW BUREAUCRATS LIE WITH BUZZWORDS
Posted May 29, 2020
on:
WHAT THE UNITED NATIONS CLIMATE CHANGE PRESS RELEASE SAYS [LINK]
UN CLIMATE PRESS RELEASE 28 MAY 2020
GOVERNMENTS COMMIT TO TAKE FORWARD VITAL WORK TO TACKLE CLIMATE CHANGE IN 2020
WHAT ACTUALLY HAPPENED
 THE UN FORMED A COMMITTEE CALLED the Bureau of the Conference of the Parties to the UNFCCC, with 11 members nominated by the five United Nations regional groups and the Small Island Developing States.
 A VIRTUAL ONLINE MEETING OF THIS COMMITTEE WAS HELD AND A MOTION WAS TABLED AS FOLLOWS: “Committed to take forward crucial work to tackle climate change”
 ALL ELEVEN MEMBERS OF THE COMMITTEE VOTED FOR THE MOTION.
 THE UNITED NATIONS ANNOUNCED THIS VOTE TO THE WORLD THROUGH PRESS RELEASES AND ONLINE POSTINGS [LINK] AS FOLLOWS:

GOVERNMENTS COMMIT TO TAKE FORWARD VITAL WORK TO TACKLE CLIMATE CHANGE IN 2020.
THIS STATEMENT IS A FALSEHOOD! IT IS AN EXAMPLE OF THE MANY BUREAUCRATIC LIES TOLD TO US BY OUR UNITED NATIONS BUREAUCRATS
FULL TEXT OF THE PRESS RELEASE WITH COMMENTARY
 UN CLIMATE PRESS RELEASE / 28 MAY, 2020
Governments Commit to Take Forward Vital Work to Tackle Climate Change in 2020  Bonn, 28 May 2020 – At a virtual meeting today, the Bureau of the Conference of the Parties to the UNFCCC committed to take forward crucial work to tackle climate change under the umbrella of the UN Framework Convention on Climate Change (UNFCCC), despite the COVID-19 crisis. (the Bureau of the Conference of the Parties is an internal committee of the UN)
 The 11 members of the Bureau are nominated by each of the five United Nations regional groups and Small Island Developing States, and provide advice and guidance regarding the ongoing work under the Convention, the Kyoto Protocol, and the Paris Agreement, the organization of their sessions and the relevant support by the UN Climate Change secretariat.
 Since the last meeting of the Bureau in April, the work of the UNFCCC secretariat has not slowed, with several initiatives launched in order to drive Momentum, showcase continuing climate action, and urge greater climate Ambition from all segments of society.
 UN Climate Change Executive Secretary Patricia Espinosa said: “Our efforts to address climate change and COVID-19 are not mutually exclusive. If done right, the recovery from the COVID-19 crisis can steer us to a more inclusive and sustainable climate path. We honor those whom we have lost to COVID-19 by working with renewed commitment and continuing to demonstrate leadership and determination in addressing climate change, and building a safe, clean, just and resilient world.” (translation: we use pandemic deaths to sell climate change)
 In addition to several technical meetings organized by the UN Climate Change secretariat, key political events have taken place this year, for example the Petersberg Climate Dialogue and the Placencia Ambition Forum.
 From 1 to 10 June 2020, a series of online events will be conducted under the guidance of the Chairs of the UNFCCC’s Subsidiary Body for Scientific and Technological Advice and the Subsidiary Body for Implementation and with the support of the UNFCCC secretariat, the June Momentum for Climate Change.
 The June Momentum events offer an opportunity for Parties and other stakeholders to meet virtually and continue exchanging views and sharing information in order to maintain Momentum in the UNFCCC process and to showcase how climate action is progressing under the special circumstances the world is currently facing.
 This will include advancing technical work under the constituted bodies, as well as providing a platform for information exchange and engagement on other work being done under the UNFCCC, including on adaptation, mitigation, science, finance, technology, capacity-building, transparency, gender, Action for Climate Empowerment, and the preparation and submission of nationally determined contributions.
 Formal negotiations and decision-making are not envisaged for these events; they will take place at the UNFCCC Subsidiary Body sessions which are scheduled for October of this year.
 The Bureau of the Conference of the Parties, with the UK and its Italian partners today also agreed new dates for the COP26 UN Climate Change Conference, which will now take place between 1 and 12 November 2021, in Glasgow.
 With 197 Parties, the United Nations Framework Convention on Climate Change (UNFCCC) has near universal membership and is the parent treaty of the 2015 Paris Climate Change Agreement.
 The main aim of the Paris Agreement is to keep the global average temperature rise this century well below 2 degrees Celsius and to drive efforts to limit the temperature increase even further to 1.5 degrees Celsius above pre-industrial levels.
 The UNFCCC is also the parent treaty of the 1997 Kyoto Protocol. The ultimate objective of all agreements under the UNFCCC is to stabilize greenhouse gas concentrations in the atmosphere at a level that will prevent dangerous human interference with the climate system, in a time frame which allows ecosystems to adapt naturally and enables sustainable development. THIS ENTIRE SENTENCE IS COMPLETELY MEANINGLESS BUREAUCRATIC NONSENSE. THEIR ONLY TASK IN THE CLIMATE GAME IS TO REPEAT THEIR MONTREAL PROTOCOL SUCCESS IN ELIMINATING CFC EMISSIONS BY ELIMINATING FOSSIL FUEL EMISSIONS. THEY HAVE NOT DONE THAT. ALL ATTEMPTS TO RECREATE A “MONTREAL PROTOCOL” CLONE FOR FOSSIL FUEL EMISSIONS HAVE FAILED.
 THERE IS OF COURSE A BIG DIFFERENCE BETWEEN THE INCONVENIENCE CAUSED BY A BAN ON CERTAIN REFRIGERANT & HAIRSPRAY INGREDIENTS AND THAT OF OVERHAULING THE ENERGY INFRASTRUCTURE OF THE WORLD. THIS DIFFERENCE WAS COMPLETELY MISSED BY THE BUREAUCRATS, WHOSE FAILURE TO RE-ENACT THE MONTREAL PROTOCOL WITH FOSSIL FUELS IS STILL NOT UNDERSTOOD AT THE UN, EVEN AT THE HIGHEST LEVELS, SUCH THAT IT IS THOUGHT THAT BUZZWORDS LIKE AMBITION AND MOMENTUM ARE WHAT THEY NEED TO TURN THINGS AROUND.
A COMMON BUREAUCRATIC ASSUMPTION IS THAT BUZZWORDS AND PHRASES ARE A SUFFICIENT SUBSTITUTE FOR ACTION. THIS PATTERN IS SEEN IN THE CLIMATE ACTION STRATEGY OF UN BUREAUCRATS WHERE REPEATED USE OF THE WORDS MOMENTUM & AMBITION IS THOUGHT TO OVERCOME THEIR FAILURE TO PUT TOGETHER A GLOBAL PROGRAM TO CUT GLOBAL FOSSIL FUEL EMISSIONS. WHAT WE SEE HERE IS THAT IT TAKES LIES TO MAKE BUZZWORDS WORK AND IT TAKES BUZZWORDS TO SELL THOSE LIES.
THE SCIENCE OF CLIMATE SCIENCE
Posted May 28, 2020
on:FLASHBACK TO 2017
DR. JONATHAN BAMBER, PhD [LINK] PHYSICIST AND PROFESSOR OF PHYSICAL GEOGRAPHY AT THE UNIVERSITY OF BRISTOL IS A CLIMATE SCIENTIST WITH A SPECIFIC RESEARCH INTEREST IN POLAR ICE MELT AND SEA LEVEL RISE.
HE EXPLAINS ….
 “What’s happening to ice and why? The way in which ice acts on the global climate and what’s happening to ice and why it’s happening and the ways in which changes to the ice in the Arctic are having global impacts on our climate system and therefore causing effects which we need to really be worried about. It’s not just the ice disappearing by itself. There are two effects. One is the albedo feedback. When sea ice melts and retreats, a big area of white which was the ice is replaced by a big area of dark which is the ocean water.
 The ice surface reflects. Fresh snow on the ice reflects 80% of the radiation. It’s called albedo, so it has an albedo of 80%. As the ice gets dirtier as summer approaches and it starts to melt, the albedo goes down but it still has some albedo left. It goes down to 50% or 60% but as soon as the ice goes completely, the albedo of open water is less than 10%. That means that a huge amount of energy is now being absorbed by the surface that wasn’t being absorbed before because it was being reflected, and that speeds up global warming because it increases the temperature of the planet.
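The arithmetic behind the quoted albedo figures can be sketched in a few lines (an illustrative back-of-envelope calculation only; the albedo values are the ones quoted above, while the incident flux of 100 units is an assumed number, not one from the text):

```python
# Back-of-envelope arithmetic for the albedo figures quoted above.
# The incident flux of 100 units is an arbitrary illustrative assumption.
incident = 100.0  # units of incoming solar radiation

def absorbed(albedo):
    """Radiation absorbed by a surface with the given albedo (reflectivity)."""
    return incident * (1.0 - albedo)

print(absorbed(0.80))  # fresh snow: only about 20 units absorbed
print(absorbed(0.55))  # dirty melting ice (50-60% albedo): about 45 units
print(absorbed(0.10))  # open water: about 90 units absorbed
print(absorbed(0.10) / absorbed(0.80))  # roughly 4.5x more energy absorbed
```

The jump from 20 to 90 absorbed units is the quantitative content of the feedback described in the quote.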
 So we have our first positive feedback – positive meaning nasty – which is that the retreat of sea ice directly impacts the rate of warming of the atmosphere. The more the ice melts, the faster the remaining ice can melt. This feedback is what we need to worry about in terms of polar ice melt.
 So Professor Jonathan Bamber asked 22 researchers how the West Antarctic Ice Sheet (WAIS) would respond to a 5C warming scenario. Aggregating their responses he found a 1 in 20 or a 5% probability of global mean eustatic sea level rise as high as 2 meters. This is something we really really need to worry about.
 Blogger’s comments: So that’s the science of it anyway, and the physics too. Of course the climate deniers will deny these things because that is their way. They don’t believe in science, so they deny climate science in defense of the fossil fuel industry, and they surely profit from their denial in various ways. Believe it or not, this kind of nasty climate denial is actually found in polite society, as shown in the images below.
CLIMATE SCIENCE VS ENVIRONMENTALISM
Posted May 27, 2020
on:[LINK TO THE HOME PAGE OF THIS SITE]
RELATED POSTS: [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK]
THIS POST EXAMINES THE REACTION OF CLIMATE SCIENTISTS TO THE MOORE-GIBBS FILM “PLANET OF THE HUMANS” DESCRIBED IN A RELATED POST [LINK]
PART1: WHAT THE PLANET OF THE HUMANS SAYS
 The film “Planet of The Humans” is a critical evaluation by environmentalists of the environmentalism claims of the climate change movement. In the evaluation, environmentalists find that the renewable energy options proposed by climate science to replace fossil fuels are neither as reliable nor as environmentally friendly as climate science says they are.
 The evaluation finds that the solar arrays, solar towers, wind farms, and biofuel and biomass energy options proposed by climate science as reliable and environmentally friendly energy alternatives to fossil fuels are neither reliable nor environmentally friendly.
 In the case of biofuel and biomass, the destruction of trees and forests, either for harvesting trees (as in biomass projects) or for clearing land (as in biofuel projects), is cited from an environmentalism point of view as an unacceptable destruction of nature just to keep humans supplied with an amount of energy that is unreasonable in view of what nature can withstand.
 For wind and solar, their downsides are found not only in terms of their environmental impact but also in terms of significant technical limitations that make them an impractical and unreliable energy source. It is argued that wind and solar energy require a very large expanse of land that often needs to be cleared of trees and wildlife. The other serious concern with respect to wind and solar is that their power generation is intermittent and the amount of power they generate is variable. For these reasons these power generation devices are not reliable and must be backed up by fossil fueled power plants anyway. Therefore it is not logical to claim that wind and solar can replace fossil fuels.
 Other considerations include the useful life, regular maintenance, and fossil fuel consumption of wind and solar. Solar panels and wind turbines have a useful life of 10 to 20 years. They must therefore be regularly disposed of and replaced. Their disposal has environmental impact implications because of the enormous amount of environmentally damaging material involved. Their replacement also has cost and environmental considerations because of the very large quantities of rare earths and metals needed from mines in Africa, in an activity with serious health and environmental issues.
 It is also mentioned that thermal concentrated solar power plants consume natural gas and are therefore not really an “alternative to fossil fuels”. Also, it is claimed that they are often abandoned after a few years and that such abandonment involves an unacceptable level of environmental harm. In all cases, renewable energy production must be backed up by coal or natural gas plants. It is therefore a falsehood and an underhanded strategy to claim that they are replacements for fossil fuels.
 The film also points out that the involvement of capitalists in the renewable sector raises serious questions about the sector’s environmental credentials. The move to renewables is making rich capitalists richer, and that provides an alternative capitalism interpretation of the move to renewables that has nothing to do with its claimed environmental credentials.
 An important contribution of the video is that it finds widespread cheating and lying in the renewable business that exposes an ugly side of the climate movement. This finding implies that climate science claims about their proposed climate action plans are not credible.
 The study also finds battery capacity to be inadequate in the extreme, such that the only real solution to intermittency is full scale fossil fuel backup power, and it thereby rejects the claim that battery storage is an option to overcome intermittency. In summary, the Moore/Gibbs film finds that, from a perspective of environmentalism and practicability, the environmentalism claims made by climate science for the renewable option must be rejected.
PART2: WHAT CLIMATE SCIENCE SAYS ABOUT THESE CLAIMS BY MOORE/GIBBS
 A possible rational response by climate science could have been that yes, there are some unresolved issues in the renewable option being offered to replace fossil fuels. We are well aware of these issues and we are working on their resolution and would be pleased to work with the Moore/Gibbs team in that project. This option would have retained the kind of planet saving environmentalism claimed by climate science. But this is not what happened.
 Instead, what we find is climate scientists crying foul in a strangely angry and hostile response, rejecting the Moore/Gibbs presentation wholesale as rubbish and calling for the immediate removal of the film to prevent further public viewing. This kind of hostility toward environmentalism, coming from individuals claiming to be environmentally concerned scientists who set out to replace environmentally unacceptable fossil fuels with an environmentally correct energy infrastructure, leads to a very different image of climate science than the one climate science has presented to governments and consumers.
 The essence of the climate science response appears to be that the Moore/Gibbs assessment is an unjustified attack on their renewable energy climate action plans because it is made from a purely environmentalism perspective. These objections raise even more serious issues about climate science than prior claims of skeptics.
 Climate scientists had voluntarily appealed to environmentalists for support claiming to be environmental agents that will save the planet from the industrial economy of humans. What we see in the Moore/Gibbs video is that this claim is not accurate. This collision between climate science environmentalism and what appears to be real environmentalism reveals that climate scientists lied about their science in both directions by telling climate deniers that their science is bathed in the holy waters of environmentalism and now, blaming real environmentalists for holding them to that claim. These contradictions and conflicts further weaken the reliability and legitimacy of the climate movement.
 A more rational response might have been that “yes, we are of course aware of these issues in the renewable option and we are working on it and have made substantial advancements that are not included in the video”. BUT NO SUCH SCIENTIFIC AND HONEST RESPONSE FROM CLIMATE SCIENCE IS FOUND. What we find instead is a childish name-calling shouting match, with climate scientists calling for the film to be taken down immediately.
 Whatever harm the movie may have done to the climate movement now appears to be insignificant in comparison to the harm that climate scientists have done to the legitimacy and credibility of the anti-fossil-fuel movement of climate science [LINK] and of the claimed environmentalism credentials of climate science, such that they are the environmental agents that will save the planet from destruction by fossil fuels [LINK].
THE MOORE/GIBBS VIDEO HAS BEEN REMOVED BY YOUTUBE ON AN INTELLECTUAL PROPERTY RIGHTS ISSUE BUT A TRANSCRIPT OF THE VIDEO IS AVAILABLE HERE [LINK]. AND HERE IS A LINK TO THE VIDEO PROVIDED BY CHARLES ROTTER AT WUWT THAT APPEARS TO WORK. [LINK]
PLEASE SCROLL DOWN TO “PART2: TRANSCRIPT OF THE VIDEO”
WBM POSTS ARE MY OLD LOST WORKS FOUND IN THE WAY BACK MACHINE
The Agency Theory of Securities Regulation
 In presenting the history of securities regulation we examine the rise of credit and equity financing in pre-industrial Europe and trace the evolution of regulation from the Bubble Act of 1720 through the market break of 1929 and the formation of the SEC, to the present day issues with regard to financial innovations, computerization of trading, globalization of capital markets, and operational risks in emerging markets.
 Modern finance theory lacks a coherent theory of regulation. Scholarly work on the theory of regulation forwarded by Bennett, Bentson, Friend, Kripke, Loss, Mann, Merton Miller, Rosen, Seligman, Stigler, and others during a 30-year period (1964 to 1994) addresses different aspects of regulation, and their theories are in conflict.
 Much of the theoretical debate has been in the form of for and against regulation. For example, Stigler and his supporters are against regulation because, they argue, disclosure requirements impose unnecessary costs on corporations. According to this view, “disclosure increases share prices” and, since managers’ pay is tied to firm performance, it is in the self-interest of managers to disclose anyway without the need for externally imposed regulation.
 Friend and others dispute this view and argue for regulation. They contend that “it is precisely because” of the manager’s self-interest in high share prices that he is likely to withhold adverse information or exaggerate positive information. Similar debates (for or against) exist for other forms of regulation such as antitrust laws, the control of insider trades, and SEC rules concerning the behavior of brokers, the operation of exchanges, and new issue offerings by corporations.
 The purpose of regulation itself is embroiled in controversy. Why does securities regulation exist? Is it to protect the individual investor? from whom? Or is it to preserve capital markets? Or to assist corporations in raising capital? In our analysis we consider all forms of regulation including uniform accounting methods, NASD and exchange internal regulations, and externally imposed regulation by legislation and we argue that regulation exists for all of these reasons.
 We support the argument by building a coherent model of regulation using the cost of capital to corporations in the aggregate economy as the operational variable. We then define a term called the “agency cost of capital formation” and show that regulation influences the cost of capital (and therefore the wealth of the economy) by acting through this variable.
 Rather than argue for or against regulation our theoretical model sets the framework for an optimal level of regulation. We contend that the agency cost of capital formation is a significant portion of the cost of capital to corporations. A well designed regulatory structure is necessary to reach an optimal balance between monitoring costs imposed by regulation and the reduction in agency costs achieved by regulation.
 In the absence of regulation, wealth transfers between shareholders, debtholders, and management may occur that drive investors from the market and produce prices that are not optimal and do not allocate capital efficiently. Because of the added risk, investors demand additional returns. The result is an increased cost of capital to corporations and less overall investment in productive assets.
 Excessive and inappropriate regulation is associated with high monitoring costs to corporations because it limits the ability of the market to attract new capital, to provide liquidity to investors, or to allow corporations to utilize the market to finance new projects.
 CONCLUSION: Regulation is optimized when the agency cost of capital is minimized.
WBM POSTS ARE MY LOST WORKS FOUND IN THE WAY BACK MACHINE
Tobin’s Liquidity Preference
Tobin, James: Liquidity Preference As Behavior Toward Risk, Review of Economic Studies, Vol. 25, 1958, pp. 65-86.
 The Tobin 1958 paper is a landmark in the annals of Finance. A truly great paper not only provides new evidence and interpretation but also gives new insight. Economist James Tobin brings together two great works – the liquidity preference theory of John Maynard Keynes and the portfolio theory of Harry Markowitz – and shows that they are complementary and can be considered to be two manifestations of the same underlying principle of economics.
 Keynes had postulated that in addition to cash needed to carry out transactions, EUs (Economic Units) also hold cash because of a speculative motive, that is, in case interest rates rise next year. He therefore concluded that the demand for money by the aggregate economy will rise and fall with the prevailing interest rate. When interest rates are high, EUs would be less inclined to think that they will rise any higher and will therefore invest in bonds and give up money. Conversely, when interest rates are low, the EUs would hold more cash and wait for higher rates on bonds.
 So, if one plotted the demand for money against interest rate one would get the now well known textbook curve that slopes gently downward from all the money held at zero interest on bonds and asymptotically approaches zero money held at infinite interest. This description of liquidity preference has been severely criticized by Leontief and Miller.
 Tobin, inspired by Markowitz, saw Keynes’ liquidity preference curve in terms of portfolio theory, i.e., the maximization of utility in mean-variance space. Tobin’s theory satisfied Leontief’s critique and brought liquidity preference into harmony with the mainstream of economic thought of his time. He proposes the following lucid and enlightening explanation of the downward sloping money curve.
 Each EU holds a portfolio of two assets, $(1-B) in money and $B in bonds (note 2). B is less than unity and represents the value weighted percentage of the portfolio held in bonds. Money pays no interest and holds its value with certainty (note 3). Bonds sell today for $1 and will pay a perpetuity of $Ro (i.e., Ro% interest rate).
 However, next year, the prevailing interest rate may change to R1. At that time, a bond which pays $Ro will sell for $(Ro/R1) and, since the EU paid $1 for it, the capital gain will be G = Ro/R1 - 1. Thus the EU's net gain from holding a bond is interest + capital gain = Ro + G = Ro + Ro/R1 - 1. The assumption that G is normally distributed about an expected value of zero with a standard deviation of Sg leads to the following equations.
 (a) Dollar value of net gains from holding the portfolio for 1 year: R = B*(Ro + G). (b) The expected value of R is E(R) = B*Ro + B*E(G), but E(G) = 0 and therefore E(R) = B*Ro. The variance of R is var(R) = E{[R - E(R)]^2} = E{[B*Ro + B*G - B*Ro]^2} = E{B^2*G^2}. Since G is normally distributed about zero (i.e., E(G) = 0), E(G^2) = Sg^2 and the standard deviation of R is Sr = sqrt(var(R)) = B*Sg. Rearranging, B = Sr/Sg.
 Now we can substitute for B in the equation for expected value: E(R) = B*Ro = Sr*(Ro/Sg), or E(R)/Sr = Ro/Sg = the market price of risk. This ratio, which establishes the market price of risk in the aggregate economy, is analogous to the security market line and is called the opportunity locus (OL) by Tobin.
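Tobin's portfolio algebra above is easy to verify numerically. The following is a Monte Carlo sketch using the symbols of the text; the parameter values (B = 0.6, Ro = 0.05, Sg = 0.10) are hypothetical choices for illustration only:

```python
import random

# Monte Carlo check of the two-asset algebra above (hypothetical parameters).
random.seed(1)
B, Ro, Sg = 0.6, 0.05, 0.10   # bond fraction, current rate, std dev of G
N = 100_000

# R = B*(Ro + G), with capital gain G drawn from N(0, Sg)
returns = [B * (Ro + random.gauss(0.0, Sg)) for _ in range(N)]

ER = sum(returns) / N                                  # sample mean of R
Sr = (sum((r - ER) ** 2 for r in returns) / N) ** 0.5  # sample std dev of R

print(ER)       # close to E(R) = B*Ro = 0.03
print(Sr)       # close to B*Sg = 0.06
print(ER / Sr)  # close to Ro/Sg = 0.5, the "market price of risk"
```

The last ratio does not depend on B, which is the point of the opportunity locus: every EU faces the same trade-off Ro/Sg regardless of the bond fraction chosen.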
 Each EU in the economy is identified in the same mean-variance space by his own set of indifference curves (whose shape is defined by his peculiar attitude toward risk) and maximizes his mean-variance utility by holding a portfolio of money and bonds at a point in this space where his indifference curve is tangential to the OL. The point determines the fraction of his liquid wealth he holds in money and that which he holds in bonds.
 With returns on the y-axis and risk on the x-axis, the OL gets steeper as the interest rate rises. It is easy to see graphically (and possible to show analytically, as Tobin has done for all brave souls that have waded through section 3.3) that for riskophobes a steeper OL means that a larger fraction of the portfolio will be held in bonds, easing the demand for cash. This is consistent with Keynes’ liquidity preference theory (increased interest rate leads to lower demand for money). QED.
 Tobin wrestles with the fact that the relationship is reversed for riskophiles but offers no answers. One interpretation I can offer is that since empirical data show that the demand for money is indeed downward sloping, it must mean one of two things: either Tobin’s simple model of risk free money and perpetual bonds is too simplistic to yield results that can be compared directly with empirical data, or the implicit assumption of invariant prices (i.e. risk free cash) is not justified by the data. Of course, it could also mean that the empirical data suggest that the population at large is risk averse.
WBM POSTS ARE MY LOST WORKS FOUND IN THE WAY BACK MACHINE
ABSTRACT
Methodological flaws have become commonplace in financial research to the point that familiarity has bred de facto acceptance. Some of these only require refinements in the findings, but in most cases the methods used call into question the findings themselves. In this paper we survey these flaws and investigate their potential impact on conclusions normally drawn from the research. We use simulation to demonstrate how an incorrect conclusion about a known population may be drawn using these methods. We then offer alternate methods that may be used to avoid the pitfalls described. Finally, we examine the validity of the “large sample” statistical model in understanding phenomena and developing theory in financial economics.
INTRODUCTION
Finance, like the other social sciences, inherited a version of the scientific method from the natural sciences, and to this day classical Neyman-Pearson large sample hypothesis testing is generally accepted as the only credible research tool in finance. In this research model, we think we live in the world of Ho, the null hypothesis, where entropy is maximized and no patterns, and therefore no information, exist. To find information we seek patterns. We do this by randomly selecting a finite sample from an infinity of possibilities. This is our observation; and given the distributional properties of Ho we may compute the odds that we would observe such a sample by chance. If this probability turns out to be lower than a threshold value of unlikelihood, we proclaim that it is unlikely that we would make such an observation in Ho; therefore we must not be in Ho. The distributional properties of Ho are usually assumed to be described by the Gaussian function.
In practice, financial researchers often stray from this pattern seeking model partly because of convenience and partly because of unfamiliarity with the actual statistical underpinnings of the research methods; but more importantly, I believe, because the methodology, though a stunning success in the natural sciences, is not a good fit in finance. The Krueger and Kennedy (1990) finding that stock market performance is highly correlated with football games is a harsh reminder that spurious statistics do exist, but such a faux pas is harder to identify when the relationships found are the ones we are looking for, or ones that are easily rationalized in terms of a dominant theoretical framework.
MULTIPLE COMPARISONS
A commonly found departure from the pattern seeking model is to look for a number of different patterns. The more patterns you look for, the greater the odds that a chance pattern will be observed in a sample taken from the Ho distribution. The procedure compromises the statistical method because even in the world of Ho, if you look long enough you will find the pattern you are looking for. In Finance papers multiple comparisons are frequently used in conjunction with multiple values of the alpha probability and ex-post selection of the direction of the effect. Table 1 shows a typical result (Aggarwal 1990). A common practice is to set up a table of many t-tests and then to identify the ‘improbable’ ones with asterisks. The usual code is * = ‘significant at alpha=10%’, ** = ‘significant at alpha=5%’, and *** = ‘significant at alpha=1%’.
In Table 2 we perform a similar test on 10 samples of n=100 drawn from the Ho population and find three spurious ‘rejections’ which might have been reported as findings in financial research. The example underlines the need for careful analysis of multiple comparison cases.
At an alpha level of 5%, the probability that all 10 samples would fail to reject Ho is 0.95^10 or about 60%. This means that there is a 40% probability that at least one chance rejection will occur. In other words, the experiment-wide error rate is 40% instead of 5%. To hold the experiment-wide error rate at 5%, each comparison must be made at approximately alpha = 0.005. In general, when making m comparisons, the single comparison alpha level (ac) and the experiment-wide alpha level (ae) are related by the equation
(1 - ac)^m = (1 - ae)
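The experiment-wide error rate can be reproduced by simulation, in the spirit of the Table 2 exercise. This is a sketch: the trial count and the normal approximation to the t critical value (1.96 for a two-sided 5% test at n = 100) are assumptions made for illustration.

```python
import random

# Draw m = 10 samples of n = 100 from the Ho population (mean zero) and
# count how often at least one t-test "rejects" by chance at alpha = 5%.
random.seed(42)
m, n, trials, crit = 10, 100, 2000, 1.96

def t_stat(sample):
    """One-sample t statistic against a hypothesized mean of zero."""
    size = len(sample)
    mean = sum(sample) / size
    var = sum((x - mean) ** 2 for x in sample) / (size - 1)
    return mean / (var / size) ** 0.5

false_hits = 0
for _ in range(trials):
    if any(abs(t_stat([random.gauss(0, 1) for _ in range(n)])) > crit
           for _ in range(m)):
        false_hits += 1

print(false_hits / trials)  # roughly 0.4, close to 1 - 0.95**10
```

The simulated rate of at least one spurious rejection lands near the 40% computed above, even though every sample was drawn from Ho.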
MULTICOLLINEARITY
In linear regression models with many explanatory x-variables, the beta coefficient of each x-variable may be interpreted only if the x-variables act independently on y, the response variable. When the x-variables are correlated, or even when some linear combinations of the x-variables are correlated, the regression coefficients become unstable, so that neither the direction nor the magnitude of the observed ‘effect’ may have a valid interpretation.
Most papers in Finance that use linear regression do not check for this condition, and the few that do refer only to a correlation matrix of the x-variables. A correlation matrix is unable to detect correlations between linear combinations.
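The inability of a correlation matrix to flag this condition can be demonstrated with simulated data. All variables below are hypothetical: x3 is built to be almost exactly x1 + x2, yet no entry of the pairwise correlation matrix comes close to 1.

```python
import random

# x3 is nearly an exact linear combination of x1 and x2, but the pairwise
# correlations look harmless - exactly the case a correlation matrix misses.
random.seed(7)
n = 5000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
x3 = [a + b + random.gauss(0, 0.05) for a, b in zip(x1, x2)]

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

print(corr(x1, x2))  # near 0
print(corr(x1, x3))  # near 0.7 - nothing alarming by itself
print(corr(x2, x3))  # near 0.7
# ...but the linear combination x1 + x2 is almost perfectly correlated with x3:
print(corr([a + b for a, b in zip(x1, x2)], x3))  # above 0.99
```

A regression of y on x1, x2, and x3 built from such data would have unstable coefficients even though the correlation matrix raises no alarm.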
Some Examples of Multicollinearity in Financial Research
(i) The Intervention dummy variable in time
An example model is presented in Table 3 (Blennerhassett and Bowman 1994). The authors wish to measure the impact of the screen trading intervention in the time series of off-market trading. But here they are analyzing the data in real time and not in event time, and the authors suspect that the model is subject to historical effects. An intervention dummy variable identifies the implementation of screen trading. The addition of the time variable to the model is an attempt to remove historical effects. But the dummy variable is also a time indicator. Therefore the two variables are highly correlated. Simulated data in Table 3 show a time variable with two different intervention dummy variables. Note that both dummies are highly correlated with time and, unless one dummy is symmetrical around the other, the dummy variables are also likely to be correlated with each other. Models such as these are common in financial research that attempts to model the impact of a historical event such as a change in regulation, market structure, interest rates, trade agreements, and so on.
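The correlation between the time variable and the intervention dummy is easy to quantify with a toy example; the series length and intervention date below are hypothetical choices, not figures from Table 3.

```python
# A real-time index t = 1..100 and an intervention dummy that switches
# on at t = 60 are themselves strongly correlated regressors.
t = list(range(1, 101))
d = [0 if ti < 60 else 1 for ti in t]

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

print(corr(t, d))  # about 0.85: far from independent
```

Including both t and d as regressors therefore creates exactly the collinearity problem described in the text.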
(ii) Explanatory variables that are algebraically related
Capital structure, that is, the amount of debt a firm carries relative to equity, is a contentious and unresolved topic in Finance (DeAngelo and Masulis 1980, Bowen, Daley, and Huber 1982, Bradley, Jarrell, and Kim 1984, Kane, Marcus, and McDonald 1984, Kim and Sorensen 1986). A popular model of capital structure holds that the value of debt lies in its role as a tax shield and, therefore, that firms that enjoy other forms of tax shields, such as depreciation, would use less debt. The empirical results are conflicting and inconclusive. The regression coefficients of the models differ in sign from one study to another. Some found the non-debt tax shield (NDTS) coefficient to be negative and argued that the NDTS model is correct since it showed that firms with non-debt tax shields used less debt. Others found the coefficient to be positive and argued that firms with more tax shields enjoyed higher cash flows and therefore had larger debt capacities. Still others found no relationship between NDTS and capital structure.
A possible explanation for the failure of these empirical studies is the natural algebraic relationship that exists between total assets and depreciation. Since both of these variables were used in the model, the regression coefficients were unstable, and their spurious sign and magnitude became interpreted into financial theory. The empirical results summarized in Table 4 reveal their random and contradictory nature.
(iii) Linear combinations of explanatory variables are correlated
Frequently the starting point of empirical investigations in finance is not theory but an available database such as COMPUSTAT or CRSP. All the variables in the database become candidates for the model and models built in this manner are frequently unstable even when a correlation matrix of the explanatory variables does not reveal the extent of the dependencies between them.
One of the many models proposed for measuring the determinants of capital structure includes D = total debt, E = book value of equity, and A = total assets as explanatory variables (Kim and Sorensen 1986). The three variables have an exact accounting relationship: A = D + E.
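The consequence of the exact relationship A = D + E can be shown in a few lines: adding any constant c to the D and E coefficients while subtracting c from the A coefficient leaves every fitted value unchanged, so the individual coefficients are not identifiable. A sketch with hypothetical numbers:

```python
# When A = D + E exactly, the design matrix is singular: shifting the
# coefficients by (c, c, -c) leaves all fitted values unchanged, so the
# individual betas are unidentifiable. All figures are hypothetical.
D = [30, 50, 20, 70, 40]            # total debt
E = [70, 50, 80, 30, 60]            # book equity
A = [d + e for d, e in zip(D, E)]   # total assets, exactly D + E

def fitted(bD, bE, bA):
    return [bD * d + bE * e + bA * a for d, e, a in zip(D, E, A)]

c = 5.0  # arbitrary shift
fit1 = fitted(0.3, 0.1, 0.2)
fit2 = fitted(0.3 + c, 0.1 + c, 0.2 - c)
same = all(abs(a - b) < 1e-9 for a, b in zip(fit1, fit2))
print(same)  # True: the data cannot distinguish the two coefficient sets
```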
(iv) The small firm effect
Banz (1981) first reported that stocks of small firms tend to have higher returns and this relationship was quickly confirmed by studies that followed. Investigators of asset pricing and market efficiency saw Banz’s size effect as an anomaly that must be removed from the data before other effects can be detected. Fama and French (1992) first removed the size effect from the data before they tested the relationship between beta and returns. If the Sharpe (1964) model is correct, they reasoned, then there ought to be a significant and positive relationship between beta risk and returns. They found that the relationship if any between these variables was negative.
But small firms not only have higher returns; they also have higher betas. Chan and Chen (1988) report a very strong correlation between beta and size. This relationship implies that tests of asset pricing such as those of Fama and French (1992) and Fama and MacBeth (1973) are likely to produce spurious values of the coefficient for beta risk; and this is indeed the case. Some studies have found a strong positive effect of beta risk that is consistent with the CAPM (capital asset pricing model), while others have found no significant relationship, and still others report a relationship in the opposite direction. These empirical results have caused a great deal of controversy and have forced financial economists to reevaluate some fundamental assumptions of asset pricing. Yet, these findings may turn out to be a fluke of multicollinearity.
For example, when Fama and French (1992) removed the size effect from the data they may have also removed that which they intended to measure – the beta effect. A failure to find a beta effect in the residuals of the size effect does not imply an absence of a beta effect. There are better ways to find the unique contributions of the two correlated variables. For example, one might first regress beta against size and use these residuals as the unique contribution of beta and then regress size against beta and use those residuals as the unique contribution of size. Alternately, one might look for orthogonal principal components of size and beta.
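The residual approach described above can be sketched in plain Python; the size and beta figures below are hypothetical, chosen only to illustrate the mechanics:

```python
# Regress beta on size and keep the residuals: the part of beta that is
# uncorrelated with size (the "unique contribution" of beta).
def ols_residuals(y, x):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return [b - (intercept + slope * a) for a, b in zip(x, y)]

size = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical log market capitalization
beta = [1.8, 1.5, 1.3, 1.1, 1.0]   # hypothetical betas, falling with size

resid = ols_residuals(beta, size)
# By construction the residuals are orthogonal to size
mx = sum(size) / len(size)
cross = sum((s - mx) * r for s, r in zip(size, resid))
print(abs(cross) < 1e-9)  # True
```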
An added complication in asset pricing research is that some of the empirical models include the PE ratio (PE = stock price over accounting earnings) as an explanatory variable in addition to the risk measure beta. But PE too contains a risk measure. Current financial theory interprets PE as a combination of two effects: ceteris paribus, higher perceived risk would lower the PE ratio and higher perceived growth would raise it. Empirical studies are complicated by a high collinearity between PE and beta. There are other problems with asset pricing studies that have to do with the time series nature of the data and the methods by which the concept of risk is rendered, and we review these concerns in another paper.
STATISTICALLY SIGNIFICANT FINDINGS FROM VERY LARGE SAMPLES
Sample sizes in financial research are typically very large. For example, the Fama and French (1992) paper on asset pricing is a study of monthly stock returns from over 2,000 firms in the period 1962 to 1989. This represents over 672,000 observations. When we treat these observations as a sample, we get a sampling distribution with a tiny standard deviation. For instance, the degrees of freedom are approximately 670,000, whose square root is about 818. In this case the standard deviation of returns was 52%. This means that the standard deviation of the sampling distribution is 0.0636%, or about 6 basis points, and a difference as small as 12 basis points might be considered significant, although no fund manager would pursue a policy to take advantage of such a small effect. Many of the empirical investigations of the determinants of capital structure cited in this paper have proposed regression models that were found to be statistically significant at R-squared values of 1 to 5%. That is to say, 95 to 99% of the sum of squares in the sample remains unexplained and is treated by the model as random background noise.
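The arithmetic is easy to check in plain Python (using the paper's rounded figures):

```python
# Standard error of the mean for a Fama-French-sized sample: with
# ~672,000 monthly return observations and a standard deviation of
# returns around 52%, the sampling distribution is extremely tight.
n = 672_000
sd = 0.52                    # 52% standard deviation of returns
se = sd / n ** 0.5           # standard error of the mean
print(round(se * 10_000, 1))  # ~6.3 basis points
```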
FAILURE OF CLASSICAL LARGE SAMPLE STATISTICS IN FINANCE
Black (1993) observes that although there are many researchers in finance, perhaps thousands, there is only one history, and therefore pretty much just one set of data. Researchers use past studies to formulate new hypotheses and models. They then test these models on the same data that were used to test the past hypotheses. Recall that classical hypothesis testing is formulated on the basis of a priori hypotheses and the probability of the data given the null. But in finance we find ourselves repeatedly testing the probability of the data given a hypothesis that was itself derived from the data; i.e., testing for patterns ex post.
In this sense classical statistics is a failure in financial research, and I would like to propose a reevaluation of our research agenda and methodology, with an emphasis on well-designed case studies on small samples and even anecdotes.
TABLE 1: A MULTIPLE COMPARISON EXAMPLE
Variable                       Statistic   Rejection status
United Kingdom: spot rate      0.3619      –
United Kingdom: forward rate   0.2821      –
United Kingdom: risk premium   0.8977      **
West Germany: spot rate        0.2299      –
West Germany: forward rate     0.4454      *
West Germany: risk premium     0.4544      *
Switzerland: spot rate         0.2643      –
Switzerland: forward rate      0.0340      –
Switzerland: risk premium      1.6703      **
Rejection status: – = fail to reject, * = reject at alpha = 10%, ** = reject at alpha = 5%
TABLE 2: MULTIPLE COMPARISON OF RANDOM NUMBERS
t-test: Ho: µ = 0, Ha: µ ≠ 0
Variable   Sample mean   t-statistic   p-value   Rejection status
y1         0.11183675    1.342         0.1828    –
y2         0.18801382    1.995         0.0487    **
y3         0.04132946    0.438         0.6621    –
y4         0.21933539    2.112         0.0372    **
y5         0.05426054    0.533         0.5953    –
y6         0.04306246    0.469         0.6403    –
y7         0.05917902    0.712         0.4780    –
y8         0.14411801    1.308         0.1939    –
y9         0.15260649    1.570         0.1197    –
y10        0.18155813    1.804         0.0742    *
TABLE 3: REGRESSION MODEL WITH DUMMY VARIABLE IN TIME
MODEL: Vt = B0 + B1·D + B2·t
Vt = ratio of onmarket to total market value traded on day t
D = 1 after screen trading was implemented
D = 0 before screen trading was implemented
t = time in days
Coefficient   Estimate    t-statistic
B0            0.3135      18.2
B1            0.1444      4.7
B2            0.000139    0.35
Simulated data (columns: time, dummy1, dummy2)
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 1
8 0 1
9 0 1
10 0 1
11 1 1
12 1 1
13 1 1
14 1 1
15 1 1
16 1 1
17 1 1
18 1 1
19 1 1
20 1 1
Pearson Product-Moment Correlation
time dummy1 dummy2
time 1.000
dummy1 0.867 1.000
dummy2 0.795 0.655 1.000
TABLE 4: EMPIRICAL INVESTIGATIONS OF THE NDTS HYPOTHESIS
note: a negative effect supports the DeAngelo-Masulis theory
Study Direction Statistical test
Bradley, Jarrell, and Kim:   positive   significant
Kim and Sorensen:            negative   not significant
Boquist and Moore:           positive   significant
Bowen, Daley, and Huber:     negative   significant
DeAngelo and Masulis:        negative   significant
TABLE 5: EXTERNAL EFFECTS IN A TIME SERIES
F-value of the test: 44.392
R-squared: 0.334
Number of observations: 800
AAA rate regression weight: 0.004382
t-value for beta: 2.504
p-value (t = 2.504): 0.0125
REFERENCES
Aggarwal, Raj, Distribution of spot and forward exchange rates: empirical evidence and investor valuation of skewness and kurtosis, Decision Sciences, v21, p588-595, 1990
Banz, Rolf, The relationship between return and market value of common stocks, Journal of Financial Economics, v9 p3, 1981
Black, Fischer, Beta and return, Journal of Portfolio Management, Fall 1993, p8
Blennerhassett, Michael, and Robert Bowman, A change in market microstructure – the switch to screen trading in the New Zealand Stock Exchange, Asia Pacific Finance Association, Sydney, Australia, 1994
Boquist, John and William Moore, Inter-industry leverage differences and the DeAngelo-Masulis tax shield hypothesis, Financial Management, Spring 1984, p59
Bowen, Robert, Daley, Lane, and Charles Huber, Evidence on the existence and determinants of interindustry differences in leverage, Financial Management, Winter 1982, p10-20
Brown, Keith, W.B. Harlow, and Seha Tinic, How rational investors deal with uncertainty: reports of the death of the efficient market theory are greatly exaggerated, Financial Management Collection, Fall 1990
Brown, S.J., and J.B. Warner, Using daily stock returns, the case of event studies, Journal of Financial Economics, March 1985, p3
Chan, K. C., and Naifu Chen, An unconditional asset pricing test and the role of firm size as an instrumental variable for risk, Journal of Finance, v43, p309, 1988
Chan, K. C., and Josef Lakonishok, Are reports of beta’s death premature?, Journal of Portfolio Management, Summer 1993
Chen, N.F., and Ingersoll, E., Exact pricing in linear factor models with finitely many assets: A note, Journal of Finance, June 1983, p985
Chen, Naifu, Richard Roll, and Stephen Ross, Economic forces and the stock market: testing the APT and alternate asset pricing theories, Working paper, December 1983
Chen, Naifu, Some empirical tests of the theory of arbitrage pricing, Journal of Finance, Dec 1983, p1393-1414
DeAngelo, H. and R.W. Masulis, Optimal capital structure under corporate and personal taxation, Journal of Financial Economics, v8, March 1980, p329
Dimson, Elroy, Risk measurement when shares are subject to infrequent trading, Journal of Financial Economics, v7, p197, 1979 (the nonsynchronicity problem)
Dybvig, Phillip, and Ross, Stephen, Yes, the APT is Testable, Journal of Finance, Sep, 1985
Fama, Eugene, and Kenneth French, The cross section of expected stock returns, Journal of Finance, v47:2, 1992, p427
Fama, Eugene, and James MacBeth, Risk, return, and equilibrium, Journal of Political Economy, 1973, 81, p607
Garman, Mark, and Michael Klass, On the estimation of security price volatilities from historical data, Journal of Business, v53, p67, 1980
Gatward, Paul and Ian Sharp, Capital structure dynamics with interrelated adjustments: Australian evidence, Third International Conference on Asia Pacific Financial Markets, Singapore, 1993
Haugen, Robert, and Nardin Baker, Interpreting risk and expected return: comment, Journal of Portfolio Management, Spring 1993, p36 (confirms FF and rationalizes higher returns for lower risk; the market prices growth stocks too high)
Hsieh, David, Nonlinear Dynamics in Financial Markets, Financial Analysts Journal, JulyAugust 1995, p55
Kane, Alex, Marcus, Alan, and Robert McDonald, How big is the tax advantage of debt, Journal of Finance, July 1984, p841-855
Kim, Wi, and Eric Sorensen, Evidence of the impact of the agency cost of debt on the corporate debt policy, Journal of Financial and Quantitative Analysis, v21:2, July 1986, p131-143
Kolb, Robert, and Ricardo Rodriguez, The regression tendencies of betas, The Financial Review, v24:2 May, 1989, p319 (beta is not stationary)
Krueger, Thomas, and William Kennedy, An examination of the superbowl stock market predictor, Journal of Finance, June, 1990, p691
Kryzanowski, Lawrence, Simon Lalancette, and Minh Chau To, Some tests of APT mispricing using mimicking portfolios, Financial Review, v29: 2, p153, May 1994
Mandelbrot, Benoit, The variation of certain speculative prices, Journal of Business, October 1963
Markowitz, Harry, Portfolio selection, Journal of Finance, v7, March 1952, p77
Parkinson, Michael, The extreme value method for estimating the variance of the rate of return, Journal of Business, v53, p61, 1980
Peters, Edgar E., Fractal structure in the capital markets, Financial Analysts Journal, July/August 1989, p32-37
Reinganum, Marc, A new empirical perspective on the CAPM, Journal of Financial and Quantitative Analysis, v16, p439, 1981
Roll, Richard, A critique of the asset pricing theory’s tests, Journal of Financial Economics, March 1977, p129
Roll, Richard and Stephen Ross, An empirical investigation of the arbitrage pricing theory, Journal of Finance, Dec 1980, p1073
Ross, Stephen, The arbitrage theory of capital pricing, Journal of Economic Theory, v13, p341, 1976
Scheinkman, J.A., and Blake LeBaron, Nonlinear dynamics and stock returns, Working paper number 181, Dept of Economics, University of Chicago, 1990
Schwert, G. W., Why does stock market volatility change over time?, Journal of Finance, Dec 1989, p1115-1153
Sharpe, William, A simplified model for portfolio analysis, Management Science, 1963, p277
Sharpe, William, Capital asset prices: a theory of market equilibrium under conditions of risk, Journal of Finance, v19, p425, 1964
Shukla, Ravi, and Charles Trzcinka, Research on risk and return: Can measures of risk explain anything?, Journal of Portfolio Management, Spring 1991 (weekly returns; CAPM just as good as multifactor APT)
Velleman, Paul, Definition and comparison of robust nonlinear data smoothing algorithms, Journal of the American Statistical Association, v75, September 1980, p609-615
WBM POSTS ARE MY LOST WORKS FOUND IN THE WAY BACK MACHINE
ABSTRACT
Monte Carlo simulation is used to help students visualize concepts in Finance that are not obvious when traditional methods of instruction are used. Two models are described. The first model displays the distribution of the NPV of a project when sales projections are not known with certainty. The second model uses simulation rather than the Black-Scholes stochastic model for the valuation of call and put options on underlying assets whose value in the future, when the option may be exercised, is uncertain.
BACKGROUND AND MOTIVATION
A key element in Finance is the notion of risk. It measures the uncertainty in projecting values that financial variables such as returns on investment may assume in the future. Risk is operationalized in financial theory as the standard deviation computed either from historical data or from subjective probabilities. Using the normality assumption and stochastic calculus, we then proceed to derive the Gaussian parameters of the decisional variables from the ones that are projected. For example, project unit sales and price and derive NPV (net present value); and use the NPV distribution to compute the probability of negative NPV in order to make the investment decision.
The problem with such an approach is twofold. First, the procedure usually requires the use of crippling assumptions that tend to make the results unrealistic and a hard sell to the sharper students. And second, the equations themselves are complex and obfuscating; far from being an educational tool they reveal nothing about the process and leave most students with the notion that the concepts being taught are too complex to understand.
These concerns may be alleviated by using simulation rather than stochastic calculus to teach concepts of risk and return in Finance. In this paper we offer two examples to show how simple simulation models may be used to teach Finance. In the first example a simulation model is built using the DataDesk statistical package to describe a capital budgeting problem with uncertain cash flows. In the second example, the DataDesk program is used to develop a model for the valuation of contingent claims such as stock options. In each example, we show that the simulation models are easier for students to grasp because the components of the model are kept simple.
Other areas of Finance where simulation may be used include portfolio theory, the capital asset pricing model, tests of the efficient market hypothesis, and studies of market microstructure.
THE CAPITAL BUDGETING PROBLEM
Capital budgeting plays a crucial role in financial economics. It deals with the investment decision, and it uses the usual axiom in Finance that the value of an investment is the present value of the future cash flows that the asset is expected to generate. A project is therefore evaluated by subtracting the investment in capital (both productive assets and working capital) required today from the present value of the expected cash flows that the project will generate. This difference, called the net present value or NPV, is then used to make the investment decision: if the NPV is positive, invest; otherwise do not. The time horizon for the decision is an assumed finite value at the end of which all remaining assets are expected to be liquidated at a projected liquidation value (Pinches 1992).
Some of the cash flows generated may have to be set aside if additional working capital is needed for operations in the following year. The remaining cash flows, called net cash flows or NCF, are assumed to be reinvested so that during the life of the project they earn returns at a known reinvestment rate r. These reinvestment rates are used to compute the future value of the NCF stream at the end of the life of the project. To compute the present value of the NCF stream, this future value must be discounted back to the present using a discount rate, k, that is appropriate for the perceived riskiness of the project. In general k will be different from, and higher than, the reinvestment rate (Mao 1969). The required investment capital is then subtracted from this present value to compute the NPV. In making the above computations all cash flows are assumed to be known with certainty.
A numerical example may clarify the salient points. Suppose that an investment opportunity exists that will require an initial investment in fixed assets of $200, which at the end of the 5-year project life will have a salvage value of $90. It is expected to generate sales of $100 the first year, and thereafter the sales volume is expected to grow at an annual rate of 15%. Since it is typical for revenues to dip in the last year (in preparation for shutdown), we will assume that revenue in year 5 will be the same as that in year 2.
The following operating data are projected. Variable costs are 60% of sales. Fixed costs are $10 per year not including depreciation. Working capital requirements are estimated to be 14% of sales. We expect to be able to reinvest operating cash flows at our corporate cost of capital of 6%. This project represents a risky venture and we will use a required return or discount rate of 8% to evaluate its wealth effect on our shareholders. Should this investment be made? The investment required is $214 ($200 plus 14% of $100 as working capital). The future value of the NCF stream after year 5 is $317 if each NCF is reinvested at 6%. The present value discounted at 8% is $214.60. Since the NPV is positive (14 cents) the decision is to invest.
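The two-rate procedure above (reinvest each NCF at r, discount the terminal value at k) can be sketched in plain Python; the function and the cash flows in the example call are illustrative, not the original DataDesk model:

```python
# Two-rate NPV: compound each net cash flow forward at the reinvestment
# rate r to the end of the project, discount the terminal value back at
# the risk-adjusted rate k, and subtract the required investment.
def npv_two_rate(investment, ncfs, r, k):
    n = len(ncfs)
    # future value of the NCF stream at the end of year n
    fv = sum(ncf * (1 + r) ** (n - 1 - i) for i, ncf in enumerate(ncfs))
    # present value discounted at the project's required return
    return fv / (1 + k) ** n - investment

# hypothetical stream: with r = k = 0 the result reduces to a simple
# undiscounted NPV, which is easy to verify by hand
print(round(npv_two_rate(150, [100, 100], 0.0, 0.0), 2))  # 50.0
```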
UNCERTAINTY IN SALES PROJECTION
The essential concepts of capital budgeting are first introduced to students using the scenario described above. We then extend the generality and usefulness of the model by allowing the projected sales vector to have a degree of uncertainty. The uncertain sales projection is modeled with a Gaussian distribution having the expected value of sales as its mean and a standard deviation that is in proportion to the degree of uncertainty. It is assumed that uncertainty in sales projection is the only source of risk; that is, other factors such as rates of return, the cost structure, and salvage value are known with certainty. The problem can then be stated as follows: given the distributions of the sales projections and a fixed set of operating and financial parameters, what is the distribution of the NPV?
To derive a strictly stochastic algebraic solution we may use the relations
mean(y + a) = mean(y) + a
mean(a·y) = a·mean(y)
variance(y + a) = variance(y)
variance(a·y) = a²·variance(y)
The so-called “portfolio” equation may be used to compute the variance of a sum of stochastic variables:
portfolio variance = Σi Σj wi·wj·σij
where σij is the pairwise covariance between the present value (PV) of the cash flow of the ith year and the PV of the jth year, and is equal to the variance of the PV of the ith year when j = i. The terms wi and wj represent the relative weights of the PV of the ith and jth years in the portfolio, and Σi indicates summation over all values of i, and similarly Σj over all values of j (Pinches 1992).
It is evident that we could slog through this algebra and come up with an algebraic solution, especially if the number of years in the project time horizon is kept at a manageable number. The number of terms in the portfolio equation increases with the square of the number of cash flows to be combined. For n = 5 years there will be 5 variances and (25 − 5)/2 or 10 covariances, for a total of 15 terms. But for n = 10 years there will be 10 variances and (100 − 10)/2 or 45 covariances, for a total of 55 terms. This problem is sometimes avoided by making the assumption that the sales distributions are independent. This simplifying, and perhaps crippling, assumption gets rid of the covariance terms, leaving us only with n variance terms, where n is the number of years in the time horizon of the project evaluation.
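The portfolio equation can be verified numerically against the variance of the summed cash flows; a sketch with made-up present values (plain Python, population covariances):

```python
# Verify the "portfolio" equation: the variance of a weighted sum equals
# the double sum over the covariance matrix, SUM_i SUM_j w_i w_j sigma_ij.
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):  # population covariance
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# hypothetical present values of two years' cash flows over 6 scenarios
pv1 = [95.0, 102.0, 99.0, 105.0, 97.0, 101.0]
pv2 = [88.0, 97.0, 92.0, 101.0, 90.0, 95.0]
w = [0.5, 0.5]

# left side: direct variance of the weighted sum
total = [w[0] * a + w[1] * b for a, b in zip(pv1, pv2)]
lhs = cov(total, total)

# right side: double sum over the covariance matrix
series = [pv1, pv2]
rhs = sum(w[i] * w[j] * cov(series[i], series[j])
          for i in range(2) for j in range(2))
print(abs(lhs - rhs) < 1e-9)  # True
```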
The problem is that by now we are so deep into computational details that we have lost sight of conceptual underpinnings of capital budgeting under uncertainty. Even more egregious, we have lost the attention and the interest of most undergraduate business students. Not only have we failed to convey to them the simple concepts involved, we have actually succeeded in convincing them that these concepts are difficult and possibly beyond their ability to understand.
A MONTE CARLO SIMULATION OF THE NPV PROBLEM
A Monte Carlo simulation of such problems differs in one respect from simulation of dynamic processes such as bank queues and factory operations. In dynamic simulation, the passage of each tick of time causes events to occur some of which may be generated by stochastic processes. These events “flow” through the simulation interacting according to the model specifications.
However, Monte Carlo simulations are not dynamic but static, and each pick from the random number generator is not an event in flowing time but one of many trials that will be used to describe the distribution of the target variable (Morris 1987).
To set up the 5year NPV problem we need generators that will produce numbers from a given distribution for all five years of sales during each trial. Once these 5 numbers are available, we may compute the NPV for that trial using the simplified procedure used when sales are known with certainty. This procedure is repeated as many times as desired until a sufficiently large sample of NPV values is obtained. It is then possible to observe the distributional properties of NPV under conditions of uncertainty.
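The trial loop just described can be sketched with Python's random module standing in for DataDesk's generators. The cash-flow model here is deliberately stripped down (working capital changes, salvage value, and taxes are omitted) and all parameters are illustrative:

```python
import random

# Monte Carlo NPV: each trial draws 5 years of uncertain sales, computes
# an NPV deterministically from that draw, and the collection of trial
# NPVs approximates the NPV distribution. Parameters are illustrative.
random.seed(42)

def npv_one_trial(expected_sales, risk, r=0.06, k=0.08, investment=214.0):
    n = len(expected_sales)
    # sales = expected value + risk multiplier * standard normal draw
    sales = [mu + risk * random.gauss(0.0, 1.0) for mu in expected_sales]
    # simplified net cash flow: 40% contribution margin less $10 fixed costs
    ncfs = [0.4 * s - 10.0 for s in sales]
    fv = sum(ncf * (1 + r) ** (n - 1 - i) for i, ncf in enumerate(ncfs))
    return fv / (1 + k) ** n - investment

# 15% growth for years 1-4; year 5 revenue set equal to year 2
expected = [100.0, 115.0, 132.25, 152.09, 115.0]
trials = [npv_one_trial(expected, risk=15.0) for _ in range(1000)]
mean_npv = sum(trials) / len(trials)
print(len(trials), round(mean_npv, 1))
```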
A convenient tool for setting up such a simulation is an interactive statistical program that has a variety of stochastic number generators and dynamic and interactive computation and graphical display capability. Using these criteria, the DataDesk program from Data Description, Inc. for the Apple Macintosh computer was selected as the appropriate simulation tool.
DESCRIPTION OF THE DATADESK MODEL
The numerical example described above is modeled to demonstrate the procedure and its effectiveness. For this demonstration, we have arbitrarily used a sample size of 100 trials, a time horizon of n = 5 years, Gaussian sales distributions, and fixed operating and financial parameters. The operating and financial parameters may be made stochastic variables, and distributions other than the Gaussian may be specified. For each of the five years we generate 100 standard normal numbers (drawn from a normal population with a mean of zero and a standard deviation of 1). Sales for each year are then computed as the expected value plus a risk multiplier times the standard normal value. Since the standard normal numbers were picked independently, an additional mechanism is used to induce degrees of dependence or correlation among the sales projections.
DataDesk has a feature called “slider variables” that allows users to set up parameters that may be dynamically controlled by dragging the mouse across a scale (Velleman 1993). In the model, all operating and financial parameters are slider controlled as are those that impart uncertainty to sales projections and the degree of dependence of sales in any given year to that realized in the previous year.
RESULTS OF THE SIMULATION
The distribution, mean, standard deviation, and 95% confidence interval of the NPV are shown below for various degrees of uncertainty and dependence. The effect of some of the other parameters is also shown.
The RISK variable sets the degree of uncertainty in sales projections, and the CORR variable sets the degree of dependence between successive years of sales. The GROWTH parameter sets the percentage rate at which sales are expected to grow for the first n − 1 years; revenue in the nth year is expected to be the same as that in year 2. The WCS parameter sets working capital management policy and is equal to the percentage of sales that is to be held as nonproductive working capital. The reinvestment and discount rates are shown below as r and k respectively.
In the actual model, students are able to watch the histograms and computed sample statistics change dynamically as they drag the slider variables to different locations on the scale. Within several minutes they develop a feel for the effect of these parameters. More important, the concepts used to build these models are so simple that, with a little help, they are able to modify the model and build their own models to introduce additional levels of complexity.
Figures 1 through 4 show some of the ways that data may be displayed using DataDesk. Figure 1 shows sample statistics and confidence intervals of NPV under various conditions. Figures 2 and 3 are boxplots and are useful for viewing the series of uncertain sales and net cash flows. The uncertainty in sales projection is set as an increasing function of time.
FIGURE 1: SOME VALUES OF NPV COMPUTED
 RISK=15,CORR=75%,GROWTH=15%,WCS=14%
 Mean NPV 0.58522615, StdDev 16.961188
 With 95.00% Confidence NPV within −2.7802415 and 3.9506938
Impact of working capital management
 RISK=15,CORR=75%,GROWTH=15%,WCS=7%
 Mean NPV 3.1756781, StdDev 17.239772
 With 95.00% Confidence NPV within −0.24506675 and 6.5964230
Impact of reduced dependency in sales
 RISK=15,CORR=25%,GROWTH=15%,WCS=14%
 Mean NPV 1.9813802, StdDev 15.277067
 With 95.00% Confidence NPV within −1.0499214 and 5.0126818
Impact of increased uncertainty in sales projections
 RISK=25,CORR=75%,GROWTH=15%,WCS=14%
 Mean NPV −0.35074655, StdDev 28.268646
 With 95.00% Confidence NPV within −5.9598592 and 5.2583661
Impact of growth rate
 RISK=15,CORR=75%,GROWTH=20%,WCS=14%
 Mean NPV 11.993128, StdDev 17.778327
 With 95.00% Confidence NPV within 8.4655219 and 15.520733
WBM1998: INVESTMENT RISK & RETURN
Posted May 24, 2020
THE WBM POSTS ARE MY LOST WORKS FOUND IN THE WAY BACK MACHINE
IS THE RISK-RETURN RELATIONSHIP IN FINANCE AN ARTIFACT?
Modern financial theory presupposes a rational, risk-averse behavioral pattern of investors that forms the basis of portfolio theory and of the theory and models for the pricing of risky assets. The actual risk and return metrics for stock market investments are defined and computed as follows.
“Mean stock return” is the arithmetic average of the series k(i) = (P(i) + Div(i))/P(i−1) − 1, where P is the price and Div represents any cash distributions received. In most instances stock dividend yields are very small, and so most index returns are estimated simply on the basis of price appreciation as k = P2/P1 − 1.
“Risk” is computed as the standard deviation of the k(i) return series. In the case of portfolio effects some of the standard deviation may cancel, and this of course leads to portfolio theory. But the residual portfolio risk is really a portion of the standard deviation. Therefore the underlying relationship is that between mean returns and standard deviation. Here we investigate this relationship when returns are generated by price changes alone.
Because of the way they are defined, both returns and risk are generated by price movements, and therefore the two metrics are mathematically related. Some numerical examples expose the nature of the relationship. First consider two stocks U1 and U2, each with a current price of $100. In the following periods the price of U1 rises steadily at $1 per period and that of U2 rises at $2 per period. After 49 periods we compute the mean returns and standard deviations and find that U2 offers higher returns at higher risk. But this relationship was arrived at purely algebraically, without any behavioral assumptions. To underscore this point, consider now two stocks D1 and D2 that decline in price at $1 per period and $2 per period respectively. After 49 periods we find that D1 offers a higher return at lower risk. Once again the relationship is imposed by mathematics and not by human behavior.
However, long-run stock market trends are upward, with only brief bear periods. Therefore the data will appear to show the generally assumed high-risk, high-return pattern because of a purely mathematical trend effect. There is no underlying theory or philosophy, just math.
I demonstrate this historical relationship with monthly S&P 500 data from 1972 to 1992. The historical series is shown in Figure 1. The annual market volatility is shown in Figure 2. Finally, in Figure 3 we see the positive risk-return relationship. But when we reverse the S&P 500 series as shown in Figure 4, we compute the volatilities shown in Figure 5 and the downward-sloping risk-return relationship shown in Figure 6.
Now consider V1, V2, and V3, three stocks with a mean price of $100 but with different price volatilities. The price of V1 fluctuates by $1 per period about the mean, that of V2 by $2, and that of V3 by $3. The average price of all three stocks (regardless of the number of periods) is $100, and the mean returns and standard deviations are higher for V3 than for V2, and higher for V2 than for V1. Once again we are able to generate the positive risk-return relationship of finance purely mathematically.
The V1, V2, V3 relationship exists because of the way returns are defined and computed. Returns are higher on an uptick than on an equal price downtick because the divisor is higher on the downtick. And the greater the volatility, the greater this odd effect on returns; hence the high-risk, high-return relationship. The risk and return values of stocks U1, U2, D1, D2, V1, V2, and V3 are shown in Table 1.
 Since these relationships exist independent of behavior, the relevance of the empirical data to behavioral theory is uncertain. At the least, the data used to test the fundamental question of whether standard deviation risk is priced must first be partitioned to remove the mathematical effects inherent in the data.
The volatility effect shown in V1, V2, and V3 may be reduced by choosing a returns period that is significantly larger than the standard deviation period. For example, we might compute returns as year-to-year price changes and standard deviation as month-to-month price changes. This is what I have done in Figures 1 through 6, and as we can see, the trend effect still persists.
 An alternate approach might be to redefine “returns” to remove the hysteresis effect in returns in equal uptick and downtick movements. Ideally we would like to have risk and return metrics that are independent of each other. Only if the metrics are independent may we ascribe observed relationships to investor behavior; but we do not see this independence in the data presented below.
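The trend effect described above for stocks U1, U2, D1, and D2 can be reproduced in a few lines of plain Python (an illustrative sketch; the 49-period setup follows the text):

```python
# Mean return and risk for trending price series: rising stocks U1, U2
# and falling stocks D1, D2, as described in the text.
def returns(prices):
    return [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):  # sample standard deviation
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

U1 = [100 + 1 * t for t in range(50)]   # rises $1 per period
U2 = [100 + 2 * t for t in range(50)]   # rises $2 per period
D1 = [100 - 1 * t for t in range(50)]   # falls $1 per period
D2 = [100 - 2 * t for t in range(50)]   # falls $2 per period

for name, p in [("U1", U1), ("U2", U2), ("D1", D1), ("D2", D2)]:
    r = returns(p)
    print(name, round(100 * mean(r), 2), round(100 * std(r), 2))
```

Note that the ordering emerges from the arithmetic alone: among rising stocks, higher return comes with higher risk; among falling stocks, the less risky series has the higher (less negative) return.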
Table 1: Results of risk and return computations
Stock   Risk     Return
V1      2.02%    0.02%
V2      4.04%    0.08%
V3      6.07%    0.18%
U1      0.09%    0.81%
U2      0.27%    1.39%
D1      0.27%    −1.37%
D2      8.90%    −7.25%
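The V1 through V3 rows of Table 1 follow from the uptick/downtick asymmetry described in the text; a sketch assuming each price series simply alternates between $100 − d and $100 + d:

```python
# Uptick/downtick asymmetry: a price alternating between 100 - d and
# 100 + d gains more on the uptick than it loses on the downtick,
# because the uptick return has the smaller divisor.
def mean_return(d, periods=51):
    prices = [(100 - d) if t % 2 == 0 else (100 + d) for t in range(periods)]
    rets = [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]
    return sum(rets) / len(rets)

for d in (1, 2, 3):  # V1, V2, V3
    print(d, round(100 * mean_return(d), 2))
# prints 0.02, 0.08, 0.18 - the mean returns in Table 1
```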
FIGURE 1: THE S&P500 INDEX SERIES
FIGURE 2: THE VOLATILITY OF THE S&P500 INDEX SERIES
FIGURE 3: RISK AND RETURN IN THE S&P500 INDEX SERIES
FIGURE 4: THE INVERTED S&P500 SERIES
FIGURE 5: VOLATILITY IN THE INVERTED SERIES
FIGURE 6: RISK AND RETURN IN THE INVERTED SERIES