Thongchai Thailand

Archive for May 2020




  1. A simple banking model: In its simplest form, a bank is a business that borrows short term at low interest rates, lends long term at higher interest rates, and profits from the spread. The extent to which this spread can enhance the wealth of the bank’s shareholders depends on how well managers can control costs and loan losses and optimize risks in carrying out the operations necessary to make the spread. The gross margin is reduced by the cost of running the operation that generates the spread and also by losses such as defaults, fraud, and theft. What remains is a fraction of the gross spread which we can refer to as the net spread. The net spread is the wealth managers deliver to stakeholders, and it is divided between the owners and the tax man. The actual return on equity earned by owners is leveraged by the ratio of total assets to equity. The managers also take advantage of fractional reserve banking by holding only a small portion of their demand deposits in reserves. Increasing the fraction held in reserves enhances the bank’s liquidity but decreases the bank’s profitability by limiting the loans it can make at any given level of deposits.
  2. Bank management styles: Bank managers juggle a complex nexus of trade-offs to reach their particular level of risk, return, diversification, and liquidity within regulatory constraints. In addition to capitalization and liquidity decisions, trade-offs include the levels of loan default risk, marketing and origination costs, customer services and fees, loan servicing and collection, loan sales and securitization, and FDIC insurance. The intertwined nature of these trade-offs allows bank managers to operate within a range of values of these variables. The choices that each management team makes may be used to identify its particular set of priorities, objectives, and management style.
  3. In this study we compare the management styles of ten FDIC insured community banks in the North Coast region of California using 1997 data from the FDIC. We examine patterns in these data to infer management style and performance against the average California bank in the same asset class. We find that North Coast community banks are a mixed bag, with a variety of management performance and styles. The banks in the sample span four asset classes. The ten banks in the study are compared in the 15 charts below according to asset and liability management used to generate the spread, operational efficiency to conserve the spread, and the level of risk. In these charts, the ten banks in the study are identified by acronyms. The letters A through D identify the averages for FDIC insured commercial banks in California in each of four asset classes. The acronym for North Coast Bank has been assigned as “NCNB”.
  4. The size of the banks: The sample consists of ten community banks in rural California with total assets ranging from less than $40 million to over $700 million, spanning four FDIC-defined asset classes. The asset classes are identified as A (less than $100 million), B ($100 million to $300 million), C ($300 million to $500 million), and D ($500 million to $1 billion). There are six banks in class A (North Coast Bank, Bank of Willits, Clear Lake National Bank, Lake Community Bank, Bank of Lake County, and Sonoma Valley Bank), two in class B (Bank of Petaluma and Sonoma National Bank), one in class C (National Bank of the Redwoods), and one in class D (Exchange Bank). Sonoma Valley Bank, Bank of Willits, Lake Community Bank, Clear Lake National Bank, Bank of Lake County, Sonoma National Bank, and National Bank of the Redwoods are larger than the average bank in their class, while Bank of Petaluma and Exchange Bank are somewhat smaller and North Coast Bank is tiny by comparison. The size data are shown in the charts below. The three largest banks in the sample, Exchange Bank, National Bank of the Redwoods, and Sonoma National Bank, account for more than half of the $2 billion in total assets in the sample.
  5. Capitalization: The purpose of bank capital is to absorb loan losses. Increasing bank capital is “good” because it decreases the probability of insolvency at any given charge-off rate; but it is also “bad” because it decreases the leverage multiplier, which in turn decreases return on equity (ROE) at any given level of return on assets (ROA). The capitalization level chosen will depend on the manager’s conservatism, the anticipated charge-off rate, and the extent of diversification in the loan portfolio. Core capital data shown in the charts below range from 5 times loan loss allowance (National Bank of the Redwoods) to almost 14 times (Exchange Bank) and show how managers differ in balancing safety against leverage. We see from these figures that seven out of ten banks in the sample are more aggressive than average. They set their capitalization at a reduced solvency spread in order to gain ROE by leverage. But three of the banks, Exchange Bank, Bank of Willits, and Bank of Petaluma, are more conservative than average, holding excess capital for safety and sacrificing some ROE to do so. In terms of equity as a percent of total assets we find that Bank of Willits and Exchange Bank are capitalized well above average. The figures range from about 7% for Clear Lake National Bank to over 15% for Bank of Willits. Clearly Bank of Willits and Exchange Bank managers have chosen safety over ROE leverage in making their capitalization decision.
  6. DIVERSIFICATION: An important way for bank managers to narrow the solvency spread and squeeze out more ROE for shareholders is to diversify their loan portfolio. In general, the more diversified the portfolio, the lower the solvency risk for a given spread; likewise it is possible to operate at a lower solvency spread, and therefore lower capitalization, at the same level of loan risk. In this respect community banks are at a disadvantage when compared with national and regional banks for two reasons. First, they face the “big customer” problem. A single large business in the community may form a large part of their loan portfolio, and if that single customer defaults it may drive the bank into insolvency. A second diversification disadvantage borne by community banks by virtue of the nature of community banking is that their loan portfolio is tied to the local economy. For example, if the local economy is based on wine, the solvency of the bank may be controlled by the demand for wine or by agricultural pests and climatic events that threaten profitable wine production.
  7. For these reasons, large diversified banks can operate at a thinner solvency margin than small community banks, setting either higher charge-off rates or lower capitalization or both. Community bank capitalization and tolerable levels of loan losses therefore may not be directly compared with those of regional banks; community banks typically must carry a higher capitalization rate.
    1. Asset and Liability Management: Asset management decisions include asset utilization, asset allocation, and the risk/return characteristics of loans. Asset utilization refers to minimizing the level of non-producing assets within regulatory and risk tolerance constraints. In this sample, between 75% and 90% of bank assets are earning assets, computed as (securities + loans) divided by total assets. Sonoma National Bank and Bank of Lake County show very high asset utilization while North Coast Bank and Lake Community Bank have the lowest utilization ratios. The utilization data are displayed in the charts below.
    2. FED FUNDS: In addition to securities, banks are also active in the fed funds market, in which banks lend excess liquidity to each other. In the charts below we see that banks vary a great deal with respect to how much excess liquidity they hold. Bank of Lake County and Bank of Petaluma hold almost none while North Coast Bank and Bank of Willits hold more than 12% of their total assets in fed funds or reverse repos. This compares with an average of 6% held by the larger community banks and about 10% held by smaller banks. If fed funds is considered an earning asset, then the asset utilization of the banks is seen to be significantly higher, ranging from 82% for Lake Community Bank to over 95% for Bank of Willits. These figures are displayed in the charts below. Except for Bank of Petaluma (about average) and Lake Community Bank (below average) we find that most community banks in the North Coast have a total asset utilization ratio that is higher than California averages.
    3. Cash to deposits ratio: But, like capitalization, utilization also involves a trade-off that becomes evident when we examine the cash to deposits ratios and cash to transaction assets. We see that Bank of Willits and Sonoma National Bank face the highest liquidity risk in the sample because they hold less than 4% of deposits or 15% of transaction assets in cash; and Lake Community Bank with a low utilization ratio maintains a safe liquidity level with over 12% of deposits or approximately 45% of transaction assets held in cash.
    4. Safety from the depositor’s point of view is not as important a function of liquidity as it has been historically because of the emergence of active and liquid fed funds, repo, and loan sales markets. However, an optimal liquidity level is still important to shareholders because activity in these markets is costly. Optimality in this case consists of balancing the cost of holding non-productive assets to enhance liquidity against the expense the bank may incur in repeatedly acquiring liquidity in the repo market.
    5. Activity in the fed funds and repo markets is depicted in the charts below. We see from these figures that North Coast banks in general are net providers of cash in the interbank market for liquidity. The notable exceptions are Bank of Lake County and Bank of Petaluma which obtain upwards of 5% of their liquidity needs in this market in what may be termed an aggressive liquidity policy. North Coast Bank and Bank of Willits are exceptionally large liquidity providers. Small banks tend to have more of their assets in fed funds (fed funds sold and reverse repurchase agreements) than larger banks.
    6. An alternate measure of liquidity that includes the use of fed funds is a ratio of total short term assets including fed funds sold to total short term obligations including fed funds purchased. These ratios are shown in the charts below. They show that North Coast Bank, Lake Community Bank, Bank of Willits, and Sonoma Valley Bank maintain exceptionally high liquidity holding more than 15% of their short term liabilities in short liquid assets while Bank of Lake County, Sonoma National Bank, Bank of Petaluma, and Exchange Bank have a more aggressive liquidity policy and hold less than 10% of their short term obligations in liquid assets. These ratios are significantly higher (20% to 60%) when term deposits are removed from the liability side.
    7. In principle, the fed funds market provides banks a vehicle to diversify their liquidity wherein temporary and unpredictable shortfalls in some banks are made up by borrowing from equally temporary and unpredictable excesses in others. In such a case the fed funds holdings for each bank will cancel and average long term holdings will be close to zero. But this is not the case for all the banks in this study. Some banks such as North Coast Bank and Bank of Willits show large positive sustained average fed funds balances. These banks appear to be using fed funds as earning assets. Others such as Bank of Petaluma and Bank of Lake County that show a net fed funds liability appear to be using the fed funds market to fund earning assets.
    8. ASSET ALLOCATION: The fed funds strategy appears on the surface to be suboptimal in either case since 5% is a relatively low yield for assets and a high cost of funding earning assets. Earning assets consist of loans, which earn higher interest but are subject to default, and Treasury and Agency securities, which may be thought of as free of default risk. We show this allocation as the percent of earning assets held as loans in the charts below. Here we see a large range of values, with Sonoma National Bank, once again the most aggressive bank, holding almost 95% of its earning assets as loans. This figure is extremely high relative to state-wide averages, which are well below 75%. The most conservative asset allocator is Bank of Willits, which holds more securities than loans in its portfolio. Bank of Lake County, Exchange Bank, and Bank of Petaluma are also conservative with less than 65% of earning assets as loans. The asset allocation decision is used by managers to set their level of risk and returns.
    9. The interest rate earned on loans depends both on managers’ marketing and sales ability and on the level of risk taken. Good asset management results in high interest on loans with relatively low risk. But it is also possible to increase the overall returns on loans by making high risk loans at very high interest rates. These dynamics and tradeoffs are apparent when we compare the yield on earning assets shown in the charts below as well as the loan loss allowance figures. Once again, we see a very wide spectrum of performance and managerial choices in the sample. Sonoma National Bank earned the highest returns, an astonishing 9.6% on earning assets in 1997, and it did so with a loan loss allowance of slightly over 1%, the lowest in the sample. Bank of Lake County, at less than 8%, earned the lowest yield, and did so with the highest loan loss allowance ratio, almost 3%. Clearly there is a variety of asset quality management and marketing and sales ability among these banks; and they provide an excellent opportunity for comparative analysis to students of bank management.
    10. Marketing is also important in liability management. Bank managers must fund their earning assets at as low a cost as possible to deliver a large and profitable margin. Exchange Bank, which seems conservative and unspectacular on the asset side, is the clear leader in liability side management, as we can see when we compare the cost of funding earning assets shown in the charts below along with the gross interest rate spread. Exchange Bank managers have access to funds at less than 2.5% while the cost of funds at Sonoma National Bank is well over 4%. The high cost of funds at Sonoma National Bank negates its yield advantage and leaves the bank with a gross interest rate spread of less than 5.4%. At the same time the low cost of funds at Exchange Bank allows managers to overcome weaknesses in other areas to post a spread of over 6%, second only to North Coast Bank’s 6.5% and significantly higher than the California average of 5.2% in the same asset class. In contrast, North Coast Bank achieves a spectacular spread by combining a less-than-spectacular cost of funding with good yield on earning assets.
    11. Measures of efficiency and inefficiency: Generating the spread is costly because it takes office rent, marketing costs, customer services, loan processing, and loan losses to maintain the spread. These costs reduce the spread so that the net spread delivered by managers to stakeholders is significantly smaller than the gross spread. Very efficient operations keep these costs low and preserve more of the spread, while inefficient operations are unable to conserve as much of the margin. The FDIC publishes a measure of inefficiency computed as non-interest expense over total expense. These values are shown in the charts below. The values for the asset class averages, A, B, C, and D, show that there are economies of scale in banking efficiency. Class A has the highest FDIC inefficiency (75%) and class D the lowest (60%). Among the banks in the sample, we find that North Coast Bank and National Bank of the Redwoods are the least efficient, with FDIC inefficiencies exceeding 75%. Sonoma National Bank and Bank of Willits are the most efficient, with FDIC inefficiencies approaching 50%. Bank of Petaluma and all the class A banks except for North Coast Bank are more efficient than average. We may also compare efficiency as the percent of gross spread that survives as net spread. This measure, shown in the charts below, is somewhat more discriminating than the FDIC measure because it does not penalize managers who overcome higher non-interest expense by using that expense well to generate higher spreads. Once again Sonoma National Bank and Bank of Willits are the most efficient banks in the sample, but Bank of Willits appears more efficient than Sonoma National Bank by this measure. North Coast Bank retains its position in the efficiency cellar by either measure; but the other five small banks in class A are more efficient than the average class A bank, possibly because of size.
The average class A bank has $56 million in assets and is significantly smaller than the small banks in the study except for North Coast Bank. Exchange Bank and Bank of Petaluma are of average efficiency, and National Bank of the Redwoods appears to be less efficient than the average bank in its asset class. Efficiency may also be indicated using a salary productivity measure defined as the dollars of pre-tax operating earnings generated per dollar of salary expense.
    12. Dollars of pre-tax operating earnings per dollar of salary: The results are shown in the charts below. Once again we find that Bank of Lake County, Sonoma National Bank, and Bank of Willits are the efficiency leaders, with Bank of Lake County managers producing over $2 of operating income per dollar of salary expense. The other five banks produce less than a dollar of operating earnings per dollar of salary. North Coast Bank and National Bank of the Redwoods generate less than 50 cents of operating earnings per dollar of salary and come out as the least efficient by this measure.
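The arithmetic of the simple banking model described in paragraph 1 can be sketched in a few lines of code. All of the rates below are hypothetical round numbers chosen for illustration; they are not figures from the FDIC data used in this study.

```python
def bank_performance(yield_on_assets, cost_of_funds, operating_cost_rate,
                     charge_off_rate, tax_rate, equity_to_assets):
    """Return (gross spread, net spread, after-tax ROA, ROE) as fractions of assets."""
    gross_spread = yield_on_assets - cost_of_funds
    # Operating costs and loan losses reduce the gross spread to the net spread.
    net_spread = gross_spread - operating_cost_rate - charge_off_rate
    # The net spread is divided between the tax man and the owners.
    after_tax_roa = net_spread * (1 - tax_rate)
    # Leverage: ROE is ROA multiplied by the assets-to-equity ratio.
    roe = after_tax_roa / equity_to_assets
    return gross_spread, net_spread, after_tax_roa, roe

# A hypothetical bank: 9% yield, 3.5% cost of funds, 3% operating costs,
# 0.5% charge-offs, 35% tax rate, and 8% equity-to-assets.
gross, net, roa, roe = bank_performance(0.09, 0.035, 0.03, 0.005, 0.35, 0.08)
```

Raising equity_to_assets (a more conservative capitalization) lowers the ROE at the same ROA, which is precisely the leverage trade-off discussed in the capitalization section.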




Exchange Bank, the largest bank in the sample, enjoys a very low cost of funds and this advantage allows the bank to overcome otherwise conservative and sluggish performance. On the other hand, the much smaller Sonoma National Bank with very efficient and aggressive management overcomes a very high cost of funds to deliver a better net interest rate spread than Exchange Bank by earning a very high yield on its portfolio and controlling office costs. These banks pose an interesting contrast to Bank of Willits, which combines all aspects of good management to generate a high net spread. The National Bank of the Redwoods, the second largest bank in the sample, is surprisingly inefficient with a very large salary expense. Bank of Petaluma may be described as average, with good management, an efficient operation, and a somewhat conservative but safe portfolio. All of the small banks in the sample except for Bank of Willits show evidence of efficient, well-managed operations but with specific drawbacks. Lake Community Bank and Clear Lake National Bank suffer from a high cost of funds. Sonoma Valley Bank and Bank of Lake County face higher than average loan risk but both offer excellent efficiency and spread management; and North Coast Bank suffers from operational inefficiency possibly because of its small size. The ten banks in the study offer a good case study of small bank management because of their varied strengths, weaknesses, and management styles. The findings are summarized according to the charts below. The measures of efficiency used assume that managers are agents of external owners and may be misleading in cases where the banks are run by owner-managers who may choose salary rather than dividends as a method of recovery.





  1. Total assets
  2. Asset utilization
  3. Asset allocation
  4. Loan loss allowance
  5. Equity capital
  6. Equity coverage of loan risk
  7. Liquidity of transaction accounts
  8. Ratio of cash to total deposits
  9. Yield of earning assets
  10. Cost of funding earning assets
  11. Gross interest rate spread
  12. Net interest rate spread
  13. Pre-tax operating income per dollar of salary (net spread)
  14. Efficiency of operations
  15. FDIC inefficiency measure


































  1. Although the advantages of RDBMS in terms of query flexibility, ad hoc report generation, and dramatically lower application maintenance costs have long been recognized, their adoption in mainstream MIS applications has been slowed by performance problems when compared with traditional `hard wired’ hierarchical systems such as IMS. However, a new generation of RDBMS with improved search algorithms and search optimizers has largely overcome these performance problems. Clearly relational databases are now entering the mainstream of MIS even in large production systems, aided not only by their inherent data flexibility but also by the increased productivity of end-users and application developers alike.
  2. In the RDBMS metaphor, a database is a collection of tables that together represent a cohesive data environment of the business enterprise. Each table has a unique name within the database and consists of columns and rows. Each column has a unique name within the table and represents an attribute relevant to the user. Each row contains an instance of the entity the table represents. One or more columns form a key structure whose value appears only once in each table and uniquely identifies a row in the table. Tables may also contain one or more foreign keys which appear as keys in other tables. It is through these foreign keys that row to row linkages are established between related tables. It is a property of the relational model that the conceptual data structure is established by the semantic content of the data rather than the preconceived or so called `hard wired’ reporting requirements. Therefore, theoretically, if the set of tables is properly constructed, all queries that are meaningful within the semantic constraints are possible using SQL.
  3. The construction is proper if all non-key attributes are functionally dependent on the key, the whole key, and nothing but the key and if no part of the key is functionally dependent on any other part of the key. This state of the tables is referred to as the `normalized set’ and the normalization process consists of procedures to produce this set.
  4. Conventionally, the normalization concept and procedure are explained in stages called `normal forms’. The process begins with one single table that contains all attributes (columns) that are relevant to the database being designed. The single large table is then decomposed stepwise into normal forms. Each step addresses a single normalization issue.
  5. In the first step repeating groups are removed and the resultant tables are said to be in the first normal form. The first normal relations are then decomposed stagewise by addressing each of the normalization criteria. When dependencies on part of the key are removed, the relations are said to be in the `second normal form’. Similarly, when transitive dependencies (dependencies on non-key attributes) are removed by further decomposition, the result is the third normal form. The closely related but stricter Boyce-Codd normal form further requires that every determinant be a candidate key.
  6. Further decomposition to remove multivalued dependencies may be used to produce a set of relations in the fourth normal form. Some designers recognize a Domain Key Normal Form (Fagin 1981) in which every constraint on the relations is the result of only two variables: the key and the domains of the non-key attributes. No generalized method for achieving this state has been proposed.
  7. The decomposition method of normalization is difficult to use and confusing to students and analysts alike and it is based on the unrealistic notion that we begin the design process with the largest possible tables. In the synthesis method proposed in this paper, we reverse the process and begin with the smallest possible tables which are normalized by definition since each of these tables states a single dependency equation.
  8. We call these relations `elemental tables’. They are produced by the following procedure: first list all the entity-types and their attributes; select the attributes one at a time and determine the functional dependency and key structure needed to identify an instance of this entity type; pick those attributes from the attribute list and construct the key for that one single non-key (or partial key) attribute; and place the attribute and its key structure into an elemental table. Then go on to the next attribute and keep constructing elemental tables until all attributes have been accounted for.
  9. Although we now have a normalized set of relations, their use is cumbersome since the number of tables that must be referenced by SQL queries will be unnecessarily large. The database will also suffer severe degradation problems. To enhance database performance and simplify queries, we now synthesize larger tables from the elemental tables.
  10. In the synthesis process we delete elemental relations that are semantically redundant and combine relations that have exactly the same key structure. The result will be a set of tables that are as normalized as any analysis method is able to achieve but the procedure is immensely simpler. In the resultant tables every non-key attribute is FD on the key, the whole key, and nothing but the key and no attribute of the composite key is FD on any other part of the key.
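The delete-and-merge step can be sketched in a few lines of code. The relation names and attributes below are hypothetical stand-ins invented for illustration, not the relations from the example that follows.

```python
# Synthesis step: drop semantically redundant elemental relations,
# then merge elemental tables that have exactly the same key structure.

def synthesize(elementals, redundant):
    """elementals maps a relation name to (key columns, non-key attributes)."""
    merged = {}
    for name, (key, attrs) in elementals.items():
        if name in redundant:
            continue  # drop semantically redundant relations
        k = tuple(sorted(key))
        # Relations with an identical key collapse into a single table.
        merged.setdefault(k, set()).update(attrs)
    return {k: sorted(v) for k, v in merged.items()}

# Hypothetical elemental tables, each stating a single dependency.
elementals = {
    "ER1": (["emp_no"], ["name"]),
    "ER2": (["emp_no"], ["age"]),
    "ER3": (["emp_no"], ["dept_no"]),
    "ER4": (["dept_no"], ["dept_name"]),  # suppose this is redundant
}
tables = synthesize(elementals, redundant={"ER4"})
# ER1, ER2, and ER3 share the key (emp_no,) and collapse into one table.
```

Because each elemental table states a single full dependency on its key, every table produced by this merge automatically satisfies the key, whole key, and nothing but the key condition.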

As an example, consider Chen’s famous example (Chen, P., ACM TODS, vol. 1, no. 1).


The functional dependencies are described in the paper. First we produce the elemental tables as follows:


ER7 contains all the information in ER4. Therefore ER4 is semantically redundant and is dropped. ER1, ER2, and ER3 contain identical keys. They are collapsed into a single table. The resultant normalized tables are:




Using the SQL/DS relational database in the MVS/CMS/COBOL environment: a cookbook for MIS students

A Brief Description of System Environment

  1. MVS: The machine you will be using is an IBM 4381/13. Its primary operating system is MVS, or Multiple Virtual Storage. Virtual storage is a feature of the IBM/370 architecture. Virtual storage allows each application or job to view and decode memory addresses independent of the computer’s physical addresses. The translation between the virtual address and the real address is performed by the Dynamic Address Translator, which is completely transparent to the user. MVS is principally a batch job processor. Jobs are submitted to MVS using JES2, or Job Entry Subsystem #2. Users issue commands to JES2 using a specialized language called JCL, or job control language.
  2. VM/SP: VM/SP or Virtual Machine/System Product (VM for short) is a program written by IBM and used by our machine here at UofA to control interactive access to machine resources via on-line terminals. The name derives from the fact that VM builds an environment for each user that seems to be a complete mini computer with its own mini disks, memory, and communication channel via the virtual reader. When many VM users are on the system, the system is logically similar to a PC network where each user has his own machine but is also able to communicate with other users and to link to a central file server. In the VM mode, it is convenient though not accurate to think of MVS as just another user to whom certain jobs may be sent via the network; the results of the job may be directed to any node in the network, which includes the user’s virtual machine’s virtual reader and the various network printers. There are four components of VM. These are (1) CP or Control Program, (2) CMS or Conversational Monitor System, (3) RSCS, and (4) IPCS. We need concern ourselves only with CMS, with an awareness of CP.
  3. Control Program (CP): CP manages the real (or, as Kroenke likes to say, the physical physical) computing system while CMS manages each user’s virtual machine. CP is in direct control of all resources of the 4381, including the real storage (as opposed to the virtual storage presented to batch applications by MVS), processor time, and all I/O devices including DASDs (the battery of 3330 and 3380 disk drives). Its job is to make these resources available to a number of online users at the same time by presenting the virtual machine to each user. While CMS is the operating system for each one of these virtual machines, CP is the environment that allows all of these virtual machines to co-exist in the same real computer.
  4. CMS: This is the most important component of the operating system for interactive users (us). CMS is a component of VM/SP that acts as the operating system of the virtual machine of each online user; and, together with CP, provides access to system resources to the interactive user and serves as a base on which to build interactive applications. From within the user-friendly CMS environment, the interactive user may submit MVS jobs; build and manipulate minidisk files; link to virtual minidisks that contain compilers and development platforms such as COBOL and SQL/DS; communicate through the network with other CMS users as well as with MVS; and even make direct links with OS files and VSAM files, and build an MVS environment. A useful utility for building interactive applications is DMS, or the Display Management System.
  5. The Display Management System or DMS:  The Display Management System allows your interactive COBOL programs to use predefined screens for interactive input-output. The predefined screens consist of text that is to be displayed as background plus fields that are to contain input and/or output data and are referred to as panels. Panels are created by using the panel editor called PANEL. Panels are loaded or read from the COBOL program by making calls to EUDCOBOL.
  6. Creating Panels: The panel editor, PANEL, is not command compatible with the system editor and requires strict adherence to a set of rules. In other words, the program is difficult to use. An alternative is to use the program PANELIT that is included with this package. PANELIT converts an XEDIT text file into PANEL format. To use PANELIT, first create your screen form using XEDIT, denoting data fields with underscores. Background display is entered as normal text. The only hard and fast rule is that each data field specification should have one more underscore than is needed by the data to be displayed; PANELIT (and DMS) uses the first underscore to indicate the beginning of a field. Here is an example of a PANELIT input file created with XEDIT. The file might be named income stmt (i.e., the CMS filename is income and the CMS filetype is stmt). This panel is to be loaded by a COBOL program that will supply values for each field designated by underscores.

Income Statement
————————-
Company _____________________________
Year _____
Sales _______
Cost of Goods Sold _______
Selling and Admin costs _______
Depreciation Expense _______
NOI (net operating income) _______
Interest Expense _______
Income Tax _______
NIAT (net income after taxes) _______

The simplest way to invoke PANELIT is from the FLIST or FILEL display, which is a list of the files in your directory. On the directory list, move the cursor to the file to be converted, in this case income stmt, and enter the command panelit. PANELIT will process this file and produce a panel file according to DMS specifications. The output file will be called income panel: the filename is retained and the filetype of panel is assigned. This is required by DMS; all panel files must have a filetype of panel. The output file income panel will look like this:

^Income@Statement
^————————-
^Company _@@@@@@@@@@@@@@@@@@@@@@@@@@@@
^Year _@@@@
^Sales _@@@@@@
^Cost@of@Goods@Sold _@@@@@@
^Selling@and@Admin@costs _@@@@@@
^Depreciation@Expense _@@@@@@
^NOI@(net@operating@income) _@@@@@@
^Interest@Expense _@@@@@@
^Income@Tax _@@@@@@
^NIAT@(net@income@after@taxes) _@@@@@@

After creating the panel file, PANELIT will invoke PANEL and turn you over to it. This is necessary so that you may add field specifications if you want to, and so that PANEL will create the needed PCB file for the new panel. Most of the time you will not use field specifications, so simply enter PF5 to exit PANEL and ENTER to save the files. Your new panel income panel is now ready to display your COBOL variables. PANELIT requires two files: a REXX program called panelit exec and a companion file called panelit needsit.

COBOL Code Needed for DMS

The minimal cookbook COBOL code necessary to use DMS is as follows. Immediately prior to the procedure division, you must have the command copy eudcobol.
This causes prewritten portions of DMS access code from the system file eudsmac maclib to be inserted into your source prior to compilation. The placement of the copy statement prior to the procedure division is shown below. Note that the copy command is in Area B (column 12 or beyond).

    IDENTIFICATION DIVISION.
    ENVIRONMENT DIVISION.
    CONFIGURATION SECTION.
    INPUT-OUTPUT SECTION.
    FILE-CONTROL.
    DATA DIVISION.
    FILE SECTION.
    WORKING-STORAGE SECTION.
    * setup to use dms
        copy EUDCOBOL.
    PROCEDURE DIVISION.

Before you use the panel code, however, two of the DMS variables need to be initialized: unload-list and load-list. The necessary initialization code is as follows:

    InitializeDMS.
        Move "=" to UNLOAD-LIST.
        Move "Y" to LOAD-LIST.

When using a large number of panels, it may be necessary to release your panels, since no more than 15 panels may be defined at any given time. The release panel code is shown below and may be used verbatim.

    ReleasePanels.
        Move spaces to PANEL-NAME.
        Move "R" to Display-Code.
        Move 0 to NUMBER-DATA-FIELDS.
        CALL "EUDCOBOL" USING EUDCNTRL.
        Move "D" to Display-Code.

The variables load-list, unload-list, panel-name, number-data-fields, and display-code are declared by the macros from eudsmac maclib.

Panel Loader Code

Panels may now be read (unloaded) or written to (loaded). The code needed to load a panel requires specification of the number of data fields to be loaded, the name of the panel file to use, and a list of the variables in panel sequence. A call is made to the DMS subroutine eudcobol to perform the actual data display. The following code loads the ten variables listed into the panel called income.
It can be used as a cookbook by changing only the number of data fields from 10 to the number being loaded, the panel name from income to the name of the panel file being used, and the variable list to the list of variables being displayed. The number of variables in this list must be the same as the number of data fields declared. Note that the call sentence ends with a period only after the last variable has been declared.

    * load the panel called income with 10 data items
        Move 10 to number-data-fields.
        Move "INCOME" to panel-name.
        Call "eudcobol" using eudcntrl
            CompanyName Year Sales Cgs SellAdmin
            Depreciation Noi Interest Tax Niat.

Panel Unload Code

Panels may be used for data entry from CRT terminals in exactly the same way as they are displayed. For example, the code below uses a panel called mainmenu and displays the value of UserChoice that is currently in memory. If the user enters a new value, the old value is updated. If the user simply presses ENTER, the displayed value is accepted. This is a common way to show and use default values to minimize keystrokes.

    MainMenu.
        Move "HELP" to UserChoice.
    * if no entry, then show help screens with commands
        Move "MAINMENU" to panel-name.
        Move 1 to number-data-fields.
        Call "eudcobol" using eudcntrl UserChoice.

Structured Query Language / Data System (SQL/DS)

SQL/DS, or Structured Query Language / Data System, is one of two relational database products from IBM (the other is DB2). It supports the standard SQL query language. The SQL commands may be issued interactively or embedded in application programs written in COBOL (also PL/I, C, FORTRAN, and Assembler).

In a relational database the data are presented to the user as a set of relations or tables. The SQL query language is designed to produce new tables from these primary relations by extracting specific columns and rows from one or more relations.
These operations are usually categorized as selections, projections, and joins.

Selection, Projection, Join

A selection is any SQL operation that extracts specific rows of a table. The SQL keyword WHERE is used to effect the row selection process, using a command syntax such as:

    SELECT * FROM Managers WHERE Years < 15

The asterisk (*) indicates that all columns of the table called Managers are to be selected into the new display table. However, the WHERE clause restricts the rows selected to only those where the value of the column Years is less than 15; in this case, a list of managers who have been with the company for less than 15 years.

The projection operation of SQL restricts the display table to specified columns of the source table(s). For example, if only the columns designated by Years, Location, and Division are required, then the asterisk of the above query is replaced by a list specifying the columns to be moved to the display table.

    SELECT Years, Location, Division FROM Managers

This is an example of a pure projection: all rows are returned but the display table is restricted to the specified columns. Most SQL queries, however, are a combination of selection and projection. For example, the two commands above can be combined to produce a display table containing only the columns Years, Location, and Division, and only those rows having a Years value of less than 15:

    SELECT Years, Location, Division FROM Managers WHERE Years < 15

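The selection and projection queries above can be tried in any SQL engine. As an illustration only (SQL/DS and ISQL are not assumed), here is a sketch using Python's standard sqlite3 module with a small, hypothetical Managers table whose contents are invented for the example:

```python
import sqlite3

# In-memory database with a Managers table mirroring the examples above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Managers (MgrNumber INTEGER, Name TEXT, Years INTEGER,"
    " Location TEXT, Division TEXT)"
)
conn.executemany(
    "INSERT INTO Managers VALUES (?, ?, ?, ?, ?)",
    [
        (101, "Adams", 8, "Tulsa", "Plastics"),
        (102, "Baker", 22, "Houston", "Chemicals"),
        (103, "Chen", 3, "Dallas", "Plastics"),
    ],
)

# Selection: WHERE restricts the rows returned; all columns come back.
rows = conn.execute("SELECT * FROM Managers WHERE Years < 15").fetchall()

# Projection: a column list restricts the columns; all rows come back.
cols = conn.execute("SELECT Years, Location, Division FROM Managers").fetchall()

# Combined selection and projection (ordered for a deterministic result).
both = conn.execute(
    "SELECT Years, Location, Division FROM Managers"
    " WHERE Years < 15 ORDER BY MgrNumber"
).fetchall()
print(both)  # [(8, 'Tulsa', 'Plastics'), (3, 'Dallas', 'Plastics')]
```

The same three query shapes appear throughout the rest of this section; only the engine and the surrounding host language change.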
The real power of the relational model is realized in the join operation. In this case, projections and selections from two or more related tables can be consolidated into one display table. The row-to-row correspondence between tables is maintained by virtue of foreign keys; that is, the key of one table is retained as a column in the other.

For example, say that in addition to Managers we have a table called Engineers, such that a manager can have many engineers but an engineer can belong to only one manager. We might record this belonging by including the manager number (MgrNumber) as a column in the Engineers relation. The SQL command to produce a display table of the names of the managers and engineers would be as follows:

    SELECT m.Name, e.Name
    FROM Managers m, Engineers e
    WHERE e.MgrNumber = m.MgrNumber

The two tables Managers and Engineers are thus joined to produce one table that contains columns from both. Rows are matched according to the common value of MgrNumber.

Views

Recall that SQL operations simply produce a new table from one or more existing tables. In the SELECT commands we have used so far, the new table produced is a temporary display table that evaporates upon completion of the display or print operation. In many instances it is convenient, efficient, or necessary for security purposes to save these tables for future use. This is done with the CREATE VIEW ... AS command. For example, to save the display table produced by the join operation above, we could write:

    CREATE VIEW NamesList AS
        (SELECT m.Name, e.Name
         FROM Managers m, Engineers e
         WHERE e.MgrNumber = m.MgrNumber)

This saves the display table as a new virtual table called NamesList, which can then be treated as a table in constructing subsequent queries or in granting access privileges.

Interactive Access to SQL/DS

The SQL/DS system provides a program called ISQL (the I is for Interactive) to allow direct ad-hoc queries to be made to SQL from CRT terminals. Once in ISQL, the user simply types in SELECT commands and ISQL responds immediately by displaying the resultant tables on the screen. It is especially useful to programmers who wish to test their SQL code before embedding it in application programs.

However, the ISQL procedure for saving, editing, and re-invoking SQL commands is cumbersome and difficult to use, yet this sort of debug cycle must be performed several times while pretesting SQL code.

There are also problems with printing the results of each query. First, each query produces a different print file; it would be advantageous to collect the results of all queries from one session into one organized print file. Further, ISQL forces the user to view the display tables prior to printing, which can be very time consuming when a number of different display tables are to be printed.
All of these problems with ISQL can be avoided by using the XQL EXEC file provided with this package. XQL, along with its companion program RUNSQL EXEC, sets up an ISQL environment that alleviates all of the aforementioned difficulties in interactive usage.

The XQL access to ISQL offers the following features:

  • A CMS file can be used as a batch command file. This command file can be executed with PF11 and edited with PF10 while in ISQL (and with XEDIT while in CMS).
  • The batch file processing allows display tables to be printed without being viewed.
  • All display tables sent to the printer are collected into one print file and sent to the user's virtual reader. The user can then view them or re-route them to a selected network printer (using the LOOK utility described below).
  • PF5 and PF6 are defined to access two frequently used ISQL utilities. PF5 produces a listing of all tables owned by the user. PF6 produces a complete data dictionary, listing all column names in all tables and their attributes.

Using XQL

To use XQL from CMS, you must have the programs XQL EXEC and RUNSQL EXEC on one of your CMS minidisks. To begin execution simply type in:

    XQL <enter>

The SETUP SQL command normally used prior to the ISQL procedure is not required, since XQL will perform the setup for you.

The SQL commands file

The second step in using XQL is to enter the name of the CMS file that contains (or will contain) the SQL commands you wish to execute. If you don't have a file yet, then it is the file that will be created when you use PF10 to edit the command file. The default name is SQL COMMANDS. This name may be retained by making a null entry.
A typical command file looks like this:

    /* List names and ages of managers who could retire */
    select name,age-
    from managers-
    where age>55
    display
    end

    /* print out names of managers in plastics division */
    select mgrnumber,name,location-
    from managers-
    where division='plastics'
    print
    end

    stop
    /* thats all folks */

The features of the command file demonstrated above are:

  • Comments are entire lines that begin with /* and end with */. XQL is not smart enough to deal with comments that follow a command, such as: select name,haircolor- /* get the name and hair color */
  • As in interactive ISQL, each line of a command must end with a hyphen (-) except the last.
  • Every query command must be followed by two lines containing instructions about the disposition of the resultant table and the word end. The disposition is either display, to see the result on the screen, or print, to add the result to the print file without a screen display.
  • Blank or null lines may be used as needed for clarity.
  • The word stop may be used anywhere to end execution of the file. This makes it possible to debug portions of a large file without having to run through the entire list.

Normal usage of XQL

In the initial phases of SQL code development, it is usual for the programmer to loop through many cycles of PF10 and PF11, editing and re-executing the SQL commands until there are no SQL errors and the desired results are obtained. The command files are used not only for SELECT commands but also to create tables, load data into tables, update tables, and generally perform all operations normally required in database management, maintenance, and usage. For example, the following file creates a zipcode table and loads some new data into it.
    /* first drop the table in case it exists */
    drop table zipcode

    /* now create the zipcode table */
    create table zipcode -
    (zip integer,-
    city char(20),-
    state char(2))

    /* enter some data into this table */
    input zipcode
    72701,fayetteville,ar
    72702,fayetteville,ar
    72703,springdale,ar
    73533,duncan,ok
    74102,tulsa,ok
    77002,houston,tx
    74603,ponca city,ok
    72902,fort smith,ar
    72716,bentonville,ar
    74004,bartlesville,ar
    76067,dallas,tx
    end

    /* index the table on zip */
    create index zipindex on zipcode (zip)

It is prudent practice to save these files so that the tables can be re-built. The command files can also be used to automate the production of routine reports. Simple application-specific user views can also be programmed, as with the ROUTINE facility of ISQL; XQL is considerably more flexible and easier to use than ROUTINE.

Embedding SQL code in COBOL programs

In addition to direct interactive access, SQL provides for access to SQL data from application programs via a series of complex BIND instructions that set up variable address pointers. These bind instructions are so complex and unforgiving that IBM has provided a preprocessor that takes normal SQL code as input and generates the necessary code for access to SQL data from COBOL. Although the preprocessor makes it a lot easier to use SQL, there are a few rules that the COBOL programmer needs to follow for trouble-free compilation and execution. The minimum SQL code needed is presented below in cookbook fashion. For additional information consult the SQL programmer's reference manual.

Preparing source code for the preprocessor

Rule Number 1: The source program containing SQL code must have a filetype of COBSQL. The preprocessor will write a COBOL file containing all of your non-SQL code plus all of your SQL code translated into bind instructions. This COBOL file can then be compiled using the COBOL2 compiler.

Rule Number 2: The COBSQL file must have a WORKING-STORAGE SECTION and must have a host variable declaration section. A good practice is to always put the host variable declarations in the working storage section. The host variables will receive values transferred from SQL to COBOL or contain values destined for SQL tables.

Rule Number 3: All SQL instructions to the preprocessor begin with EXEC SQL and end with END-EXEC. These delimiters signal the preprocessor to translate the SQL instructions within them for the COBOL2 compiler.

Rule Number 4: Do not use hyphens in host variable names. Theoretically, the preprocessor will translate them to underscores (since otherwise SQL would interpret hyphens as continuation indicators), but it is easy to avoid hyphens.

Rule Number 5: Declare all host variables at COBOL data level 77. Multi-level data declarations may not be used, except in the case of variable length strings, where they must be used. Variable length strings are declared with two 49-level declarations within a 01-level group. The first of the 49ers must have a pic of s9(4); it is used to hold the length of the string. The second 49er must be a pic x(n), where n is the maximum length of the string. The OCCURS, SIGN, JUSTIFIED, and BLANK WHEN ZERO clauses are absolutely forbidden. The following example may be used to cookbook the host variable declaration section.

    WORKING-STORAGE SECTION.
        EXEC SQL BEGIN DECLARE SECTION END-EXEC.
    77  FirstHostVar  comp-1.
    77  AnotherOne    pic x(20).
    77  LastHostVar   pic s9(9).
    01  VariableLength.
        49 Length     pic s9(4).
        49 TheString  pic x(80).
        EXEC SQL END DECLARE SECTION END-EXEC.
        EXEC SQL INCLUDE SQLCA END-EXEC.

Rule Number 6: The begin declare and end declare statements must contain within them all the host variable declarations. The INCLUDE SQLCA must immediately follow the end declare; it sets up the SQL communication area.
Variable conversion table

Rule Number 7: The host variable type to be declared depends on the SQL data type to be converted. The following correspondence between SQL and host variable data types must be maintained; that's a MUST with a capital M.

    Type of data                           SQL data type   COBOL declaration
    31-bit integer                         integer         pic s9(9) comp.
    15-bit integer                         smallint        pic s9(4) comp.
    character string of fixed length n     char(n)         pic x(n).
    single precision floating point        real            comp-1.
    double precision floating point        float           comp-2.
    decimal number that is p digits wide
    with s digits right of the decimal     decimal(p,s)    pic s9(n), n = p + s

Embedded SQL Queries

Rule Number 8: Embedded SQL queries differ from interactive SQL queries in two respects. First, every query must begin with the delimiter EXEC SQL and end with END-EXEC. Second, unlike interactive mode, these queries pass values between SQL variables and COBOL variables (the declared host variables). Therefore, not only the SQL column names but also the corresponding COBOL variables must be listed.

Passing One Row From SQL to COBOL

When it is certain that data from only one row will be passed as a result of the query, a direct transfer can be made by using the SELECT ... INTO syntax, as in this example. The end of the COBOL sentence occurs after the end-exec clause and is denoted with the customary period (.).

    Exec SQL
        SELECT Division, Location, Years
        INTO :Division, :Location, :Years
        FROM Managers
        WHERE MgrNumber = :MgrNumber
    end-exec.

Rule Number 9: Host variable names must be preceded by a colon (:) when used in SQL code. In this example, the host variable names Division, Location, Years, and MgrNumber are the same as the SQL column names. Other names could have been used, but it is best to follow the same naming convention to keep the data correspondence clear. If you do use other names, remember Rule Number 10.
Host variable names are limited to eighteen characters. All variables that receive values from SQL (INTO ...) or transfer values to SQL (VALUES ...) must be declared in the host variable portion of the working storage section.

Both COBOL2 and SQL/DS will accept mixed case variable names such as LineItemNumber. As such, COBOL programmers may now begin to wean themselves from hyphenated names and use case changes to delineate words within variable names. Dropping hyphens makes it easier to stay within the 18-character limitation and it skirts the hyphen issue of SQL/DS.

Passing Multiple Rows from SQL to COBOL

The method just described would fail if the query did not return exactly one row. To allow for the possibility of more than one row, or no rows, being returned in response to an SQL query, the CURSOR method should be used. Many programmers use only the CURSOR method, since it is completely general and works under all circumstances.

The CURSOR method defines a memory stack area where all data returned by SQL are stored. After the transfer is made, the data may be removed from the stack one row at a time until a fetch returns SqlCode = 100, which signifies that the stack is empty. If the stack is empty to start with, it means that no data were returned by SQL. The cookbook CURSOR code shown below is a COBOL procedure called OldManagers, which extracts selected rows from SQL into the stack and uses the procedure PopData to transfer the values from the stack into host variables.

    OldManagers.
    * define a new data stack called C1
    * and stash all the data into it.
        exec SQL declare C1 cursor for
            SELECT Name, Location, Years
            FROM Managers
            WHERE Age > 65
        end-exec.
    * now lets take a look and see what weve got
        exec SQL open C1 end-exec.
    * if anything in it, then pop em out one at a time
    * until the stack is empty.
        Perform PopData until SqlCode = 100.
    * tell them we dont need this stack anymore.
        exec SQL close C1 end-exec.

    PopData.
    * try to pop out the next row of SQL data
    * into host variables
        exec SQL fetch C1
            into :Name, :Location, :Years
        end-exec.
    * if sqlcode = 100, there was nothing to pop
        If SqlCode not equal 100 then
    * we have some data, so move it into
    * display variables and output
            perform DisplayData.

The procedure OldManagers opens an SQL stack area, names it C1, and places into it the Name, Location, and Years columns of all rows returned by SQL in response to the query.

Rule Number 11: Every time a cursor is declared it must be assigned a unique name. These names cannot be re-used. Like host variable names, these names can be up to 18 characters long.

Rule Number 12: Every cursor operation must consist of the triad Declare xx cursor, Open xx, and Close xx, in that order. Note once again that every command to the preprocessor is enclosed in the delimiters exec sql and end-exec.

After the data is transferred from SQL tables to the buffer (cursor), the cursor is opened. The rows of data are then removed from the cursor one at a time. Each time, the SQL variable SqlCode is checked; a value of 100 indicates that there is no more data. An initial value of 100 means that no rows met the selection criteria.

The data are removed from the stack using the SQL fetch verb. The COBOL procedure PopData is used to perform the fetch operation. The values thus retrieved are then moved to regular COBOL display variables appropriately PICed for the reports being generated.

Neither the variable SqlCode nor the cursor names need be declared in COBOL picture clauses; the preprocessor and SQLCA will take care of these variables.
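The declare, open, fetch-until-empty, close pattern survives almost unchanged in modern database APIs. As an illustration only (not SQL/DS), here is the same loop sketched in Python's sqlite3 with a hypothetical Managers table, where a bound parameter plays the role of the :Age host variable and fetchone() returning None plays the role of SqlCode = 100:

```python
import sqlite3

# Hypothetical Managers table with invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Managers (Name TEXT, Location TEXT, Years INTEGER, Age INTEGER)"
)
conn.executemany("INSERT INTO Managers VALUES (?, ?, ?, ?)", [
    ("Adams", "Tulsa", 30, 67),
    ("Baker", "Houston", 10, 45),
    ("Chen", "Dallas", 40, 70),
])

age_limit = 65                    # plays the role of a host variable
cur = conn.cursor()               # analogous to DECLARE C1 CURSOR
cur.execute(                      # analogous to OPEN C1
    "SELECT Name, Location, Years FROM Managers WHERE Age > ?", (age_limit,)
)

old_managers = []
while True:
    row = cur.fetchone()          # FETCH C1 INTO :Name, :Location, :Years
    if row is None:               # like SqlCode = 100: the stack is empty
        break
    old_managers.append(row)      # "move it into display variables"
cur.close()                       # CLOSE C1

print(old_managers)
```

If the query matches no rows, the loop body never runs, just as an initial SqlCode of 100 means no rows met the selection criteria.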

Updating SQL Tables

Two types of updating are normally necessary: either a new row of data is added to an existing table, or an existing data item (row and column) of a table is changed to a new value. Other table operations, such as dropping tables and creating new tables, are best done in interactive mode or using the XQL batch files discussed earlier.

Before a table can be updated, the new values must be moved into COBOL variables that have been declared in the host variable declare section with a compatible variable type. The two types of updates are described using examples.

Adding a New Row

The SQL INSERT INTO ... VALUES clause is used to add new rows to a table. For example, to add a new row of values to the Managers relation, we might use a COBOL procedure like the one shown below.

    AppendManagers.
    * first move the values into the host variables.
        Move InMgrNumber to MgrNumber.
        Move InMgrName to Name.
        Move InMgrAge to Age.
        Move InMgrYears to Years.
    * now move them from the host variables into the SQL table
        exec SQL INSERT INTO Managers
    * list of columns that will be updated
            (MgrNumber, Name, Age, Years)
        VALUES
    * COBOL host variables containing the new values
            (:MgrNumber, :Name, :Age, :Years)
        end-exec.
    * all done

Normal COBOL continuation rules apply. For example,

    (MgrNumber, Name, Age, Years)

is equivalent to

    (MgrNumber,
     Name,
     Age,
     Years)

The COBOL sentence that begins with exec sql does not end until the end-exec phrase. The code above has been excessively commented to explain each movement of data; the essential append verbs are INSERT INTO and VALUES.

Changing Values in an Existing Row

The SQL update code is composed of the keywords UPDATE ... SET ... WHERE, as shown below. In the example, the salary of engineer number 543 is being changed to $44,000 along with a change in position to Senior Engineer.
Update identifies the table to be changed, Set lists the columns to be updated and the new values they should take, and Where identifies the row(s) to be affected by the change order.

    ChangeSalary.
    * move data from input variables into host variables
        move 44000 to Salary.
        move "Senior" to Position.
        move 543 to EngrNumber.
    * move data from host variables into the table
        exec SQL UPDATE Engineers
            SET Salary = :Salary, Position = :Position
            WHERE EngrNumber = :EngrNumber
        end-exec.

All rows meeting the WHERE condition are updated; in this case, presumably, only one row will be changed. The SET command allows arithmetic operations. For example, all engineers having a position of Senior could be given a 10 percent raise with the set command:

    SET Salary = Salary * :Raise

where a value of 1.1 has been loaded into the Raise variable. In all the examples above, wherever host variables have been used to add new data or update existing data in tables, the actual numbers or literals can be used directly in the SQL code, as in:

    SET Salary = 44000, Position = 'Senior'

However, this is not considered good programming practice.

Invoking the Preprocessor and COBOL Compiler

To set up the CMS environment for development of interactive SQL application programs in COBOL, the following setup commands must be issued at the beginning of the CMS session.

    SETUP SQL
    SETUP COBOL2 GLOBAL

These commands define new logical CMS minidisks in the user's environment to allow direct access to SQL/DS and the COBOL2 compiler. One of the programs on the SQL/DS minidisk is the ISQL interactive access system. Another is a program called SQLPREP, the preprocessor that converts COBSQL programs with embedded SQL code into code that can be compiled with the COBOL2 compiler.
SQLPREP will comment out all of the SQL code (placing an asterisk (*) in column 7) and replace it with the necessary bind instructions for accessing the data in the SQL/DS database. All of these instructions will be tested against the actual data, so that not only syntax errors but also data errors, such as no such table or no such column, will be flagged.

To convert a COBOL program called mycobol cobsql to a compilable COBOL version, the SQLPREP program is invoked as follows:

    SQLPREP COB PP(PREP=mycobol,COB2,QUOTE) SYSIN(mycobol)

The COB and COB2 parameters direct the preprocessor to prepare a program for the COBOL2 compiler. The QUOTE option, normally used, indicates that the single quote (') is used in the program to delimit text strings. (COBOL2 expects double quotes otherwise.)

SQLPREP will create two files on the CMS minidisk A: mycobol COBOL and mycobol LISTPREP. Depending on program and minidisk size, these files may require more disk space than a normal one-cylinder CMS minidisk can supply.

If errors are encountered, SQLPREP will issue a message to the terminal and insert error messages into the COBOL file. Once an error-free COBOL file is successfully produced, it can be compiled with the COBOL2 compiler. In this example, the name of the COBOL file will be mycobol COBOL. The commands necessary to compile the program are:

    GLOBAL MACLIB EUDSMAC
    COBOL2 mycobol (LIB

It is assumed here that the program mycobol uses DMS panels. The CMS GLOBAL command specifies that the macro library called eudsmac (i.e., a CMS file called eudsmac maclib) is to be searched for missing macros and copy files during compilation. This is necessary because we need some code from this file to be included prior to compilation in order to access DMS panels.

The compiler will read the COBOL file and produce a TEXT file containing the object code. In this case, the object file will be called mycobol text.
If storage space on the minidisk is scarce, the LISTPREP file produced by SQLPREP can be deleted (erase * listprep *) to make room.

If there are compilation errors, they will be listed on the terminal as well as in the LISTING file. For the sake of source code integrity, it is best to make all corrections to the COBSQL file and re-run SQLPREP; that way, the COBSQL file will contain all changes. After successful compilation, the LISTING and COBOL files can be deleted. The TEXT file can now be link-loaded to produce a MODULE, or executable, file. The link procedure requires these CMS commands in the sequence shown.

    GLOBAL TXTLIB ARIRVSTC ARIPADR VSC2LTXT EUDSTXT
    LOAD mycobol

The CMS command GLOBAL is used to assign four TXTLIB files to the library of code to be link-loaded along with the TEXT file mycobol. In addition to the standard COBOL2 library vsc2ltxt, we are including the SQL code libraries called arirvstc and aripadr, and the DMS text library called eudstxt. These files contain compiled subroutines that mycobol needs in order to access SQL tables and DMS panels.

A successful LOAD links the object code in memory. To convert the loaded code to an executable module (by adding entry point addresses from CMS), the GENMOD command is used:

    GENMOD mycobol

This finally produces our executable program, called mycobol module. The program is executed by entering its name; in our case, the new CMS command mycobol which we created will begin execution.

Normally the executable, or MODULE, file is produced only after the development cycle is complete. The CMS command START is used to make test runs during development.

    LOAD mycobol (START

or, if already loaded:

    START mycobol

If CMS files are to be used for input or output, then the appropriate FILEDEF command must be issued prior to execution.
The FILEDEF command makes the logical link between CMS files and COBOL files. For example, the command

    FILEDEF mydata DISK mycrud data

links a CMS minidisk file called mycrud data to the ddname mydata, which a COBOL program may ASSIGN to its internal file name TRANSACTION-FILE. The corresponding COBOL assignment is:

    SELECT transaction-file ASSIGN TO mydata.

The LOOK Utility

The XQL utility described above often sends very large print files to the virtual reader. These cannot normally be RECEIVEd into CMS files due to disk size limitations. More problematic, the reader's PEEK facility allows the user to view only the first 200 lines of a reader file. What the adroit SQL programmer would like is the ability to view these print files in their entirety and, if appropriate, re-route them to a network printer.

The LOOK EXEC allows exactly these options. To use LOOK, first produce a normal PEEKable reader list with the CMS command RL. Then place the cursor next to the file you wish to look at and enter the command LOOK. The file will be displayed, as in PEEK, but without any line limitations. When done looking at the file, exit normally with PF3. At that point, LOOK will give you the option to re-route the file to a printer (the PRINT command) or to simply get rid of it (the PURGE command). A null command will return the user to the RL reader list screen.

If a PRINT command is issued, then a valid network printer must be identified by name, such as remote5.

The programs PANELIT EXEC, LOOK EXEC, and XQL EXEC were written by Jamal Munshi and are made freely available to anyone upon request.


Wine Business Data Models

Entity-Relationship modelling is a simple but powerful diagrammatic technique that may be used to describe the data environment of a wine business. Business processes, policies, procedures, and even regulatory requirements may be captured in these logical models, as long as one is able to identify all the data entities and their logical relationships. Entity-Relationship modelling is useful to business managers in many different ways, as enumerated below:

  • Insight
      First, these diagrams may be used to gain insight into the decision matrix of the firm at each level and the information needs of key managers in all phases of wine production from the vineyard to the marketing department.
  • Basis of Design
      Second, the diagrams also serve as a “basis of design” for wine business information systems. Information systems based on these models are more likely to support the information needs of the firm. The match between information needs and information system design increases the chances that the information system will support decision-making by providing the kind of reports needed by managers and required by regulators.
  • Basis of Comparison
      Third, the model serves as a basis for requesting and comparing vendor designs when the firm outsources the design of the winery information system. All vendors receive a consistent set of specifications, and their bids may be compared on the same basis.
  • Accounting and Control
      The model may be implemented as an accounting system for costing and auditing, and ultimately to generate the information set for constructing BATF and financial reports.



To demonstrate the method and its application, we present a bulk wine model for a generic winery, put together as a synthesis of the wineries I visited in the North Coast appellation of California during the data-gathering phase of this research. During this phase of the project I was able to interview approximately fifty key players in the industry including vineyard managers, winemakers, winery MIS personnel, winery marketing and finance personnel, wine industry accountants, Federal regulators, and educators. The generic winery and its information environment were designed with their assistance. The wine business may be thought of as existing in three distinct but inter-related sub-systems that may be described as the “fruit” phase, the “bulk wine” phase, and the “case goods” phase. These sub-systems may be described as follows:



  • Fruit System
      The fruit system deals with the grapes prior to crush. Agriculture, harvest, acquisition, sale, pricing, grape contracts, and forecasts of demand for each varietal are the key areas of managerial decision making in this phase. Linkages with other phases arise from demand forecasts, winery tankage and barrelage capacity, harvest and crush scheduling, long term contracts, vineyard designated labelling, and product pricing and positioning. The primary decision makers are vineyard managers, winemakers, contract managers, and marketing managers.
  • Bulk Wine System
      The bulk wine system deals with the liquid or slurry phase after crush and before bottling and paying federal alcohol taxes. Once alcohol is produced during fermentation the product becomes a controlled and regulated commodity. Liquid product storage, transfers, losses, quality, chemical state, purchases, sales, and inventory accounting are some of the areas of managerial concern. The winery may simultaneously buy and sell bulk wine in addition to the better understood business of case goods sales. Most of the production activity of the winery is concerned with the bulk wine phase, which ends with the production of a liquid product blend. It is the “blend” that is bottled as product, and once bottled the product changes from “blends” to “case goods”.
  • Case Goods System
      Bottle-aging is the only concern of the winemaker in the case-goods phase. Once the winemaker “releases” a product, a new decision matrix is activated. Regulation, taxes, storage, transfers, sales, marketing, exports, customer relations, distributor accounts, promotions, and financial accounting are the important managerial activities in the case goods phase of the wine business. Sales tracking produces demand data and new forecasts that provide backward linkages to the fruit phase for contract and varietal decisions that must be based on 5-year and 10-year sales forecasts.



To construct the data model we further divide each of the three phases of the wine business into “elemental” parts. Each of the elemental parts is then “constructed” by identifying the entities and their relationships. The data model of each phase is produced by synthesizing the models of the elemental parts. Finally, the model of the wine business is completed by identifying the relationships that form the linkages between the three phases. The modelling process is presented as a series of PowerPoint slides. The slides are presented below in sequence.
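As a rough illustration of how entities and relationships from such a model might be carried into an implementation, the sketch below encodes a few bulk-wine-phase entities as Python dataclasses. The entity names, attributes, and relationships here are my own illustrative assumptions, not the entities of the original model.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Lot:
    """A bulk wine lot tracked from crush to blending."""
    lot_id: str
    varietal: str
    gallons: float

@dataclass
class Vessel:
    """A tank or barrel; holds at most one lot at a time (a 1:0..1 relationship)."""
    vessel_id: str
    capacity_gal: float
    contents: Optional[Lot] = None

@dataclass
class Blend:
    """A blend draws on one or more lots (a many-to-many relationship)."""
    blend_id: str
    components: List[Tuple[Lot, float]] = field(default_factory=list)  # (lot, gallons drawn)

    def total_gallons(self) -> float:
        return sum(drawn for _, drawn in self.components)

# Usage: two lots combined into a blend that will be bottled as case goods.
chard = Lot("L-01", "Chardonnay", 1200.0)
viognier = Lot("L-02", "Viognier", 300.0)
blend = Blend("B-01", [(chard, 1000.0), (viognier, 250.0)])
print(blend.total_gallons())  # 1250.0
```

The same entity definitions could serve as the basis of design for database tables, with each dataclass becoming a table and each relationship a foreign key or junction table.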

With thanks to the experts in the industry who took time out to help me with this project and to some very bright students during the semester who contributed by actively participating in the project.








There is no satisfactory explanation for glaciation cycles. For at least two million years the size of the northern polar ice cap has followed a cyclical pattern, growing at times to cover most of the northern continents in the glaciated state and then receding to approximately where it is today during the “interglacial” periods. The traditional theory of this cycle is the one proposed by Milutin Milankovitch. The theory attempts to link the earth’s precession, axial tilt (obliquity), and orbital eccentricity to glaciation cycles.

The period of the earth’s precession is 26,000 years, so we would expect ice formation to peak, and warm interglacials to recur, every 26,000 years or so. But this is not the case. The evidence suggests that icy periods last from 20,000 to 100,000 years and interglacials from 7,000 to 20,000 years, durations that are not integer multiples of the precession period. The non-periodic nature of the phenomenon has not been adequately addressed in the Milankovitch theory of glaciation cycles.

Another mystery of the glaciation cycle is that within any icy period there are violent ice melt cycles. During the meltdown phase of these cycles large chunks of ice slide out to sea and the continental ice sheets get thinner. But within a few years they begin to thicken again. The commonly held explanation for this behavior is due to Hartmut Heinrich. Heinrich postulates that as the ice gets thicker it acts as insulation and allows internal heat from the earth to melt the bottom of the ice and cause glacial flows. The problem with the Heinrich theory is that evidence suggests that glacial flows are not regional but global, and at such a large scale that synchronization of localized hot spots is highly improbable.

Theories such as these presume a cause and effect mechanism for these ice cycles in which, for any given climatic condition, there is a corresponding stable steady state ice level on the northern continents; and in which any change from the steady state level can only be caused by a significant event with sufficient energy to cause the change. But this is not always the case in nature. Many natural systems exhibit non-linear dynamics and are metastable. In these systems many different “equilibrium” states are possible and even the slightest trigger (the proverbial butterfly) can bring about substantial changes in the equilibrium state.

A graphical model of metastability is shown below. The ball in the upper frame is in stable equilibrium. It will require a great deal of energy to shift the ball to another equilibrium state and if such a shift is observed a theory like that of Heinrich or Milankovitch might be required. The ball in the lower frame is in metastable equilibrium. Although it appears to be in steady state, many other steady state conditions are equally likely and minute random events can make wholesale changes to the position of the ball.


We propose here that ice formation in the northern continents is such a system. The time series of ice fractions is in chaotic equilibrium at wildly different levels of ice. The non-linearity in the system is imposed by the annual summer/winter heat cycles and by the reflective nature of ice. Such a non-linear model may be used to explain glaciation, interglacials, Heinrich events, and non-periodicity of these events. The waxing and waning of the ice fraction is nonlinear because ice is melted by heat that the planet has absorbed from sunlight; and the heat absorbed by the planet is a function of the ice fraction because ice reflects sunlight. This kind of inter-relationship is known to create chaotic behavior as shown in a video representation of a mathematical model of such a system at the end of this post.

The chaos model shown below demonstrates the surprising impact of this non-linear behavior. In the model, a sine function is used to generate the annual incident solar radiation on the northern hemisphere of the tilted earth as it rotates on its axis and revolves around the sun. We begin the simulation with an assumed size of the polar cap which has a tendency to grow unless melted by solar radiation. A small perturbation (1%) is added to the solar radiation function to account for random effects.

We find that large swings in the ice fraction are possible under these conditions simply due to chaotic behavior. The glaciation states (high ice fractions) form naturally and tend to persist. Just as naturally, the ice recedes into brief interglacial periods. What’s more surprising is the existence of the Heinrich events within these epochs. Both the glaciation cycle and the Heinrich events are produced as a result of nonlinearity and chaos in the heating function, without imposing an external causal force in a purely cause-and-effect relationship.

The more ice you have the less energy gets absorbed and even more ice can be formed. Conversely, the more ice you melt, the more energy you can absorb and more ice you can melt. Chaos derives from the behavior of this dynamic because it can be set off in either direction by minute random effects. We propose that it is this non-linearity that is responsible for periods of otherwise inexplicable growth in ice formation and periods of melting and shrinking of the ice fraction.
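The feedback loop described above can be sketched numerically. The toy model below is my own illustration, not the author’s actual simulation: the growth rate, melt coefficient, albedo factor, and the 1% perturbation magnitude are all assumed values chosen only to show the structure of the feedback.

```python
import random

random.seed(42)

def simulate(years=5000, ice0=0.3):
    """Toy ice-albedo feedback: ice reflects sunlight, so the heat
    available to melt ice depends on the ice fraction itself."""
    ice = ice0              # fraction of the northern continents under ice
    history = []
    for _ in range(years):
        # Annual insolation with a small (1%) random perturbation.
        insolation = 1.0 + 0.01 * random.uniform(-1.0, 1.0)
        # Heat absorbed falls as the ice fraction rises (reflectivity).
        absorbed = insolation * (1.0 - 0.6 * ice)
        # Ice tends to grow; absorbed heat melts it. Because `absorbed`
        # itself depends on `ice`, the melt term is nonlinear in `ice`.
        ice += 0.05 * (1.0 - ice) - 0.055 * absorbed * ice
        ice = min(1.0, max(0.0, ice))
        history.append(ice)
    return history

history = simulate()
print(len(history), min(history), max(history))
```

With these particular assumed coefficients the trajectory need not be chaotic; the point of the sketch is only the two-way coupling between absorbed heat and ice fraction that the essay identifies as the source of the nonlinearity.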

A YouTube video of chaotic behavior due to the so-called Hurst persistence in time series data is shown below.







Jamal Munshi, Sonoma State University, December 1996, working paper
For presentation to the 48th annual meeting of the ASEV, San Diego, California, July, 1997.

A method is described for topping wine barrels continuously and automatically using gravity feed. The method is appropriate for small or large scale barrel arrays. The primary benefit of the process is savings in the labor and workers’ compensation insurance costs of periodic manual topping operations. The automatic method also reduces the contact with air that the topping operation imposes on the barrel aging process and that is otherwise not within the winemaker’s control. The process therefore offers increased quality control and the possibility of producing higher quality wines. An additional benefit is that barrel rooms that use this system are expected to be safer because they are free of fermentation CO2. All fermentation CO2 is piped and vented outside the barrel room.

On the downside, the use of such a system would require additional capital investment and would severely restrict the movement of barrels. Barrel room management systems that require movement of barrel pallets to staging areas must be substantially re-designed. The proposed topping and venting system may not be compatible with wine making styles that require frequent racking and cleaning operations.

The method requires a stainless steel tank, a header pipe assembly, a pressurized CO2 source, and pressure regulators. Sufficient topping fluid is stored in a stainless steel tank that is designed to hold a positive pressure and sized to provide airspace at full liquid capacity. A line from the bottom of the tank is connected to a header that feeds into the barrel array. The airspace is vented to the atmosphere through a back pressure regulator valve V1 that is set to maintain a pressure P1 with a margin of D1. If the tank pressure rises above P1+D1 this valve will open and if it falls below P1-D1 it will be shut. Upstream of this valve a pressurized CO2 source is connected to the tank with a forward pressure regulator valve V2 set at pressure P2 with a margin of D2. If the tank pressure falls below P2-D2 this valve will open and if it rises above P2+D2 this valve will be shut.

For the system to work it is necessary that P1-D1 be greater than P2+D2 by an amount sufficient to prevent the CO2 bottle from bleeding directly to the atmosphere; and that P2-D2 exceed atmospheric pressure by more than the pressure drop in the header (adjusted for height differences) so that topping flow can occur even when all fermentation has ceased and the barrels are no longer generating CO2. The actual values of P1 and P2 will depend on the anticipated pressure drops, elevation differences, and safety margins. The use of this apparatus will leave no airspace in the barrels themselves. In fact the barrels will be subjected to a positive pressure of P1 plus the pressure drops through the header assembly. This means that the bung must be affixed to the barrel using a mechanism that will sustain this overpressure. Various bung designs including a screw design could be used.
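The two feasibility constraints on the regulator settings can be expressed as a simple check. The sketch below is my own encoding of those constraints; the numeric settings in the usage example are hypothetical, not recommendations from the paper.

```python
ATM = 0.0  # gauge pressure of the atmosphere (all pressures in psig)

def feasible(p1, d1, p2, d2, header_drop):
    """Return True if the regulator settings satisfy both constraints.

    1. P1 - D1 > P2 + D2: the vent valve stays shut while the CO2 supply
       valve can be open, so bottled CO2 never bleeds to the atmosphere.
    2. P2 - D2 > ATM + header_drop: even at the lowest regulated tank
       pressure, topping fluid can still flow through the header.
    """
    no_bleed = (p1 - d1) > (p2 + d2)
    can_top = (p2 - d2) > (ATM + header_drop)
    return no_bleed and can_top

# Hypothetical settings: vent at 5 +/- 0.5 psig, supply at 3 +/- 0.5 psig,
# 1 psi of header losses (adjusted for elevation).
print(feasible(5.0, 0.5, 3.0, 0.5, 1.0))  # True
print(feasible(4.0, 0.5, 3.5, 0.5, 1.0))  # False: vent could bleed CO2
```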

The header assembly is designed to carry fluids in both directions and to operate under two-phase flow conditions. During fermentation activity a wine-CO2 mixture will flow from the barrels to the topping tank to be vented. These peak flows are the sizing criteria for the header assembly. Later in the aging cycle, as fermentation slows or stops, transpiration losses will flow from the topping tank back to the barrels. Wine in barrels connected to a single header will become intermingled. Therefore, barrel lots must be isolated with each lot assigned to its own venting and topping system.

The diagram below is a schematic of the essential process.




GMAT Test Strategy Outline
by Jamal Munshi, Fall 1988

Ignore the numbers. They refer to questions in a sample exam.



1.1 2-3 sections, 2 live
1.2 20 questions/30 minutes/progressive, 8,7,5
1.3 math problem requiring solution/pick an answer from 5
1.4 arithmetic/algebra/plane geometry/logic diagrams
1.5 no calculus/trigonometry/statistics/economics/finance


2.1 choose the BEST answer/question interpretation
2.2 figures drawn accurately EXCEPT where indicated
2.3 figures lie on a plane unless indicated differently


3.1 do not read instructions on the clock
3.2 Read the question FIRST, then the story

3.3 Careful reading
3.3.1 define the precise relationship being sought
3.3.2 careful attention to underlined, bold, italicized words
3.3.3 watch out for thought reversers (NOT, EXCEPT)
3.3.4 exploit inherent weaknesses in the multiple-choice format
3.3.5 watch out for units

3.4 Exploit the problem structure
3.4.1 to interpret the problem
3.4.2 to avoid calculation
3.4.3 to avoid algebra
3.4.4 to approximate and eliminate

3.5 Overall problem solving strategy
3.5.1 set up to cancel
3.5.2 quadratic equations will factor
3.5.4 look at answers first to determine precision sought
3.5.5 look at answers to determine form sought
3.5.6 working backwards thru problem statement
3.5.7 substituting answer choices

3.6 Test is designed so the bulldozer approach will fail
3.6.1 Find the underlying simplifying structure
3.6.2 If overwhelmed by numbers, look for cancellations and approximations
3.6.3 The clock is the enemy/beat the clock


4.1 Arithmetic manipulations
4.1.1 the calculator question [11,12,18]
4.1.2 set up to cancel/unit cancellations [24,35]
4.1.3 factoring/polynomials [39]
4.1.4 cumulative operations/bookkeeping [22,23,25,26]
4.1.5 approximation problems [13,14]
4.1.6 percentages [21]

4.2 Peculiar problem types that recur
4.2.1 2×2 contingency tables [29,30,31]
4.2.2 mixture concentration problems [8,32,54]
4.2.3 three part ratios [33,34]
4.2.4 percent change [36,37]
4.2.5 Venn diagram questions [57,58]
4.2.6 Determine f(x) from tabular data [p94,#8]
4.2.7 formations and possibilities [16,40,55,56]
4.2.8 discontinuous processes [50]

4.3 Algebra problem types
4.3.1 properties of numbers [5,15,38]
4.3.2 manipulations [41]
4.3.3 setting up [42,43,44,45,46]
4.3.4 phony operations [47]

4.4 Word problems
4.4.1 weighted averages [48]
4.4.2 rate problems [49]
4.4.3 interest problems [6,52]
4.4.4 profit and loss problems [9,10,53]
4.4.5 watch out for units [20]
4.4.6 working backwards [28]
4.4.7 combined rates [51]

4.5 Geometry problems
4.5.1 Right triangle ratios [59,65]
4.5.2 Circles/cylinders/spheres [60,61]
4.5.3 Inscribed figures [62]
4.5.4 Parallel lines/parallelograms [63]
4.5.5 Angles [64]


5.1 Work one section through at a time
5.2 Set an alarm clock for 30 minutes
5.3 List equations and principles on missed questions
5.4 Write down and bring unresolved questions to class
5.5 Timing drills are the key; they make the difference

6.0 An alternate mixture problem algorithm using an example.

If we mix 3 gallons of 12% Chardonnay with 1 gallon of 24% sake, what will be the mixture concentration?

Draw a diagram like this and place on it the actual data from the problem representing the unknowns with their symbols:

        d1              d2
3,12% ----------|---------- 1,24%
             cm,vm

d1 = the distance between component 1 and the mixture in terms of composition, i.e., the difference in concentration between component 1 and the mixture.
d2 = the distance between component 2 and the mixture.

The method algorithm is:

v1/v2 = d2/d1
v1/vm = d2/(d1+d2)

that is, the volumes are inversely related to the composition differences.

Here, the concentration difference between the two components (d1+d2) is 12%. This is to be split up in the ratio d2:d1 = v1:v2 = 3:1. The answer is 9:3, i.e., d2 is 9 and d1 is 3, so cm must be 12+3 or 15%. ETS problems always work out into nice whole numbers like this.

This method is much faster and cleaner than the one proposed in the book. Other possible questions we could ask about this problem:

6.1 How much v2 would we need to obtain a 15% mixture? d1 = 15-12 = 3 and d2 = 24-15 = 9, so v2/v1 = d1/d2 = 3/9 = 1/3; we need 1/3 of 3 gallons, or 1 gallon.

6.2 In what ratio must 1 and 2 be mixed to obtain a 15% mixture? d1 = 15-12 = 3 and d2 = 24-15 = 9, so v2/v1 = d1/d2 = 3/9 = 1/3.

6.3 If we are to mix 3 gallons of component 1 with 1 gallon of another component to obtain a 15% mixture, what does c2 have to be? d1 = 15-12 = 3 and v2/v1 = 1/3, so d1/d2 has to be 1/3; d1 is 3, so d2 has to be 9, and c2 = 15+9 = 24%.
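The distance (alligation) method above can be written as a short function. This sketch is my own; the function name and argument order are not from the outline.

```python
def mixture_concentration(v1, c1, v2, c2):
    """Concentration of a mixture of v1 units at c1% with v2 units at c2%,
    by the inverse-lever rule: the mixture sits d1 from c1 and d2 from c2,
    with v1/v2 = d2/d1 and v1/vm = d2/(d1+d2)."""
    spread = c2 - c1                 # d1 + d2, the full concentration gap
    d1 = spread * v2 / (v1 + v2)     # v2/vm = d1/(d1+d2)  =>  d1 = spread*v2/vm
    return c1 + d1

# 3 gallons of 12% Chardonnay with 1 gallon of 24% sake -> 15%
print(mixture_concentration(3, 12, 1, 24))  # 15.0
```

Note that the function is symmetric: swapping the two components gives the same answer, as the lever rule requires.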


1.0 TYPE OF QUESTIONS (most,least,except)

1.1 Attack the arg
"Which would most weaken the arg" 2,27,29,30,35,36,47
"Which would undermine" 21
"What is the best response to the arg" 11,22,24,32
"Which of the following args would be weakened by the above"
1.1.1 Methods of weakening args: counter example, weak assumption, logical fallacy (see fallacies), alternate causation
"Which of the following have attacked an assumption" 19,20
"The arg could be criticized on which ground" 46

1.2 Support the arg
"Which of the following would most strengthen the arg?" 25
"Which of the following would be strengthened by the arg"
"The arg is valid only if" 33

1.3 Identify arg type
"The author makes his point primarily by.." 1,34
"Which most closely parallels the arg" 43
"Which is logically consistent"
"Which is logically inconsistent" 7
"Which arg is logically similar to the one above" 4,12,31
"Which best describes the reasoning of the arg" 27,45
"The arg can be characterized as" 48

1.4 Making deductions
"Which of the following can be deduced from the arg" 49
"Which of the following conclusions can be drawn"
"If the arg is true which of the following cannot be true" 8
"If the arg is true which of the following must also be true" 50
"The main purpose of the arg is to.." 10,15
"What is the point that the author is trying to make" 14
"The statement above can be deduced from which of these" 6
"Which of the following would logically complete.." 13,16,18
"Which of the following would logically contradict" 17
"Which would be most reliable" 37

1.5 Implicit and explicit assumptions
"The arg makes which of the following assumptions" 9
"The arg makes the presupposition that"
"The arg makes the unsupported assumption that" 23
"The arg depends on the assumption that" 26

1.6 Fallacies [38-45]
"The arg above suffers from which weakness"
1.6.1 circular reasoning (petitio principii) 44,43,41
1.6.2 ad hominem
1.6.3 false authority 37
1.6.4 shifts burden of proof (ignorantium) 40
1.6.5 weak/unsupported assumption
1.6.6 confuses correlation with causation
1.6.7 non sequitur 42c
1.6.8 loaded question (stop beating wife) 44d
1.6.9 ambiguity in definition of terms 45,46,p238#15,47
1.6.a hasty generalization 26
1.6.b false/alternate cause 28,29,30
1.6.c error of composition/division 31
1.6.d false dilemma 32,33
1.6.e false analogy 38,34,35
1.6.f appeal to emotion 38
1.6.g appeal to popular opinion 39

1.7 Analyzing verbal exchanges
"A has misinterpreted B's remark to mean" 3

2.0 Syllogisms

2.1 Valid forms (alternate wording)

all A is B        all A is not-B      all not-A is B
x is A            x is A              x is not-A
x is B            x is not-B          x is B

all A is B        all A is not-B      all not-A is not-B
x is not-B        x is B              x is not-A
x is not-A        x is not-A          x is not-B

Either A or B or both      all not-A is not-B
not-A                      x is B
therefore B                x is A

Either A or B but not both (exclusive or): not-A->B, A->not-B, B->not-A, not-B->A

if A then B                  if and only if A then B
A->B, not-B->not-A           A->B, B->A, not-A->not-B, not-B->not-A

2.2 Invalid forms

all A is B        all A is B          all A is not-B
x is B            x is not-A          x is not-A
x is A            x is not-B          x is B

all A is not-B    all not-A is not-B  all not-A is not-B
x is not-B        x is not-B          x is A
x is A            x is not-A          x is B

all A is B        A or B (or both)
x is C            A->not-B
x is (A,B,D)
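Simple forms like these can be machine-checked. The sketch below is my own brute-force validity checker for one-variable propositional renderings of such syllogisms (it is not part of the original outline): "all A is B" is modelled as the implication A -> B for the individual x, and a form is valid if no truth assignment makes all premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material implication: "all A is B" applied to x.
    return (not p) or q

def valid(premises, conclusion):
    """premises and conclusion are functions of (a, b) = (x is A, x is B).
    Returns True if no assignment is a counterexample."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False
    return True

# Valid form: all A is B; x is not-B; therefore x is not-A (modus tollens).
print(valid([lambda a, b: implies(a, b), lambda a, b: not b],
            lambda a, b: not a))          # True

# Invalid form: all A is B; x is B; therefore x is A (affirming the consequent).
print(valid([lambda a, b: implies(a, b), lambda a, b: b],
            lambda a, b: a))              # False
```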


3.1 Read the question first, THEN the story
3.2 Importance of careful reading/dissection
3.3 "Some" means at least one
3.4 Eliminate (x out) inconsistent/implausible answer choices
3.5 Pay attention to the tenor of the arg 13: strongly for, strongly against, dispassionate, humorous (15), sarcastic (10)
3.6 Watch for qualifiers 17
3.7 The Roman Numeral strategy: eliminate all answer choices containing a Roman numeral you have ruled out
3.8 Don't go beyond the scope of the arg as presented
3.9 Do not ascribe a value judgement to the author unless he clearly makes one 18
3.a Watch for a suppressed premise 19,20
3.b In a verbal exchange, put the words of the answer choice in the speaker's mouth 3
3.c Define the perspective and bias of the author p245 #11, 11
3.d Watch for unrelated additional variables 23
3.e Evaluate the author's qualifications
3.f Assign variables and write out the syllogisms 4,5,6
3.g When paralleling weak args, don't try to fix them


1.0 Extreme time pressure section. Key is good time management.

1.1 Three 500-word essays, each with 7-9 questions, for a total of 25 questions.
1.2 Timing per essay: a total of 10 minutes per essay.

1.2.1 Scan the questions for buzzwords: 0.5 minutes. Read the essay: 3 minutes. Answer the questions: 6 minutes.
1.2.2 If less than 5 minutes remain after 2 essays, don't attempt the third. Use this time to go over the first 2 essays.

2.0 Attack strategy

2.1 First pick which essay to do last (you may not get to it)

2.1.1 The number of questions varies from 7 to 9, so pick the one with the fewest questions for last. --or--
2.1.2 Boredom is a big factor in this section. Read the first sentence of each essay and save the most boring for last.

2.2 Before reading the passage, scan the questions for buzzwords, i.e., words and phrases that suggest a specific subject matter, and circle them with your pencil. Do this quickly. Do not read the answer choices. Don't make a great effort to remember these.

2.3 Read the passage. Do not rush, but don't dawdle or re-read sentences and paragraphs. Push on.

2.3.1 Remember that these essays are very boring. Boredom is your enemy. It will cause your mind to wander. Be alert to this and push on through the passage.
2.3.2 As you read, circle buzzwords you recognize; otherwise, underline, take notes, and mark up the passage with your pencil. This serves two functions: it keeps you awake and it builds a road map so that you can find things more easily when doing look-ups to answer the questions.
2.3.3 While reading the passage look for title and tone. What is the passage about? What is the tenor of the passage? What is the attitude of the author toward the subject?

2.4 Answer the questions.

2.4.1 Read the questions carefully, underlining key words as necessary.
2.4.2 X out the implausible answer choices. (Usually 1 or 2 answer choices are completely out of line.)
2.4.3 With the remaining answer choices, play an elimination tournament: compare A to B, eliminate one, and compare the winner to C, etc. Throwing out wrong answers works better than trying to find the right answer, and the tournament approach minimizes re-reading of answer choices. DO NOT re-read answer choices. No time.
2.4.4 Abandon the question if there are no clues after 1 minute. If two or more answer choices are crossed out, guess; but you must go on to the next question.

3.0 Passage and question types

3.1 Passage types. All are poor-quality D papers: out of context, condensed, boring, dry, and difficult reading (on purpose). You are not expected to be familiar with the subject matter; if you are, you must be very careful to take the passage literally.

3.1.1 Literature and the arts: interpretation of some event or a piece of work.
3.1.2 Science: description of recent scientific findings.
3.1.3 History: revisionist thesis.
3.1.4 Economics and politics: evaluate various ways of solving a problem.

3.2 Questions we already know

3.2.1 Thesis: what the passage is about. "The author is primarily concerned with..", "The best title for this passage would be..", "The main idea of this passage is.." Pg 112 Q 1,2; Pg 116 Q 1-3
3.2.2 Tenor: the tone of the passage. "What is the tone of the passage?", "The author's attitude toward the subject can best be described as.." Pg 113 Q 15; Pg 118 Q 19-20
3.2.3 Context: what came before or after this passage. "It is most likely that immediately preceding/following this passage the author discussed..." Pg 113 Q 14; Pg 118 Q 18
3.2.4 Logical structure: the type of argument being made, and why certain statements are made (their function in the arg). "The author mentions xxxx to...", "The author develops his passage primarily by.." Pg 113 Q 11,12; Pg 117 Q 14-16

3.3 Questions that vary with the passage

3.3.1 Explicit: what is clearly stated in the passage. "The author states that...", "According to the passage..", "According to the author.." Pg 112 Q 3-7; Pg 116 Q 4-9. These are the most important questions because there are more of them than any other type and they usually require a look-up. Don't hesitate to refer to the passage.

3.3.2 Implicit: like explicit, but not stated as such in the passage; must be clearly inferable. "Which conclusion can be inferred from the passage?" Pg 113 Q 8-10; Pg 117 Q 10-13, 17
3.3.3 The algebra question: whenever some direct or inverse proportionality is stated in the passage, ETS likes to test your understanding of the exact relationship by restating it in several different ways. Pg 113 Q 13

4.0 Why wrong answer choices are wrong

4.1 Too broad in scope, too narrow in scope (thesis, context)
4.2 Wrong relationship between variables (algebra)
4.3 Too strong, too weak (tone, attitude)
4.4 Extraneous; not consistent with the passage; or consistent with the passage but not responsive to the question. These latter types may trip you up.
4.5 Partially consistent, but partially inconsistent or extraneous.

5.0 Specific attack strategies

5.1 More than one seemingly correct answer choice: contrast and eliminate. (explicit, implicit, context)
5.2 Test initial words and eliminate answer choices on this basis first. (thesis, tone, logic)
5.3 Place a +, 0, or - next to each tone-question answer choice, then eliminate answer choices that are not consistent with your appraisal of the tone as plus or minus.
5.4 Be careful of thought reversers (not, least, except) and all words in bold face or italics.


1.0 Layout of this section

1.1 25 questions/30 minutes/approx 70 seconds per question. Tests elementary grammar with very limited scope as explained below; a review of grammar is not needed.
1.2 Each question consists of a sentence (the stem), a portion of which is underlined. The underlined portion may or may not have grammatical errors.
1.3 There are 5 answer choices proposed as replacements for the underlined portion. The BEST replacement is the correct answer. The correct answer may not be the only grammatically correct answer choice.

2.0 Points of strategy

2.1 DO NOT read answer choice A. Answer choice A is a repetition of the underlined portion of the stem and means that the sentence is best stated as written. A good way to avoid reading A is to start with E.
2.2 Watch out for more than one error in the stem. Some answer choices will fix one error but not the other. It's a trap.
2.3 If a suspected error condition occurs in all choices, it's not an error. One of the 5 choices, a-e, IS the right answer. Given!!
2.4 As you read answer choices, use an X to eliminate and a check mark to keep for now.
2.5 Don't forget that A is a possible answer, i.e., there may be nothing wrong with the sentence. Don't be afraid of As.
2.6 Eliminate obviously bad answer choices and then compare and eliminate to distill out the answer, or guess if there is no clue.
2.7 Conciseness always wins in ETS-land. If two answers look right, pick the shorter one.
2.8 In a vague pronoun reference, put the noun back in.
2.9 In parallel verb clauses where a verb is omitted (understood), substitute the verb back in to test. (e.g. Pg 5 Q17, "should keep" and "has not" -> test with "has not keep" (no good) and "does not keep" (good), so it must be "does not" instead of "has not")

3.0 Rules of grammar tested

3.1 Misplaced modifier. Pg3 Q1, Pg5 Q23, Pg6 Q24, Q25, Q29
3.2 Number agreement (singular-singular, plural-plural)
3.2.1 noun-pronoun Pg4 Q11-13, Pg6 Q25
3.2.2 subject-verb Pg4 Q7, Q8, Q9
3.2.3 pronoun-pronoun
3.2.4 parallel verb clauses
3.3 Pronoun-pronoun person agreement (first person-first person, second person-second person, third person-third person, one-one) Pg4 Q14
3.4 Parallel verb clauses
3.4.1 tense agreement Pg4 Q15-17
3.4.2 type agreement (infinitive-infinitive, gerund-gerund) Pg5 Q20
3.4.3 active-active/passive-passive Pg6 Q26
3.4.4 prefer active over passive
3.4.5 omitted verb must be the same or can't omit Pg5 Q18
3.5 Faulty comparison (apples and oranges) Pg5 Q21, Q22
3.6 Awkward construction
3.7 Conciseness over wordiness Pg3 Q3, Pg6 Q30
3.8 Change in meaning
3.8.1 Special case: the "algebra" question. The stem states a direct or inverse proportionality; wrong answer choices are wrong because they confuse the relationship. Pg3 Q4
3.9 Specific word usage
3.9.1 Usage of "of", "for", and "to" Pg3 Q2
3.9.2 Usage of "who", "which", "that"
3.9.3 "Why" can't start a noun clause
3.9.4 "being", "in that" usually indicate a wrong choice
3.9.5 "Respectively" Pg6 Q28
3.9.6 "Among" three or more, "between" two only Pg6 Q26
3.9.7 Adverbs must be near the verbs they modify. Special case that ETS likes: "still", "only" Pg4 Q7
3.10 Confusing the subject with the object of an intervening clause Pg4 Q8, Q9
3.11 Vague pronoun reference Pg4 Q10, Pg5 Q19
3.12 Redundancy (e.g. "reason is because", "repeat over again") Pg6 Q27
3.13 The closer element controls in a disjunctive subject, i.e., the verb takes the number and person of the closer noun or pronoun. Pg3 Q6, Pg5 Q9
3.14 A conjunctive subject is plural Pg3 Q5

4.0 Logic Diagram

Read the stem. There are three possibilities: you found the error(s), it looks right to you (choice A), or you are not sure.

4.1 Found error(s)
4.1.1 From EDCB, eliminate those that contain the error
4.1.2 If more than one remains, discard CM (changed meaning)
4.1.3 Still more than one? Pick the most concise or guess
4.2 Looks right (looks like an A)
4.2.1 From EDCB discard CM
4.2.2 In the remaining choices, look for errors and eliminate
4.2.3 If one of EDCB is still left, compare it with A and pick the more concise
4.3 Not sure (elimination tournament)
4.3.1 Compare B to A and eliminate one of them
4.3.2 The winner plays C
4.3.3 The winner of 4.3.2 plays D
4.3.4 The winner of 4.3.3 plays E


1.0 Layout of the section

1.1 20 questions/30 minutes/increasing order of difficulty
1.2 Do 8 in the first 10 minutes, 7 in the next ten minutes, 5 in the last 10 minutes. This is an approximate progression.
1.3 Math problems. Each question consists of information in the stem, followed by a question, and then two pieces of further information, fact 1 and fact 2. No answer choices marked abcde. The problem is to determine whether we have enough information to answer the question.
1.4 Diagrams shown are not necessarily to scale and will more often fool you than help you.
1.5 Drawings are topologically correct, i.e., a point shown inside a circle is inside.

2.0 Logic to determine the answer choice: a, b, c, d, or e.

2.1 Information in the stem is never sufficient. We need some or all of the information contained in facts 1 and 2.
2.2 If fact 1 works alone, it's an A. If fact 2 works alone, it's a B. If we gave it both an A and a B, give it a D instead.

2.3 If neither fact works alone, enter CE-land: if facts 1 and 2 work together, it's a C; if they don't, it's an E.

2.4 Elimination table

IF                      IT HAS TO BE    IT CAN'T BE
1 works                 A, D            B, C, E
2 works                 B, D            A, C, E
1 didn't work           B, C, E         A, D
2 didn't work           A, C, E         B, D
1 and 2 didn't work     C, E            A, B, D
1 and 2 are the same    D, E            A, B, C

2.5 Logic Diagram

Does fact 1 work?
  yes: (forget fact 1) Does fact 2 work?  yes --> D;  no --> A
  no:  (forget fact 1) Does fact 2 work?  yes --> B;  no --> CE-land

CE-land (not a fun place. Do not enter unless you have to)

Do facts 1 and 2 work together?  yes --> C;  no --> E
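The decision logic in 2.2 and 2.3 can be sketched as a small function. A sketch only: the three booleans stand for judgments you make by inspecting the problem, not anything computed.

```python
# Sketch of the data-sufficiency answer logic in 2.2-2.3. The booleans
# are judgments made by inspection: does each fact alone suffice, and
# do the two facts suffice together?

def ds_answer(fact1_works, fact2_works, together_work):
    """Map the sufficiency of facts 1 and 2 to answer choice A-E."""
    if fact1_works and fact2_works:
        return "D"   # each fact alone is sufficient
    if fact1_works:
        return "A"   # fact 1 alone is sufficient
    if fact2_works:
        return "B"   # fact 2 alone is sufficient
    # CE-land: neither fact works alone
    return "C" if together_work else "E"

print(ds_answer(True, False, True))    # A
print(ds_answer(False, False, True))   # C
print(ds_answer(False, False, False))  # E
```

Note that when both facts work individually, the together case never matters: the answer is already D.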

3.0 Points of strategy

3.1 Eliminate answer choices according to the table in 2.4.
3.2 Be wary of entering CE-land. Go only if you have to. If either fact 1 or fact 2 has worked, do not go to CE-land.
3.3 Don't work out the problem; you don't have time. Just determine that you could do it if you had to.
3.4 In Ds, do not try to match the answer you would get using fact 1 to the one you would get using fact 2. They need not be the same for it to be a D.
3.5 In equalities, you must get a unique answer. Two possibilities are not good enough; e.g., if x^2 = 9, then x could be 3 or -3. Don't know.
3.6 In inequalities, set up a table of values to test for exceptions. Use positive integers, negative integers, and fractions. If the answer to the question is always yes or always no, the fact works; otherwise it does not (insufficient information).
3.7 Remember that if we can answer the question, the fact works, even if the answer is no. E.g., stem: x is a positive integer. Question: is x > 9? Fact 1: x = 7. Here fact 1 works, because it enables us to answer the question definitively; the answer is no.
3.8 In geometry problems, use the technique of distortion. Draw your own figure that conforms minimally to the specifications in the stem and in the fact being tested. Figures may be drawn to fool you. Pg 56 Q11, 12
3.9 Complex and intimidating algebraic expressions usually simplify to extinction with cancellation of one of the variables. Factor as needed. Pg 57 Q21, 22
3.10 Problem types are the same as DQ: mixtures, averages, % change, distance, interest, profit, algebra, geometry.
3.11 It is important to clearly identify and remember the question posed.
3.12 Ratios (or percentages) alone cannot yield a value. However, any one value with an associated % yields all values. Pg 57 Q17
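The value-testing technique in 3.6 can be sketched in code. The question "is x greater than x squared?" and the test values are hypothetical illustrations, not from any particular exam problem.

```python
# Value-testing sketch for 3.6: try positive integers, negative integers,
# and fractions; a fact is sufficient only if every test value gives the
# same yes/no answer to the question.
from fractions import Fraction

def always_same_answer(question, candidates):
    """True if question(x) gives one consistent yes/no answer for all x."""
    answers = {bool(question(x)) for x in candidates}
    return len(answers) == 1

# Question: is x > x**2 ?  Fact under test: x < 1.
# Both yes and no occur among the test values, so the fact is insufficient.
test_values = [-2, -1, Fraction(1, 2), Fraction(1, 3)]
print(always_same_answer(lambda x: x > x * x, test_values))  # False
```

A fraction like 1/2 gives "yes" (1/2 > 1/4) while -2 gives "no" (-2 < 4), which is exactly the exception the fractions row of the table is meant to catch.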


1.0 Never read instructions on the clock. The exam is a race against the clock; reading instructions we already know is a waste of time.
2.0 Remember the Chinese saying "Hurry up slowly". Economize on time but don't rush. A sense of haste would be suicidal.
3.0 Have at least 4 sharpened #2 pencils, a good resin eraser, a watch, and a sharpener on your desk. Don't sharpen pencils on the clock, but do sharpen them in between sessions.
4.0 When filling in the answer bubble, use an efficient 7- or 8-stroke fill. No need to do a work of art; be sloppy. Don't use too much pressure, because you may have to erase it.
4.1 Wear loose and comfortable clothing.
4.2 Don't cram the night before. Just relax, listen to music, and get a good night's sleep. If you drink coffee, do so on the morning of the test.
4.3 Prior to the test date, visit the test site. On the morning of the test, arrive an hour early. Remember to take your admission ticket.
4.4 Remember: no calculators, no beeping watches.
4.5 Pace yourself through each section. If bogged down, abandon the question and move on.
4.6 Read questions carefully. Remember that a lot of the ETS answer choices are otherwise correct but not responsive to the question asked. Ask yourself "What's the question?" (This is important in DQ, DS, RC, CR.)
4.7 You lose 1/4 of a point for each wrong answer, so don't guess unless you can eliminate some answer choices.
4.8 Remember: only one response per question. A multiple response is automatically wrong. Erase completely. (A light touch on the eraser is better; it avoids smudges.)
4.9 Make sure that question numbers and answer sheet numbers match.
5.0 Do not circle answers and then go back and enter them into the bubble sheet. It is a terrible strategy: first, if you run out of time it could be a real disaster; second, you waste a couple of very precious minutes picking answers twice.
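The arithmetic behind the guessing advice can be sketched as follows, assuming the standard five answer choices and the 1/4-point penalty stated above.

```python
# Expected value of guessing with a 1/4-point wrong-answer penalty and
# 5 answer choices: a blind guess nets exactly zero, so guess only after
# eliminating at least one choice.

def guess_ev(eliminated, choices=5, penalty=0.25):
    """Expected score from a random guess among the remaining choices."""
    remaining = choices - eliminated
    p_right = 1 / remaining
    return p_right * 1.0 - (1 - p_right) * penalty

print(guess_ev(0))  # blind guess: 0.0
print(guess_ev(1))  # one choice eliminated: 0.0625
print(guess_ev(2))  # two eliminated: better still
```

The penalty is calibrated so that random guessing is exactly neutral; every eliminated choice tips the expected value positive.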



THE GENERAL THEORY OF EMPLOYMENT, INTEREST, AND MONEY by John Maynard Keynes Chapter 12 The State of Long Term Expectation Chapter 13 The General Theory of the Rate of Interest.

  1. In these two chapters, Keynes lays out his theory of the determinants of returns from holding assets (stock) and from holding debt (bonds) and vents his disgust for speculative trading in financial assets.
  2. Capital Asset Valuation: To a rational investor, asset valuation should be based only on the present value of the best projections of the future cash flows that the investor believes will be generated by the asset (adjusted for risk). As such, capital investment is a long term decision based on long term projections and goals.
  3. However, by allowing the stock market to value the stock of the firm, this ideal goes wrong, says Keynes, since the “ignorant masses” buy stock only for short term capital gains rather than long term income. Keynes eschews “playing the market” which he likens to a “feeding frenzy”. He despises our “animal instincts” to gamble. He feels that investors are simply trying to out-guess and out-wit each other rather than attempting to project and compute the cold mathematics of future cash flows and associated risks.
  4. He claims that the valuation of capital assets thus established is “absurd” since it has become a forum for speculation rather than one of enterprise. He admits, however, that the liquidity and the outlet for our animal instincts provided by the market is necessary for capital formation.
  5. Speculation is evil according to Keynes. He says that speculation causes undue and irrational fluctuations in the bond market that do not reflect real changes in economic variables. The stock market syndrome thus also affects the debt market, which is turned into a casino-like venue for speculators instead of being the rational arena of the economic man who, through his liquidity preference, propensity to consume, and propensity to save, would make all of Keynes' equations come out right. Mankind apparently is too stupid to follow his prescribed behavior.
  6. Although our view of capital markets has changed since 1935, Keynes was, of course, a giant in his time and his wisdom still influences economic and financial thought. However, it helps to understand that his work was largely part of the overall response of economists to the 1929 crash and the ensuing depression. It must have seemed at the time that the system of free enterprise that we inherited from the Dutch and the British Dissenters and their Industrial Revolution may not be viable; that capitalism had failed; just as today (1995) it seems that communism has failed.
  7. Yet, it was Keynes’ work more than any other ideology that delivered us from at least the psychological depression of the times. In the midst of all the wringing of hands and gnashing of teeth, he rose up and said that capitalism had NOT failed and prescribed a cure for the depression. It was a very uplifting piece of work.
  8. Keynes' discussion of the market mechanism was not necessary to his general theory, yet he felt compelled to write about it anyway because no economic work of his time was deemed complete without an 'explanation' for the crash. The legacy of the crash was to make the market into the whipping boy, and Keynes' strong language can be forgiven on this score.
  9. We believe today that the market is, on the aggregate, rational and at least weak form efficient. We do not question market valuation and investor behavior. We only seek to understand them.




Turning the Climate Tide by 2020





(1): CLAIM: The climate math is brutally clear.

RESPONSE: Except for where it isn't [LINK].


(2): CLAIM: The world needs high-speed climate action for an immediate bending-down of the global greenhouse-gas emissions curve. Aggressive reduction of fossil-fuel usage is the key to averting devastating heat extremes and unmanageable sea level rise.

RESPONSE: The climate science lead author in the Nature article is UN bureaucrat and climate activist Christiana Figueres. Her view on the COVID19 tragedy is “Well, that is, ironically, of course, the other side of this right? It may be good for climate. But I think because there is less trade, there’s less travel, there’s less commerce. Expect more disease outbreaks if we continue to deny, delude and delay on climate change. If we continue to eat animals, we will be poisoning ourselves and being the genesis of new diseases we have not seen before“. 


(3): CLAIM: The UN has prioritized the protection of the UN Sustainable Development Goals, and in particular the eradication of extreme poverty. 

RESPONSE: Support for the SDGs is inconsistent with the eradication of poverty, as explained in this document: [LINK].

(4): CLAIM: The climate math is brutally clear: while the world can't be healed within the next few years, it may be fatally wounded by negligence if the negligence continues to the year 2020.

RESPONSE: This claim that "we only have a window of opportunity to control climate change and when we pass up this window of opportunity we are screwed" has been an ongoing feature of the climate movement, where each time the window ends it is simply pushed forward. [LINK]


(5): CLAIM: We have been blessed by a remarkably resilient planet over the past 100 years, able to absorb most of our climate abuse.

RESPONSE: Translation: All our prophecies of climate doom have proven false. 

(6): CLAIM: Technological progress and political momentum have reached a point now that allows us to kick-start the 'great sustainability transformation'.

RESPONSE: This is pure UN bureaucratic word soup that comes in handy when they run out of rational arguments. These impressive sounding words and phrases are thrown around by these people a lot but they don’t have any rational interpretation in plain language. What’s different this time around is that this charade is being underwritten by scientists at the Potsdam Institute. 


(7): CLAIM: The authors and co-signatories to the Nature article comprise over 60 scientists, business and policy leaders, economists, analysts and influencers.

RESPONSE: Translation: The validity and relevance of Christiana's research paper is supported by the fact that she found 60 influential people who will support it.


(8): CLAIM: “This monumental challenge coincides with an unprecedented openness to self-challenge on the part of sub-national governments inside the US, governments at all levels outside the US, and of the private sector in general.”

RESPONSE: This is pure UN bureaucratese word soup. 


(9): CLAIM: But there is still a long way to go to decarbonize the world economy. The year 2020 is crucially important because if emissions continue to rise after 2020, the Paris climate goal becomes unattainable. Mission 2020 is a campaign to raise ambition and bend the greenhouse-gas emissions curve downwards by 2020.

RESPONSE: Her prayers have apparently been answered by COVID-19, as seen in the chart below, but with no measurable change in the rate of rise in atmospheric CO2 or temperature [LINK]. That the critical year is now 2020 fits a pattern that goes back many years in this climate game. In the past the critical year has ranged from 1980 to 2009, and then it was 2015 [LINK].



(10): CLAIM: The world needs high-speed climate action for an immediate bending-down of the global greenhouse-gas emissions curve, leading experts caution. Aggressive reduction of fossil-fuel usage is the key to averting devastating heat extremes and unmanageable sea level rise, the authors argue in a comment published in the renowned scientific journal Nature this week. 

RESPONSE: The "high speed bending down of the curve" is surely new and innovative verbiage, but the message is the annual song and dance by UN bureaucrats about the importance of climate action, having failed to deliver the "Montreal Protocol for the Climate" that they had said they could deliver. [LINK]


(11): CLAIM: There are six milestones for a clean industrial revolution. This call for strong short-term measures complements the longer-term 'carbon law' approach introduced earlier this year by the eminent Hans Joachim Schellnhuber, Potsdam Institute's Director, in the equally eminent journal Science.

RESPONSE: The word IF is indeed a powerful word, and it makes up for the failure of the UN to put together a coordinated global climate agreement to reduce global fossil fuel emissions. The "carbon law" is the Potsdam Institute innovation that says IF, in 2020, all the countries of the world simply commit to halving their fossil fuel emissions every decade, then the world will get to net zero by 2050. An interesting mathematical innovation by Potsdam, but this bright view of the future is made possible only by the word IF. The bottom line is that the word IF is needed because there is no globally coordinated plan to lower global emissions. By hanging out with UN bureaucrats, Potsdam too is learning the powerful new language of UN BUREAUCRATESE.
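For what it's worth, the halving arithmetic can be sketched as follows. The 2020 baseline of 37 Gt CO2 is an assumed illustrative figure, not from the article; note that halving every decade still leaves one eighth of the baseline by 2050 rather than zero.

```python
# Halving arithmetic behind the "carbon law":
#   E(t) = E0 * 0.5 ** ((t - 2020) / 10)
# The 37 Gt CO2 baseline is an assumed illustrative figure.

def carbon_law(e0, year, start=2020):
    """Emissions in `year` if halved every decade from `start`."""
    return e0 * 0.5 ** ((year - start) / 10)

for year in (2020, 2030, 2040, 2050):
    print(year, carbon_law(37.0, year))
# Halving alone leaves 37/8 = 4.625 Gt in 2050, i.e. one eighth of the
# baseline; getting from there to net zero requires something else.
```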


(12): CLAIM: Thus a full narrative of deep decarbonization emerges. We stand at the doorway of being able to bend the GHG emissions curve downwards by 2020, as science demands, in protection of the UN Sustainable Development Goals, and in particular the eradication of extreme poverty. 

RESPONSE: The narrative does emerge, but that narrative has always existed. The narrative can be reworded and restyled, but the only thing that matters is whether the UN puts together a globally coordinated program to reduce global fossil fuel emissions to zero. As for sustainable development, I would like to add that the insertion of sustainability undercuts the UNDP's mandate of reducing poverty, as explained in a related post: [LINK]


(13): CLAIM: FIGUERES: “This monumental challenge coincides with an unprecedented openness to self-challenge on the part of sub-national governments inside the US, governments at all levels outside the US, and of the private sector in general. The opportunity given to us over the next three years is unique in history”. {Figueres is the convener of the Mission 2020 campaign to make carbon emissions begin to fall by 2020}.

RESPONSE: The only real information contained in this claim is that UN bureaucrat Figueres is going to meet the challenges of the Mission 2020 campaign with bureaucratese rhetoric and by throwing in irrelevant UN activities such as the SDGs.


(14): CLAIM: The great sustainability transformation: The authors are confident that both technological progress and political momentum have reached a point that now allows us to kick-start the 'great sustainability transformation'. 2020 is crucial, because in that year the US will be legally able to withdraw from the Paris Agreement. Even more compelling are the physics-based considerations, however: recent research has demonstrated that keeping global warming below 2 degrees Celsius becomes almost infeasible if we delay climate action beyond 2020. And breaching the 2°C line would be dangerous, since a number of Earth system tipping elements, such as the great ice sheets, may get destabilized in that hot-house.

RESPONSE: Here we get a glimpse into how UN bureaucrats think and into their disorganized bureaucratese ideas, in which the urgent need for climate action is supported by repeated reference to irrelevant UN programs that they are proud of, such as the SDGs.


(15) CLAIM: We have been blessed by a remarkably resilient planet over the past 100 years, able to absorb most of our climate abuse. Now we have reached the end of this era, and need to bend the global curve of emissions immediately, to avoid unmanageable outcomes for our modern world.

RESPONSE: TRANSLATION: OK, so we were wrong about how horrible the climate impacts would be and how the planet itself would be destroyed by climate change, but that is only because the planet turned out to be more resilient than we thought; we have now reached the end of its resilience, and so this just can't go on.


(16): CLAIM: Power generation from wind and solar is booming already. In Europe renewables are three quarters of new energy capacities installed. China is establishing a national emissions trading scheme. Financial investors are wary of carbon risks. The six milestones for 2020: {renewables to 30% of total electricity supply; retiring all coal-fired power plants; electric vehicles; and mobilizing 1 trillion US dollars a year for climate action}.

RESPONSE: TRANSLATION: As noted above, we have failed to put together a global climate action program to reduce global fossil fuel emissions, but we are UN bureaucrats and so we can always dig up and throw around data that make us look good.


(17): CLAIM: Hans Joachim Schellnhuber, Potsdam Institute: "The climate math is brutally clear: While the world can't be healed within the next few years, it may be fatally wounded by negligence. Action by 2020 is necessary, but not sufficient. It needs to set the course for halving CO2 emissions every other decade. The 'carbon law' can become a self-fulfilling prophecy. This will be unstoppable if we propel the world into action."




CONCLUSION: It is noted that although these individuals claim to speak as scientists for the science of climate change, their language and their agenda are clearly those of failed activism, with failed old to-do lists loudly recited with great pride and pretension to climate action. We also note that this statement by the Potsdam Institute makes it clear that it is a climate activism organization, and that makes it impossible for it to be a scientific organization.

The close association of Potsdam climate scientists with UN climate activists underscores this assessment. It is not possible to carry out unbiased scientific inquiry into a research question if the researcher has an activism agenda with respect to that question. This relationship holds no matter how academically qualified the researcher may be.








  1. BACKGROUND: Climate scientists have determined that the use of fossil fuels since the industrial revolution has caused carbon dug up from under the ground to be released into the atmosphere. It is argued that because fossil fuel carbon is not part of the current account of the carbon cycle, it acts as a perturbation that causes atmospheric CO2 concentration to rise and thereby causes warming by way of the greenhouse effect of carbon dioxide. Climate scientists have also determined that such warming is unnatural, human caused, and harmful to nature and to the planet itself, and that therefore it cannot be allowed to continue. Climate scientists have therefore proposed that human intervention in the form of climate action is necessary to moderate the rate of climate change to no more than 1.5C above pre-industrial levels, or no more than 0.5C above current levels. The proposed climate action is to reduce global fossil fuel emissions and to continue reducing them until they are eliminated altogether.
  2. UNITED NATIONS CONFERENCE OF PARTIES: Since the Kyoto Protocol of 1997, negotiated under the United Nations Framework Convention on Climate Change or UNFCCC, the United Nations has held a series of Conferences of the Parties (COP) to reach an international agreement for reductions in global fossil fuel emissions as a global project to which all nations will subscribe and by which all signatories will abide. All 25 COPs held so far have failed to produce such an international effort. Though the so-called "Paris Agreement" at COP21 in 2015 is often advertised as an international agreement for reductions in global fossil fuel emissions, its language of "Intended Nationally Determined Contributions", and the fact that the Agreement consists of a collection of "agreements" that don't agree, make that interpretation impossible, particularly since emission reduction is not an obligation but an intention.
  3. POST PARIS AGREEMENT CLIMATE ACTION: Since 2015, the UN's role as cheerleader for a global effort to reduce global fossil fuel emissions has consisted mostly of holding more COPs, emphasizing the dangers of the extreme RCP8.5 "business as usual" temperature forecast, and demanding that national leaders show greater "AMBITION" in their climate action plans. This plan depends on the effectiveness of cheerleaders that include Antonio Guterres, Leonardo DiCaprio, Sir David Attenborough, and Pope Francis.
  4. CLIMATE ACTION IN THE POST PARIS AGREEMENT WORLD: With the Paris Agreement for global fossil fuel reductions being an agreement to not agree, the state of global climate action today depends on the AMBITION of national governments or super-national governments such as the EU, and it has so far proceeded as a kind of heroism contest egged on by activists. In this contest, the European countries led by the EU, along with the UK, Canada, and perhaps Australia, have emerged as climate heroes, as they have adopted aggressive climate action plans. However, these heroic climate action countries are up against an economics trap created by a non-global "agreement" to cut global emissions.
  5. THE ECONOMICS TRAP OF A NON-GLOBAL MOVEMENT TO CUT GLOBAL EMISSIONS: Although the world of humans is separated into nation states, they are connected by economics. This connection is vast and complex and involves cross border investments, stocks, bonds, monetary policy, technology, intellectual property rights, and so on and so forth but most importantly in this respect, the nations of the world are connected by trade. International trade is so important, that even though we think of our civilization in terms of the nation states, we are really one huge global economy because we are connected by trade.
  6. THE ANOMALY OF NON-GLOBAL EMISSION REDUCTION PLANS IN THE CONTEXT OF TRADE:  Because nation states are independent nations in some respects but global in terms of trade, a climate action decision by an individual nation state will not lead to global emission reduction. This is because any national climate action plan by a single nation state will increase the economic cost of production and make that nation state less competitive in international trade and hand over a cost advantage to nations that do not have a national climate action plan. The cost advantage of non-climate-action takers will cause their production and exports to rise by virtue of demand from climate action taking nations. The net result will be that economic activity {and fossil fuel emissions} will decline in climate action taking nations but with a corresponding rise in economic activity {and fossil fuel emissions} in non-climate-action taking nations. In the net there may be no emission reduction. This is the Catch-22 of national level emission reduction plans. 
  7. ECONOMICS PROFESSOR WUSHENG YU OF THE EU EXPLAINS:  The EU has an ambition of being climate neutral in 2050. It is hoped that this can be achieved through a green transition in the energy sector and CO2-intensive industries, as well as through altered consumer behavior such as food habits and travel demands among the EU population. However, should the EU implement its most ambitious decarbonization agenda, while the rest of the world continues with the status quo, non-EU nations will end up emitting more greenhouse gases, thereby significantly offsetting the reductions of EU emissions. This is the conclusion of a new policy brief prepared by economics experts at the University of Copenhagen’s Department of Food and Resource Economics. For every tonne of CO2e emissions avoided in the EU, around 61.5% of that tonne will then be emitted somewhere else in the world. This carbon leakage, as it is known, will result in a global CO2e savings of 385 kilos only. The policy brief is based on the conclusions of a purposely-built economic model. The model, part of the EU Horizon 2020 project EUCalc, seeks to describe various pathways to decarbonizing the EU economy. “Obviously, the EU’s own climate footprint will be significantly reduced. But the EU’s economy is intertwined with the rest of the world through trade relations, which would change as we implement a green transition in our energy sector, industries and ways of life. Part of the emissions that Europe “saves” through an extensive green transition could possibly be ‘leaked’ to the rest of the world through, among other things, trade mechanisms, depending on the climate policy of other countries,” according to economist and brief co-author Professor Wusheng Yu, of the University of Copenhagen’s Department of Food and Resource Economics. 
“If the world beyond the EU does not follow suit and embark on a similar green transition, the decline in global greenhouse gas emissions will effectively be limited and well below the level agreed upon in EU climate policy,” adds co-author, economist and Yu’s department fellow, Francesco Clora. Fewer exports, more imports: In the most ambitious 2050 scenario as calculated by the EUCalc model, the EU pulls all of the green levers for production and consumption in various sectors, including the industrial and energy sectors. In this scenario, a green transformation of CO2-intensive industries (e.g. concrete, steel and chemicals) will incur new costs for new green technologies, which, in turn, will increase the price of products. This could impact the competitiveness of EU products on the global market and be advantageous to China and the United States, who would continue their production of similar, yet cheaper goods. The prediction is that fewer goods would then be manufactured in Europe, which would lead to an increase in new imports to satisfy consumer and commercial demand. Similarly, a phase-out of fossil fuels by the EU would lower global demand, thus making them cheaper. In response, non-EU countries would be likely to import and consume larger quantities of fossil fuels. Finally, more climate-friendly consumer behaviour in the EU could end up pushing part of the saved CO2e out into the rest of the world as well. For example, while a decrease in red meat consumption by Europeans may reduce imported feed grains such as soybean, it may also result in increased imports of food grains and other plant-based foods, the latter of which would increase emissions in the rest of the world. So what should the EU do? Should Europe simply throw in the towel and drop its high ambitions for a better global climate? Certainly not. But we must make sure not to go it alone. “A green transition in the EU alone cannot significantly reduce global greenhouse gas emissions.
We need to find ways to get others on board. Otherwise, the impact of our efforts will be largely offset by increased emissions elsewhere, making it impossible to meet the Paris Agreement targets.”
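The leakage arithmetic in the policy brief works out as follows, a sketch using the 61.5% leakage rate quoted above.

```python
# Carbon-leakage arithmetic from the policy brief: of each tonne (1000 kg)
# of CO2e avoided in the EU, 61.5% is emitted elsewhere in the world,
# leaving 385 kg of net global savings.

def net_global_saving(avoided_kg, leakage_rate=0.615):
    """Net global emission reduction after carbon leakage."""
    return avoided_kg * (1 - leakage_rate)

print(round(net_global_saving(1000)))  # 385 kg per tonne avoided in the EU
```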






(1): China is the world’s largest emitter of fossil fuel emissions. With a population of 1.4 billion and per capita emissions of 7.2 metric tonnes of CO2 per person, China’s total emissions are 10.08 gigatons of CO2, which represents about 27.5% of global emissions. The climate change focus on China derives from this significant statistic. Overlooked in this statistic is that these emissions come mostly from export-oriented manufacturing and not from consumption, with much of the industry consisting of overseas manufacturing facilities of Western business enterprise. Although China is the largest economy in the world with a gross national GDP of $27 trillion in 2019, this figure is driven largely by industry and by population and not by consumption and living standard. A partial list of Western business enterprises that operate their factories in China, provided by JIESWORLD.COM, appears below. These factories and their products are of the West, by the West, and for the West, but their emissions appear in China’s account. The West has exported its emissions to China.
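The arithmetic in item (1) can be checked as follows; the implied global total is a derived figure, not stated in the text.

```python
# Arithmetic check for item (1): population x per-capita emissions gives
# China's 10.08 Gt total; dividing by the stated 27.5% share implies a
# global total (a derived figure, not given in the text).

population = 1.4e9        # persons
per_capita = 7.2          # tonnes CO2 per person per year
share_of_global = 0.275   # China's stated share of global emissions

china_total_gt = population * per_capita / 1e9
implied_global_gt = china_total_gt / share_of_global
print(round(china_total_gt, 2))      # 10.08 Gt CO2
print(round(implied_global_gt, 1))   # roughly 36.7 Gt CO2 global total
```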

(2): An important sector of export-oriented industrial production in China that contributes to much of the fossil fuel emissions noted above is the manufacture of solar panels, wind turbines, and electric cars for export. The importer of these products benefits from emission reduction, but the emissions from their manufacture accumulate in China’s emission account. In 2019 China exported about $18 billion of solar panels with total energy production capacity of more than 200 GW. The export of wind turbines that year was $12 billion with energy capacity of more than 400 MW. Thus, much of the West’s manufacturing emissions, including the emissions from the manufacture of renewable energy equipment, are offloaded to China. {Footnote: China’s domestically installed renewable energy capacity is about 400 GW, divided almost equally between wind and solar}.

(3): With regard to the wealth of China, described as the largest economy in the world, it should be noted that the gross GDP of the country was $14.4 trillion in 2019 compared with $21.4 trillion for the USA; but in terms of purchasing power parity (PPP), the adjusted PPP-GDP figures are $25.3 trillion for China and $17 trillion for the USA. This PPP-GDP comparison is the basis of the assessment that China is the largest economy in the world, but a direct comparison of PPP-GDP as a measure of wealth and standard of living is flawed in a financial context because the poorer you are and the lower your cost of living, the higher your PPP-GDP gets. The GDP assessment also contains the hidden flaw that China’s GDP derives not from consumption but from export-oriented industrial production that makes goods for export to the West at lower cost than would be possible in the West. In terms of per capita consumption, China lags far behind the West, with $3,224 per person in 2019 compared with $45,000 in the USA. The analysis provided above implies that significant social and structural differences make a direct comparison of gross national GDP impossible, and that therefore, from the consumer’s point of view, China is not the richest country in the world.
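The per capita arithmetic in item (3) can be sketched as follows. The GDP and consumption figures are from the text; the population figures (China 1.4 billion, USA 0.33 billion) are assumptions added for the division.

```python
# Per-capita sketch for item (3). GDP and consumption figures are from
# the text; the populations (China 1.4 bn, USA 0.33 bn) are assumptions.

china_gdp, usa_gdp = 14.4e12, 21.4e12   # nominal USD, 2019 (from the text)
china_pop, usa_pop = 1.4e9, 0.33e9      # assumed populations

china_per_capita = china_gdp / china_pop
usa_per_capita = usa_gdp / usa_pop
print(round(china_per_capita))          # roughly $10,000 per person
print(round(usa_per_capita))            # roughly $65,000 per person
print(round(45000 / 3224, 1))           # consumption gap from the text, ~14x
```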

(4): A similar error is found in a direct comparison of emissions. First, emissions in China are primarily industrial emissions and not consumer emissions. Second, much of these emissions come from two sources that have a direct link to the West: (i) Western firms that have chosen to locate their factories in China, and (ii) Chinese factories that make solar panels and wind turbines for the West. The ownership of these emissions must therefore have a more rational distribution than a single-minded consideration of national boundaries. An implication of these complexities is that climate action emission accounting must be global, because it cannot be understood on a country-by-country basis.

(5): There is also a cultural issue in the emission confrontation between China and the West. Absolute literal truth, no matter how ugly, is a foundational principle of Western Civilization. Confucian philosophy contains the Li principle. It affirms that manners are a primary means by which we express moral attitudes and carry out important moral goals. Confucian views on ritual extend this insight further by emphasizing the role that manners play in cultivating good character and in finding the conceptual boundaries of manners. What we call etiquette, social customs, and ritual, Confucians see as expressions of Li, something we would understand as decorum. It expresses moral character and attitude. Li expresses the principle that good etiquette and good manners cultivate and express good intentions and good character. {SOURCE: Cline, E.M. The Boundaries of Manners: Ritual and Etiquette in Confucianism. Dao 15, 241–255 (2016). (abbreviated and edited)}

(6) CONCLUSION: We propose in this post that the arguments presented in items (1) to (4) above imply that a confrontational attitude of the West with respect to China's emissions contains serious weaknesses because the complexity of the issue is not taken into account. The Chinese response to Western demands for a greater climate action role for China is best understood in this light and in terms of the principle of Li in Confucianism.

An additional consideration is that a demand that China should live up to its commitments in the Paris Accord overlooks the weaknesses and inconsistencies in the details of what is called the “Paris Agreement” as discussed in a related issue on this site: LINK:

Here Xi Jinping has risen above the petty arguments in items (1) to (4) to calm the discourse with declarations of good intentions and expressions of good character that should probably be understood in terms of the Principle of Li in Confucianism. A bitter confrontation is not in either party’s interest because of the deep economic linkages described above.

Yet, these expressions of Li may have been taken literally in the West. Communication is likely made difficult by such cultural differences.


Abercrombie & Fitch, Abbott Laboratories, Acer Electronics, Adidas, AGI - American Gem Institute, Agrilink Foods, Inc., Allergan Laboratories, American Eagle Outfitters, American Standard, American Tourister, Ames Tools, Amphenol Corporation, Amway Corporation, Analog Devices, Inc., Apple Computer, Armani, Armour Meats, Ashland Chemical, Ashley Furniture, Audi Motors, AudioVox, AutoZone, Inc., Avon, Banana Republic, Bausch & Lomb, Inc., Baxter International, Bed, Bath & Beyond, Belkin Electronics, Best Foods, Big 5 Sporting Goods, Black & Decker, Body Shop, Borden Foods, Briggs & Stratton, Calrad Electric, Campbell's Soup, Canon Electronics, Carole Cable, Casio Instrument, Caterpillar, Inc., CBC America, CCTV Outlet, Checker Auto, Cisco Systems, Chiquita Brands International, Claire's Boutique, Cobra Electronics, Coby Electronics, Coca Cola Foods, Colgate-Palmolive, Colorado Spectrum, ConAgra Foods, Cooper Tire, Corning, Inc., Coleman Sporting Goods, Compaq, Crabtree & Evelyn, Cracker Barrel Stores, Craftsman Tools, Cummins, Inc., Dannon Foods, Dell Computer, Del Monte Foods, Dewalt Tools, Dial Corporation, Diebold, Inc., Dillard's, Inc., Dodge-Phelps, Dole Foods, Dow-Corning, Eastman Kodak, EchoStar, Eclipse CCTV, Edge Electronics Group, Electric Vehicles USA, Inc., Eli Lilly Company, Emerson Electric, Enfamil, Estee Lauder, Eveready, Fisher Scientific, Ford Motors, Frito Lay, Furniture Brands International, Gateway Computer, GE General Electric, General Foods International, General Mills, General Motors, Gentek, Gerber Foods, Gillette Company, Goodrich Company, Goodyear Tire, Gucci, Haagen-Dazs, Harley Davidson, Hasbro Company, Heinz Foods, Hershey Foods, Hitachi, Hoffman-LaRoche, Holt's Automotive Products, Hormel Foods, Home Depot, Honda Motor, Hoover Vacuum, HP Computer, Honda, Honeywell, Hubbell Inc., Huggies, Hunts-Wesson Foods, ICON Office Solutions, IBM, Ikea, Intel Corporation, J.M. Smucker Company, John Deere, Johnson Control, Johnson & Johnson, Johnstone Supply, JVC Electronics, KB Home, Keebler Foods, Kenwood Audio, Kimberly Clark, Knorr Foods, Kohler, Kohl's Corporation, Kraft Foods, Kragen Auto, Land's End, Lee Kum Kee Foods, Lexmark, LG Electronics, Lipton Foods, L.L. Bean, Inc., Logitech, Libby's Foods, Linen & Things, Lipo Chemicals, Inc., Lowe's Hardware, Lucent Technologies, Lufkin, Mars Candy, Martha Stewart Products, Mattel, McCormick Foods, McKesson Corporation, Magellan GPS, Memorex, Merck & Company, Mitsubishi Electronics, Mitsubishi Motors, Mobil Oil, Molex, Motorola, Mott's Applesauce, Multifoods Corporation, Nabisco Foods, National Semiconductor, Nescafe, Nestle Foods, Nextar, Nike, Nikon, Nivea Cosmetics, Nokia Electronics, Northrop Grumman Corporation, NuSkin International, Nvidia Corporation, Office Depot, Olin Corporation, Old Navy, Olympus Electronics, Orion-Knight Electronics, Pacific Sunwear, Inc., Pampers, Panasonic, Pan Pacific Electronics, Panvise, Papa Johns, Payless Shoesource, Pelco, Pentax Optics, Pep Boys, Pepsico International, Petco, Pfizer, Inc., Philips Electronics, Philip Morris Companies, Pierre Cardin, Pillsbury Company, Pioneer Electronics, Pitney Bowes, Inc., Plantronics, PlaySchool Toys, Polaris Industries, Polaroid, Post Cereals, Pfister, Pringles, Praxair, Procter & Gamble, PSS World Medical, Pyle Audio, Qualcomm, Quest One, Ralph Lauren, RCA, Reebok International, Reynolds Aluminum, Revlon, Rohm & Haas Company, Samsonite, Samsung, Sanyo, Shell Oil, Schwinn Bike, Sears-Craftsman, Sharp Electronics, Sherwin-Williams, Shure Electronics, Sony, Speco Technologies, Skechers Footwear, SmartHome, Smucker's, Solar Power, Inc., Stanley Tools, Staples, Steelcase, Inc., STP Oil, Sunkist Growers, SunMaid Raisins, Sunkist, Switchcraft Electronics, SYSCO Foods, Sylvania Electric, 3M, Tamron Optics, TDK, Tektronix, Inc., Texas Instruments, Timex, Timken Bearing, Tommy Hilfiger, Toro, Toshiba, Tower Automotive, Toyota, Toys R Us, Inc., Tripp-Lite, Tupperware, Tyson Foods, Uniden Electronics, Valspar Corporation, Victoria's Secret, Vizio Electronics, Volkswagen, VTech, WD-40 Corporation, Weller Electric Company, Western Digital, Westinghouse Electric, Weyerhaeuser Company, Whirlpool, Wilson Sporting Goods, Wrigley, WW Grainger, Inc., Wyeth Laboratories, X-10, Xelite, Xerox, Yamaha, Yoplait Foods, Yum Brands, Zale Corporation.

[Image: "Made in China: GM Might Introduce Chinese-Made Buicks to US Market" (Sputnik International)]

[Image: Wind turbine manufacturing, Nantong, Jiangsu, China (Alamy)]
