History of Soybean Crushing: Soy Oil and Soybean Meal - Part 7

by William Shurtleff and Akiko Aoyagi


A Chapter from the Unpublished Manuscript, History of Soybeans and
Soyfoods, 1100 B.C. to the 1980s


Copyright 2007 Soyinfo Center, Lafayette, California


Developments During the 1950s. The postwar period in the US, extending through the 1950s, was characterized by relative affluence, full employment, rising wages, surplus crops, and an end to the wartime psychology of deprivation, hardship, rationing, and self-sacrifice. These conditions gave rise to new dietary patterns and rising consumption of meat, poultry, and fats, all of which stimulated and transformed the US soybean crushing industry.

Starting in the 1940s and accelerating during the 1950s, this industry underwent dramatic structural changes, which have been well analyzed by Goldberg (1952), Nakamura and Hieronymous (1965), and Kromer (1970). From the mid-1930s until the early 1950s the number of mills crushing soybeans increased rapidly. By 1950 there were about 250 mills crushing soybeans; 200 of these crushed predominantly soybeans (139 crushed soybeans exclusively), and 51 were primarily cottonseed mills that crushed soybeans after their supply of cottonseed had been exhausted (Fig. ?.??). Nakamura and Hieronymous (1965) reported that in 1954, of the 261 mills crushing soybeans full or part time, 158 used screw presses, 85 used solvent extractors, and 18 used hydraulic presses. After the early 1950s the number of soybean crushing plants began to drop sharply, as larger, more economical solvent plants were constructed and many old expeller plants were retired. Thus the total number of mills crushing predominantly soybeans decreased from 193 in 1951 to only 94 in 1979, while the average mill's yearly capacity increased from 43,550 tonnes (1.6 million bushels) in 1951 to 391,900 tonnes (14.4 million bushels) in 1979, a ninefold increase in plant capacity in only 28 years (Fig. ?.??). During this period, of course, the total amount of soybeans crushed increased rapidly (Fig. ?.??), rising an average of 7.9% yearly during the 1950s and 6.5% yearly during the 1960s. During the 1970s US soybean mills operated roughly 355 days a year at an average of 76% of capacity, with capacity utilization dropping in times of narrow crushing margins, sluggish livestock demand, or reluctant farmer selling of soybeans (Soya Bluebook 1981). In 1945 the largest soy oil mill had a capacity of 1,620 tonnes of raw soybeans a day; this increased to 2,515 tonnes in 1951, then to 4,000 tonnes in 1980.
This latter plant, the Archer Daniels Midland (ADM) mill at Decatur, Illinois, used a single solvent extractor that required the soybeans from 4,000 acres a day to keep it running at capacity. In 1978 ten plants with a daily capacity of 2,500 tonnes or more accounted for 28% of total industry capacity. Only four plants had a capacity of under 500 tonnes a day.

Starting in the 1950s, there was a rapid increase in both horizontal and vertical integration within the industry, as firms sought to expand their business activities and market share, and to increase profits. Firms integrated horizontally by acquiring new plants. It is more economical to have several plants at various locations than one very large central plant; the shipping costs of having to buy soybeans from a wider area and then transport the products to a wider territory put upper limits on the economical size of any one plant. Moreover, with the rise in exports, many new plants were built on rivers or ports, or in areas of comparative price or transportation advantage. Soybean crushing firms could also integrate vertically, by entering the mixed feed business or the oil refining business, or both. In the late 1940s the major crushers that also refined soy oil were ADM and Spencer Kellogg & Sons. It was generally considered easier, however, to integrate in the direction of meal. In 1950, of the 250 plants crushing soybeans part or full time, 72 also had mixed feed operations, 11 also did oil refining, and 2 did both. In 1960, of 148 crushing firms, 72 also had mixed feed operations, 13 did oil refining, and 9 did both. A few firms integrated one step further forward by making foods (such as soy flour or modern soy protein products; see Chapter 30) from defatted soybean meal. Rarely did crushers integrate forward into making soy oil consumer products such as margarine or salad dressings. Some firms also integrated backward into soybean crushing. Many feed manufacturing firms built their own soybean crushing plants to supply them with high-protein meal. During and immediately after World War II, when demand for soy oil was quite strong, some oil refiners started crushing soybeans to secure a supply of oil for their edible products.
Moreover, some cooperative country elevators integrated forward into soybean processing, primarily so that the farmer members could obtain soybean meal from their soybeans at a time of shortage of high protein livestock feeds.

By 1952 all the major soybean crushers (except A.E. Staley Mfg. Co.) were vertically integrated, manufacturing their own mixed feeds or edible oil products or both. Major manufacturers at that time included Allied Mills, ADM, Borden Co., Cargill, Central Soya, General Mills, Glidden, Iowa Milling Co., Pillsbury Mills, Procter & Gamble, Ralston Purina, Spencer Kellogg, and Swift (Goldberg 1952).

The various forms of integration and increase in the size of individual plants and firms led to a gradual concentration of control in the hands of a small number of huge companies. In 1947, 1958, and 1981, the four largest soybean crushing firms controlled 44%, 40%, and 54.5% respectively of the total industry crushing capacity. In those same three years the eight largest firms controlled 63%, 63%, and 75.1% respectively of the total capacity, and the 20 largest firms controlled 81%, 86%, and 96.4% of the total (Nakamura and Hieronymous 1965; Dunn et al. 1981; Schiffman 1944). In 1977, according to the latest survey by Shearson Hayden Stone, Inc., the five largest soybean crushers were Cargill (with 18.3% of the total industry capacity), ADM (14.5%), Central Soya (7.7%), A.E. Staley Manufacturing Co. (7.6%), and Ralston Purina (7.5%). In addition, the various cooperative crushers (co-ops) controlled 19.0% of the total industry crushing capacity (see also Chapter 40).

During the 1940s and 1950s, America's soybean crushing industry came to be concentrated in the Corn Belt, especially in the four largest soybean producing states: Illinois, Iowa, Indiana, and Ohio. Because of the economics of shipping soybeans and soybean products, soybean mills tend to locate near potential markets for soybean meal. These markets exist where large amounts of livestock feeds are formulated and used, or where good transportation to such areas is available. In 1942, some 71% of the US soybean crushing capacity was in the four states mentioned above; by 1944 this figure had increased to 81%. From 1960 on, reflecting the rapid rise of soybean production in southern states, quite a few soybean crushing plants started in the South, especially along the Mississippi River (Fig. ??.??).

Over the years, as both throughput and efficiency steadily rose, crushing margins (defined as the difference between what a processor paid for a bushel of soybeans and the price he received from selling the oil and meal) steadily dropped. In 1947-50, the crushing margin averaged $0.68 a bushel, but by 1966-69, it had fallen to only $0.25. Slim margins led to lower-priced products, which expanded their market share and led to handsome profits. Soybean processing (like corn processing) became one of the glamour industries of the 1950s and 1960s (Merrill Lynch 1969).

Soy oil truly came into its own starting in the late 1940s and increasingly from the 1950s on, primarily because of its competitive price, its plentiful and dependable supply, and the major improvements in its flavor and oxidative stability. The expansion of soybean acreage, crushing capacity, solvent extraction, and the demand for soybean meal, coupled with the fact that the inflation-adjusted prices for soybeans received by US farmers fell steadily and dramatically from 1947 to 1969 (Fig. 2.??), all combined to keep the price of soy oil low. In fact, following a temporary leap in price in 1946 and 1947 after wartime price controls were lifted, the inflation-adjusted price of soy oil fell steadily until 1968, and again after the shortages of 1973. And in 1971 the real (inflation-adjusted) price of US soy oil was the lowest it had ever been! (Fig. ?.??). Soy oil's quality also improved steadily. A 1964 study by Consumer Reports (Ref??) showed that although neither soy salad oil nor lightly hydrogenated winterized soy oil rated as high as cottonseed oil for cooking and salad use, the quality was nevertheless very good. Various writers have noted that the striking improvement in the oil's quality since the early 1940s, the result of millions of dollars spent on research around the world by industry, government, and universities over a period of more than 50 years, and the application of these research findings by the oil industry, represents a major triumph of oil chemistry and food technology. In 1947 soy oil production in the US passed that of butter, and in 1953 it passed lard to become the number one food oil or fat made in America (Fig. ??.??). Per capita consumption of soy oil in the US more than doubled between 1960 and 1980. And while a log graph shows that the rate of increase in soy oil production slowed somewhat in the period from 1942-1982 (Fig. ?.??), the absolute amounts produced increased exponentially and strikingly (Fig. ?.??).


Nevertheless, although soy oil had been transformed from a relatively minor and unknown oil to the undisputed leader, it was still largely unrecognized by consumers. In 1948 ADM noted aptly: "Many Americans will be surprised to learn that they seldom go through a day without eating soy oil in some form." Soon a new generation of quality soy oil products began to be introduced. In 1955, for example, a number of new, high-quality soy oil margarines began to appear on the market, and in 1959 lightly hydrogenated, winterized soy oil was introduced. Consequently, soy oil production continued its rapid rise, more than doubling during the decade (Fig. ??.??), and increasing by an average of 8.8% a year during the 1950s and 6.3% a year in the 1960s (Kromer 1970).

In 1950, in recognition of the new importance of soy oil in the US economy, soy oil futures started to be traded on the Chicago Board of Trade. Soybean meal futures were added in 1951. Soybean futures had been traded since 1947. Now the "price discovery" of the futures market joined the "price determination" of supply and demand in setting oil, meal, and ultimately soybean prices. From this period most of the major processors began to minimize their risks by hedging their soybean buying and crushing transactions.

The US first became a net exporter of soy oil in 1938 (Fig. ??.??). Large wartime exports of soy oil were made in 1943-45 to the United Kingdom and the USSR under the Lend-Lease Program. Soy oil exports to Europe almost tripled under the Marshall Plan (April 1948-June 1952, during which the US spent an unprecedented $13,600 million to underwrite the recovery of Western Europe), a 1949 Act of Congress that authorized foreign food donation programs, and other postwar relief and rehabilitation programs. From 1948 to 1952 soy oil exports averaged 338 million pounds a year, or about 16% of annual US soy oil production (Burtis 1950; Nakamura and Hieronymous 1965; Houck et al. 1972). Perhaps more important, the Marshall Plan played a key early role in the internationalization of food distribution, which led to profound changes in the soybean crushing industry. For example, a plant not located so that it had access to world markets (as near a major river or port) might quickly become obsolete and almost worthless.

International food trade expanded dramatically after 1954 with the passage of the Agricultural Trade and Development Act (later more widely known as Public Law 480 or PL 480), which was subsequently amended through the Food for Peace Act of 1966. From 1955 to 1966, the program served largely as a vehicle for the disposal of surplus US commodities, whereas from 1966 on, a new emphasis was placed on self-help efforts by recipient countries. Under Title I of PL 480, authorized foreign governments could buy US agricultural products with their own currency on concessional terms (def??). By reselling the US products in its own country, the recipient government could generate its own funds. Long-term, low-interest paybacks were guaranteed by the US Commodity Credit Corporation. Under Title II, after 1960, the US government financed donations of surplus food to friendly, often poorer, food-deficit countries. The food was distributed by American private voluntary organizations (PVOs such as CARE or Catholic Relief Services), international organizations (such as UNICEF or the World Food Program), or foreign governments, in each case under the administration of the US Agency for International Development. Because demand for soybean meal was stronger than demand for soy oil (owing to the postwar rise in meat consumption, as discussed in the next section), soy oil was designated in 1954 as a surplus commodity under the PL 480 program. By exporting this surplus, the US government effectively supported US soy oil prices; however, in some years when US soy oil prices were deemed too high, exports were apparently decreased to keep prices from going higher (Merrill Lynch 1969). The dramatic increase of soy oil exports from 1955 on is shown in Figure ?.??. During the 1950s, soy oil exports grew at an average annual rate of 7.8%, slowing to 6.9% during the 1960s.
From 1955 to 1969 more than half of the soy oil exports each year were made under PL 480; the average was 62% a year and the maximum was 86%, in 1967. Thus an average of 38% was sold commercially and paid for in US dollars.

During the first Nixon administration (1969-1972), the US started a major push to increase agricultural exports and have them paid for on commercial terms or in US dollars. To promote exports, the US dollar was devalued first in 1971, then again in 1973. After 1971 PL 480 loans had to be repaid in US dollars. Nixon's Secretary of Agriculture Earl Butz (1971-76) actively carried out these programs. Thus, after 1970, as exports rose, the percentage disposed of under PL 480 steadily fell, dropping to only 9% in 1980. Up until 1969, only about 25% of the PL 480 oil was donated under Title II; the rest was sold on concessional terms. However, after 1970, as PL 480 declined, an increasingly larger percentage of it was donated.

Major customers for PL 480 soy oil changed over the years. In 1959-60 the largest Title I sales were to Spain (148,000 tonnes), Yugoslavia (21,545), United Arab Republic (21,182), Turkey (20,729), Pakistan (12,972), and Colombia (12,791). In 1966-67 the largest Title I sales were to India (103,873 tonnes), Pakistan (66,678), Yugoslavia (52,163), Tunisia (44,452), and Egypt (26,308 tonnes). As of late 1979 the US had exported about 4.7 million tonnes of soy oil in 24 years under Title I of PL 480 programs. Practically all this exported oil was (and is) crude degummed oil, which was refined in the importing country.

Another major development that began in the late 1940s and increased rapidly during the 1950s and thereafter was the use of food additives in soy oil products. These include metal scavengers, antioxidants, antifoaming agents, emulsifiers, coloring and flavoring agents, and crystallization inhibitors. A number of these (such as antioxidants) were developed during World War II as part of food preservation programs. Metal scavengers such as citric acid (from 1948) or isopropyl citrate, added at the deodorizing stage, preferably while cooling the oil, help overcome the prooxidant effect of remaining traces of metals. To increase the oil's oxidative stability and, in some cases, to help reduce hydrogenation, antioxidants such as BHA, BHT, and PG (propyl gallate) were introduced during the 1950s and 1960s. TBHQ (tertiary butylhydroquinone) was legalized in 1972; the main manufacturer was Eastman Chemical Products, a subsidiary of Eastman Kodak. Added singly or, more likely, in combinations after the oil was deodorized, and in amounts of up to 200 parts per million (0.02%), these partly took over the function of tocopherols (vitamin E), half or more of which are removed during processing. An antifoaming agent, usually methyl silicone, was often added to frying fats. Mono- and di-glyceride emulsifiers were added during partial hydrogenation of some cooking and salad oils, and to soft margarines. Coloring and flavoring agents and crystallization inhibitors were used mostly in margarines. It was not until the late 1960s and early 1970s that consumer concern over the safety of these additives arose.

While important advances with soy oil were made during the 1950s, even more dramatic changes took place with soybean meal. These changed the very price structure of the soybean crushing industry and transformed diets throughout the Western world, eventually affecting diets almost everywhere.

Rise of Soybean Meal and the Meat-Centered Diet. From the early 1930s until about 1946, the tremendous growth of the soybean crop in America was due largely to the demand for soy oil. The oil was in greater demand than the meal during that period, as evidenced by the steady rise in the ratio of the price of oil to the price of meal (Fig. ?.??). However, after World War II, the oil/meal price ratio began a long decline, dropping from 6.5 in 1946 to about 2.5 after 1980, as the demand for soybean meal (and protein prices) grew faster than the demand for oil (and oil prices). Whereas immediately after the war the oil in a bushel of soybeans accounted for about 52% of the total value and the meal only 48%, by 1981 the oil was worth only 31% of the total and the meal 69%. Thus, since World War II, the worldwide demand for soybean meal has been the primary force spurring America's soybean boom. Traditionally called an "oilseed," the soybean should perhaps be renamed a "protein seed," for it generates much more protein-rich meal per pound of oil than any other oilseed (more than twice as much as its nearest competitor, cottonseed), produces much more protein-rich meal per acre than any other oilseed (see Fig. ??.?), and also has the highest meal-to-oil price ratio. These factors emerged as the key to the soybean's success from the late 1940s on. Thus, ironically, the oilseed with the lowest oil content (and the highest protein content) became the world's leading source of oil. And even though the production of each pound of crude soy oil led to the production of 4.43 pounds of soybean meal, the demand for the oil, while rising rapidly, could never quite keep up with the demand for meal; thus the oil played only a supporting or secondary role in the total expansion.

The demand for soybean meal began to build in the late 1930s and rose steadily during the war. Many had expected that US meat production would decline during the war, with grains and soybeans being used directly for human consumption. That decline, however, never materialized. Total US meat production rose rapidly after 1938, and in 1940 it passed the former US record of 1934. In 1943 it was 39% above the 1935-39 average, with major increases in the production of hogs (up 89% over 1935-39), eggs (up 49%), and cattle (up 23%). This was made possible in part by record crops of feed grains and soybeans. More significant, annual per capita consumption of meats rose from an average of 125.6 pounds in 1935-39 to 141.2 pounds during the war years of 1940-44, up 12.4%. This high consumption was maintained even though the military and foreign shipments to allies took more than one-fourth of total US meat supplies. Yet even by 1944 the per capita consumption of all meats (or of beef) did not reach the peak level of 156 pounds attained in 1909 (Crickman 1945).

The tremendous expansion of production and consumption of both soy oil and soybean meal in the postwar period was caused by a balance of strong demand and supply. The basic cause of expanding demand was affluence, which is typically accompanied, worldwide and throughout most periods in history, by a shift from traditional grain-centered diets to diets containing more animal products (especially meats and poultry), protein, oil, and fat. In short, with rising incomes, people have generally climbed up the food chain. (Until the 1960s there was little scientific or consumer awareness of the health dangers connected with the "affluent diet.") On the supply side, the rapid expansion of soybean acreage and crushing capacity, and a highly efficient production and crushing complex, kept the prices of soybean oil and meal at very low levels, which stimulated their use.

In the period after World War II, low-cost, widely available soybean meal and grain triggered the greatest explosion in livestock and poultry production that the world has ever seen, and laid the basis for the Western meat-centered diet. The burgeoning livestock numbers, in turn, led to an enormous and sustained demand for soybeans (the crop increased more than fivefold between 1950 and 1980) and soybean meal. This demand can be analyzed in terms of a number of specific factors:

1. Increased Livestock and Poultry Production. The number of animals increased to meet the demand for meat. Per capita meat consumption rose dramatically.

2. Rise of Confined Feeding Systems. Prior to World War II, most US livestock and poultry had been raised on family farms. Cattle were usually grazed on rangeland or pasture and fed hay, silage, and some corn during the winter. Poultry flocks were small, and some ate barnyard scraps. Purchased protein concentrates (such as tankage or oilseed meals) and grains constituted a relatively small percentage of the total feed units. The main reasons for the rise of huge confined feeding systems or feedlots were: (1) Soybean meal and surplus feed grains were very low in cost. Initially feedlots were designed as a way of transforming surplus feed grains from the 1950s into profitable meat, poultry, and dairy products; (2) The development of chemical fertilizers eliminated the need for animal manures as farm fertilizers, so the animals no longer needed to remain on the farm; (3) High labor costs encouraged centralization and automation, as did an increasingly scientific approach to the operation of "animal factories."

3. Growth of the Mixed Feed Industry. Farmers had used mixed feeds (containing feed grains, animal by-products, oilseed meals, etc.) in small quantities since the late 1800s. Their use accelerated from the late 1930s, when soybean meal first became an important ingredient and concepts of animal nutrition and scientific feed formulation and feeding gained popularity. Development of the concepts of essential amino acids and protein complementarity during the 1940s and 1950s (see Chapter 7) placed new emphasis on the importance of high-quality proteins in mixed feeds. The rise of low-cost feeds and large, centralized feeding units after the 1950s made it easier for feed compounders to contact and educate livestock and poultry feeders. The key message was that buying grain-based feeds, fortified and balanced with (soy) protein, led to more efficient and profitable livestock and poultry feeding, despite the slightly higher initial cost of feed. There was a trend away from selling feed supplements to selling pelleted, high-performance feeds, formulated with precision for a wide range of animals and feeding purposes.

4. Increased Use of Soybean Meal in Feeds. Scientific feed formulation designed to maximize animal growth at the least cost usually favored the use of soybean meal as a protein source. It increasingly replaced other protein concentrates, which were more expensive and/or contained protein of lower quantity and quality. When overfishing caused the collapse of the Peruvian anchoveta industry after 1970, for example, soybean meal took the place of much fish meal in livestock feeds. Since it takes about 6.6 pounds of feed protein to produce 1 pound of broiler protein, and 8.3 and 14.3 pounds of feed protein to produce 1 pound of pork and beef protein respectively (as shown in Figure ??.??), clearly small increases in per capita meat and poultry consumption by Americans lead to large increases in demand for feed proteins.

5. Rapid Expansion of the Broiler Industry. Starting in the US in the late 1950s and later around the world, the broiler industry experienced rapid growth for various reasons: (1) Chickens are twice as efficient as hogs in converting feed to meat and four times as efficient as cattle. As feed prices increase, this advantage becomes increasingly important; (2) It takes much less time and energy to produce a pound of chicken than a pound of pork or beef; (3) Starting in the late 1950s family chicken farms were transformed into huge, highly efficient, integrated and automated production units, many of which handled a million or more broilers a year by the late 1960s. These were usually operated on contract with the firm (often a multinational corporation) that supplied the feed and hybrid chickens and purchased the finished broilers or eggs; (4) Because of these increased efficiencies of production, the retail price of chicken actually fell (in both actual and inflation-adjusted dollars) between 1950 and 1974 (the last year for which we have data). In 1950 broilers sold for $1.312 per kg ($0.65/lb) and choice grade beef sold for $1.645 per kg ($0.82/lb). In 1974 the actual price for broilers had decreased to $1.248 per kg, or $0.586 per lb in inflation-adjusted dollars, while the actual price of beef had increased to $3.06 per kg, or $1.44/lb in inflation-adjusted dollars (Milner et al. 1978); (5) Chicken meat contains much less fat than beef or pork. Uncooked chicken flesh and skin contains 5.1% fat, while light meat without the skin contains only 1.5% fat. Uncooked prime beef contains 46% fat, a T-bone steak contains 38%, and hamburger (ground beef) contains 21%. Pork (total edible portion) contains 53% fat, medium-lean bacon contains 67%, and lean cuts contain 23% fat (Watt and Merrill 1963). After the mid-1960s, as consumers became increasingly aware of the dangers of high-fat diets, these differences increasingly influenced buying patterns.
In response to all these factors, per capita chicken consumption in the US grew at a faster rate than beef consumption from 1950 on, increasing from 21.9 pounds in 1953 to 52.1 pounds in 1973. The increase in Western Europe during the same period was from 6.0 pounds to 22.4 pounds, and in Japan from 1 pound to 15 pounds. In most Third World countries, the first feeding industry to develop is poultry, and starting in the 1970s the poultry industry in Third World countries expanded dramatically, generally using imported soybean meal as a protein source.

As fast-growing, non-ruminant animals, broilers require more protein in their diet than cattle or hogs?? And unlike cattle, they cannot live solely on forages. By the 1960s, approximately 25% of a typical poultry diet was soybean meal, and this was typically the more concentrated 49% meal, rather than the 44% meal used more for feeding older hogs and cattle. Moreover, more than 70% of all poultry in the world was fed soybean meal. In 1950 dairy cattle consumed 30% of the soybean meal used in feeds, while poultry and hogs each used 25%, and other animals took 20%. By 1980 poultry were consuming a remarkable 65%, with hogs using about 30% and the remaining 5% going to cattle and other animals??

6. Reduced Handling, Shipping, and Exporting Costs. Through the use of bulk handling (rather than the bagging common in the 1940s) and of bigger barges and rail cars, handling costs dropped significantly. In the period from the early 1950s to the early 1970s, the cost of shipping soybean meal from the Gulf of Mexico to Europe declined by a factor of four (Spicola 1974). Reduced tariffs and other trade barriers, and the decline of the dollar versus most foreign currencies during the 1970s, further reduced the costs of exporting. Consequently exports of both meal and whole beans increased dramatically.

These six factors have been the principal ones leading to a rapid expansion of soybean meal utilization worldwide. In the US its production grew from about 4.5 million tonnes in 1953 to over 22 million tonnes in 1980, while exports for the same period grew from about 363,000 tonnes to 6,076,000 tonnes. Exports of soybean meal grew at remarkable rates in the two decades after the war, increasing at an average rate of 69.2% yearly throughout the 1950s and 22.6% yearly during the 1960s (Kromer 1970). Stated differently, in the early 1950s, exports of meal were less than 2.5% of total US soybean meal production, whereas by 1980 they had climbed to almost 30% of US production (Fig. ?.??).

By the 1980s nearly all of the soybean meal consumed in the US was used as a protein supplement in poultry and livestock feeds. The 1978-79 US production of soybean meal (22 million tonnes) accounted for 74.3% of all protein concentrates produced in the US. Other leading concentrates were gluten feed and meal (9% of the total) and tankage and meat scraps (7%).

Throughout the 1960s the whole new system of meat production looked spectacularly successful, with a bright future. Surplus grains were being disposed of profitably; Americans had the meat and dairy products they yearned for at reasonable prices; and the livestock, grain, and soybean producing and crushing industries were thriving. It was not until the early 1970s that basic problems with the new affluent meat-centered diet and with feeding large amounts of grains to livestock and poultry began to emerge.

1960-1982. The dramatic growth of soybean oil and meal in the US continued throughout the 1960s and 1970s, extending earlier trends. Production of US soy oil skyrocketed from 2.0 million tonnes (4,400 million pounds) in 1960 to 5.3 million tonnes (11,700 million pounds) in 1979, an increase of 166% in 20 years, or 5.3% a year compounded annually. Starting in 1968 more than one-half of all "visible" oils and fats in the US were supplied by soy oil.

Exports increased strongly too, despite the emergence of Brazil and Argentina as major competitors from the mid-1970s and the trend for foreign nations to import soybeans rather than oil and meal. Thus soybean exports increased most rapidly, followed by meal, with oil exports growing only slowly (Fig. ??.??) and remaining quite small relative to beans and meal. By 1982 roughly 75% of soy exports were in the form of beans. Because of the increasing exports of soybeans relative to soybean products, the percentage of US soybeans crushed domestically steadily dropped, from 74% in 1959 to 66% in 1969 and only 49% in 1979; other countries were crushing the soybeans they imported. Soy oil exports increased from 317,000 tonnes in 1960 (15.9% of total US production) to 1,096,000 tonnes in 1980 (21.4% of total production). The 1980 exports were worth $689 million, which, however, was only 8.4% of the total revenues from exports of soybeans, oil, and meal. Over the years, soy oil and the oil in exported soybeans became an increasingly important part of total US exports of oils and fats, accounting for 24.0% of the total in 1950, 44.7% in 1960, 62.2% in 1970, and 61.9% in 1975.

Per capita consumption of soy oil climbed dramatically during this period, from about 13 lb per person (nearly 30% of all fats consumed) in 1952 to 16 lb in 1960, up to 32 lb (about 60% of all food fats) in 1974, thus roughly doubling in only 14 years (Fig. ?.??). How about 1984??

By 1980 soybeans made up the vast majority of the oilseeds crushed in the US, accounting for 86% of the total by weight. In that year, the most widely consumed food fats and oils in the US were soy oil (4.62 million tonnes), butter (0.48 million), lard (0.43 million), edible tallow (0.41), coconut oil (0.37), corn oil (0.32), and cottonseed oil (0.32 million tonnes consumed) ( Fats and Oils Outlook and Situation 1981; card??). This shows the great lead enjoyed by soy oil. To place US soy oil in its worldwide context, note that in 1980 worldwide production of all oils and fats was 56.1 million tonnes. Of this, soy accounted for 12.36 million tonnes, or about 22% of total world oils and fats production. Of that 12.36 million tonnes, the US produced 5.32 million tonnes, or 43.0% of total world soy oil production.

As recently as the 1950s dietary oils and fats were best known nutritionally as a rich source of calories and of fat-soluble vitamins A, D, E, and K, and as a contributor of palatability, richness, and satiety value to the diet. However, starting in the early 1960s and increasingly throughout the 1970s and 1980s, oils and fats became the most controversial of the major nutrients. They attracted the growing attention of physicians, nutritionists, epidemiologists, and consumers. Increasingly they made headlines in one of the most heated debates in the history of Western nutrition and medicine. One reason for this was that high consumption of lipids, and in some cases especially saturated fats and cholesterol, was increasingly implicated as a major causative factor in America's most widespread and serious diseases. Foremost among these were coronary heart disease and cancers of the bowel and breast, but also stroke, high blood pressure, atherosclerosis, diabetes mellitus, and obesity. As the evidence accumulated, Americans were urged by prestigious committees and associations of physicians, nutritionists, other scientists, and even government agencies to reduce their total fat intake and, even more, their intake of saturated fats and cholesterol. Another closely related reason for the increased awareness of oils and fats in the diet was the strong growth of consumer interest in the relationship between diet, nutrition, and good health, which started in the mid-1960s and increased steadily thereafter. A survey done by the American Soybean Association in 1978, for example, showed that health and nutrition were the main reasons people mentioned for choosing a particular oil. Weight watchers have always tried to reduce their consumption of oils and fats, since they contain more than twice as many calories per gram (9 vs. 4) as carbohydrates and proteins. Now, however, oils and fats became the subject of a much deeper and more widespread concern. 
Yet because of the saturated-polyunsaturated debate, not all oils suffered equally. In fact soy oil, with its relatively high ratio of polyunsaturated to saturated fatty acids and its abundance of essential linoleic acid, probably emerged from the battle with an improved image. And, as we noted above, its market share and per capita consumption increased substantially.

Of the various diseases with which high fat consumption was associated, coronary heart disease, America's most widespread cause of mortality, was by far the best known. This connection and the history of the controversy surrounding it are discussed in Chapter 7. By the 1970s the strong association between fat consumption and cancer of the gastrointestinal tract and sex-related organs (which account for over half of all cases of cancer) was becoming well known. The first research linking high-fat diets with breast cancer was published by Tannenbaum in 1942. Numerous epidemiological studies done starting in the mid-1960s in countries around the world showed highly significant correlations between fat intake and cancers of the breast and colon (Fig. ?.??). Americans, for example, were found to have five times the rate of these two cancers as Japanese, who ate much less fat. Excessive amounts of bile acids, which the body produces in larger quantities than usual when fed a high-fat diet, were suspected as promoters of colon cancer. Note that all fats, not just saturated fats, were suspected in breast and colon cancers. Of the many reports on cancer and diets the most recent and one of the most prestigious was "Interim Guidelines Recommended for Reducing Cancer Risks Through Diet," published in June 1982 by the National Research Council of the National Academy of Sciences (Ref??). It noted that "The committee found the strongest evidence for a connection between cancer and the consumption of fats," both in epidemiological and laboratory studies. Its first and foremost dietary recommendation was to "Eat less foods high in saturated and unsaturated fats," reducing fats from the present 42% down to 30% of daily calories, a reduction of about 29%. Most of the anticancer diets, as well as the widely publicized Dietary Goals (US Senate 1977), called for a reduction in fat consumption of about this amount. 
Let us now look at a number of specific health concerns that arose in the period from 1960-1982 connected with oil and fat consumption.

Total Fat Consumption. The various degenerative diseases associated with high fat consumption are not found in most "primitive" or traditional cultures. They generally increase with the spread of affluence and civilization. In the US, fat consumption increased steadily, rising from 125 gm of total fat per person per day in 1909-13 to an all-time high of 159 gm in 1976, an increase of 27% (Brewster and Jacobson 1978). Even during the period 1965-1978, when the connection between high fat intakes and heart disease and cancer was becoming well known, total fat intake still increased 31%. Much of the increase starting in the 1960s was a result of increased consumption of fast foods prepared outside the home, many of which required deep-frying (fried chicken, french fries) or were high in fat (hamburgers, hot dogs, whole milk and milk products) (Spicola 1974). There were also very large increases in consumption of hydrogenated vegetable fats (margarine and shortening), accompanied by smaller decreases in butter and lard, both animal fats. In 1978 the average American was consuming 72.4 pounds of "invisible" or "nonseparated" fats (such as those in meat, milk, cheese, eggs, nuts, avocados, etc.) and 55.6 pounds of "visible," "separated," or "added" fats (such as salad and cooking oils, butter, margarine, shortenings, etc.), for a total of 128 pounds a year, or 5.6 ounces (159 grams) a day. Of the total invisible and visible fat intake, 43% came from visible added fats, 34% came from red meat, poultry, and fish, 12% came from dairy products, 3% came from eggs, and the remaining 8% came from other foods. In 1978, America had one of the highest per capita levels of fat consumption of any country in the world, exceeded only by Denmark, New Zealand, and the Netherlands. As expected, these countries also had the highest rates of coronary heart disease and cancer of the breast and colon. 
Worldwide, the average per capita consumption of edible oils and fats was 25-26 pounds a year, or less than half of the 55.6 pounds consumed in America.

Animal Fats versus Vegetable Oils. Despite the increase in total per capita consumption of oils and fats, there was a dramatic and probably favorable shift in the type of fats consumed. In the early 1950s Americans were consuming approximately equal amounts of vegetable oils and of animal fats. From the mid-1950s there was a sharp decline in the per capita consumption of animal fats (such as butter and lard) and an even sharper increase in the per capita consumption of vegetable oils, resulting in a gradual increase in the total per capita consumption of oils and fats. By 1978 vegetable oils accounted for 84% of the total and animal fats for only 16% (Fig. ??.??). The same shift occurred worldwide; in 1978 some 71% of the world's edible oils and fats came from plants and only 29% from animals, with the latter percentage steadily decreasing. There were at least three basic reasons for this shift: (1) hydrogenation, which allowed vegetable oils to be used in making substitutes for butter and lard (i.e. margarine and shortening); (2) the growing concern, especially after 1960, with the health dangers associated with consumption of saturated fats and cholesterol, most of which came from animal fats; and (3) the lower price of vegetable oils.

Saturated versus Polyunsaturated. Starting in the 1960s, the terms "saturated" and "polyunsaturated," as applied to oils and fats, became household words, as typical consumers became amateur oil chemists. (Dupe??) The fatty acids comprising oils and fats can be divided roughly into three types: saturated, which are solid at room temperature; monounsaturated, which are soft; and polyunsaturated, which are liquid. In 1952 Kinsell and co-workers (Ref??) first reported that increased consumption of polyunsaturated fatty acids decreased serum cholesterol levels. Many subsequent studies supported this finding, while showing that increased consumption of saturated fats raised serum cholesterol levels. The results were widely publicized, especially by companies selling margarine and vegetable oils, and by the 1960s consumers began to look for oils and fats with a high P/S ratio, the ratio of polyunsaturated to saturated fatty acids. The two oils with the highest (most favorable) P/S ratios are safflower and sunflower oils, each with a P/S ratio of about 10. Unhydrogenated soy oil (which is not widely sold commercially) ranks next, together with corn oil, with a ratio of 4-5. Lightly hydrogenated, winterized soy oil (by far the most widely used soy oil) has a ratio of 2.5-3.0. Soft tub margarine is 1.9-3.0, and regular stick margarine or household shortening is about 1.0. By comparison, lard is less than 0.5, and butter, beef fat, and egg yolk are less than 0.1.
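The P/S ratio the paragraph above describes is simple arithmetic: grams of polyunsaturated fatty acids divided by grams of saturated fatty acids. A minimal sketch follows; the per-100-gram fatty-acid figures are illustrative assumptions chosen to fall near the ratios quoted in the text, not measured data.

```python
# Sketch of the P/S (polyunsaturated-to-saturated) ratio computation.
# The fatty-acid gram figures are illustrative assumptions, picked to be
# roughly consistent with the ratios quoted in the text; not measured data.
def ps_ratio(poly_g, sat_g):
    """Ratio of polyunsaturated to saturated fatty acids."""
    return poly_g / sat_g

profiles = {  # (polyunsaturated g, saturated g) per 100 g of fat
    "safflower oil":          (75.0, 7.5),   # high P/S, about 10
    "unhydrogenated soy oil": (58.0, 14.5),  # about 4
    "stick margarine":        (18.0, 18.0),  # about 1
    "lard":                   (11.0, 39.0),  # under 0.5
}

for name, (poly, sat) in profiles.items():
    print(f"{name}: P/S = {ps_ratio(poly, sat):.1f}")
```

With these assumed inputs the printed ratios line up with the rankings in the text: safflower highest, then unhydrogenated soy oil, then margarine, with lard well below 0.5.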

