The development of commercial disease control
Plant Pathology (2006), Wiley, Volume 55, Issue 5. Language: English
DOI: 10.1111/j.1365-3059.2006.01440.x
ISSN 1365-3059
Author: P. E. Russell. Topic(s): Wheat and Barley Genetics and Pathology
P. E. Russell (e-mail: Phil.E.Russell@btinternet.com)
Plant Pathology, Volume 55, Issue 5, p. 585–594. First published: 17 July 2006. https://doi.org/10.1111/j.1365-3059.2006.01440.x. Citations: 29.

Introduction

As part of my introduction to the theme of the 2005 presidential meeting, 'Plant Pathology with a Purpose', I stated that 'in my opinion, plant pathology exists with an aim to secure food and other resources for the benefit of us all'. While this statement has obvious commercial overtones, and while I realize that some people are engaged in research studies that can be regarded as plant pathological but do not obviously satisfy my criterion, I stand by my claim. No doubt my views have been strongly influenced by my career in the crop protection industry, during which I experienced first-hand all the elements of fungicide research, development, marketing and market support. My work was always directed at disease control, either through the discovery and development of new fungicides or through the management of resistance of plant pathogens to fungicides. In later years these two elements merged, as resistance management became an integral part of the registration process for new molecules in Europe.
However, my work was not without set boundaries. By definition, the crop protection industry is a commercial business that is ultimately responsible to its shareholders. The products the research-based companies sell must generate enough profit to maintain the expensive research and development (R&D) programmes they run. Disease-control markets must be very well defined, and the research objectives of the company well understood, in order that the industry retains its ability to fund such research. During my time in R&D for various companies I became acutely aware that many pathologists outside the industry did not really know or understand how it worked, the limitations it worked under and the financial constraints it faced. Here, I intend to illustrate how the modern crop protection industry has developed from its very humble beginnings to its current state.

The beginning of disease control

It has been suggested that organized agriculture began with the cultivation of barley (Hordeum vulgare) and the two wheats, Triticum dicoccum and T. monococcum, around 5000 BC in the Middle East, moving to Europe some 500 years later and to England around 3500 BC (Behrens, 1957). It is quite reasonable to expect that these and other crops, such as orchard fruits, together with the general flora, would have been infected by various fungi, bacteria and viruses, but of course at that time the true cause of plant disease was not known, and would not be known for several millennia. Damage to plants by natural disasters was obvious, as was damage from insects, many of which could be distinguished from one another and associated with particular damage. It would have been realized that the effects of plant disease were associated with natural phenomena; the first attempts at disease control were really attempts to control weather conditions by various rituals, including incantations to a particular deity at various times of the year.
This situation continued for many centuries, although some progress was made with the naming of various diseases and disorders. The majority of these names referred to 'blast', 'blight' and 'mildew'; 'blast' and 'blight' seemingly indicating a rapid destruction of the plant, and 'mildew' a slower, more visible growth on the plant surface. It is interesting to note that, even today, the general public refer to 'blight' and 'mildew' as general terms for any disease they come across in the garden or allotment. An excellent history of the early development of plant pathology in various cultures is given by Orlob (1973). As agriculture developed, attempts were made to understand a little more about the problems afflicting plants, but the only experience people had was with diseases and disorders that affected themselves. In India in about AD 500 the situation arose in which plants were thought to be suffering from human ailments. Thus, trees and other plants were thought to suffer from 'wind', 'bile', 'phlegm', 'jaundice' and 'indigestion'. Diagnosis was a skill 'to be made by a person of superior intelligence, who having made the diagnosis, should administer medical treatments with strenuous efforts'. The 'medical treatments' applied included: '(for cucumbers) fumigate with the fumes of the bones of a cow and dog and the ordure of a cat'; 'A wise man … cure diseases of trees caused by bile by means of substances that are cool and sweet'; and possibly a cure that is up to date even today: 'Having dug out the earth near the roots of the trees affected with wind disorder, a wise man should replace it with fresh dry earth in order to cure the disease'. Table 1 illustrates some more of these early remedies for trees.

Table 1. Tree diseases and control, c. AD 500

Ailment      Treatment
Phlegm       Nurture with tepid water
Indigestion  Dress roots with powder from the leaves of Flacourtia sapida and fumigate with ghee and honey
Wind         Apply tepid liquids; broth of flesh and fat; dust with ashes of cow dung

The invention of the microscope in the 17th century led to the discovery of microscopic fungi and bacteria in and on diseased plants, but rather than being recognized as the cause of disease, they were thought to be its result, arising by spontaneous generation. It was not until 1807 that Prevost, working with bunt of wheat (caused by Tilletia caries), demonstrated that the disease was caused by the fungus and that it could to some extent be controlled by copper sulphate (Prevost, 1807). His observation that copper sulphate would kill the fungal structures was highly significant, but seems not to have been accepted. Nor was it the first report of the effect of this chemical on bunt: Schulthess (1761) had noticed a disease-controlling effect, but without being able to explain why. Even when the Irish potato famines struck in 1845–49, it was believed that the associated so-called fungus (then referred to as Botrytis infestans) was the result of the disease, not its cause. Attempts at disease control were made by applying a mixture of lime, salt and copper sulphate to the soil. As might be expected, such attempts failed, and it took another 10 years before the cause of late blight was accepted as being infection by the fungus. During this period Botrytis infestans became Phytophthora infestans; the latter has since been recognized as belonging to the Oomycota rather than the true fungi. It seems a great shame that no one applied the mixture to the foliage during this time. Other materials had been used as fungicides for many years, mostly targeted at controlling smuts and bunts of cereals by seed treatment.
The use of these early treatments, some based on wine, would most likely have come about by simple association of accidental treatment with an effect. Later attempts, using arsenic and copper sulphate as tried by Prevost, may simply have arisen as experiments to improve on these early treatments. The targeted use of foliar applications of what can be regarded as true fungicides probably began with the use of elemental sulphur in the early 19th century to control powdery mildews and other diseases of fruit and grapevines. In the mid-19th century, lime sulphur, prepared by boiling lime and sulphur together, was used to control vine powdery mildew caused by Uncinula necator (Kenrick, 1833), and some time before 1862 the concept of adding lime was extended to copper sulphate as a means of reducing the phytotoxic effects of the latter on cereal seed. It is interesting that, even today, the mode of action of sulphur in controlling plant diseases is still debated (reviewed by Williams & Cooper, 2004). For the next 100 years, several other concoctions were developed as fungicides, many of which remain in use today, but it was not until the 1930s that the era of 'synthetic fungicides' began to expand rapidly. Typical fungicides used up to about 1940 are shown in Table 2.

Table 2. Fungicides in use up to 1940 (from Russell, 2005)

Year     Fungicide                  Primary use
BC       Natural products           Cankers, mildews
c. 60    Wine                       Cereal seed treatment
1637     Brine                      Cereal seed treatment
1755     Arsenic                    Cereal seed treatment
c. 1760  Copper sulphate            Cereal seed treatment
1824     Sulphur (dust)             Powdery mildews and other foliar pathogens
1833     Lime sulphur               Broad spectrum, fruit, vines
1885     Bordeaux mixture           Broad spectrum
1891     Mercuric chloride          Turf fungicide
1900     CuOCl2                     Broad spectrum, especially potato late blight (Phytophthora infestans)
1914     Phenylmercury acetate      Cereal seed treatment
1929?    Mercurous chloride         Soil-applied fungicide, especially for clubroot (Plasmodiophora brassicae)
1932     Cu2O                       Seed and foliar broad spectrum
1934     Dithiocarbamates patented  Broad spectrum protectants
1940     Chloranil, dichlone        Broad spectrum, generally seed treatments

It is tempting to refer to the period of increasing fungicide use up to the 1930s as the DIY era: various recipes were available for growers to prepare their own fungicides using chemicals obtained from hardware stores and gardening suppliers (Table 3). Some ready-made preparations were also available at the beginning of the 20th century for those not wishing to mix their own (Fig. 1).

Table 3. Early 20th-century recipes for preparation of fungicides and their uses (from Sanders, 1910)

Bordeaux mixture
  Recipe: copper sulphate 2 lb; fresh burned lime 2 lb; water 10 gallons
  Use: potato late blight, apple and pear scab, cucumber and melon mildew, peach leaf curl, apple mildew, cherry leaf scorch, tomato leaf rust, etc.

Woburn Bordeaux emulsion
  Recipe: copper sulphate 10 oz; lime water 8·5 gallons; water to 10 gallons; paraffin (solar distillate, non-flammable)
  Use: as above; also available ready-made

Ammoniacal copper carbonate solution
  Recipe: carbonate of copper 1 oz; carbonate of ammonia 5 oz; soft water 16 gallons
  Use: tomatoes under glass; recommended to 'wipe' tomatoes before marketing as the preparation is poisonous

Potassium sulphide
  Recipe: boil 0·5 oz in 1 gallon of water and add the whites of two eggs to help the preparation stick to leaves
  Use: rose mildew plus other diseases

Violet fungicide
  Recipe: copper sulphate 3 lb 4 oz; copper carbonate 5 lb 8 oz; water 22 gallons; add 1 oz permanganate of potash
  Use: various fungal diseases of violets, pansy and viola

Cupram, or copper carbonate
  Recipe: carbonate of copper 1·25 oz; strong ammonia 16 liquid oz; water to 10 gallons
  Use: roses, peaches and nectarines, for shot-hole and peach leaf curl

Lime sulphur spray
  Recipe: flowers of sulphur 1 lb; quicklime 15 lb; water 50 gallons; mix the ingredients in a wooden barrel with 6 gallons of water, when the mix will boil of its own accord; stir and add the rest of the water when boiling stops
  Use: apple and pear scab, leaf spot, mildew

Iron sulphate
  Recipe: iron sulphate 25 lb; sulphuric acid 1 pint; water 50 gallons; prepare in a wooden vessel; add the acid to the iron sulphate first, then add the water very cautiously
  Use: disinfection of tomato houses

Figure 1. A range of ready-made preparations available at the start of the 20th century.

Examination of Tables 2 and 3, together with Fig. 1, illustrates an interesting point: all the diseases treated were ones whose effects could be clearly seen and appreciated. Seed treatments were targeted at the smuts and bunts, diseases whose effects on grain are clearly visible. The other diseases targeted were ones affecting flowers, orchard and salad crops.
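To put the Table 3 quantities in modern terms, the imperial measures can be converted to metric. The sketch below does this for the Bordeaux mixture recipe using standard conversion factors; it assumes imperial (UK) gallons, and the roughly 2% w/v strength is derived here rather than stated in the source.

```python
# Convert the 1910 Bordeaux mixture recipe of Table 3 to metric:
# copper sulphate 2 lb + fresh burned lime 2 lb in 10 imperial gallons.
LB_TO_G = 453.592      # grams per pound
IMP_GAL_TO_L = 4.546   # litres per imperial gallon

copper_sulphate_g = 2 * LB_TO_G    # ~907 g
lime_g = 2 * LB_TO_G               # ~907 g
water_l = 10 * IMP_GAL_TO_L        # ~45.5 L

# % w/v = grams of solute per 100 mL of liquid
pct_w_v = copper_sulphate_g / (water_l * 10)
print(f"~{pct_w_v:.1f}% w/v copper sulphate")
```

On these assumptions the grower was spraying roughly a 2% copper sulphate preparation, considerably stronger than many later Bordeaux formulations.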
The debilitating effect of rose mildew was obvious, as it detracted from the beauty of the rose; apple scab and mildew had obvious effects on the apple tree and fruit; and the effects of disease on salad crops could lead to dramatic losses. It is also relevant that application technology had not developed to include large field-scale applications, because the need was not appreciated.

The expansion of the crop protection industry and introduction of regulation

The chemical industry made great advances and was very active producing munitions during the war years, but with the cessation of hostilities it arguably had to find new markets for its expertise. Chemical crop protection was one such market. As early as 1930, various laboratory techniques to aid the discovery of new fungicides were described (McCallan, 1930). The dithiocarbamates, patented in 1934 by Tisdale & Williams working for DuPont, were possibly the first chemicals to come from this systematic search procedure (Tisdale & Williams, 1934). They were originally produced as accelerators in the rubber vulcanization process, but developed into one of the most important classes of broad-spectrum protectant fungicides. They did not reach the market until 1942, when thiram was introduced, but this was soon followed by zineb (zinc complex) and nabam (sodium complex) in 1943, and maneb (manganese complex) in 1955. Possibly the best-known material today, mancozeb, based on manganese and zinc, was introduced in 1961. The period between 1945 and 1970 was a prolific time for the crop protection industry. Major classes of chemicals were introduced during this time, including the phthalimides in 1952 (folpet, captan), the guanidines (dodine, 1957), the methyl benzimidazole carbamates (MBC generators; thiabendazole, 1964), the SBI morpholines (dodemorph, 1965) and the 2-aminopyrimidines (ethirimol, dimethirimol, 1969).
However, the emphasis was still on controlling diseases of fruits, vegetables and ornamentals. For a more complete account of the chemistry introduced at this time, see Russell (2005). There was obviously interest in diseases of other crops, but the emphasis was on epidemiology rather than control. For cereals, the general impression was that diseases were not terribly important, excluding, of course, the smuts, bunts and diseases such as take-all (Gaeumannomyces graminis) that caused such visible effects. Application technology was also in its infancy for large-scale field applications. Some of the early views on cereal diseases presented by the plant disease surveys conducted by MAFF (Ministry of Agriculture, Fisheries and Food) in the UK are particularly interesting (Table 4).

Table 4. Some diseases of cereals recorded by the UK Ministry of Agriculture and Fisheries, 1933–68

Black stem rust: Most common in south-west Wales, where barberry is common; moderate attacks common elsewhere; epidemic in 1940; 1957–62 widespread, but infrequent; by 1968 of no consequence

Yellow rust: Can appear in January, more usually in May; 1933–42 widespread, but generally slight; cvs Deprez 80, Wilma and Wilhelmina appeared particularly susceptible; severe in 1961, and research into race identification conducted; of no importance between 1962 and 1968

Brown rust: Considered of little economic importance throughout this period; fairly common

Powdery mildew: In 1933 seen in February, more commonly in May–August and more so in the east; usually appeared too late to do damage; cv. Wilma highly susceptible; bad attacks linked to excessive nitrogenous manuring; in 1957 considered important on barley and oats, less so on wheat; effects on yield examined; by 1968 many races identified and 'causes more loss than any other disease'

Take-all: Most frequent where cereal follows cereal; sporadic but bad in 1935 and 1937; research ongoing into the effect of soil conditions on the disease; first record for Kent in 1960 and associated with severe crop loss; by 1968 the severe effect on yield confirmed and the phenomenon of take-all decline being studied

Fusarium foot rot and ear blight: Variable from year to year, but presence does not indicate active parasitism; other Fusarium spp. sometimes found

Eyespot: First recognized in England in 1935, seemingly most common in the east, where up to 85% of fields infected; associated with lodging; needs to be distinguished from sharp eyespot, which was believed to be of fungal origin (proved to be caused by Rhizoctonia solani in 1943); by 1968, widespread use of cv. Cappelle Desprez had reduced its importance

Leaf blotch (spot) and glume blotch: In 1933 common diseases, mostly in the south, but damage rare; in the late 1950s a few crops affected in the south, and by 1968 effects seen in the south-west; glume blotch the dominant disease; severe effects on grain quality recorded

Ergot: Generally found on cereals in small quantities every year; reported on wheat in 1933, 1937 and 1942, but detected in seed samples every year from 1926 to 1943 by the Official Seed Testing Station; by 1968 of little consequence

It was clear that cereal disease patterns were changing and, particularly during the late 1960s and early 1970s, research began to examine the effects of disease control using some of the newly introduced fungicides. The main target was powdery mildew of barley, with experiments being conducted with foliar sprays of ethirimol, tridemorph, benomyl and maneb throughout Europe (Bruin, 1972; Chery, 1972; Mundy & Page, 1973).
As an illustration of the cereal fungicide market at that time, tridemorph and ethirimol were apparently introduced as foliar sprays for barley at the end of the 1960s (Leadbeater et al., 2000), while Thomas & Turner (1998) reported that recommendations for foliar fungicide use on wheat did not appear until 1975. The growing interest in disease control and the steady flow of new chemicals after the 1940s created safety concerns. In 1950, a working party was established by MAFF in the UK to consider operator safety. This resulted in the Agriculture (Poisonous Substances) Act of 1952, with the first regulations being introduced in 1953. These were soon followed by regulations to protect consumers of treated produce and, in 1955, to protect the environment. The reports recommended that new toxic chemicals should be notified to the newly formed Advisory Committee on Poisonous Substances in Agriculture before being sold. The voluntary Notification of Pesticides Scheme was agreed, with product recommendations being reached by discussion between the government departments and the manufacturers, represented by the Association of British Chemical Manufacturers, the Association of British Insecticide Manufacturers and the British Pest Control Association. In 1964 the requirements were revised and the Pesticides Safety Precautions Scheme introduced, which set out the various types of data to be provided to the advisory committee to enable it to reach a decision on product safety. Interestingly, efficacy data relevant to the proposed target were not required, but in the UK a new compound could be submitted to the Agricultural Chemicals Approval Scheme for approval; success gave the user permission to display a 'recommended' emblem on the product label as an indication that the material had been tested and shown to work.
The rise in regulatory requirements was happening worldwide, although arguably the key drivers were European countries and the USA, with many developing countries either adopting the procedures used in these countries or accepting their regulatory decisions as an indication of the suitability of products for their own use. Within Europe, however, there was much discrepancy between individual countries in regulatory data requirements and in how data were interpreted. Eventually it was agreed that the registration system in the EU needed to be more consistent between countries, and discussions were held that led to the production in 1991, and eventual adoption several years later, of Directive 91/414/EEC, which set out a common set of data requirements to establish the safety and efficacy of new molecules and products intended for use in crop protection. Directive 91/414/EEC has since been amended (Council Directive 94/37/EC; Anonymous, 1994) and has been a great help to the industry, although its interpretation and implementation are still being debated. Indeed, even though the directive set out a common set of data requirements, the interpretation of those data is still a cause of debate between countries; common decision-making has thus not yet been achieved.

The need for disease control

At the global level, approximately 35% of crop yields are lost to preharvest attack. Postharvest losses can add another 10–20% (Table 5; Oerke et al., 1994). The losses are approximately equally divided between insects, weeds and diseases, and this is happening despite the use of modern crop protection agents. Without protection measures, Europe could expect to lose some 20% of its cereal production; with protection measures, the level is reduced to 6% (Table 6).

Table 5. Value and extent of world crop losses caused by insect, weed and pathogen attack (data from Oerke et al., 1994)

Region               Value of loss (US$ billion)  Percentage of crop lost
Africa               12·8                         49
North America        22·9                         31
Latin America        21·7                         42
Asia                 145·2                        47
Europe               16·8                         28
Former Soviet Union  22·1                         41
Oceania              1·9                          37

Table 6. Estimated annual losses in production (%) caused by diseases in wheat and barley, 1988–90 (data from Oerke et al., 1994)

                         Wheat loss (%)       Barley loss (%)
Country                  Actual  Potential    Actual  Potential
Austria                  8       18           7       18
Benelux                  7       22           5       21
Denmark                  7       20           5       21
Finland, Norway, Sweden  7       17           7       17
France                   6       21           7       20
Germany                  7       20           5       20
Ireland                  6       21           7       19
Switzerland              7       20           5       18
United Kingdom           7       22           7       18
Overall                  6       21           6       20

Losses in some other crops can be far more dramatic, as illustrated by data for the USA (Table 7). These data are presented in a slightly different format, in that they show the percentage of yield attributable to the use of fungicides. Table 7 shows only a few of the crops covered by Gianessi & Reigner (2005), but illustrates that while the potential small-grain cereal losses in the USA are of the same scale as in Europe, several crops in the USA are totally dependent upon effective fungicide use.

Table 7. Yield benefits attributable to fungicide use in the USA (data from Gianessi & Reigner, 2005)

Crop       Yield attributable to fungicide use (%)
Wheat      19
Barley     16
Apples     86
Soybean    19
Rice       23
Grapes     95
Hazelnuts  76
Papaya     100
Pears      99
Cherries   76

The impact that fungicides can have on crop production in apples in the USA is well illustrated in Fig. 2. It is quite relevant to note that the dithiocarbamate fungicides zineb and nabam were introduced in 1943 (maneb followed in 1955), captan (a phthalimide) in 1952 and dodine (a guanidine) in 1957. The dramatic increases in yields are thus most likely related to the excellent control of apple scab (Venturia inaequalis) given by these chemicals.
Figure 2. Increases in apple yields in the USA (data from Gianessi & Reigner, 2005).

Gianessi & Reigner (2005) expanded their research to look at the cost–benefit ratio of fungicide use on a wide range of crops. Taken over all the crops considered for the USA, which admittedly include many minor crops that are totally dependent upon an effective fungicide programme to maintain yields, they found that every $1 invested by the farmer returned a benefit of $14·6.

The reality of modern commercial disease control R&D

The market value for crop protection agents in 2004 was US$30·7 billion, of which the fungicide sector accounted for $7·3 billion (CropLife International, 2005). The search for new molecules for use as crop protection agents follows the same basic process whether the search is for a fungicide, a herbicide or an insecticide. The processes used by individual companies are also very similar (Table 8); where they differ is in the fine detail. Some companies may base early research on controlled-environment studies, while others may use glasshouses. Various aids to a 'rapid throughput' of chemicals may be used, including automated in vitro and in vivo screening procedures evaluating the activity of a molecule directly on the pathogen or on disease development on the plant, or the use of targeted in vitro biochemical screens where activity against a preselected biochemical process is sought.

Table 8. Summary of the processes involved in the discovery and development of new crop protection agents

Chemical synthesis
  Key activities: a few mg of chemical; much use made of automated synthesis procedures
  Supporting activities: chemical properties predicted; if a risk is identified, the molecule is not synthesized

Initial biological screening
  Key activities: rapid throughput; in vitro, in vivo and biochemical techniques; laboratory, controlled-environment or glasshouse screens for activity; screening rates of up to a million compounds per year possible, but rarely used; typical time for a compound in this stage 2–3 weeks; success rate very low, maybe 0·01%; output is molecules with some activity potential
  Supporting activities: first patents; information fed into structure–activity computer prediction programs to guide future synthesis

Extended research
  Key activities: mostly in vivo tests, including comparisons with commercial products in the glasshouse and small-scale field trials conducted around the world using first formulations; promising molecules receive extensive investigation of biological properties; depending on the tests performed, up to 1 kg of active ingredient required; typical success rate around 5%; time in stage variable, 3 months to 2 years
  Supporting activities: preliminary toxicology and environmental profile; resistance risk assessment could begin

Development
  Key activities: large-scale international field trials using formulated materials; many kg of active ingredient required; comparisons with current market-leader products made, from which the key attributes of the new molecule are identified; co-formulations and tank mixes investigated, and samples may later be given to advisors and collaborators for independent evaluation; time in stage may be 6–8 years for successful molecules
  Supporting activities: data gathering for the full regulatory package: toxicology and environmental studies; resistance risk assessments; marketing evaluation begins and key countries identified; patents extended wherever possible

Sales
  Key activities: typically 8–10 years after initial synthesis for key targets
  Supporting activities: development and market support processes continue

When I first joined industry, the fungicides market was just developing. The general philosophy seemed to be that the chemist would make a new molecule, the biologist would find out what it could do and, if activity was found, the material would be handed to the marketing department to find a use for it and sell it. Looking back, it is easy to see how misdirected this was, and as the costs of R&D began to escalate, largely because of the increasing costs associated with producing data for regulatory packages, it was clear that the whole industry had to become far more targeted in its approach to the search for new products. The market thus became the key driver: market potential had to be estimated and market needs determined. These market data then had to be set against the costs associated with discovering and bringing to market a new product (Table 9). Key markets were established and key pathogens identified for which a new fungicide could generate enough profit to support the R&D process. Minor markets were not ignored, but they were not considered until success in a major market was certain.

Table 9. Discovery and development costs of a new crop protection product (US$ millions) in 2000 compared with 1995 [Source: European Crop Protection Association (ECPA), 2003]

Category                              1995 (nominal)  2000 (nominal)  2000 (at 1995 prices)
Research
  Chemistry                           32              41              36·9
  Biology                             30              44              39·5
  Toxicology/environmental chemistry  10              9               7·9
  Research total                      72              94              84·3
Development
  Chemistry                           18              20              17·9
  Field trials                        18              25              22·4
  Toxicology                          18              18              16·1
  Environmental chemistry             13              16              14·3
  Development total                   67              79              70·7
Registration                          13              11              9·9
Total                                 152             184             164·9

It should be noted that many of the toxicology and environmental chemistry studies are carried out in order to satisfy registration requirements. The cost of registration quoted is thus the cost of compiling the registration dossiers and submitting them for approval.
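As a rough check on Table 9, the '2000 at 1995 prices' column can be approximated by scaling each nominal 2000 figure by the deflator implied by the totals (164·9/184 ≈ 0·896). The source does not state which price index ECPA used, so the single uniform deflator in the sketch below is an assumption; individual rows differ slightly because of rounding.

```python
# Approximate the "2000 at 1995 prices" column of Table 9 using the
# deflator implied by the column totals. A single uniform deflator is
# an assumption; the source does not name the price index used.
NOMINAL_TOTAL_2000 = 184.0  # US$ million, nominal 2000
REAL_TOTAL_2000 = 164.9     # US$ million, at 1995 prices

DEFLATOR = REAL_TOTAL_2000 / NOMINAL_TOTAL_2000  # ~0.896

def to_1995_prices(nominal_2000_cost):
    """Scale a nominal 2000 cost back to approximate 1995 prices."""
    return round(nominal_2000_cost * DEFLATOR, 1)

# Subtotals from Table 9 (nominal 2000): research 94, development 79.
print(to_1995_prices(94))  # table gives 84.3
print(to_1995_prices(79))  # table gives 70.7
```

In other words, roughly 10% of the apparent cost increase between 1995 and 2000 is inflation; the remainder reflects real growth in R&D spending.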
The cost of discovering a new fungicide and bringing it to market in 2000 was around US$184 million (approximately €200 million); the current estimate from Bayer CropScience is US$250 million. The major crop protection companies typically invest 6–10% of their profits in the search for new molecules. These figures exclude the possible costs of building a chemical production and formulation plant, and are payable before any product has been sold. The company involved would need to recoup this investment within the remaining patent life of the product (typically around 10 years will remain at the point of initial marketing, although extra patent protection