Open-access, peer-reviewed article

Recommended Dietary Allowances (RDAs), Recommended Dietary Intakes (RDIs), Recommended Nutrient Intakes (RNIs), and Population Reference Intakes (PRIs) are not “Recommended Intakes”

1997; Lippincott Williams & Wilkins; Volume: 25; Issue: 2; Language: English

DOI

10.1097/00005176-199708000-00022

ISSN

1536-4801

Authors

Peter Aggett, J. Bresson, Ferdinand Haschke, Olle Hernell, Berthold Koletzko, Harry N. Lafeber, Kim F. Michaelsen, J.L. Micheli, Anne Ormisson, J. Rey, Joana Sousa, Lawrence T. Weaver

Topic(s)

Child Nutrition and Water Access

Abstract

Recommended intakes of nutrients are hypothetical, perhaps even mythical, concepts used as yardsticks for the assessment of dietary surveys and food statistics, as guidance on appropriate dietary composition and meal provision, and for food labelling. The United States, Canada, and European countries, including the Netherlands, Germany, the United Kingdom, France, and the Nordic countries, publish their own standards (1-6), and recently the European Union's Scientific Committee for Food has produced its own report, Nutrient and Energy Intakes for the European Community (7). Each committee has created its own terminology, and the many terms and their meanings have caused considerable confusion, which is exacerbated by the use of recommendations for purposes for which they were not intended. Many paediatricians and other health professionals think that the recommendations provide a reliable basis for the specific advice that they give families on the feeding of neonates, infants, children, and adolescents. Unfortunately, many users of these statistical terms do not appear to appreciate their real meanings and uses as expressed in the full reports: instead they base their interpretation on the summary tables alone (8,9). They have therefore misunderstood the meaning of reference values, the underpinning assumptions, or the groups of individuals at which they are aimed. In addition, government agencies and industry are equally culpable in misusing reference values with their own objectives in mind, which are most often to increase the values rather than to reduce them (10). Finally, additional confusion arises from a failure to understand that, in contrast to labelling reference values, recommended intakes are not policy recommendations per se, but a set of reference standards developed by nutritional scientists that may be used to inform and to implement a sound public health policy (11). These are the points that we would like to clarify in this commentary.

EVOLUTION OF DIETARY STANDARDS

The first codified dietary recommendation appeared in the British Merchant Seaman's Act of 1835, which mandated provision of lime or lemon juice to prevent scurvy in the Royal Navy (12). Probably the first true standard was published, in response to a request from the British Privy Council, to prevent starvation and associated diseases among the unemployed population during the economic depression of 1862. Several other recommendations were proposed during the following 50 years, notably by the food committees of the British Royal Society, of the League of Nations Health Organisation, and of the Canadian Council on Nutrition. From being recommendations for programmes to relieve starvation and illness resulting from economic and wartime crises, the dietary recommendations became, between 1920 and 1940, standards to maintain and improve the health of the population, with increased emphasis on meeting the nutritional needs of infants, children, and pregnant women. Concurrently, from being observational standards, based on observed customary patterns of food consumption, they became based on scientific precepts and knowledge of human needs for essential nutrients and energy sources (11,13). A basic text that has influenced thinking is the Recommended Dietary Allowances (RDAs), prepared by the Food and Nutrition Board of the U.S. National Research Council and first published in 1941, “to serve as a guide for planning adequate nutrition for the civilian population” (14).
According to the Food and Nutrition Board, RDAs are defined as the “levels of intake of essential nutrients that, on the basis of scientific knowledge, are judged... to be adequate to meet the known nutrient needs of practically all healthy persons” (2). Similarly, in 1983, the Canadian Ministry of Supply and Services defined their term Recommended Daily Nutrient Intakes as “the level of dietary intake thought to be sufficiently high to meet the requirements of almost all individuals in a group with specified characterisation” (15). Subsequently, a group of French experts applied a different concept, “Advisable Nutrient Intakes,” that “take into account not only scientific data on nutritional needs but also preferences and dietary habits, to the extent that these habits are not detrimental to health” (4). However, both concepts emphasise “appropriate” or “desirable” intakes, rather than specific needs; and as such they are akin to guidelines (11).

The United Kingdom's Committee on Medical Aspects of Food Policy (COMA) published recommendations in 1979 on the amounts of energy and nutrients for groups of people in the United Kingdom. The definition of the RDA for a nutrient that was used in this report was “the average amount of the nutrient which should be provided per head in groups of people if the needs of practically all members of the group are to be met.” When these recommendations were reviewed in 1991, the British panel was aware of the continuing misuse and misinterpretation of the figures. To minimise this, it decided to set a range of intakes based as far as possible on the assessed range of requirements for each nutrient. These various levels were called Dietary Reference Values (DRVs). Three values were provided: the Estimated Average Requirement, the Reference Nutrient Intake, and the Lower Reference Nutrient Intake. According to the panel, intakes above the Reference Nutrient Intake would almost certainly be adequate, and the Lower Reference Nutrient Intake would represent the lowest intakes, which would meet the needs of only some individuals in the group; intake below this level would almost certainly be inadequate for most individuals (1).

The Scientific Committee for Food (SCF) report followed the same thinking, but the committee preferred to call “the intake that is enough for virtually all healthy people in a group” the Population Reference Intake (PRI), and used the term Lowest Threshold Intake to characterise “the intake below which, on the basis of current knowledge, almost all individuals will be unlikely to maintain metabolic integrity according to the criteria chosen for each nutrient” (6). The Recommended Dietary Allowance from the United States, the Recommended Daily Nutrient Intakes from Canada, the Recommended Daily Amount and the Reference Nutrient Intakes from the United Kingdom, and the Population Reference Intakes from the European Union are similar concepts (Table 1). However, the rationale of the committees has evolved from the concept of “recommendation,” understood as an implicit “amount that should be consumed,” to the concept of “reference values,” which reduces the chance of misunderstanding. Moreover, the use of the term “population” proposed by the European Union has the advantage of indicating that the proposed reference values are for groups of people and not for individuals.
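To make the statistical nature of these values concrete, the following minimal sketch (in Python, with invented numbers rather than figures taken from any of the cited reports) illustrates the usual construction under the assumption of a roughly Gaussian distribution of individual requirements: a value of the RNI/PRI type sits about two standard deviations above the estimated average requirement, and a lower threshold of the LRNI type sits about two standard deviations below it.

# Illustrative sketch only: hypothetical figures, not values from any cited report.
# It shows how reference values of the RNI/PRI type are commonly placed relative to
# an estimated average requirement under an assumed Gaussian distribution.
from statistics import NormalDist

ear = 6.0   # hypothetical estimated average requirement (mg/day)
sd = 1.0    # hypothetical standard deviation of individual requirements (mg/day)

rni = ear + 2 * sd    # intake adequate for practically all healthy members of the group
lrni = ear - 2 * sd   # intake adequate for only the few with the lowest requirements

requirements = NormalDist(ear, sd)
print(f"RNI/PRI-type value: {rni:.1f} mg/day, adequate for {requirements.cdf(rni):.1%} of the group")
print(f"LRNI/lowest-threshold value: {lrni:.1f} mg/day, adequate for {requirements.cdf(lrni):.1%} of the group")

The sketch is only meant to show that these figures are percentile statements about the spread of requirements within a group; on their own they say nothing about the adequacy of any one individual's intake.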
REFERENCES FOR FOOD LABELLING PURPOSES

Consumers in the United States and Canada have long been accustomed to reading on the label of prepacked foodstuffs that one portion, or 100 g, of a food contains a particular percentage of the U.S. Recommended Daily Allowances. As a result of the Dietary Supplement Act of 1992, these values are no longer called RDAs but Reference Daily Intakes (RDIs), a change the U.S. Food and Drug Administration agrees is necessary to minimise confusion between RDIs and RDAs. Label reference values have been established for 19 vitamins and minerals. Their values have been set chiefly by selecting the highest RDA value from among the various sex and age groups in the U.S. RDA tables published in 1968. The new U.S. regulation has also established label reference values for 8 other nutrients, including fat, cholesterol, and fibre. These values have been named Daily Reference Values (DRVs); but although U.S. regulatory requirements make it necessary to distinguish between the two sets of label reference values, in an attempt to avoid needless confusion all reference values on food labels should be referred to as Daily Values, or DVs (16). Nevertheless, the use of many acronyms is confusing, and some familiarity with their derivation is necessary to understand that the DRVs (Dietary Reference Values) established by the United Kingdom's Committee on Medical Aspects of Food Policy are quite different from the DRVs (Daily Reference Values) of the U.S. Food and Drug Administration regulations (Table 2).

The European Union, like the Food and Drug Administration, ruled that food labelling should be standardised to help consumers, and hoped that such standardisation would also facilitate a single European market (16). However, contrary to other bodies, the European Union's Scientific Committee for Food has recommended the Average Requirement for adult men (7) as a more useful labelling reference value than the PRI, because it would avoid the impression that an individual's intakes systematically fall short of the needs of all members of the population. This recommendation presents a problem for products intended only for use by infants and young children. Therefore, the Scientific Committee for Food has proposed an additional set of reference values, based on the PRIs for children aged 6 months to 3 years (7,18). This approach resembles that advocated in the U.S. Dietary Supplement Act of 1992, which did not provide reference values for children younger than 4 years of age but included guidance on values that manufacturers may use on labels of foods for infants, young children, and pregnant and lactating women (16).

ASSESSMENT OF VALUES - THE APPROACHES USED

Because of the wide use of the RDIs, it is important to illustrate their limitations and applications by discussing the methods that have been used to assess requirements. The physiological requirement for a nutrient should be the basis for calculating a reference intake. The ideal definition of a physiological requirement is the amount and chemical form of a nutrient that is needed systemically to maintain normal health and development without disturbance of the metabolism of any other nutrient. The corresponding dietary requirement would be the intake sufficient to meet the physiological requirement: ideally this should be achieved without extreme homeostatic processes and excessive depletion or surplus in bodily depots (19).
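As a rough illustration of the distinction between the two kinds of requirement, and using purely hypothetical figures, the dietary requirement can be thought of as the physiological (absorbed) requirement divided by the efficiency with which the nutrient is absorbed and utilised.

# Hypothetical illustration of the physiological versus dietary requirement distinction.
# All numbers are invented for the example and are not taken from any reference report.
def dietary_requirement(physiological_mg: float, bioavailability: float) -> float:
    """Intake needed so that the absorbed amount meets the physiological requirement."""
    return physiological_mg / bioavailability

physiological_need = 1.0   # hypothetical absorbed amount needed per day (mg)
for fraction_absorbed in (0.40, 0.15, 0.05):
    needed = dietary_requirement(physiological_need, fraction_absorbed)
    print(f"assumed bioavailability {fraction_absorbed:.0%} -> dietary requirement {needed:.1f} mg/day")

The same physiological need translates into very different dietary figures depending on the assumed efficiency of absorption, which is one reason the sections that follow return repeatedly to bioavailability.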
ANALOGY WITH INTAKES

The earliest approach was to base recommendations empirically on the energy and nutrient intakes of groups of subjects apparently in good health. This method is weak, because it assumes that the subjects are in good health and are achieving their full genetic potential, and that the diets are quantitatively and qualitatively appropriate and free from adverse long-term effects. Reservations about this approach are strengthened by the current concern about the possible influence of early nutrition on metabolic “programming” and on the subsequent risk of hypertension, obesity, diabetes mellitus, and cardiovascular disease (20-23). It is commonly assumed, on the basis that breast-feeding is best for infants, that the natural intakes of breast-fed babies are an appropriate guide to optimal nutritional needs between birth and 6 months of age. However, there are uncertainties in this approach: we are not sure of the actual intakes of infants, because of the variability of milk supply (between 550 and 1100 ml/day) between women (24), because of the changing composition of the milk during the course of lactation, during the day, and even during a feed, and because of difficulties in measuring these factors (25). Another major limitation of using maternal milk as a reference is its complex composition, which has evolved presumably to meet the specific requirements of the human infant and to ensure the optimum absorption and utilisation of its constituent nutrients. This complexity cannot yet be emulated by synthetic feeds. Consequently, there are differences in the efficiency of substrate utilisation between breast- and formula-fed infants and, as a consequence, in the infants' dietary requirements. Examples of such differences are the energy and protein requirements arising from variations in the energy cost of weight gain and in body composition, and the differences in iron requirements and in the bioavailability of iron between the two modes of feeding (26). Thus, the composition of human milk and the nutrient intake of breast-fed infants may not always be a useful guide for those babies who are not exclusively breast fed.

FACTORIAL ANALYSIS

The factorial approach calculates requirements by summing the amounts needed for growth and maintenance and, in assessing energy requirements, for physical activity (27,28). The requirements for growth are derived from estimations of the composition of the gain in body weight. However, very limited direct analytical information on body composition is available, although some data exist for newborns, adults, and a 4.5-year-old boy (29,30). One cannot reliably extrapolate from such information to the composition of the body and of new tissue in children and adolescents in general. Such indirect methods as 40K and total body water measurements, bioelectrical impedance, and magnetic resonance imaging might fill some of these gaps in our knowledge. However, their limitations, and those of the assumptions used to calculate the body composition of infants, children, and adolescents, should not be overlooked (30). Maintenance requirements are usually derived from the estimation of unavoidable losses in urine, faeces, sweat, shed skin, hair, and so forth. These losses depend not just on the dietary supply of the nutrient in question, but also on other metabolically interdependent nutrients.
For example, nitrogen equilibrium in adults cannot be maintained simply by replacing observed obligatory losses (28); the supply of other nutrients is critical (e.g., energy, which affects adaptation to a low protein supply) (31). Moreover, we do not know whether the metabolic adaptation needed to maintain equilibrium is functionally unimportant, or whether it might be detrimental to long-term health and survival (23,28). The situation is even more complex during childhood (32). Data on inevitable losses are scanty in newborns, infants, and toddlers. Furthermore, because such information is best gained under circumstances of negligible intakes of the nutrient under consideration, when homeostatic conservation is maximal, it is unlikely that such data could be acquired ethically. The interrelationships in infants and children among energy requirements, the efficiency of new lean tissue synthesis, and the maturation of metabolism are insufficiently understood to enable confident assessment of infants' and children's qualitative and quantitative protein requirements (33).

BALANCE METHODOLOGY

Balance studies at known intakes provide information about net intestinal absorption or secretion and whole-body retention of nutrients. The technique is demanding and, unless particular care is taken, technical difficulties in sampling can cause an overestimation of intakes and an underestimation of losses in faeces and urine. As a result, balance studies tend to overestimate net retention and thus to underestimate requirements (34). Subjects should be in equilibrium at the intake of the nutrient in question. This equilibrium depends on the previous supply of the nutrient relative to systemic requirements, which in turn influences the intestinal uptake and transfer of nutrients and the rates at which they are subsequently utilised, catabolised, sequestered in body pools, or excreted. If systemic stores are large in relation to daily intake or requirement, their mobilisation will delay the onset of any adaptation that would be needed for balance-study equilibrium and for the collection of reliable and meaningful data. This is the case with calcium, for which the daily intake is only a very minor fraction of the total calcium content of the body. A similar problem arises for phosphorus and magnesium. Consequently, a very long time, possibly months, is needed to achieve equilibrium and to induce the adaptive responses detectable in conventional balance studies, namely increased intestinal acquisition and reduced losses through the intestine and renal tract. Conversely, the systemic reserve of zinc is relatively small, and adaptation to reduced intakes can be detected after a matter of days (35). Despite these shortcomings, most of the reliable information available depends on balance studies, which can at least be used to assess the adequacy of customary intakes or, at negligible intakes, to determine basal or obligatory losses under maximal adaptation, although the ethical constraints mentioned previously prohibit involving children in such studies. Additionally, the use of isotopic labels of endogenous or exogenous (dietary) pools of nutrients can enable better characterisation of the fluxes and pool sizes underlying homeostasis. Thus the balance-study approach, with these refinements, is likely to remain a key method in investigating requirements until new methods become established.
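A minimal sketch of the balance arithmetic, again with invented figures, shows both the calculation itself and how easily small collection errors push the result toward apparent retention.

# Hypothetical balance-study arithmetic; all figures are invented for illustration.
def net_balance(intake, faecal, urinary, other_losses=0.0):
    """Net whole-body retention (mg/day) = intake minus all measured losses."""
    return intake - (faecal + urinary + other_losses)

# A subject genuinely in equilibrium at this intake.
true_balance = net_balance(intake=10.0, faecal=7.0, urinary=2.5, other_losses=0.5)

# The same subject, but with intake overestimated by 5% and stool collection
# incomplete by 10%: the study now reports a comfortably positive balance.
biased_balance = net_balance(intake=10.0 * 1.05, faecal=7.0 * 0.90, urinary=2.5, other_losses=0.5)

print(f"true balance:   {true_balance:+.2f} mg/day")
print(f"biased balance: {biased_balance:+.2f} mg/day")

Because the common errors all act in the same direction, the bias is systematically toward apparent retention, which is the basis for the statement above that balance studies tend to overestimate net retention and thus to underestimate requirements.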
AVOIDANCE OF DEFICIENCY OR TOXICITY

Estimation of nutrient requirements should also take into account clinical, biochemical, and epidemiologic studies of specific deficiencies. The prevention of scurvy in the British Navy is the first, and probably the most striking, example of this approach (12). Indeed, later experiments on human volunteers confirmed that scurvy could be efficiently prevented or cured with as little as 10 mg of ascorbic acid per day. However, higher intakes are necessary to maintain a reserve sufficient to prevent clinical scurvy from developing for 1 to 1.5 months in volunteers given an ascorbic acid-free diet, or to maintain the bodily pool by replenishing catabolised ascorbate on a normal diet (36). This illustrates that matching the requirement to the amount of a nutrient needed to correct, or just to avoid, a deficiency will supply that nutrient at a merely marginally adequate level. In recent years, much attention has been focused on the possible toxic effects of nutrients when ingested in excessive amounts. Indeed, all substances probably have a level of excessive intake at which harmful effects occur (2). Thus it is perhaps prudent to set upper limits of intake at levels well below those at which there is a risk of frank toxicity. A major worry is the undisciplined use of such micronutrients as vitamins at intakes of 20 to 600 times the reference intakes (8), on the basis that such intakes could prevent the common cold, cure infections, heal wounds, improve mental alertness, and prevent senility. However, high doses of water-soluble vitamins have toxic effects (37,38), and interaction between trace elements occurs at levels of intake frequently used in clinical practice (39).

METABOLIC ADAPTATION AND “BIOAVAILABILITY”

Interpretation of attempts to assess requirements by balance approaches, by judging the adequacy of customary intakes, or by the avoidance of deficiency or toxicity must take into account the metabolic adaptation of individuals to widely different nutrient intakes. Lack of this information limits our ability to assess the appropriateness of any particular intake measurement, to gauge the relative influences of intestinal, systemic, and dietary factors on the absorption and retention of the nutrient, and to determine how reliably such observations can be extrapolated to other intakes. These factors determine the “bioavailability” of a dietary component, that is, the efficiency with which a nutrient is absorbed and used systemically. Some idea of this efficiency is needed to translate the physiological requirement for a nutrient (calculated by factorial analysis or by analogy with total parenteral feeding requirements) into an actual intake reference value, or to facilitate the extrapolation of data between intakes. Host influences on bioavailability are often neglected, and the term has been used to describe the effect of the diet or of foodstuffs on nutrient absorption. Sometimes the latter is true (for example, the more efficient intestinal uptake and transfer of haem iron compared with that of nonhaem iron) (40). The implication is that bioavailability can be measured only in vivo and that such measurements in one set of circumstances cannot be applied in other circumstances, when, for instance, foods are consumed by other individuals under different conditions.
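One reason such measurements travel poorly between circumstances is that, for many minerals, fractional absorption falls as intake rises. The following sketch uses an invented saturable-absorption curve, not data for any real nutrient, to show how a fraction measured at one intake misestimates absorption at another.

# Invented saturable-absorption model, for illustration only; not data for any nutrient.
def absorbed(intake_mg: float, vmax_mg: float = 2.0, km_mg: float = 5.0) -> float:
    """Absorbed amount (mg/day) under a simple saturable uptake model."""
    return vmax_mg * intake_mg / (km_mg + intake_mg)

low_intake, high_intake = 5.0, 20.0
fraction_at_low = absorbed(low_intake) / low_intake   # 20% at the low intake
extrapolated = fraction_at_low * high_intake          # naively carrying that fraction over
modelled = absorbed(high_intake)                      # what the model actually absorbs

print(f"fractional absorption measured at {low_intake:.0f} mg/day: {fraction_at_low:.0%}")
print(f"extrapolated absorption at {high_intake:.0f} mg/day: {extrapolated:.1f} mg")
print(f"modelled absorption at {high_intake:.0f} mg/day: {modelled:.1f} mg")

Under these assumptions the extrapolated figure is more than double what the same model would actually absorb at the higher intake, which is why a bioavailability measured under one set of circumstances cannot simply be applied under another.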
Uncertainty about adaptation also applies when any increased intakes needed during pregnancy and lactation are considered. There are no good data supporting the hypothesis that women with food intakes adequate for nonpregnant women should increase their intakes during pregnancy, except for a small increase in energy requirements. Indeed, dietary surveys show little difference in intakes between pregnant and nonpregnant women (41). A good example is iron. Because there are no specific mechanisms for losing iron once it has entered the body, the principal means of regulating the systemic iron burden is the efficiency of its uptake and transfer by the intestinal mucosa. Applying a restrictive bioavailability figure to pregnancy results in calculated iron requirements far in excess of observed intakes. Such calculations neglect the progressive increase in the bioavailability of iron from standard test meals consumed during pregnancy. This type of adaptation might be as important as, or even more important than, the nature of the dietary matrix and the chemical form of iron (41,42). Our ignorance of homeostatic adaptation has also led to the practice of including in the calculation of reference values a little extra to allow for the maintenance of systemic stores. An important outcome of such presumptions of low bioavailability and of allowances for storage is that the recommendations for some nutrients exceed the amounts consumed by the populations for whom the recommendations are being made. Only recently have some committees tried to ascribe to low levels of intake, and to intakes during pregnancy, higher degrees of bioavailability than those applied to larger intakes (e.g., the United Kingdom's and the European Union's reference values for iron and zinc) (1,7). Given the difficulty of assessing bioavailability, it is not surprising that the ascribed values are arbitrary and that, as a consequence, particularly for minerals, even though the various recommending bodies agree fundamentally on approximate physiological requirements, these estimates are expressed as quite different reference values (19).

CONCLUSIONS

Calculating dietary reference values is a difficult exercise. The estimated values are approximations and are often devised for safety rather than for precision. They reflect the often limited data available and the need for an inductive extrapolation of data from small groups of individuals to large, heterogeneous populations. This is unreliable because there is no information on the population distribution of requirements, which is assumed to be Gaussian but is in reality probably skewed and influenced considerably by homeostatic mechanisms. The influence of such factors as systemic adaptation, polygenic inheritance, and cultural culinary practices on the bioavailability of, and requirements for, nutrients should be better characterised. It is hoped that a better understanding of the cellular and molecular bases of intestinal and systemic metabolic adaptation will help to clarify this area. Reference values are particularly uncertain for infants and young children. They are often obtained by interpolation between experimental data from studies in infants and in adults. Furthermore, human milk is not regarded as a suitable standard for assessing the requirements of formula-fed infants, even though the immediate and longer-term outcomes of breast-fed infants can be used as valuable guidelines. Reference intakes (PRIs, DRVs) reflect the intended use better than recommended intakes or allowances and remind us that these statistical terms require caution in their use.
They are designed for populations rather than for individuals and, although they provide a means of assessing the probability that members of a population are vulnerable to nutrient deficiency, reference values cannot be used for the diagnosis of nutrient deficiency in individuals. The validity of reference values can be established only by long-term monitoring of the effects or outcomes of nutrient and energy intakes. This effort will require international collaboration and pooling of data, and could be facilitated by concomitant standardisation of nomenclature both for reference values and for their use in food labelling. Initiatives to achieve such standardisation would be welcome. Despite these limitations, current reference values are an invaluable source of information for education programmes and nutritional strategies for populations. Finally, while the reports of all these distinguished committees are appropriate sources of information for educational programmes in nutrition, we fully agree with Harper's remark that they are designed for specialists and must be interpreted carefully (11). Harper has quoted Ruth Leverton as saying “RDAs are not for amateurs” (42). May we add that we are not even sure that they are really for professionals?
