Diet-culture coevolution and health
Although much conjecture exists (particularly between disciplines), many scientists consider that the Anthropocene began when our ancestors first harnessed fire for cooking (0.5–1.5 × 10⁶ years BP). Consistent with this, subsequent hominins evolved smaller, weaker jaws and more compact dentition, indicating a reduced need for chewing, gnawing and shearing. Cooking would have increased meat consumption, fostering encephalisation, along with the evolution of a reduced gut size. Furthermore, bipedalism and anatomically remodelled hands evolved in step with the use of tools and weaponry.7 Given our newly evolved intellectual and physical capacity, the scene was set for our species to reshape the world rather than be shaped by it. However, biocultural adaptations to a range of new food sources have led to a clinically relevant discordance between our ancient genes and modern diets and lifestyles, a phenomenon linked to what are often referred to as the “diseases of affluence” or “diseases of civilisation”.3,8
Three major human dietary transitions have occurred, and these have contributed to both historical and contemporary disease burden. Following the Pleistocene ice age, large prey animals became scarce in Europe as humans became more efficient predatory hunters, and so hunter-gatherers were eventually replaced by sedentary farming communities that had mastered the domestication of both animal and plant species.9–12 Hand in hand with a settled agro-pastoral lifestyle came an increased burden of disease. This transition occurred approximately 11,000 years BP in the Middle East and more recently in other geographic locations (9,000 years BP in SE Asia, and 5,000 years BP in Sub-Saharan Africa and the Americas).13,14 The major outcome of this transition was the development of large settlements (civilisation), and we now recognize that urbanisation had a strongly negative impact on ancestral wellbeing.3
From a human health perspective, the domestication of animals changed meat composition relative to wild game: domesticated meat was higher in saturated fat and cholesterol,15 while crop monocultures led to monotypic diets with lower nutrient diversity. High population density in settlements, along with the domestication of animals, promoted virulent diseases, water stress and periodic food shortages, as well as dramatic famines.16 Indeed, rates of disease, famine and parasite burden were higher in farming communities than in hunter-forager communities, while longevity and physical stature were correspondingly lower. Consistent with this, iron deficiency anaemia was more common amongst farmers.17 Overall, pre-agrarian forager populations were probably far healthier and free from chronic disorders, such as diabetes, with obesity being almost non-existent.3
We are now approaching 40 years since the publication of the “evolutionary discordance hypothesis”. According to this hypothesis, departures from the diet and activity patterns of our hunter-forager ancestors contributed greatly, and in specifically definable ways, to the endemic and chronic pathologies of modern civilisation.18 This paradigm should sit centre stage as we face our long-term nutritional and health challenges.
The Industrial Revolution heralded the second major dietary transition, making foods that had never been encountered during human evolution accessible to all, including refined sugar, vegetable oils and refined cereals. One of the best examples of how this led to a diet-culture coevolution mismatch with serious health implications is given by the story of homocysteine and vascular disease. The industrial-scale refinement of wheat flour led to a loss of essential micronutrients, particularly folate and vitamin B6. Low blood levels of these two vitamins elevate vasculotoxic homocysteine, and we now know that high homocysteine made an enormous contribution to late twentieth century cardiovascular disease mortality.19,20 It is fortuitous that this evolutionary mismatch, which led to serious morbidity and mortality, can be corrected by the simple measure of introducing folate into the diet through both mandatory and discretionary fortification programs. This is a good example of DM in action, with the bonus of preventing neural tube defect-affected pregnancy,21 which is ostensibly the main reason for fortification. To provide context, in 1995 it was calculated that mandatory folic acid fortification in the US could prevent approximately 50,000 coronary artery disease deaths each year.22 Three years later, in 1998, mandatory fortification was implemented, and today at least 80 countries have adopted a similar strategy.23
The third and final major dietary transition relates to the pervasive uptake of processed and “junk” foods. One only has to consider that following the Industrial Revolution, refined foods and dairy products came to comprise 72.1% of total dietary energy intake in the US; such foods would have contributed only a tiny fraction of energy intake in the pre-agricultural hominin diet.3,8 Modern society has also disrupted diurnal rhythms, with consequences for eating patterns at the individual, family and societal levels. This change has led to the creation of the so-called “obesogenic environment”: a melding of poor lifestyle choices, aberrant dietary patterns, digital technology and, importantly, thrifty stone-age genes. From the DM perspective, it is quite clear that the human phenome is morphing under the pressure of changing cultural norms, which include the following:
The conflict between thrifty genes, which evolved to help our ancestors ride out famine, and unhealthy modern obesogenic environments.
The move toward smaller families and late reproduction in the developed world. Combined with the increasing use of in vitro fertilization, this relaxes natural selection on fertility and may promote ever greater infertility.24,25
The information age. Since the advent and mass adoption of information technology in the early 1990s, our personal and connected “virtual” worlds have been subsumed within an extended anthropogenic phenotype, further promoting a sedentary lifestyle. Across the planet, computers and handheld portable devices have disengaged us from the real, physical/natural world, reducing caloric expenditure in the face of increased caloric intake and thrifty genes.
Some of the best described maladapted phenotypes associated with today’s dietary landscape, born of nineteenth and twentieth century advances in food technology, are type 2 diabetes, dental caries, coronary artery disease and obesity, among others. Collectively, these have been referred to as manifestations of the syndrome known as “saccharine disease”, a phrase first coined by Thomas Latimer Cleave (1906–1983), an English surgeon who highlighted the negative effects of consuming refined carbohydrates that would not have been available during early human evolution. Aligned with the concept of DM, Charles Darwin’s writings provided the scholarly framework for Cleave’s life pursuit of better understanding diet-health relationships, particularly the maladaptation of the human body to modern Western diets.
Ultimately, however, what is most concerning is a likely fourth dietary transition hovering on the near horizon. The economist and demographer Thomas Malthus (1766–1834) stated that “The power of population is indefinitely greater than the power of the earth to produce subsistence for man”. In other words, long before global warming and planetary health became an issue, Malthus recognised that exponential human population growth would outstrip the availability of food and other resources. Hence, this potential fourth transition would be marked by famine, disease, conflict, and a catastrophic shift in natural, geopolitical and social balance. Malthus was ahead of his time in observing that humanity has a propensity to use food abundance for population growth rather than for sustaining a higher quality of life; today, we refer to this as the “Malthusian spectre” or “Malthusian trap”. He suggested that human populations would grow until lower socioeconomic groups falter and become susceptible to famine and disease, a scenario referred to as a “Malthusian catastrophe”. The big question is: how will humans adapt to this version of our future? As the pace of such change would likely be rapid, it is unlikely that human diet-culture coevolution would have time to be adaptive in terms of long-term health outcomes. Despite this, it is encouraging to consider that with our 3.3 × 10⁹ base pairs, of which 99.9% are monomorphic, humans have sufficient polymorphic loci (≈3 × 10⁶, roughly one per 1,000 nucleotides) to accommodate phenotypic variation without causing death or notable disease. This provides a reserve capacity for phenotypic adaptation to cope with the full repertoire of environmental selection pressures.
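As a quick back-of-envelope check on these figures (taking the stated genome size and the 99.9% monomorphic fraction at face value):

$$3.3\times10^{9}\ \text{bp}\times(1-0.999)\approx 3.3\times10^{6}\ \text{polymorphic loci}\;\Rightarrow\;\frac{3.3\times10^{9}}{3.3\times10^{6}}\approx 1\ \text{locus per}\ 1{,}000\ \text{bp}.$$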
There are a large number of important diet-related genes that have either evolved to adapt us to our environment or failed to evolve quickly enough to prevent maladaptation to our contemporary world. Indeed, some genes exhibit both of these traits, a phenomenon encompassed by antagonistic pleiotropy, in which a gene confers benefits early in the lifecycle but is detrimental to the organism’s fitness later in the lifecycle.26,27 Understanding these trade-offs is critically important to the application of DM, and to a better overall perspective of diet-culture coevolution in the health sciences. Tables 1 and 2 provide examples of important genes.28–71
Table 1. A selection of nutrient-related genes with health or evolutionary benefits

Dietary component | Gene | Relevance | Notes |
---|---|---|---|
Iron | C282Y variant in HFE gene (rs1800562). | Alters the accumulation of iron, leading to iron overload. Historically, the genetic trait would have countered iron deficiency in women during the reproductive phase of the lifecycle, and helped prevent prematurity/low birthweight conditions that would increase perinatal mortality.28,29 | An example of antagonistic pleiotropy. Beneficial for women during their reproductive years. Causes haemochromatosis in later life. HFE encodes human homeostatic iron regulator protein (High Fe2+). |
Starch | AMY1 (Encodes salivary amylase gene). | A higher AMY1 copy number is selected for agricultural populations with a diet high in starch. This improves the digestion of starch-rich foods. It is suggested that this buffers against the fitness-lowering influence of intestinal disease.30,31 | A low AMY1 copy number has recently been shown to significantly correlate with obesity in African American children32. Salivary amylase AMY1 is the most abundant enzyme in human saliva. |
Lactose | LCT (lactase-phlorizin hydrolase) and, in Europeans, MCM6 (rs4988235). | A strong natural selection at 5,000–10,000 years BP conferred lactose tolerance due to dairy farming in Northern Europe,33 and involved LCT-C/T-13910 and the MCM6 rs4988235 variant (T allele = lactase persistence; C allele = hypolactasia). Three variants exhibited convergent evolution in African populations (LCT-G/C-14010, T/G-13915 and C/G-13907).34 Similarly, strong selection at 7,000 years BP conferred lactose tolerance due to animal domestication and adult milk consumption. | MCM = mini-chromosome maintenance protein, which is associated with the differential transcriptional activation of the lactase promoter. |
Long-chain polyunsaturated fatty acids (LCPUFA) | FADS (fatty acid desaturase) gene cluster. | FADS variants increase the synthesis of LCPUFAs from plant-derived medium chain PUFAs. The selection of this trait may have permitted early African hominins to move away from marine sources of LCPUFAs, and in turn, permit early expansions out of the continent at 60,000–80,000 years BP.35 LCPUFAs are needed for encephalization, and it is well-recognized that docosahexaenoic acid (DHA) is critical in young children for brain development. | FADS gene variants driven to near fixation in African populations by positive selection, removing obligatory tethering to marine sources of LCPUFAs in isolated regions.35 |
Ethanol | *504Lys variant of ALDH2 (Aldehyde dehydrogenase-2) gene. Also known as Glu487Lys, or rs671. | The gene variant is associated with alcohol-related liver disease, oesophageal and colorectal cancers, and Alzheimer’s disease. However, it is best known for its protective role against alcohol dependency.36 | Oesophageal cancer may arise due to the build-up of toxic acetaldehyde, with the rs671 allele strongly associated with this cancer phenotype in East Asia, although this is almost absent in other global populations.36 |
Plant alkaloids and related dietary xenobiotics | CYP2D6 (cytochrome P450) gene and GST (glutathione S-transferase) gene. | Several examples, but the increase in CYP2D6 copy number can lead to ultrarapid metabolisers. In NE African populations, more efficient detoxification of plant alkaloids leads to a greater repertoire of foods available during famine or seasonal bottlenecks.37 Among their many roles, GST variants can alter the recycling of reduced vitamin C, and in turn, bolster an individual’s antioxidant status.38 | |
Salt | AGT (rs699) and CYP3A5 (rs776746) genes | Common infectious diarrhoeas in early populations would have been lethal due to overwhelming sodium retention capacity and water volume depletion. Genes within the renin-angiotensin-aldosterone system were positively selected for salt retention in Africa prior to human expansion. These protective alleles were less useful in new geographic locations, post expansion, and increased susceptibility to common diseases (i.e., hypertension) because of the mismatch between an increasingly high salt diet and ancestral genes.39,40 | Polygenic. rs699 and rs776746 alleles exhibited a change in frequency with latitude. The ancestral (hypertension risk) allele is higher in African populations compared to non-African populations that have the hypertension protection-derived allele.39,40 'Thirsty' genes may thereby act on salt and water retention, facilitating the individuals’ survival against the challenge of volume-depleting illnesses and stressful situations, but this may now cause high blood pressure. A classic gene-environment mismatch.40 |
Folate | MTHFR gene (rs1801133) encoding methylenetetrahydrofolate reductase. Also known as C677T-MTHFR (Ala222Val) | The derived T allele can raise homocysteine, which is an embryotoxic, atherogenic, thrombogenic and hypertensive thiol amino acid.20 This is a problem where folate intake is low. The same polymorphism is associated with neural tube defects (NTD),22,41 but may also help in maintaining the fidelity of DNA synthesis when folate levels are low.42 | The rs1801133 variant encodes a critical enzyme locus. It partitions one-carbon units between dTMP and methionine biosynthesis. Such pleiotropy may be responsible for limited, very early lifecycle benefits, but late life disease i.e., embryo/foetal survival (DNA fidelity) vs. vascular disease (homocysteine). If correct, this is an example of antagonistic pleiotropy. |
Vitamin D | SLC24A5 (rs2675345) encoding solute carrier family 24 member 5 (NCKX5) | Our ancestral hunter-gatherer diet was replete in vitamin D (fish and meat), but after the agricultural revolution, grains became a major dietary component, causing vitamin D deficiency. Given the significance of vitamin D in maintaining reproductive efficiency, selection will have favoured a loss in pigmentation, in order to allow for improved vitamin D photosynthesis in our skin. A selective sweep of the derived rs2675345 allele occurred in Europe 5,300–6,000 years BP, resulting in depigmentation.43 | The K+ dependent Na+/Ca2+ exchanger is important in melanin biosynthesis. The same variant arrived in East Africa via the Middle East, and was introduced to South Africa by migrating East African agriculturalists at approximately 2,000 years BP, and once there was favoured by natural selection, with admixture promoting a wide gamut of moderately pigmented phenotypes.44 |
Table 2. Selected nutrient-related genes associated with common, rare and evolutionarily pertinent disease phenotypes

Disease phenotype | Gene | Relevance | Notes |
---|---|---|---|
Malaria | TAS2R16 Bitter taste receptor gene | The K172 allele of TAS2R16 suppresses bitter taste, and has greater prevalence in Africa, where malaria is common. This allele promotes the consumption of plants rich in bitter tasting alkaloids, which confers protection against malaria.45,46 Recent evidence has shown that cell-surface expression of the N172 allele is 2-fold higher than that of the K172 variant, providing a molecular explanation for the K172 allele’s reduced sensitivity to bitter tasting β-glucosides and its relationship to alcohol dependence.47 | A family of approximately 25 G-protein coupled receptors (GPCRs). Receptor activation allows for the perception of bitter taste. |
Type 2 diabetes mellitus | CAPN10 Calpain 10 gene | This gene plays a key role in insulin secretion, with the ancestral allele associated with insulin resistance. Selection of this derived, protective allele is a fairly recent phenomenon.48,49 In a 2010 study, the 111 CAPN10 haplotype (rs3792267; rs3842570; rs5030952) and rs3842570 separately contributed to the risk of type 2 diabetes.50 According to the thrifty gene hypothesis, variants that increase the susceptibility to type 2 diabetes under contemporary lifestyle conditions afforded a survival advantage in past environments by increasing the efficiency of energy use and storage. | The CAPN10 gene is also linked to metabolic syndrome and polycystic ovary syndrome.51 |
Obesity and comorbidities | UCP2 (uncoupling protein 2); LEPR (leptin receptor) | Increase in body mass index (BMI).52–55 | Influence obesity comorbid phenotypes. |
Obesity, type 2 diabetes mellitus, Alzheimer’s disease, metabolic syndrome | FTO encodes the fat mass and obesity-associated protein | The at-risk rs9939609 variant (AT and AA genotypes) alters dietary intake via an effect on satiety. This effect on appetite is under the influence of various FTO polymorphisms that modulate circulating leptin levels and energy expenditure.56,57 | The encoded protein is also known as alpha-ketoglutarate-dependent dioxygenase. |
Obesity and comorbidities | MC4R encodes the melanocortin 4 receptor | Regulation of the melanocortin and orexigenic protein (AgRP) pathways.58 | Agouti-related protein (AgRP) regulates hypothalamic control of feeding behaviour. |
Obesity and comorbidities | PPARγ encodes the peroxisome proliferator-activated receptor γ | The rs1801282 variant (Pro12Ala) decreases PPARγ transcription, and in turn, influences adiposity.58–62 | Findings are contradictory, but this is generally considered to be a thrifty gene that influences BMI. |
Obesity and comorbidities | The ADIPOQ gene encodes adiponectin, C1Q and collagen domain containing protein | This gene regulates fat metabolism and insulin sensitivity. It has anti-diabetic, anti-atherogenic and anti-inflammatory properties.63 | Variants contribute to body size and blood adiponectin levels, and modulate the risk for type 2 diabetes. |
Obesity and comorbidities | ADRA2B gene encodes adrenergic receptor α-2B | Variant lacking three glutamic acids from a glutamic acid repeat element is associated with reduced metabolic rate in obesity.64,65 | This is a G-protein coupled receptor. This gene is also linked to emotional memory, and predisposes people to focus on the negative aspects of a situation. |
Obesity and comorbidities | NR3C1 gene encoding nuclear receptor subfamily 3 group C member 1 | The BclI variant (C to G transition) confers susceptibility to fat accumulation. Homozygotes have increased weight, BMI, fasting glucose, insulin and HOMA index.66 | This is a glucocorticoid receptor protein relevant to BMI in contemporary obesogenic environments. |
Coronary artery and Alzheimer’s disease | APOE ε4 (apolipoprotein E4) | The APOE ε4 allele is a recognised risk factor for vascular and Alzheimer’s disease, and has thrifty characteristics, including exhibiting possible antagonistic pleiotropy, such as improved reproductive efficiency early in life and Alzheimer’s disease later in life.67 | Protein is important in lipoprotein metabolism and lipid transport. |
Congenital malformations (i.e., Neural Crest Defect) | Vitamin A and HOX gene nutrient-gene interaction | An embryonic gradient in the vitamin A (retinoic acid) level regulates HOX gene expression, and therefore the timing of body section development. Excess vitamin A in the wrong embryonic location can result in congenital malformation due to the dysregulation of HOX genes.68 | HOX is derived from Homeobox. Synthetic derivatives of vitamin A, such as isotretinoin, have a key role in cellular differentiation and development. These effects are mediated by nuclear receptors that transactivate HOX genes, and such compounds are therefore potentially teratogenic.69 |
Kuru/variant Creutzfeldt-Jakob disease (vCJD) | PRNP gene encodes the protein involved in multiple prion diseases | Methionine homozygosity (M129M) at the PRNP codon 129 (M129V) polymorphism is present in all subjects with vCJD. This polymorphism also protected cultures that practised cannibalism: the Fore linguistic group of Papua New Guinea were protected from developing “kuru”, a prion disorder acquired through endocannibalistic mortuary feasts at which diseased kin were consumed. All older surviving Fore tribe members are PRNP codon 129 heterozygotes.70,71 | That humans can actually evolve for cannibalism is food for thought. Diet, as a selection pressure, has shaped human biology through the lineage of Homo sapiens in the strangest of ways, and solidly underpins the application of Darwinian medicine to a variety of contemporary health problems. |
No discussion of diet-culture coevolution and health would be complete without mentioning “thrifty genes”. Contemporary ideas on thrifty genes and thrifty phenotypes describe these phenomena as metabolic characteristics that permit frugality in the deposition or expenditure of energy.72 The belief is that while these traits emerged during the evolution of our species to buffer reproduction against ecological stochasticity (unpredictable food sources), the two act on differing timescales: thrifty genes are rooted in our far-off ancestral past, whereas thrifty phenotypes can be primed over a single lifecycle or a small number of recent lifecycles (refer to DOHaD below).
Highly adaptive genotypes/phenotypes that confer a survival advantage within certain environmental contexts can be maladaptive under different ecological contexts. A good example is given by thrifty traits on remote oceanic islands, where food availability is unpredictable. On Nauru, a Micronesian Pacific island, 30–40% of older island teenagers have type 2 diabetes as a comorbidity of obesity, which has arisen from the mismatch between thrifty genes, a calorie-rich diet and a sedentary lifestyle (a model obesogenic environment). A traditional islander’s life would have been harsh, and an ability to quickly build up fat reserves at times of abundance would have helped survival through the resource bottleneck of famine, providing an enormous selective advantage. Genes that confer this survival advantage under ancestral conditions have responded badly to typical Western diets: Nauru grew prosperous on the back of fertilizer derived from the island’s guano deposits, and the result was a perfect storm of archetypal sedentary lifestyle, calorie-rich diet, and incompatible ancestral (thrifty) genes. Type 2 diabetes now represents the leading cause of non-accidental mortality on the island. Among the important genes linked to insulin resistance/type 2 diabetes on Nauru are the leptin receptor (Pro1019Pro) and apolipoprotein D Taq1 polymorphisms.72,73
Although “thrifty” genes, such as the examples given, can help to buffer resource bottlenecks during periods of famine, we now recognize that from a diet-culture coevolution perspective, potent natural selection and genetic drift have acted speedily to alter approximately 700 genomic regions in our relatively recent past (between 15,000 and 5,000 years BP).74–76 As shown in Tables 1 and 2, the line between diet-related genes that afford beneficial outcomes and those that afford adverse outcomes is blurred. This dichotomy can be influenced by lifecycle stage, a broad range of environmental parameters, and social factors. With this in mind, it is difficult to ignore how relevant an evolutionary perspective is to modern medicine, arguing the case for Darwinian/evolutionary theory to be established as a cognate element within undergraduate medical programs.2,77
A fundamental question will inevitably arise: why do many of the aforementioned diseases actually exist? Many may see the answer as having a metaphysical connotation, but as with so many evolutionary paradigms, the answer is relatively simple. First, natural selection acts slowly, allowing a mismatch to develop between our bodies and quickly formed novel environments. Second, natural selection is constrained, meaning that any given trait is a probable trade-off. Third, r/K selection promotes success in specific environments, shaping species for a trade-off between the number and quality of offspring, but critically not against long-term degenerative conditions and human suffering. The r/K paradigm has developed into a rigorous model for the evolution of life histories, discussed at length by David Reznick and colleagues.78 Ultimately, the outcome is that natural selection has shaped not non-communicable metabolic disease itself, but rather our vulnerability to this kind of disease. In what appears almost counter-intuitive, natural selection is thus as much about maladaptation as it is about adaptation.
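For readers unfamiliar with the notation, the “r” and “K” of r/K selection come from the logistic model of population growth (a standard textbook formulation, included here only to anchor the terminology):

$$\frac{dN}{dt}=rN\left(1-\frac{N}{K}\right),$$

where N is population size, r is the intrinsic rate of increase and K is the environmental carrying capacity. r-selected species are favoured in unstable environments where rapid growth pays off (many, low-investment offspring), whereas K-selected species, humans included, are favoured in populations living near carrying capacity (fewer, high-investment offspring).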
It is worth noting that in the developed world, there are few energetic limitations on reproduction. Combined with modern medicine, this makes any selective advantage or disadvantage of a genotype far less significant than it would have been to our ancestors. This is particularly true given that human populations are no longer isolated, and consequently, the geographic flow of genetic information is very high. Therefore, has human evolution stalled? Well no, not at all - humans must still respond to infection, and this remains a significant target for evolutionary processes, which is a topic dealt with below.
Developmental Origins of Health and Disease (DOHaD)
The narrative thus far deals with genes and the longer term evolutionary origins of environmental maladaptation and disease. The DOHaD, by contrast, addresses shorter term effects over an individual lifecycle or a limited number of lifecycles: the link between early maternal diet (i.e., the in utero nutritional environment) and later life health outcomes (i.e., maladapted phenotypic outcomes). While this is now a well-documented topic, Lewis and colleagues79 have added the likelihood that the way the placenta responds to these short term nutritional signals will itself be subject to natural selection to increase Darwinian fitness (optimal foetal growth).
DM strikes at the core of the DOHaD paradigm, with maternal undernutrition during pregnancy, undernutrition in infancy, or impaired nutrient transfer to the embryo and/or foetus leading to metabolic adaptations that augment immediate foetal/neonatal survival. However, when such changes become fixed, they modify cellular function/structure in the liver and muscle tissues and alter hormone receptor density. The story first began to unfold when Hales and Barker80,81 proposed that these kinds of changes remain beneficial in the long term if nutrition stays restricted.82 However, they postulated that these changes can become deleterious within a contemporary obesogenic environment.80,81,83–86
It is becoming clear that many chronic adult diseases exhibit a long latency, originating from early life exposures that propagate developmental plasticity as a biological adaptation for survival. This developmental plasticity is underpinned by epigenetic changes, including DNA methylation and histone modifications, now firmly established as mechanisms that regulate gene expression via the remodelling of chromatin.
We now know that biochemical mechanisms such as these facilitate the genome and epigenome in shaping human phenotypic traits. Indeed, it is a DM cornucopia of possibilities because epigenetic modifications can arise from a number of environmental exposures, including nutrients (type/amount), xenobiotics, toxins, ultraviolet radiation, temperature, rainfall, altitude, etc. This point is amplified when one considers that even monozygotic twins that share the same DNA diverge after birth as a direct consequence of epigenetic modifications arising from this tapestry of different exposures.
Of course, from a DM perspective, DNA methylation occurs throughout the human lifecycle, not just at the earliest phase, and is dependent upon the availability of methyl groups (sources include folic acid, choline, vitamin B12 and methionine, with provision also under the influence of key gene variants such as MTHFR). DNA methylation is critical for the regulation of important developmental processes, including genomic stability, gene imprinting and regulation, transposon silencing, and the direction and maintenance of cell lineage.87–89 Randolph Nesse and colleagues90 provide a detailed review of “evolutionary molecular medicine” as a critical cognate element within DM, and I would argue that it is also a particularly important facet of the DOHaD, which has itself grown into a major subdiscipline of the life sciences. To illustrate this, consider how small size or relative thinness at birth and during infancy correlates with later life osteoporosis, cardiovascular disease, stroke, type 2 diabetes, obesity and metabolic syndrome.91,92 Clearly, the DOHaD construct allows early dietary exposure to be an exigent force in determining adult phenotype, with both beneficial and maladapted phenotypes possible depending on the disparity between early and later life nutrition.
Within the DOHaD, plastic adaptation to early-life dietary exposure via epigenetics is now recognised as being transgenerational, adding to the difficulties of interpretation. Put simply, however, embryonic/foetal/infant adaptive plasticity harmonises humans to their nutritional environment, the negative and clinically important correlate being that a mismatch between early and contemporary nutritional environments fits the DOHaD model, particularly in the context of vascular and metabolic diseases.93 I would argue that this adaptive/maladaptive phenomenon sits comfortably within a DM context, even though the time frames are shorter than the usual evolutionary chronology.
Ageing
The evolution of human life history
Let us begin with the obvious question – “how long can humans live”? Well, from a Darwinian perspective, this is absolutely the wrong question. The correct one is “how long must humans live” in order to achieve Darwinian fitness? This has been addressed by Carnes and Witten, who suggest that 50–55 years is sufficient for humans to achieve this biological mandate.94 They further suggest that this (midlife) age boundary demarcates the transition from expected health and vigour to a period in which vigour is increasingly difficult to maintain. More than 60 years ago, George Williams developed and applied his theory of antagonistic pleiotropy to ageing, implicating genes that were favoured by natural selection during the reproductive phase of life, but with the same genes promoting an ageing phenotype in later life.27 This paper, published the year before I was born, is a classic, and while the paper and I have physically aged (matured!), antagonistic pleiotropy has been found to be a common phenomenon, although the trade-off between reproduction/fecundity and longevity/lifespan is not always present.95 Certainly, trade-offs between fecundity and longevity are commonplace in wild animal populations where natural selection pressures are strong; interestingly, when the selection pressures that drive this antagonistic pleiotropy are relaxed in captive species, these trade-offs are absent.96
Other theories of ageing that sit alongside antagonistic pleiotropy include Peter Medawar’s theory of mutation accumulation97 and the disposable soma theory of Kirkwood and Holliday.98 However, the earliest thoughts on how ageing evolved were considered by August Weismann, who developed the idea of programmed death, whereby ageing evolved to the benefit of the species, not the individual.99,100
Broadly speaking, the disposable soma theory of senescence maintains that organisms age because of an evolutionary trade-off between growth, reproduction, and the critical repair of DNA.98,101 The theory originated with Thomas Kirkwood, whose disposable soma model holds that an organism has only finite resources to allocate among its various critical cellular processes. Thus, a larger investment in growth and reproduction inevitably results in a lesser investment in DNA repair processes. This shift in the use of resources leads to increased cellular damage, shorter telomeres, a build-up of genetic mutations, and ultimately senescence.
Clearly, a balance needs to be struck. If the investment in repair/maintenance biochemistry is too low, it would be evolutionarily unsound, because the age of mortality would likely precede the reproductive phase of the lifecycle. However, if the investment in repair/maintenance biochemistry is too high, it would also be evolutionarily unsound, because resources diverted away from reproduction would reduce the number of offspring surviving to reproductive age. Ultimately, there must be a biological compromise, with resources partitioned fittingly. Despite such a compromise, damage is believed to accrue in somatic repair and maintenance systems, modulating the rate of senescence.102
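To make this trade-off concrete, the toy model below (an illustrative sketch of my own, not Kirkwood’s formal treatment; all parameter values are arbitrary assumptions) splits a fixed energy budget between somatic repair and reproduction and asks which split maximises lifetime reproductive output:

```python
import numpy as np

# Illustrative parameters - assumptions for the sketch, not empirical estimates.
MU_EXT = 0.05    # extrinsic (accident/predation/infection) hazard per year
DAMAGE = 0.01    # rate at which unrepaired damage inflates the intrinsic hazard
F_MAX = 0.5      # annual fecundity if the whole budget went to reproduction
MATURITY = 15    # age at first reproduction (years)
AGES = np.arange(0, 101)

def lifetime_fitness(repair):
    """Expected lifetime reproductive output (R0) for a given repair allocation.

    A fraction `repair` of the budget goes to somatic maintenance; the
    remainder goes to reproduction. Unrepaired damage makes the intrinsic
    hazard grow with age, so survivorship falls faster when repair is low.
    """
    cumulative_hazard = MU_EXT * AGES + 0.5 * DAMAGE * (1 - repair) * AGES**2
    survivorship = np.exp(-cumulative_hazard)            # P(alive at each age)
    fecundity = np.where(AGES >= MATURITY, F_MAX * (1 - repair), 0.0)
    return float(np.sum(survivorship * fecundity))

allocations = np.linspace(0.0, 0.99, 100)
fitness = [lifetime_fitness(a) for a in allocations]
best = allocations[int(np.argmax(fitness))]
print(f"Fitness-maximising repair allocation: {best:.2f}")
# Because extrinsic mortality is never zero, the optimum is always < 1:
# perfect somatic maintenance never repays its reproductive cost, and so
# some rate of senescence is the evolutionarily stable outcome.
```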
Consideration of the role of ageing in evolutionary models has raised interesting possibilities for supplements that might buffer cell repair/maintenance biochemistry, particularly in the context of nutritional cofactors and anti-oxidants. Recent research has shown that a variety of anti-oxidants can protect against the UV-related loss of folate in lightly pigmented individuals. Folate is essential for DNA repair,103 and anti-oxidants may therefore help buffer senescence processes.
There are many other ideas on senescence; two are worth mentioning here because they appear to conflict with the disposable soma paradigm. First, calorie-restricted diets can extend life, but they also elicit an adaptive process that causes the organism to partition a higher proportion of resources into somatic maintenance/repair, deflecting critical resources from reproduction104 – a process that does not make the two paradigms mutually exclusive. Second, others have critiqued the disposable soma theory because it fails to address why women tend to live longer than men.105 This can be explained via the Grandmother Hypothesis, which suggests that menopause exists in women to restrict the length of the reproductive phase of the lifecycle as a protective mechanism. This allows females to live longer and increase the care they subsequently provide to their grandchildren (alloparenting), increasing their overall evolutionary fitness.106
Peter Medawar’s theory of mutation accumulation97 is often discussed; it posits that harmful mutations are only expressed in later life, when reproduction has ceased and future survival is significantly diminished. After reproduction, natural selection will be weak and these harmful mutations are thus not eliminated. Medawar suggests that over time, these mutations, which are deleterious in late life, accumulate due to genetic drift and lead to the evolution of what we now think of as senescence. Indeed, Medawar developed the concept of a "selection shadow", where the shaded region of a graph of age vs reproductive potential represents the 'shadow' of time during which selective pressure has no effect.107
Research is constantly refining our models, and a 2021 article seems particularly relevant. Lam and colleagues108 have uncovered a bifurcating model of nutrient sensing via the central melanocortin pathway, with signalling through MC4R regulating the acquisition and retention of calories, whereas signalling via MC3R primarily regulates the disposition of calories into growth, lean mass and the timing of sexual maturation. Indeed, many of these ideas on evolution and ageing overlap with other paradigms, such as the DOHaD. For example, the epigenetic profile is likely to be critical for ageing and age-related health. We know that prenatal exposure to famine during the 1944–45 Dutch Hunger Winter led to reduced DNA methylation of the insulin-like growth factor gene (IGF2) and increased methylation of leptin (LEP), interleukin 10 (IL10) and other genes in famine-exposed subjects compared to non-exposed same-sex siblings, 60 years after the famine occurred.109 Quite clearly, very early life famine (nutrition) exposure influences adult metabolism and hence disease phenotype, a proxy for the rate of senescence/ageing.
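The exposure-discordant sibling design behind that famine study is worth sketching, because it is what allows methylation differences to be attributed to early nutrition rather than to shared genetic background. The minimal illustration below uses made-up methylation beta values (all numbers and variable names are hypothetical, not data from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical IGF2 methylation fractions ("beta values") for 30 famine-exposed
# adults and their unexposed same-sex siblings (values invented for illustration).
unexposed_sibling = rng.normal(loc=0.50, scale=0.04, size=30)
famine_exposed = unexposed_sibling - rng.normal(loc=0.025, scale=0.03, size=30)

# A paired test respects the sibling-pair design: each subject is compared with
# their own sibling, absorbing shared genetic and family-environment effects.
t_stat, p_value = stats.ttest_rel(famine_exposed, unexposed_sibling)
diff = np.mean(famine_exposed - unexposed_sibling)
print(f"mean methylation difference = {diff:.3f}, paired t = {t_stat:.2f}, p = {p_value:.3g}")
```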
Some specific genes are also very important to the overall picture. We now recognize that human longevity is heritable, and Deelen and colleagues110 showed that rs429358 in apolipoprotein E (the ApoE4 allele) is associated with lower odds of surviving to the 90th and 99th percentile age, while rs7412 (the ApoE2 allele) shows the opposite. Another key variant (rs7676745), located close to the G-protein-coupled receptor 78 gene (GPR78), is associated with lower odds of surviving to the 90th percentile age. Overall, the study showed a role for tissue-specific expression of multiple genes in longevity. It also showed a genetic correlation between longevity and several disease-related phenotypes, pointing to a shared genetic architecture between health and longevity.110
Disease
Cancer and dementia are two of the most significant degenerative diseases facing our ageing, contemporary society. Greaves111 has discussed the DM case for cancer. Much effort is put into better understanding the proximate mechanisms of cancer, but little is known about why humans are particularly vulnerable compared to other species. This vulnerability is likely a legacy of our evolutionary history; that is, it has arisen at least in part as a consequence of “design” constraints, compromises and trade-offs that are the currency of evolutionary processes. It is also important to recognize that the distribution of cancer-related gene mutations is under the influence of founder effects, drift, migration and population structure, which may account for variable prevalence in different populations.111,112 The most obvious cancer related to an evolution-environment mismatch is skin cancer. Recent population migrations have led to maladapted pale skin predominating in countries such as Australia, and consequently an enormous increase in the occurrence of all forms of skin cancer. Of course, the reverse is also true: vitamin D photosynthesis is challenged in darker skin phototypes at northerly latitudes.113 Indeed, it is critical to note that many overt mismatches between genotype and environment/lifestyle occur in contemporary society, acting as a spark to tinder for carcinogenesis.
Molly Fox has examined Alzheimer’s disease from the DM perspective.114 She grouped the theories on human susceptibility into eight previously recognised categories: novel extension of the lifespan; lack of natural selection pressure during the post-reproductive phase; antagonistic pleiotropy; thrifty genotype; fast brain evolution; delayed neuropathy by selection for grandmothering; novel alleles selected to delay neuropathy; and a by-product of selection against cardiovascular disease.114 The author also put forward a new hypothesis based on environmental mismatch.
Pharmacogenomics and nutrigenomics
The discipline of pharmacogenomics examines the role of the genome in drug response. The best studied, and probably most relevant, drug-metabolising enzymes belong to the cytochrome P450 (CYP) family. These critical enzymes introduce reactive or polar groups into drugs and other xenobiotics; among the best studied are CYP2D6, CYP2C19, CYP2C9, CYP3A4 and CYP3A5. Between them, these genes and their respective enzymes metabolise approximately 70–90% of currently available prescription drugs.115,116 Of these, CYP2D6 is probably the best known and most extensively studied of all CYP genes. It is highly polymorphic, with over 100 variants identified,117 including altered copy number and variation in the rate of metabolism (slow metabolisers are common in East Asia). CYP2D6 metabolises drugs including codeine (which it activates to morphine) and tramadol, and genetic variation contributes to lower efficacy and increased side-effects.
Drug-metabolising phenotypes are categorised as ultra-rapid, extensive (normal), intermediate or poor metabolisers. This kind of genetic variation can be particularly important in treating cancer. For example, the genes coding for dihydropyrimidine dehydrogenase, UDP-glucuronosyltransferase, thiopurine methyltransferase and cytidine deaminase are responsible for the pharmacokinetics of 5-fluorouracil/capecitabine, irinotecan, 6-mercaptopurine and gemcitabine/cytarabine, respectively. All of these genes are highly polymorphic and, as such, variation can lead to severe toxicity and even death. Screening and adaptive dosing can mitigate this risk, and such challenges of pharmacogenetics sit within the realm of a Darwinian approach to medicine.
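In practice, such phenotype categories are assigned by collapsing a genotype into an activity score that is then binned. The sketch below illustrates the general shape of that logic for CYP2D6-style star alleles; the allele scores and cut-offs are simplified assumptions for illustration (loosely modelled on published CPIC-style schemes) and are not clinical guidance:

```python
# Simplified sketch: diplotype -> activity score -> metaboliser phenotype.
# Allele activity values and bin thresholds are illustrative assumptions only.
ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal-function allele
    "*2": 1.0,    # normal-function allele
    "*10": 0.25,  # decreased-function allele (common in East Asia)
    "*4": 0.0,    # no-function allele
}

def metaboliser_phenotype(allele_a: str, allele_b: str, copies: int = 2) -> str:
    """Map a CYP2D6-style diplotype to one of the four metaboliser categories.

    `copies` crudely models gene duplication: each extra copy of the first
    allele contributes its activity again (the ultra-rapid scenario).
    """
    score = ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b]
    score += ALLELE_ACTIVITY[allele_a] * max(0, copies - 2)
    if score == 0:
        return "poor metaboliser"
    if score < 1.25:
        return "intermediate metaboliser"
    if score <= 2.25:
        return "extensive (normal) metaboliser"
    return "ultra-rapid metaboliser"

print(metaboliser_phenotype("*4", "*4"))            # poor
print(metaboliser_phenotype("*10", "*4"))           # intermediate
print(metaboliser_phenotype("*1", "*2"))            # extensive (normal)
print(metaboliser_phenotype("*1", "*1", copies=3))  # ultra-rapid
```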
Nutrigenomics addresses the relationship between the human genome, human nutrition, and health. There is an obvious overlap with pharmacogenomics. An example of this is given by a study published in 2004 showing that the MTHFR C677T gene variant affects the intracellular level and distribution of folates and changes the growth and chemosensitivity of colon and breast cancer cells to methotrexate and 5-fluorouracil.118 This common MTHFR variant may therefore be a useful pharmaco-/nutrigenetic determinant for tailored antifolate chemotherapy.119
Clearly, this narrative plays to the concept of personalised medicine, an increasingly important discipline within health and preventative medicine, and one that aligns perfectly with a Darwinian approach to medicine.
The hygiene hypothesis, microbiota, pathogens and parasites within the Darwinian perspective of human health
The hygiene hypothesis posits that recent and dramatic increases in allergies and autoimmune diseases in developed nations have arisen from reduced exposure to the full range of infectious immunoregulatory agents that ancestral humans were once exposed to. This model has gained considerable support based on strong epidemiological/research data.120
Early Homo sapiens would have been chronically infected by parasites.121 This burden would have increased following the domestication of animals and the development of larger urban aggregations. Cohabitation with animals and a sedentary lifestyle (compared to that of hunter-gatherers) promoted disease transmission from livestock, as well as the faecal-oral transmission of both pathogenic microbes and parasites. This pathogenic environment and co-existence would have been normal for our ancestors.
The microbiome
Even today, we are not individual entities – we play host to an inordinate number of creatures that live inside and on us and that would not be able to survive without us. Furthermore, despite what some people might like us to think, in many cases our lives are far better off with our miniature hitchhikers than without them – generally not the case with the parasites we might also harbor. Some definitions might help. Commensalism is a relationship in which two organisms associate, benefiting one partner without affecting the other; it contrasts with mutualism, in which both organisms benefit. Amensalism is a relationship in which one partner is harmed while the other is left unaffected; this differs from parasitism, where one organism clearly benefits while the other is harmed. The term “microbiome”, by comparison, was originally coined by Joshua Lederberg and refers to the totality of microbes, their genetic elements, and their environmental interactions in a given environment.
With these definitions in mind, it becomes clear that the relationship between gut flora and humans is not simply commensal in nature (that is to say, a harmless coexistence); rather, it is often a mutualistic interaction. People can survive without gut flora, or with a modified flora, but this is far from ideal, as these intestinal microorganisms perform a plethora of useful, indeed critical, functions. They ferment unused energy substrates such as fibre (undigested carbohydrates), yielding short-chain fatty acids (butyrate, propionate, acetate) that we subsequently absorb; they stimulate/prime our immune system; they prevent the overgrowth of harmful, pathogenic bacteria; and they even regulate the development of the gut. They are often credited with synthesizing vitamins that can be used by the host, although in the case of vitamin K this represents a form of the vitamin (menaquinone) that is seemingly not particularly useful to the host during nutritional deprivation. Similarly, the faecal loss of folate each day equates to the amount synthesized by the gut bacteria. Despite this, it is widely accepted that the intestinal flora does contribute to our vitamin status. Gut microorganisms may also generate hormones that facilitate fat storage and help to metabolise bile acids, sterols and xenobiotics. When this beneficial “microbiome” is out of equilibrium, opportunist pathogens can expand, leading to infection and even increased cancer risk.
Our human microbiome is site specific: in the gut, beneficial species include Lactobacilli such as L. acidophilus, while other parts of our body, such as the skin, mouth and genitals, have their own favoured resident species. In the gut, for example, providing a prebiotic such as fructo-oligosaccharides can promote the growth of beneficial bacteria such as bifidobacterial species. Increases in the beneficial species L. acidophilus can also bring benefits. There are many mechanisms at work, but one involves modifying the colonic ecosystem: L. acidophilus can lower colonic pH (by increasing lactic acid) and alter redox potential, favouring further colonization by health-promoting bacteria. These kinds of mechanisms inhibit the growth of potentially harmful Gram-positive and Gram-negative pathogens such as E. coli, Bacteroides, Fusobacterium, C. perfringens, Salmonella sp., Listeria sp., Shigella sp., Campylobacter and Vibrio cholerae.
There is little doubt that the “good” bacteria have evolved to be in our gut: components of the host flora share common antigenic epitopes with the intestinal mucosa, giving a biochemical basis to immune tolerance of host to resident bacteria.122
Given the number and diversity of cells that conspire to form our microbiome, and its enormous impact on human health, some scientists have advocated that the microbiome should be regarded as an organ system in its own right. As such, it is likely to have changed over the Anthropocene. Walter and Ley123 suggest that the advent of agriculture altered human diets, particularly through the addition of starch and milk in many parts of the world. Genetic evidence indicates that people able to utilize starch and milk directly, using their own host enzymes while minimizing microbial fermentation, had a selection/fitness advantage. Our contemporary diets, which are even richer in simple substrates than Neolithic diets, interact with features of contemporary lifestyle to further stress the interactions between ourselves (the host) and our microbiota, partially contributing to today’s planetary epidemic of metabolic disorders.
This idea that our microbiome has changed is demonstrated rather well by a comparative study on the impact of diet in shaping the gut microbiota of children from Europe and Burkina Faso (rural West Africa, where dietary fibre content is similar to that of ancestral human settlements at the birth of agriculture).124 In this study, De Filippo and colleagues showed that Burkina Faso children exhibited enrichment in Bacteroidetes and depletion in Firmicutes, with a unique occurrence of bacteria from the genera Prevotella and Xylanibacter, which carry bacterial genes for cellulose and xylan hydrolysis and which were completely lacking in European children. They also found more short-chain fatty acids and fewer Enterobacteriaceae in Burkina Faso children than in European children. They interpret this to mean that the gut microbiota coevolved with a polysaccharide-rich diet in Burkina Faso individuals, allowing them to maximize energy intake from fibre while simultaneously protecting them from inflammation and non-infectious colonic disorders. Quite rightly, they suggest that this comparison of the intestinal microbiota of children eating a modern Western diet with that of children eating a rural diet highlights the importance of preserving this treasure of microbial diversity from ancient rural communities worldwide.124
With such heterogeneity at play, it is important to remember that our microbiome evolved to be generally non-pathogenic (benign unless it grows abnormally) and, as a result, exists in harmony with us in a symbiotic association. We are now starting to see beyond this and recognize that these microbes may influence autoimmune diseases such as multiple sclerosis, rheumatoid arthritis and diabetes, and possibly some cancers. Evidence is also emerging that common obesity might be exacerbated by a poor blend of microbes in the gut.125
Moving to the outside of our body, the skin is interesting because it contains a number of discrete microenvironments; the microbiota here is as varied as other planetary organisms adapted to desert, rainforest or marine biomes. The analogy isn’t so frivolous: according to Davis,126 our resident dermal bacteria are characteristic of three main regions of skin: A) axilla (armpit), perineum (the area between the anus and scrotum in males, and between the anus and posterior vulva junction in females) and toe webs; B) hand, face and trunk; and C) upper arms and legs. He suggests that partially occluded skin (axilla, perineum and toe webs) harbours a higher count of microorganisms than less occluded areas such as our legs, arms and trunk. Most notable is Staphylococcus epidermidis, the predominant skin resident, representing approximately 90% of the resident aerobic flora. Staphylococcus aureus is present in the nasal cavity and around the perineum of up to one-third of people, and on the vulvar skin of two-thirds of women. Micrococci and diphtheroids are also prevalent; the latter seem to be relevant in acne pathogenesis. Streptococci are found on the skin, but also in the mouth, where some strains (e.g., Streptococcus mutans) are important in converting sugars into acids that can lead to tooth decay. This is a significant microorganism, as dental caries and periodontal disease (gingivitis) affect in the region of 80 percent of the population in the developed world. The nails have a similar array of microorganisms but also carry fungi (Aspergillus, Penicillium, Mucor and Cladosporium). In moist areas where desiccation is less likely, you find Enterobacter, Klebsiella, Escherichia coli, Acinetobacter and Proteus spp.
It now seems that we have quite a varied skin microflora, but that within this between-individual variation lies a relatively constant (conserved) core of bacterial species. The complexity of this ecosystem is affected by the human environment, including our shampoo, soap, washing powder and clothing materials, making it a difficult ecosystem to characterise. In the context of the gut microflora, attempts are being made to classify individuals by enterotype (i.e., by an individual’s microfloral composition). Sequenced faecal metagenomes suggest that three robust clusters exist that are not related to individuals’ geographic origins.127,128
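To give a flavour of how enterotype clustering works computationally, the sketch below groups simulated genus-level abundance profiles using a naive k-medoids loop over Jensen-Shannon distances; the published enterotype analyses used essentially this approach (PAM clustering on Jensen-Shannon divergence), but the data here are invented and the implementation is deliberately minimal:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)

# Simulated genus-level relative abundances for 60 subjects, drawn around three
# dominant-genus profiles (a toy stand-in for Bacteroides-, Prevotella- and
# Ruminococcus-dominated enterotypes).
centres = np.array([[0.55, 0.15, 0.10, 0.20],
                    [0.15, 0.55, 0.10, 0.20],
                    [0.10, 0.15, 0.55, 0.20]])
profiles = np.vstack([rng.dirichlet(c * 40, size=20) for c in centres])

# Pairwise Jensen-Shannon distances between abundance profiles.
n = len(profiles)
dist = np.array([[jensenshannon(profiles[i], profiles[j]) for j in range(n)]
                 for i in range(n)])

def k_medoids(dist, k, iters=20, seed=0):
    """Naive PAM-style k-medoids clustering on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)  # assign to nearest medoid
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:  # new medoid = member minimising within-cluster distance
                medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=0))]
    return np.argmin(dist[:, medoids], axis=1)

labels = k_medoids(dist, k=3)
print(np.bincount(labels))  # roughly 20/20/20 when the three profiles separate cleanly
```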
The microflora of the gut and skin seem to change with age, and this is also true of the urogenital flora. The vaginal microflora is altered according to age, vaginal pH and hormone levels. Transient microorganisms can cause issues, such as the yeast Candida albicans. One of the more painful, common problems when the local microflora falls out of equilibrium is conjunctivitis. The conjunctiva is typically home to very few, or indeed no, microorganisms; in tests, Haemophilus and Staphylococcus are the genera most often seen. As infectious as conjunctivitis is, an initial pathogen colonization is far more likely to occur in the upper respiratory tract. The trachea and pharynx mostly contain bacterial genera found in the normal oral cavity (typically α- and β-haemolytic streptococci), but they also contain anaerobic staphylococci, neisseriae, diphtheroids and others. Among the potentially pathogenic organisms found in the pharynx are Haemophilus and Mycoplasma species, and pneumococci. The upper respiratory tract is where pathogens such as Neisseria meningitidis, C. diphtheriae and Bordetella pertussis get their first foothold. The lower respiratory tract, containing the small bronchi and alveoli, is by comparison a sterile desert; anything that did make it this far would need to face attack by immune sentinels such as alveolar macrophages.
Antibiotics
We continually hear how uncontrolled antibiotic use has led to the acquisition of resistance to therapy by a number of pathogenic microorganisms. With this in mind, it is interesting to consider the novel treatment of Clostridium difficile (C. difficile), the most frequently identified cause of nosocomial infectious diarrhoea in the US. A 2013 study reported in the New England Journal of Medicine revealed that the infusion of donor faeces for the treatment of recurrent C. difficile infection was roughly three times as effective as antibiotics in curing infection by this organism.129 Gut health would thereby appear to require a holistic view, in which the overall microbiotic ecosystem needs to be in balance. This is most obvious in the case of the gastrointestinal tract, but it must presumably also be the case for other microbiotic systems, and such thinking permits a Darwinian approach to treatment.
Hygiene hypothesis and atopy
There are some interesting correlates between our microbiota and important clinical conditions that we do not yet fully understand in terms of mechanisms of action and causal factors. One of the most fascinating relates to what has been termed the hygiene hypothesis of atopic disease (an atopy or atopic syndrome is a predisposition toward developing certain types of allergic hypersensitivity reactions). This theory advocates that environmental changes in our industrialised world have led to reduced microbial contact during early childhood, and that this has resulted in a ballooning epidemic of allergic rhinoconjunctivitis, asthma and atopic eczema. In particular, it frames the importance of early life exposure to symbiotic microorganisms, including the gut flora and probiotics, as well as parasites. Worryingly, the rise in autoimmune diseases and acute lymphoblastic leukaemia in children within the developed world is often linked to the hygiene hypothesis.
The hygiene hypothesis is fairly cohesive, especially when taking into account the Th1/Th2 paradigm of immune responsiveness (if a Th1-polarised response is not induced early in life, the body is conditioned to be more susceptible to Th2-related disease later in life), but it is recognized that the hypothesis needs to be bolstered in key areas: A) the importance of infections in causing immune deviance may be outweighed by the stimulatory action of the endogenous intestinal microbiota; B) suppressive and modulatory immune responses complement the Th1/Th2 paradigm; and C) protection against atopy, as well as against infectious, inflammatory and autoimmune diseases, may be contingent upon healthy host-microbe interactions, an implicit component of the hygiene hypothesis.130
Although the hygiene hypothesis advocates that the recent surge in allergic and autoimmune diseases results from changes in the way humans interact with microbes within our ecosystem, the theory falls short in explaining: A) why allergic asthma is increasing in 'unhygienic' US inner cities; B) why allergies are less prevalent among migrants' children living in large European conurbations; C) why commonplace infections by airborne viruses do not guard us against allergic sensitization; D) why the inverse association between certain infections (i.e., hepatitis A) and allergic diseases has been reproduced in some but not all populations; and E) why probiotics do not prevent or correct allergic diseases. These are challenging questions that target the controversial nature of the hygiene hypothesis, and they should help us to better understand the hypothesis and to identify the infectious agents that are truly responsible for protecting against autoimmune and atopic diseases.131
Parasitism
Our species has a long history of coexisting with parasites. Egyptian papyrus records dating back 5,000–6,000 years contain written accounts of parasites, including threadworms, Guinea worms and tapeworms. Greek physicians such as Hippocrates also made reference to human parasites and to parasites of other species, including domesticated animals and fish. One standout parasitic disease is so distinctive that no misinterpretation of the records is possible: dracunculiasis, caused by the Guinea worm, is typified by the female worm exiting through the skin of the leg. This abhorrent symptom is so specific to the disease that it is recorded in many texts and plays throughout history. The earliest archaeological evidence shows that humans living in northern Chile suffered from lung fluke, as 8,000-year-old fossilized faeces were found to contain the eggs of this parasite. Hookworm (Necator americanus) is a serious modern-day problem in parts of the developing world, where it causes iron deficiency anaemia in both men and women; indeed, it represents the major form of blood loss in these regions. In Western countries, iron deficiency anaemia is still a problem, but predominantly for females, largely due to heavy menstrual losses. As troublesome as periods can be, the alternative cause of anaemia – hookworm – is far less pleasant. The parasite is one of the most common roundworm infections of the intestine and is widespread in tropical and subtropical countries where people often defecate on the ground and where the soil water content is optimal for hookworm eggs to develop into larvae. The World Health Organization has suggested that hookworm disease affects as many as 740 million people worldwide. Indeed, globally, iron deficiency anaemia is the most prevalent nutrition-related disease, in no small part due to hookworms.
Hookworm eggs pass onto the ground in human faeces, where they develop into immature infective larvae in cool, moist soil. The larvae extend into the air and wave around as if gently blowing in the wind. When they come into contact with human skin, typically when trodden on by a bare foot, they penetrate it, enter the bloodstream and are transported to the lungs. From here, the larvae migrate into the windpipe and are then swallowed and carried down the gastrointestinal tract to the small intestine. As the worm mass accumulates, diarrhoea is likely, along with cramping and nausea. Worm eggs eventually appear in the stool and begin the cycle afresh. If the infection becomes chronic, serious anaemia will develop, caused by the worms' serrated mouthparts cutting into the intestinal wall and producing blood loss. When an infection coincides with depleted nutrition, pregnancy or malaria, the resulting anaemia can be severe; if nutritional health is good, a small worm burden is more easily tolerated.
Having described this rather unpleasant anatomical circuit that hookworms follow, it is interesting to consider the parasite in a different context. Experts now believe that without gut worms such as these, our immune system loses its natural equilibrium, leading to the development of allergies and sometimes serious conditions such as asthma. This fits the "hygiene hypothesis" very nicely, albeit with rather larger organisms sharing our body space than I alluded to previously. Today, parasitic worms have largely been eradicated among humans living in developed countries, yet in some developing countries two-thirds of all children carry intestinal worms such as hookworms. What is most interesting is that allergies are very rare in these children. One study showed that drug treatment to eradicate hookworm in children in SE Asia led to a significant increase in dust mite allergy. It therefore seems that over millions of years of co-evolution, worms have evolved methods to suppress host immune responses, a process that obviously extends a worm's survival inside our bodies.132
There is a parallel line of thinking with respect to Crohn's disease and other autoimmune diseases. Indeed, Crohn's disease and ulcerative colitis might be treatable by deliberately infesting patients with parasitic worms, such as whipworms and hookworms. This work has been pioneered by Joel Weinstock, a gastroenterologist, parasitologist and immunologist at Tufts University in Boston, who has used the pig whipworm (Trichuris suis) rather than the human whipworm (Trichuris trichiura) for therapy, as the porcine worm cannot survive inside the human gastrointestinal tract for long. Helminth treatment certainly seems to protect against intestinal autoimmune disease in humans and is supported by animal models. In one study, Weinstock asked 29 participants with Crohn's disease to consume 2,500 pig whipworm eggs every three weeks for six months. A total of 79.3% of subjects (23 of 29) improved significantly, with 72.4% (21 of 29) experiencing remission, although a placebo effect could not be ruled out.133 He also showed that this treatment is both safe and effective in patients with active ulcerative colitis.134
A pattern is emerging in which it is becoming increasingly clear that treatments need to address our evolutionary past to be truly effective disease interventions – a DM perspective is beginning to appear. Treating C. difficile with a healthy colonic inoculum is much the same as treating inflammatory bowel disease with pig whipworms. Restoring the evolved harmony between humans and our "subservient commensals" re-establishes a healthy equilibrium and is clearly an important component of the hygiene hypothesis.
The conditions I have mentioned, together with others such as elephantiasis, schistosomiasis, malaria and amoebiasis, impose a substantial disease burden and need to be examined objectively. To this end, Mideo and Reece135 have examined plasticity in parasite phenotypes in the context of its evolutionary and ecological implications for disease. They explore how DM can integrate disease prevention with an evolutionary explanation of the variation in the harm that parasites cause, which can then inform control strategies that prevent or withstand undesirable parasite evolution. The problem is that many parasites live in a hostile, dynamic environment – the bodies of other organisms. This means that successfully integrating evolutionary biology with medicine, i.e., in a DM context, requires an improved understanding of how natural selection has solved the adaptive problems that parasite biology has been shaped by. Without question, parasites are "plastic"; that is, they experience rapid and broad variation in the environmental conditions found inside hosts and vectors. With this in mind, there is increasing evidence for flexibility in the expression of the parasite traits that underpin within-host replication and between-host transmission – adaptations that extend from immune evasion traits to investment in transmissible forms. This kind of response (phenotypic plasticity) to variation in the environment (resource availability, within-host competition and pharmacological treatment) keeps a parasite one step ahead in terms of survival (maximal fitness). In their review, Mideo and Reece135 discuss how we might intervene to gain the advantage over our parasites.
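To illustrate the kind of evolutionary reasoning involved, it may help to sketch the canonical virulence trade-off model of evolutionary epidemiology. This is the standard textbook framework rather than a result from Mideo and Reece's review, and the functional forms below are illustrative assumptions. If the transmission rate \(\beta\) rises with virulence \(\alpha\) (here, the parasite-induced host death rate), a parasite's fitness can be approximated by its basic reproductive number:
\[
R_0(\alpha) \;=\; \frac{\beta(\alpha)}{\mu + \alpha + \gamma},
\]
where \(\mu\) is the background host mortality rate and \(\gamma\) the recovery rate. Selection then favours the virulence \(\alpha^{*}\) at which the marginal gain in transmission balances the shortened infectious period, \(\beta'(\alpha^{*}) = \beta(\alpha^{*})/(\mu + \alpha^{*} + \gamma)\). On this view, interventions that change \(\mu\), \(\gamma\) or the shape of \(\beta(\alpha)\) – for instance, widespread drug treatment that raises \(\gamma\) – can shift the parasite's evolutionary optimum, which is precisely the "undesirable parasite evolution" that control strategies must anticipate.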
Selected examples of trade-offs that provide an evolutionary advantage against infectious disease
Psoriasis is a polygenic condition with high penetrance. Despite the unpleasant nature of this skin complaint and its association with an increased rate of metabolic syndrome, psoriatic skin lesions have been linked to enhanced wound healing and an improved ability to counter infection. Indeed, it has been suggested that leprosy, tuberculosis and other infectious agents act as stress factors that select for psoriasis-promoting genotypes in some populations.136
There are times when both wildtype and homozygous recessive genotypes are less fit than heterozygotes. When this occurs, both mutant and wildtype alleles tend to be maintained in a population, a phenomenon referred to as heterozygote advantage or balancing selection. The classic example given to illustrate this principle describes how the amino acid valine is substituted for glutamic acid in the β-chain of the haemoglobin molecule, a 'molecular fix' that can protect individuals from malaria. The 'mutant' HbS allele is especially common where malaria is endemic because heterozygosity (HbA/HbS) for this trait protects individuals against a life-threatening parasitic infection (8% of all child deaths were due to malaria – 853,000 deaths/year in 2003). While the heterozygotes have an advantage, wildtype (HbA/HbA) individuals are less able to contend with falciparum malaria, and homozygous recessive individuals (HbS/HbS) suffer from overt sickle cell anaemia, a dreadfully debilitating and often lethal clinical condition. Despite the awful presentation of the sickle cell anaemia phenotype, the frequency of HbS/HbS individuals in parts of Africa within the malaria belt can reach as high as 4% of the population. Seemingly, the advantage of maintaining heterozygosity for this trait is large enough to outweigh the high cost in suffering and disability borne by those homozygous for it. Another example of heterozygote advantage is given by Tay-Sachs disease, in which heterozygosity is thought to confer a degree of protection against tuberculosis, despite the recessive trait proving fatal by the time a child reaches age four.
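The population-genetic logic here can be made concrete with the standard balancing-selection calculation. As a minimal sketch, assign relative fitnesses to the three genotypes; the selection coefficients \(s\) and \(t\) below are illustrative assumptions chosen only to show how a 4% homozygote frequency could arise, not measured values:
\[
w_{AA} = 1 - s, \qquad w_{AS} = 1, \qquad w_{SS} = 1 - t,
\]
where \(s\) is the fitness cost of malaria susceptibility in wildtype individuals and \(t\) the cost of sickle cell disease in homozygotes. At equilibrium, the frequency of the HbS allele settles at
\[
\hat{q}_S = \frac{s}{s + t}.
\]
With the illustrative values \(s = 0.2\) and \(t = 0.8\), \(\hat{q}_S = 0.2\), which under random mating predicts an HbS/HbS frequency of \(\hat{q}_S^{\,2} = 0.04\), in line with the 4% figure cited above. The calculation shows how even a severe cost to homozygotes can sustain a deleterious allele at high frequency when heterozygotes are protected.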