How to Locate References

The format this textbook uses for references to magazine and journal articles is:

- Benson JS. FDA activities protect public. FDA Consumer 25(1):7–9, 1991.

(Author. Title. Publication Volume(Issue):Pages, Year)

√ Consumer Tip

Online documents and journal article abstracts are easily accessed through the "references" pages of the Consumer Health Sourcebook Web site (www.chsourcebook.com). Since 2000, more than 45 million online journal articles have been assigned permanent Digital Object Identifier (DOI®) numbers that enable them to be located with the search engine at www.doi.org/index.html.17 Scientific journals are also housed at medical school and hospital libraries. Many libraries have full-text online access, and most can obtain books and article reprints through the interlibrary loan process. Using Google to search for an article's title may locate a full-text copy that has been posted.

*In this text, citations numbered in boldface type are recommended for further reading.

Chapter Two
Separating Fact From Fiction

One of the factors that makes America great is our freedom of speech. To maintain this freedom, we must also run a risk. False prophets can get up on pedestals (such as radio and television talk shows) and tell you almost anything they please.
Gabe Mirkin, M.D.1

Finding the occasional straw of truth awash in a great ocean of confusion and bamboozle requires intelligence, vigilance, dedication and courage. But if we don't practice these tough habits of thought . . . we risk becoming a nation of suckers, up for grabs by the next charlatan who comes along.
Carl Sagan2

An inability to comprehend even basic statistical concepts can transform modern youth into victims in search of an irrational belief system that will needlessly harm, panic, and abuse.
Pasquale Accardo, M.D., and Ronald Lindsay, M.D.3

Be careful about reading health books. You might die of a misprint.
Mark Twain

[Cartoon: "By God! You can fool all of the people all of the time!" © Medical Economics, 1982]

Consumers who wish to make intelligent decisions about health matters must address several questions: What are scientific facts? How can they be identified? To what extent should people believe what they read and hear? Where can valid information be found? This chapter explains how scientific methods are used to determine facts, how health information is disseminated, and how reliable information can be obtained.

How Facts Are Determined

Trustworthy health information comes primarily through exposing hypotheses (assumptions) to critical examination and testing. A hypothesis is scientific only if it is testable and can predict measurable events. It is generally not a good idea to invest resources in investigating hypotheses that lack a plausible rationale.

The scientific method offers a way to evaluate information to distinguish fact from fiction. It does not rely on reports of personal observations and experiences as evidence of fact. Rather, it provides an objective way to collect and evaluate data. Astronomer Carl Sagan said that "science is a way of thinking much more than it is a body of knowledge." He also noted2:

At the heart of science is an essential tension between two seemingly contradictory attitudes—an openness to new ideas, no matter how bizarre and counterintuitive they may be, and the most ruthless skeptical scrutiny of all ideas, old and new. This is how deep truths are winnowed from deep nonsense.
Of course, scientists make mistakes in trying to understand the world, but there is a built-in error-correcting mechanism: The collective enterprise of creative thinking and skeptical thinking together keeps the field on track.
Key Concepts (keep these points in mind as you study this chapter):
- Scientific methods are essential for validating health claims and other information.
- Under the rules of science (and consumer protection), those who make a claim bear the burden of proof.
- Scientific research requires proper study design, the highest possible accuracy of measurement or observation, and appropriate statistical analysis of the findings.
- Don't assume that information is valid simply because it is broadcast or published. No magical superforce is protecting the marketplace against misinformation.
- The best way to avoid errors is to use trustworthy sources of information. It is far more sensible to use reliable "information filters" such as Consumer Reports on Health rather than trying to integrate newsbits on one's own.
The scientific method has at least three noteworthy characteristics. First, it is self-correcting. Scientists do not assume that this method discovers absolute truth but rather that it produces conclusions that subsequent studies may modify. In this sense, science is cumulative. Second, the scientific method requires objectivity. Findings must not be contaminated by the personal beliefs, perceptions, biases, values, or emotions of the researcher. Research results often lead to new questions that should be explored. Third, experiments must be reproducible. One study, taken alone, seldom proves anything. To be valid, one researcher's findings must be repeatable by others. As summarized by Haack4:

What is distinctive about inquiry in the sciences is . . . systematic commitment to criticism and testing, and to isolating one variable at a time; experimental contrivance of every kind; instruments of observation from the microscope to the questionnaire; sophisticated techniques of mathematical and statistical modeling; and the engagement, cooperative and competitive, of many persons, within and across generations, in the enterprise of scientific inquiry.

The long list of references cited in Chapter 15 of this text illustrates the enormous amount of effort that can be involved in developing important conclusions.

Personal Glimpse: The Scientific Method in Action5

In 1978 researchers at Mt. Sinai Hospital in Miami Beach, Florida, compared the effects of chicken soup, cold water, and hot water on the clearance rate of nasal mucus. Each liquid was consumed through a straw from a covered cup or open vessel. A videotaping system was used to record the advance of tiny radioactive discs as mucus carried them out the nose. Cold water slowed mucus flow, but chicken soup and hot water sipped from an open cup speeded it up. Since chicken soup outperformed hot water, the researchers concluded that it appeared to have a special ability to clear a stuffy nose. Mom and Grandma were right!
Research Design

Scientific research requires proper study design, the highest possible accuracy of measurement or observation, and appropriate statistical analysis of the findings. The conclusions are then used to develop new theories or modify old ones. Science writer Rodger Doyle6 has compared the types of studies medical scientists use to investigate health and disease:

- Case studies involve systematic observation of people who are ill.
- Laboratory experiments include studies of animals, living tissue, cells, and disease-causing agents.
- Epidemiologic studies analyze data from various population groups to identify factors related to the occurrence of diseases.
- Controlled clinical trials offer the most credible evidence.

Anecdotal reports are personal observations that have not been made under strict experimental conditions. Competent researchers may use anecdotes for suggesting new hypotheses, but never as supporting evidence. The fact that a person recovers after doing something is rarely sufficient to demonstrate that the recovery was caused by the action taken and is not simply coincidental. Moreover, reports of personal experiences can be biased, inaccurate, or even fraudulent. Well-designed experiments involving many people are needed to establish that a treatment method is effective. Without them, even honest, competent doctors can be misled by their clinical experiences.7

Epidemiologists search for "risk markers" (predictors of a disease) by comparing people with different characteristics.8 These markers can include personal characteristics (e.g., weight, blood cholesterol levels), personal activities (taking vitamins, exercising regularly, smoking cigarettes), and environmental factors (inhaling radon gas or tobacco smoke) that are statistically related to specific diseases. Before concluding that any relationship is causal rather than coincidental, however, epidemiologists must consider: (a) the strength of the association, (b) the consistency of the association in different studies, (c) whether it is clear that the risk marker preceded the disease, (d) whether the dose and not just the mere presence of the marker predicts disease risk, and (e) whether, in light of what else is known, it appears logical that the marker is responsible.

Controlled clinical trials compare an experimental group of people who receive the treatment being tested and a control group of people who receive a different treatment or no treatment. For example, members of the experimental group may receive a pill with active ingredients, whereas those in the control group receive another treatment, an inert substance (placebo), or no treatment. Studies may be conducted "blind" or "double-blind" to minimize or eliminate the effect of bias on data collection and interpretation. In blind studies the participants do not know which treatment they receive. In double-blind studies neither the people administering the treatment nor the experimental subjects know who gets what. In crossover studies participants in two or more groups are switched from one intervention to another after a specified period of time. Some studies do not use control groups. Ernst and others10 have warned that experimental subjects who receive placebos should not be classified as "untreated" and that many people fail to distinguish between a placebo response and the improvement that results from the natural course of an illness. Chapter 3 discusses this subject further.

√ Consumer Tip

Any procedure proposed to treat human disease should be subject to the same standards of safety and effectiveness that apply to usual medical procedures. It is, however, unacceptable to require any scientific body to examine every proposed claim. There will never be enough facilities to consider the avalanche of proposals. Very simply, the burden of proof rests with the proponents. . . . Testimonials and anecdotal accounts, no matter how enthusiastic, do not constitute proof. Public enthusiasm and interest do not create validity.
Edward H. Davis, M.D.9

Large, randomized, well-controlled, double-blind studies in which several medical centers participate are considered the gold standard of research trials.11 Because such studies are very expensive to conduct, they are reserved for questions of great importance. Long-term research ("outcomes research") is also needed to compare the effectiveness of proven options.12 Table 2-1 illustrates the typical steps in a clinical investigation.

It is important that research findings not be overgeneralized. Conclusions based on data from one population may not apply to another, and the results obtained from animal or test-tube studies may not be applicable to humans.
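To make the randomization and blinding described above more concrete, here is a minimal sketch (not from the textbook; the participant IDs and treatment codes are hypothetical) of how a study coordinator might assign volunteers to vitamin C or placebo so that neither the participants nor the people handing out the tablets know who is getting what.

```python
import random

def assign_double_blind(participant_ids, seed=42):
    """Randomly split participants into two arms and hide the meaning of
    each arm behind neutral codes (a minimal double-blind sketch)."""
    rng = random.Random(seed)

    # Only this unblinding key maps the neutral codes to real treatments;
    # it stays sealed until all outcome data have been recorded.
    unblinding_key = {"A": "vitamin C 1000 mg", "B": "placebo"}

    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2

    # Participants and the people administering tablets see only "A" or "B".
    assignments = {pid: "A" for pid in ids[:half]}
    assignments.update({pid: "B" for pid in ids[half:]})
    return assignments, unblinding_key

if __name__ == "__main__":
    volunteers = [f"P{n:03d}" for n in range(1, 121)]  # 120 hypothetical volunteers
    assignments, key = assign_double_blind(volunteers)
    print(assignments["P001"])  # prints only the blinded label, e.g. "B"
```

Real trials add further safeguards, such as allocation concealment logs, block randomization, and independent statisticians, but the principle is the same: the assignment is random and the working labels carry no information.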
The importance of scientific testing was strikingly demonstrated by a study of mammary artery ligation, a surgical procedure used in the 1940s and 1950s for treating angina pectoris (chest pain resulting from coronary artery disease). Proponents believed that tying off the mammary arteries stimulated the growth of new blood vessels that would increase the supply of blood to the heart muscle. The procedure was considered effective until double-blind controlled tests demonstrated that pretending to operate (merely cutting the skin of the patient's chest wall) was as effective as tying off the mammary arteries.13

Table 2-1. Typical Steps in a Clinical Investigation

1. A question or problem is identified. Example: What is the effect of vitamin C on the common cold?
2. A hypothesis is formulated. Example: Supplementation with vitamin C can reduce the incidence of colds.
3. A limited aspect of the hypothesis is selected for testing. Example: Will daily administration of 1000 mg of vitamin C prevent colds?
4. A study is designed. Example: Sixty adults will be given 1000-mg tablets of vitamin C daily for 4 months, and 60 of comparable age, race, sex, and health status will be given an inactive substance (placebo tablets). The participants will not know which they receive (a blind study).
5. The study is conducted. Example: Volunteers are obtained and instructed on how to proceed.
6. Data are collected, recorded, and tabulated. Example: There were six colds in the vitamin C group and seven in the placebo group.
7. The data are analyzed to determine whether the results appear significant or were likely to occur by chance alone. Example: The small difference between the two groups could easily have occurred by chance alone and therefore is not "statistically significant."
8. A determination is made on whether the hypothesis has been supported or refuted. Example: The hypothesis was not supported. The experiment found no evidence that vitamin C supplements reduce the incidence of colds.
9. The study may be repeated by the researchers or by others to verify their results or conclusions. Example: Many double-blind experiments have found that supplementation with vitamin C does not prevent colds (see Chapter 11).
10. Studies relevant to this area are reviewed. Example: Skilled reviewers agree that enough well-designed studies have been done to conclude that vitamin C megadoses do not prevent colds.
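Step 7 of Table 2-1 (deciding whether a difference could easily be due to chance) can be illustrated with a quick calculation. The sketch below is only an illustration and is not part of the textbook; it applies Fisher's exact test from the SciPy library to the table's hypothetical counts of 6 colds among 60 vitamin C takers versus 7 colds among 60 placebo takers.

```python
from scipy.stats import fisher_exact

# Hypothetical counts from Table 2-1.
vitamin_c_colds, vitamin_c_total = 6, 60
placebo_colds, placebo_total = 7, 60

# 2x2 contingency table: rows = group, columns = cold / no cold.
table = [
    [vitamin_c_colds, vitamin_c_total - vitamin_c_colds],
    [placebo_colds, placebo_total - placebo_colds],
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.2f}")
# The p-value is far above the conventional 0.05 threshold, so a
# difference of one cold is entirely consistent with chance alone.
```

With so small a difference and so few events, the test simply confirms what the table states in words: the result is not statistically significant.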
Misuse of Statistics

Many people tend to accept statistical data without question. To them, any information presented in quantitative form is correct. Advertisers, quacks, and pseudoscientists often cite invalid data or misrepresent valid data to promote their wares. Strasak and colleagues14 have identified 47 statistical errors in medical research. The common ones that can cause confusion include:

- Bias: A factor that may cause people to make erroneous observations or draw erroneous conclusions. For example, in a study of vitamin C and the common cold, participants who knew they were taking vitamin C reported fewer colds than those who were taking it but did not know it.15
- Non sequitur: The stated conclusion does not follow from the facts.
- Insufficient data: Small amounts of data limit the certainty of results. Tests done on small numbers must usually be confirmed by larger studies.
- Noncomparable data: Care must be taken when groups are compared. For example, people who eat the most sugary cereals are at lowest risk of developing cancers. But don't conclude that eating sugary cereals reduces cancer risk. Those who eat the most sugary cereals tend to be children, and the risk of cancer is much lower in children than it is in adults.
- Nonrepresentative data: Improper sampling techniques (lack of random sampling) may yield data that do not accurately represent the study population or group. For example, to determine which car the average American likes best, it would not be appropriate to poll only owners of one make of car, those living in one region, or even those listed in a telephone book (since many people have a nonpublished number, a cell phone, or no telephone). Figure 11-1 provides another example.
- Confusion of association and causation: A finding that taking dietary supplements is associated with fewer missed work days does not mean that dietary supplements prevent people from getting sick. Other factors associated with taking supplements, such as having a healthier overall lifestyle, may be the real reason for reduced sick days among the supplement users.
- Omission of an important factor: Many individuals who feel helped by an unorthodox remedy have taken it together with effective treatment but credited the unorthodox remedy.

In How to Lie with Statistics, Darrell Huff16 describes how drug research data can be misrepresented by using biased samples, meaningless averages, purposeful omissions, illogical conclusions, and deceptively drawn charts. He notes that a basic technique used by charlatans when they present testimonial evidence is the post hoc, ergo propter hoc fallacy: "This happened after that, therefore this was caused by that." The fact that someone who smokes 50 cigarettes and drinks heavily each day lives to age 95 does not mean that these habits are healthful. Huff says that to analyze a statement, one should ask, "Who says so? How does he know? How did he find out? Is anything missing? Does it all make sense?"

Manufacturers are quick to take advantage of preliminary research that may appear to support increased use of their products. In 1988 the Physicians' Health Study Group17 reported that aspirin use every other day had reduced the incidence of heart attacks among 11,000 generally healthy physicians. The researchers concluded that although aspirin might help prevent heart attacks, the study's results should not be applied to the general population and that doctors should weigh potential risks as well as benefits when advising their patients. (Chapter 17 discusses this further.) Within days after the report was published, aspirin ads began referring to it and suggested that consumers ask their doctors whether aspirin might help them. The FDA commissioner, who believed that the ads were likely to encourage inappropriate self-medication, warned manufacturers that aspirin did not have FDA approval for preventing heart attacks in healthy people and that continuing the ads would trigger regulatory action. Fish oils, calcium supplements, antioxidant vitamins, and high-fiber products have also been marketed in ways that oversimplify or exaggerate the significance of research findings.

Peer Review

Peer review is a process in which work is reviewed by others who usually have equivalent or superior knowledge. It may be used during the development or execution of a study, as well as afterward. When studies are completed, researchers strive to publish their results in journals so that others can use or criticize the findings and science can advance.
Detailed standards for reporting and evaluating studies have been published.18 The best scientific journals are peer-reviewed by experts; papers submitted for publication are reviewed by two or more expert referees, then accepted, modified, or rejected by the editor. The peer-review process is imperfect but can usually screen out "obviously flawed and unreliable manuscripts."19 Reports from more than 5000 peer-reviewed scientific journals are listed in the Index Medicus and its online counterpart MEDLINE. Such listing is a favorable sign but not a guarantee of quality. The quality of peer review varies from journal to journal, and even the best ones occasionally publish articles that deserve to be rejected. Moreover, in recent years, many low-quality journals that promote unscientific ("alternative") methods have been included in the Index Medicus. The two most prestigious American medical journals are JAMA (Journal of the American Medical Association) and The New England Journal of Medicine. JAMA has more than 3000 names in its reviewer-referee file.

Personal Glimpse: Self-Persuasion

Charlatans are not the only people who engage in the post hoc, ergo propter hoc fallacy. As noted by Lisa Feldman Barrett, Ph.D., professor of psychology at Northeastern University:

People try to connect things that happen to them. In doing this, they lean toward ideas that fit their expectations and away from those that do not. Suppose somebody believes that vitamins provide energy. On a day when he feels energetic, he attributes that feeling to the vitamin, rather than to other factors, such as the quality of his sleep the night before. On a day when he feels fatigued, however, he doesn't register the experience as evidence against his belief. The scientific method safeguards against these tendencies by forcing people to look at disconfirmatory evidence and examine alternative explanations.

Systematic Reviews

A systematic review is a literature review focused on a single question that tries to identify, appraise, and synthesize all high-quality research evidence relevant to that question. Systematic reviews of high-quality randomized, controlled trials are crucial to evidence-based medicine. The selection of articles for inclusion is usually performed by reviewing the titles and abstracts of the articles identified and excluding those that do not meet eligibility criteria. Then the data are abstracted in a standardized format. The methods used to gather and analyze the data should be transparent enough to allow others to repeat the process.

Systematic reviews commonly include a meta-analysis, a statistical approach for "averaging" the results of studies that address closely related research hypotheses. When doing this, the reviewers must give appropriate weight to the quality and size of each study. If the studies differ so much that it makes no sense to try to find an average effect, the reviewers will not do a meta-analysis.
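One common way reviewers weight studies by size and precision is inverse-variance averaging: each study's effect estimate is weighted by the inverse of its variance, so large, precise studies count for more. The sketch below is a minimal fixed-effect illustration with invented numbers; it is not taken from the textbook.

```python
import math

# Hypothetical studies: (effect estimate, standard error).
# Larger studies have smaller standard errors and therefore more weight.
studies = [
    (0.30, 0.20),   # small study
    (0.10, 0.08),   # large study
    (0.18, 0.12),   # medium study
]

weights = [1 / se**2 for _, se in studies]
pooled_effect = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
for (eff, se), w in zip(studies, weights):
    print(f"  effect {eff:+.2f}, SE {se:.2f}, weight share {w / sum(weights):.0%}")
```

Real meta-analyses also check for heterogeneity (for example, with random-effects models) and for study quality, but the weighting principle is the same.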
Systematic reviews can be done by organizations and agencies as well as by individuals. Several that are given great weight by the medical community are described here.

The American Medical Association's Council on Scientific Affairs studies many medical issues and reports to the AMA's House of Delegates. Once accepted, these reports help shape AMA public policies and may be published in JAMA.

The National Academy of Sciences issues the Dietary Reference Intakes (see Chapter 10) and many other reports by expert committees.

The National Institutes of Health Consensus Development Program, begun in 1977, has held about 100 consensus conferences in which experts meet for several days to discuss a topic and issue a report. Except for the acupuncture report (discussed in Chapter 8), the reports reflect a scientific consensus.

The American College of Physicians' Clinical Efficacy Assessment Project focuses primarily on relatively new procedures.

The U.S. Preventive Services Task Force publishes recommendations for preventive services that prudent health professionals should offer their patients in the course of routine clinical care. These recommendations, which represent the pooled judgment of many experts, are discussed in Chapters 5, 14, and 19.

The Agency for Healthcare Research and Quality (AHRQ), a component of the U.S. Public Health Service, was established in 1989 to enhance the quality, appropriateness, and effectiveness of health services. Formerly called the Agency for Health Care Policy and Research (AHCPR), it has published many clinical practice guidelines with separate versions for clinicians and consumers.

The Cochrane Database of Systematic Reviews, updated quarterly, is an electronic journal of systematic reviews produced by the Cochrane Collaboration, an international network of individuals and institutions committed to preparing systematic reviews of the effects of health care and disseminating them on CD-ROM and through the Internet. Established in 1993, it hopes to cover the entire spectrum of medical interventions.20

The National Guideline Clearinghouse (NGC) is an Internet-based public resource sponsored by the AHRQ, in partnership with the AMA and America's Health Insurance Plans. The Web site summarizes more than 2500 clinical practice guidelines that have met its criteria. It should be noted, however, that NGC does not develop, produce, approve, or endorse the guidelines represented on its site and that some promote unscientific practices that this book criticizes in Chapters 8 and 9.

Poor-quality reviews can lead clinicians to the wrong conclusions and ultimately to inappropriate treatment decisions. In 2011, the Institute of Medicine (IOM) addressed this issue by recommending standards for systematic reviews21 and the development of clinical practice guidelines.22

Publication Bias

Scientific journals are more likely to publish positive studies than negative ones. This occurs because editors and reviewers tend to favor positive results and experimenters are more likely to write up studies with positive findings than with negative findings.23 The resultant situation, referred to as publication bias, may make something appear more significant than it actually is.

Publication bias was vividly demonstrated by a study in which three versions of a bogus article were sent to 101 consulting editors of two psychology journals. The submissions were identical except that one reported positive results, one reported near-positive results, and the third reported no significant results. The positive versions received three times as many recommendations for publication and were rated as better designed.24

In recent years, drug companies and researchers have been accused of suppressing studies and data unfavorable to their products.25 In response to this concern, 11 medical journals announced that in 2005 they would stop considering reports of clinical trials that had not been registered in a public trials registry before or at the time they began to enroll patients.26 The need for the policy was underscored by a study of 122 journal articles that concluded that about half of them had been incompletely reported, harm was more likely to be unreported, and 65% had inconsistencies between primary outcomes defined in the most recent protocols and those defined in published articles.27
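The distorting effect of publication bias can be illustrated with a small simulation (an invented example, not from the textbook): a treatment with no true effect is tested in many small studies, but only the studies that happen to come out clearly "positive" are published, so the published literature suggests a benefit that does not exist.

```python
import random
import statistics

rng = random.Random(1)

TRUE_EFFECT = 0.0      # the treatment actually does nothing
STUDY_ERROR = 0.5      # sampling noise in each small study
N_STUDIES = 1000

all_results, published = [], []
for _ in range(N_STUDIES):
    observed = rng.gauss(TRUE_EFFECT, STUDY_ERROR)   # what one study reports
    all_results.append(observed)
    if observed > 0.5:          # only clearly "positive" findings get written up
        published.append(observed)

print(f"Average of all {N_STUDIES} studies: {statistics.mean(all_results):+.2f}")
print(f"Average of {len(published)} published studies: {statistics.mean(published):+.2f}")
# The full set of studies averages about zero (the truth), while the
# published subset suggests a sizable benefit that does not exist.
```

Trial registries and systematic reviews that hunt for unpublished data are attempts to correct exactly this distortion.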
Conflict of Interest

Financial conflicts of interest can affect the objectivity and trustworthiness of research conduct and publications. For example, researchers whose funding comes from a drug company may be concerned that negative reports may cut off future funding. Industry is now the leading funder of medical research, and much research is conducted in nonacademic settings. Industry also is involved in funding evidence reviews and practice guidelines.

Conflicts of interest cannot be completely eliminated, but awareness of the problem has increased the use of countermeasures. For example, prospective medical journal authors and speakers at accredited courses are required to provide a written disclosure of any financial tie they may have to the subject matter. The IOM28 has urged medical institutions to strengthen their conflict-of-interest policies and has asked Congress to require health-related manufacturers (pharmaceutical, biotechnology, and device firms) to publicly disclose, through a public Web site, the payments they make to doctors, researchers, academic health centers, professional societies, patient advocacy groups, and others involved in medicine.

Scientific Misconduct

Occasionally, individual scientists publish or attempt to publish bogus research data. The extent of this type of fraud is not known, but its existence presents one more argument for replicating studies. Peer review, high-quality journals, and the demand for replication make detection likely when fraud occurs. Physicist David Goodstein, who has worked with federal agencies to develop guidelines for defining misconduct in science, reported that between 1980 and 1987 only 21 cases of misconduct involving doctors or biologists came to light, which was only three ten-thousandths of all scientists who received research grants.29 Unconfirmed studies, particularly when inconsistent with other studies, seldom have a major impact on what physicians do. Thus, although scientific fraud occurs, it seldom affects patient care.

One corrupted study that did affect patient care was published in 1998 by The Lancet, Britain's premier medical journal. In it, Dr. Andrew Wakefield and colleagues suggested that the measles-mumps-rubella (MMR) vaccine might be linked to autism. The paper didn't declare that cause-and-effect had been demonstrated, but at the press conference announcing its publication, Wakefield attacked the triple vaccine, and he has continued to do so ever since.
Subsequent studies have found no connection, but sensational publicity caused immunization rates in the United Kingdom to drop more than 10% and has left lingering doubts among parents worldwide. In 2010, the British General Medical Council, which registers doctors in the United Kingdom, concluded that Wakefield had acted dishonestly, irresponsibly, unethically, and callously in connection with the research project and its subsequent publication. The Lancet retracted the paper 5 days later, and Wakefield was subsequently struck off the medical register.30

Trustworthiness of Sources

It can be extremely difficult for consumers, and sometimes even for health professionals, to determine the accuracy of health information. Separating fact from fiction can be a complex and time-consuming process. The reasons for this difficulty include:
- Advice from laypersons may be based on hearsay and personal experience rather than scientific data. Factual information, especially when several individuals are involved, is often distorted in transmission.
- Many false ideas “feel right” or seem commonsensical to people who lack the technical knowledge to evaluate them.
- Preliminary and limited scientific studies may be overemphasized by the media.
- Research data published by experts may conflict sufficiently to cause public confusion.
- Inaccurate health information may be disseminated purely for reasons of self-interest or profit.
- Claims that treatments are based on scientific evidence may not be true. Schick and Vaughn31 have noted that unscientific practitioners often cite or misconstrue "scientific findings" to support their views.
- “Confirmation bias” can play a decisive role. As noted by Carroll,32 people tend to give more weight to data that support their beliefs, and those who become blinded to evidence refuting a favored hypothesis “cross the line from reasonableness to closed-mindedness.”
Hitchens33 has noted that "extraordinary claims require extraordinary evidence" and that what can be asserted without evidence can also be dismissed without evidence.

Professionals. Most health professionals give reliable advice, but scientific training does not guarantee reliability. For example:
- Adelle Davis promoted inaccurate and dangerous nutrition advice despite adequate training in nutrition. As noted in Chapter 11, many of the scientific studies she cited to back up her theories had no relevance to them.
- Robert Atkins, M.D., best known for his low-carbohydrate diet, promoted many types of disreputable treatments (see Chapter 12).
- Andrew Weil, M.D., a Harvard Medical School graduate, mixes sound and unsound advice (see Chapter 11).
Several chapters of this book suggest how to identify professionals who engage in unscientific practices.

Many lines of questionable nutrition products have been marketed with endorsements by scientists with respectable credentials. The most notable case occurred with United Sciences of America, Inc., a multilevel company that sold dietary supplements claimed to be effective in preventing cancer, heart disease, and many other diseases. Literature from the company said that its products had been designed and endorsed by a 15-person scientific advisory board that included two Nobel prize-winners. However, four members of the board told investigators that they had neither designed nor endorsed the products. A few other multilevel companies claim to be guided by scientific boards, and a few supplement manufacturers have used endorsements by individual practitioners in advertisements. Barrett,34 who believes that all such practitioners hold minority viewpoints, has warned:

Vitamin product endorsements by doctors—no matter how prestigious they are—should be viewed with extreme caution. All I have seen so far have included claims that were unproved and also illegal.

Nonprofessionals. Many consumers have misconceptions about the factors that influence health. People who share their experiences and knowledge may believe in unproven and unscientific methods. Such people often are highly motivated to spread their beliefs. Testimonials from movie stars, professional athletes, and other celebrities are commonly used to promote questionable health methods. National organizations exist to promote "alternative" cancer remedies (Chapter 16), the Feingold diet (Chapter 6), and other dubious methods. Millions of people have been involved in the sale of dietary supplements and other products through multilevel companies such as Herbalife and The Trump Network (see Chapter 4).35

Pseudoscientists. A pseudoscience is a set of ideas put forth as scientific when they are not. Pseudoscientists misuse and distort scientific evidence to support whatever products or services they promote. They may use scientific terminology and data to concoct theories that seem plausible to laypersons. They are often sophisticated in manipulating situations to gain notoriety and acceptance. They may write articles and books and may also reach consumers through television and radio programs. Some are "nutrition consultants" with "degrees" from diploma mills and nonaccredited schools.

Several observers have described characteristics that can help consumers distinguish pseudoscientists from true scientists. Hatfield,36 for example, has noted:

Generally speaking, an establishment scientist has attended and graduated from an accredited university, belongs to one or more well-respected professional organizations, conducts carefully controlled and documented research, and reports these findings in professional journals that maintain high standards for accepting research papers. By contrast, those claiming to be an alternative to establishment science have no common set of standards or practices from which measurements and comparisons can be made or quality of performance judged. Personal testimonies and casual observations quite often serve as the basis of their research rather than act as the impetus to begin research.
Peterson37 has likened improperly designed research to a man rowing a boat from only one side:

No matter how long or how hard he works, he never succeeds in doing anything except going in a circle, never realizing that it isn't his dedication or his strength but his method that is flawed. Until fringe research puts both oars in the water, it is doomed to remain where it has always been: spinning aimlessly near the shores of science.

True medical scientists have no philosophical commitment to particular treatment approaches, only a commitment to develop and use methods that are safe and effective for an intended purpose. Several observers have noted that pseudoscientists use hypotheses and data differently from scientists. Whereas scientists test hypotheses, abandon disproved ones, and welcome review of their findings and conclusions, pseudoscientists reject findings that contradict their beliefs and accuse critics of prejudice and conspiracy.38,39 In this regard, Criss40 explains why we should not assume that people with strange ideas are modern Galileos:

To be a true analogy, these people would have to do experiments, make observations, and bring these results for all to see and question in an open forum. Further, they would have to be denied freedom of speech and press, or any expression all over the land—for that was the injunction against Galileo in 1616! It was as a result of this experience that the scientific method was adopted among scientists. . . . It has allowed the replacement of old ideas with new ones, and has provided a means of judging, and discarding, unfounded ideas.

Beyerstein43 has noted that "alternative" practitioners rarely produce scientific data:

Unless an unconventional therapist keeps detailed records of a sufficiently large number of patients with the same complaint, we have no way of knowing whether the reported number of "cures" exceeds the normal unaided rate of recovery for the symptoms in question. Fringe practitioners rarely keep such data, preferring to publicize lists of satisfied customers rather than the percentage of the total cases that they represent. In addition, because alternative healers practically never carry out long-term follow-up studies, neither do we know how many of their clients receive temporary symptom relief rather than a genuine cure.

Educational institutions. Educational standards are maintained through a system of accreditation by agencies approved by the U.S. Secretary of Education or the Council on Recognition of Postsecondary Accreditation. Accredited institutions tend to have well-trained faculty members and provide reliable guidance to their students. However, Chapters 8 and 9 note that acupuncture, chiropractic, naturopathy, and massage therapy schools have their own accreditation agencies, even though much of their teachings are unscientific. Nonaccredited schools that teach health subjects are usually untrustworthy, and some are diploma mills that issue "degrees" and certificates whose only requirement is the payment of a fee. Chapter 11 discusses the problem of bogus nutrition credentials.

Many elementary and high school teachers of health subjects have had minimal formal training in these subjects and hold beliefs similar to those of the general public. Consequently, many misconceptions are passed from teacher to student.
In 1983, Dr. Roger Lederer, professor of biology, and Dr. Barry Singer, associate professor of psychology, California State University,44 noted that problems existed even at universities:

In recent years the teaching of pseudoscience and quackery in universities has become common and apparently accepted under the aegis of academic freedom. Typically the material is not formally presented as "Pseudoscience 101," but is offered as a component of a regular course.

Over the years, at many schools, the situation has become even worse. As Mole45 has summarized:

"Science and society" classes do not nurture the critical thinking abilities of students. They only nurture a deep suspicion toward all truth claims, particularly those claims perceived to clash with the political ideals of students. . . . If there are no valid criteria for accepting the truth of science, then virtually any idea about the empirical world is valid and there are no authoritative reasons to reject or accept any particular idea.

Many medical schools, hospitals, and professional organizations offer courses they identify as "alternative," "complementary," and/or "integrative" methods. Some are appropriately critical, but most provide a forum for promoters. The Accreditation Council for Continuing Medical Education (ACCME) states that "all the recommendations involving clinical medicine in a CME activity must be based on evidence that is accepted within the profession of medicine as adequate justification for their indications and contraindications in the care of patients." However, the ACCME has been very lax in

Historical Perspective: The Pseudoscience of Biorhythms

Creation of biorhythm theory is attributed to Wilhelm Fliess, a German surgeon who postulated in the 1890s that behavior was determined by innate "male" and "female" rhythms of the body. Current theory postulates three cycles: a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle. Proponents claim: (a) each cycle begins at the exact moment of birth and oscillates up and down with absolute precision throughout life; (b) when the cycles are high, people are likely to be at their best; when they are low, the opposite is true; and (c) on critical days, when