SCIN 137 Introduction to Meteorology (American Military University), Week 7 Lesson: US Climate Lab
Introduction
Topics to be covered include:
- Definition of statistics and its relevance to hypothesis testing
- Measures of central tendency
- Standard deviation and standard error
- Commonly used statistical tests and interpretation of p-value
In the last lesson, we discussed the importance of properly organizing data in spreadsheets and learned how to create graphs in Excel to visualize our data. Now we turn our attention to the statistical tests that support our graphs by quantifying patterns and relationships between variables. We will introduce the concepts of data distribution and sample size, measures of central tendency, standard deviation and standard error, and the importance of p-values. Finally, we will define some of the most commonly used statistical tests and when to use them.
The Statistical Connection Between Science and Seafood
Whether you realize it or not, the results of statistical analyses permeate multiple areas of our daily lives. Statistical tests quantify relationships between variables, verify the safety and efficacy of new medications, help farmers determine which fertilizers or pesticides to use to increase crop yields, and even benefit conservation and resource management. For example, if you enjoy seafood, then you have already benefited from statistical analyses.
According to the United Nations Food and Agriculture Organization, humans consume approximately 85 percent of the fish caught globally, with the remainder converted into fishmeal and fish oil for aquaculture (Marine Stewardship Council, n.d.). In 2013, fish accounted for approximately 17 percent of our global intake of animal protein, and people in developing countries and coastal areas may receive more than 25 percent of their animal protein from fish (Marine Stewardship Council, n.d.). These numbers do not take into account people who simply enjoy fishing for recreation.
However, as the global population increases, so will the demand for seafood. If we want to ensure sustainable populations of economically valuable fish species for future generations, then we need rigorous statistics and scientific assessments to help fisheries managers calculate how many fish we can safely remove each year.
The United States has eight regional councils that set annual catch limits for federal fisheries, and the South Atlantic Fishery Management Council uses a unique approach to stock assessment called SEDAR, which stands for SouthEast Data, Assessment, and Review (South Atlantic Fishery Management Council, 2017). Established in 2002, SEDAR is a cooperative process between the South Atlantic Fishery Management Council and the National Oceanic and Atmospheric Administration (NOAA) that conducts stock assessments through a series of data workshops to improve the quality and reliability of these assessments (South Atlantic Fishery Management Council, 2017).
The SEDAR process proceeds through week-long workshops to compile data sets, conduct quantitative population analyses and estimate population sizes, and allow panels of experts to review the assessments to ensure their validity (SEDAR, n.d.). In addition, anyone can attend the workshops, including college or graduate students, curious members of the general public, recreational fishermen, and charter boat captains. Sometimes the boat captains and fishermen can provide useful insights from their personal experiences on the water to verify the biologists’ data.
SEDAR represents a unique effort to connect the general public with scientific research by making the workshops transparent and accessible, and the results of the stock assessments help inform conservation efforts, fishing regulations, and annual catch limits. These useful results come from data that are carefully analyzed. As we delve into statistical analyses in this lesson, challenge yourself to think of ways those tests may impact other facets of your daily life.
SEDAR (Southeast Data, Assessment, and Review)
The Importance of Statistics for Hypothesis Testing
Recall from Lesson 3, where we discussed how to formulate a hypothesis. When you state a hypothesis for a scientific study, you actually create two hypotheses: a null hypothesis and an alternative hypothesis. The null hypothesis states that no relationship exists between the independent and dependent variables, while the alternative hypothesis proposes that a relationship exists; it is the researcher’s educated guess or explanation of the phenomenon in question (Silva-Ayçaguer, Suárez-Gil, & Fernández-Somoano, 2010).
For most studies, researchers use statistics to try to reject the null hypothesis, which in turn lends support to the alternative hypothesis. For example, the pharmaceutical company Merck voluntarily withdrew Vioxx, an anti-inflammatory drug used to relieve arthritis and acute pain (U.S. Food and Drug Administration, 2016), from the market after statistical analyses showed an increased risk of cardiovascular complications (CNN, 2004). In this case, the null hypothesis would have stated that no relationship exists between Vioxx and cardiovascular problems. Researchers in clinical trials used statistics to reject the null hypothesis and instead support the alternative hypothesis, which posited a relationship between the drug and cardiovascular complications such as heart attacks and strokes (CNN, 2004).
So how exactly do statistics show relationships between variables? For the rest of this lesson, we will discuss how to summarize large data sets based on measures of central tendency, standard deviation and standard error, and data distributions. From there, we will define some of the most commonly used statistical analyses and when to use them. We will also analyze case studies to practice determining which tests are most appropriate for the research in question.
Measures of Central Tendency
In the last lesson, we primarily worked with raw data when creating spreadsheets and graphs in Microsoft Excel. After the data are visualized (e.g., with scatterplots) and understood, researchers then calculate individual summary statistics that further describe the data. These statistics condense a large data set into a few values that can be used to draw conclusions. We call these summary statistics measures of central tendency because they describe a data set by identifying the central position within the data (Lund Research Ltd., 2013a). The three most common measures of central tendency are the mean, median, and mode.
Mean
The mean, also called the average, of a data set is the most popular measure of central tendency, and we briefly introduced calculating means in Microsoft Excel in the previous lesson. We calculate the mean by adding together all the values in a data set and dividing by the number of values (Lund Research Ltd., 2013a). For example, consider the following data set:
13, 14, 15, 16, 17
We would calculate the mean as follows: (13 + 14 + 15 + 16 + 17) / 5 = 15
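To check this arithmetic programmatically, here is a minimal sketch in Python (the lesson itself uses Excel, so the language choice here is ours); the standard library's statistics module provides a mean function:

```python
from statistics import mean

data = [13, 14, 15, 16, 17]

# Add every value and divide by the count: (13 + 14 + 15 + 16 + 17) / 5
print(mean(data))  # 15
```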
The mean provides a valuable model of the data set because it incorporates every value in the calculation, and researchers frequently rely on the mean when summarizing data sets and choosing further statistical analyses. However, the mean has one major disadvantage. Outliers, defined as values numerically distant from the rest of the data set, can easily skew means so that they no longer accurately represent the data (Moore & McCabe, 1999). Careful graphing of the data, as we described in Lesson 6, can help identify data sets with outliers. Let’s consider another data set:
10, 10, 15, 15, 3000
We would calculate the mean as follows: (10 + 10 + 15 + 15 + 3000) / 5 = 610.
At first glance, the value 3000 probably looks out of place, and the mean does not provide the best measure of central tendency. Outliers may occur in data sets due to natural variation, measurement error, equipment malfunction, experimental or human error, inadequate sample size, or even study participants intentionally reporting incorrect data. If researchers determine that an error created an outlier, then they can safely exclude that value from analysis. However, if the outlier occurred due to random chance or some other natural process, they should keep it in the data set and use a different measure of central tendency.
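A quick Python sketch shows numerically how much a single outlier can pull the mean away from the bulk of the data (excluding the value 3000 here is purely illustrative, as if an error had been confirmed):

```python
from statistics import mean

data = [10, 10, 15, 15, 3000]

print(mean(data))       # 610 -- inflated far above four of the five values
print(mean(data[:-1]))  # 12.5 -- the mean once the suspect value 3000 is removed
```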
Median
The median is the middle value in a data set that has been arranged from smallest to largest values (Lund Research Ltd., 2013a). The median works well for describing the central tendency of data sets with outliers. For example, if we consider our previous data set:
10, 10, 15, 15, 3000
The median is 15, which provides a better measure of central tendency than the mean. But what happens when you have an even number of values? If we amend our data set:
10, 10, 13, 15, 15, 3000
Now we have six values, so you may ask whether 13 or 15 represents the median. We calculate the median by finding the mean of the two middle values as follows: (13+15)/2 = 14.
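The same results can be verified with Python's statistics.median, which handles both the odd and even cases automatically:

```python
from statistics import median

# Odd number of values: the middle value of the sorted data is the median.
print(median([10, 10, 15, 15, 3000]))      # 15

# Even number of values: the mean of the two middle values, (13 + 15) / 2.
print(median([10, 10, 13, 15, 15, 3000]))  # 14.0
```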
Mode
The mode is the most frequent value in a data set and can be visualized as the highest bar in a histogram or largest section of a pie chart (Lund Research Ltd., 2013a). If we consider this data set:
10, 13, 14, 14, 15
The value 14 appears twice in the data set while the other values appear once, so 14 is the mode. Researchers often use the mode to describe categorical data sets where they wish to know the most common or popular category. For example, we could conduct a survey where we ask students their preferred study technique from choices such as note cards, reading notes, highlighting sections of the textbook, or drawing diagrams. Whichever option gets the most votes would represent the mode.
The mode, however, encounters difficulty when two or more values share the highest frequency in a data set. Let’s look at another data set:
10, 10, 13, 14, 15, 15
In this instance, the values 10 and 15 each appear twice, and neither one falls near the center of the data set. As a result, the mode would not be the most appropriate measure of central tendency for this data set.
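Python's statistics module (version 3.8 or later) illustrates both situations; multimode returns every value tied for the highest frequency, which flags data sets where a single mode is not a good summary:

```python
from statistics import mode, multimode

print(mode([10, 13, 14, 14, 15]))           # 14 -- one clear most frequent value

# Two values tie for the highest frequency, so no single mode marks the center.
print(multimode([10, 10, 13, 14, 15, 15]))  # [10, 15]
```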
Overall, measures of central tendency provide an excellent starting point for analyzing data sets, but they usually do not completely explain experimental results. For example, let’s consider the following two data sets:
Data set A: 13, 14, 15, 16, 17
Data set B: 3, 12, 15, 19, 26
Both data sets have the same mean and median, which is 15. Furthermore, since each value appears only once in each data set, the mode does not provide a useful measure of central tendency. However, we can clearly see that these two data sets differ greatly: the range for data set A is 4, while the range for data set B is 23. We need more advanced statistical tools to quantify the differences between these two data sets, but before we can perform these analyses, we need to consider additional factors such as standard deviation, standard error, data distribution, and sample size, which we will discuss next.
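A short Python sketch makes the contrast concrete; the stdev call previews the sample standard deviation, which this lesson takes up next:

```python
from statistics import mean, median, stdev

a = [13, 14, 15, 16, 17]
b = [3, 12, 15, 19, 26]

print(mean(a), median(a))  # 15 15 -- identical centers...
print(mean(b), median(b))  # 15 15
print(max(a) - min(a))     # 4  -- ...but very different spreads
print(max(b) - min(b))     # 23
print(stdev(a), stdev(b))  # ~1.58 vs ~8.51 -- the standard deviations differ markedly
```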