# Data Literacy 101: Did enrollment drop in Rhode Island's private preschools?

By Kassira Absar, Research Associate

## Margins of error, statistical significance, and chicken noodle soup

The latest American Community Survey data show that, among all Rhode Island children age three and over enrolled in preschool, the percentage enrolled in private preschool dropped five percentage points between 2015 and 2016. If you’re employed at a private preschool, should you be concerned? Is it time to start looking for a new job?

Maybe. Let’s take a look.

When comparing survey data for two groups or time periods, we need to consider whether the numbers (known as “point estimates”) we see at first glance—53 and 47, in this example—actually indicate different values in the populations they represent. For this we turn to tests of statistical significance.

What does being “statistically significant” mean? Statistics are all about probability, and when a difference is significant, it means we can be reasonably confident that the change or difference between point estimates is real and not due to chance. We use statistics to help us understand something about an entire population based on a smaller sample of that population.

Think of a pot of chicken noodle soup, with its carrots, onions, and chicken, as the population you want to understand. You may be curious how much of that soup is made up of onions, but you probably don’t want to sift through the entire pot. Assuming the soup is well stirred, you can make a pretty good estimate based on one ladle of soup. Just as each scoop of the ladle may or may not contain the same amount of onion as the next, each survey sample will yield slightly different results, even if carefully drawn using the same method at the same time.

Returning to our Rhode Island preschoolers: We know the change in enrollment is not statistically significant because we tested it, which you can do yourself with many tools available online. Thus, the five percentage point change between 2015 and 2016 may be happenstance, and we should not draw any lofty conclusions about the state of enrollment in private preschools in Rhode Island. Let’s dive a little deeper.

Our test of statistical significance relies on a concept called “margin of error.” MOEs, affectionately called “Moes,” provide a range around the point estimate within which the actual value is likely to lie. For example, if a ladle of soup is five percent onion, it may be that onions really make up three to seven percent of the larger pot (in this case, the MOE is plus or minus two percentage points).

The 2016 point estimate for children age three and above enrolled in private preschool is 47 percent, but the published margin of error (plus or minus seven percentage points) tells us that we can be reasonably confident that the real enrollment rate, were we able to measure it for all preschool enrollees in Rhode Island, lies somewhere between 40 and 54 percent. Similarly, the 2015 enrollment rate, with a nine percentage point MOE, may lie anywhere in the range 44 to 62 percent.
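The arithmetic behind those ranges is simply the point estimate plus or minus its MOE. A minimal sketch, using the ACS figures quoted above:

```python
# Sketch: a margin of error (MOE) defines a range around a point estimate.
# Figures are the ACS percentages quoted above; both are in percentage points.
def moe_range(estimate, moe):
    """Return the (low, high) range implied by a point estimate and its MOE."""
    return estimate - moe, estimate + moe

print(moe_range(47, 7))  # 2016: (40, 54)
print(moe_range(53, 9))  # 2015: (44, 62)
```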

This means that instead of going down from 2015 to 2016, there is a chance the actual enrollment rate was exactly the same for both years; it may have held steady at 50 percent. It is also entirely possible the rate went up from 44 to 54 percent. Or dropped even further from 62 to 40 percent. Given all the possibilities—increase, decrease, no change—it is clear why we lack confidence in the supposed five percentage point decrease in enrollment, based only on comparing point estimates.
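If you want to run the significance test yourself rather than rely on an online tool, the Census Bureau publishes a standard approach for comparing two ACS estimates: divide each MOE by 1.645 (the z-score behind the Bureau’s 90 percent MOEs) to recover a standard error, then compare the difference to its combined standard error. A minimal sketch, assuming the point estimates of 53 and 47 percent and their published MOEs:

```python
import math

# Sketch of the Census Bureau's published method for testing whether two
# ACS estimates differ significantly, assuming the usual 90% MOEs.
Z90 = 1.645  # z-score corresponding to a 90 percent confidence level

def is_significant(est1, moe1, est2, moe2, z=Z90):
    """Return (z_statistic, significant) for the difference est1 - est2."""
    se1, se2 = moe1 / z, moe2 / z  # convert each MOE back to a standard error
    z_stat = abs(est1 - est2) / math.sqrt(se1**2 + se2**2)
    return z_stat, z_stat > z

# 2015: 53% +/- 9 points; 2016: 47% +/- 7 points
z_stat, significant = is_significant(53, 9, 47, 7)
print(round(z_stat, 2), significant)  # 0.87 False
```

The test statistic falls well short of 1.645, which matches the article’s conclusion: the apparent drop is not statistically significant at the 90 percent level.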

Of course, that is not the end of the story. The ranges of possible values, called “confidence intervals,” are based on levels of certainty. The U.S. Census Bureau typically provides MOEs at a 90 percent certainty level, meaning that there is still a 10 percent chance that the real population value for Rhode Island’s private preschool enrollment fell outside of the 40 to 54 percent range in 2016.
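If 90 percent certainty isn’t enough, a published 90 percent MOE can be rescaled to a stricter confidence level by swapping z-scores, a conversion the Census Bureau describes in its ACS documentation. A sketch, applied to the 2016 figure:

```python
# Sketch: rescale a Census-published 90% MOE to another confidence level
# by swapping z-scores (1.645 for 90 percent, 1.96 for 95 percent).
def rescale_moe(moe_90, z_new=1.96, z_old=1.645):
    """Convert a 90% MOE to the MOE at a different confidence level."""
    return moe_90 * z_new / z_old

# The 2016 MOE of +/- 7 points widens at 95 percent confidence.
print(round(rescale_moe(7), 1))  # 8.3
```

Greater certainty comes at the cost of a wider interval: the more confident we want to be that the range contains the true value, the more values the range must cover.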

Additionally, accurate margins of error and statistical tests assume well-constructed, representative surveys, an assumption that isn’t always safe (although it is very likely safe in the case of the American Community Survey data used here).

Finally, the word “significant” can be misused. Just because something is statistically significant does not mean it is practically significant. Most decisions should not be based on statistical significance alone.

Here at the APM Research Lab we generally consider statistical significance as a sort of minimum but not necessarily sufficient threshold. Why? Well, what if we had found that the difference between 53 percent and 47 percent was significant? Should that one-year drop send teachers at Rhode Island’s private preschools scrambling to find new jobs?

Probably not. But they might consider having a nice bowl of soup!

-Kassira

Reactions? Please **email us** your thoughts or respond on **Twitter** or **Facebook**.

*This article was authored by Kassira Absar, former Research Associate for the APM Research Lab.*