Not All Data are Created Equal

  • Big Data
  • General
  • Statistics
  • Statistics 2.0

Suppose we have data on 60,000 households. Are they useful for analysis? If we add that the amount of data is very large, say 3 TB or even 30 TB, does that change your answer?
 
The U.S. government collects monthly data from 60,000 randomly selected households and reports on the national employment situation. Based on these data, the U.S. unemployment rate is estimated to within a margin of sampling error of about 0.2%. Important inferences are drawn and policies are made from these statistics about the U.S. economy, which comprises some 120 million households and 310 million individuals.
 
In this case, data for 60,000 households are very useful.
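For a rough sense of where a figure like 0.2% comes from, here is a minimal sketch using the textbook formula for the sampling error of a proportion under simple random sampling. The sample size comes from the text; the assumed unemployment rate of 6% and the 95% confidence level are illustrative assumptions, and the actual survey design is considerably more complex.

    import math

    # Rough check of the ~0.2% margin of sampling error cited above,
    # assuming simple random sampling (the real survey design is more complex).
    n = 60_000   # sampled households
    p = 0.06     # assumed unemployment rate, roughly 6% (illustrative only)
    z = 1.96     # multiplier for ~95% confidence

    standard_error = math.sqrt(p * (1 - p) / n)
    margin_of_error = z * standard_error
    print(f"Margin of sampling error: {margin_of_error:.3%}")  # about 0.2%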
 
These 60,000 households represent only 0.05% of all the households in the U.S. If they were not randomly selected, the statistics they generate would contain unknown and potentially large biases and could not be relied on to describe the national employment situation.
 
In this case, data for 60,000 households are not useful at all, regardless of what the file size may be.
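The effect of non-random selection can be illustrated with a toy simulation; the population size, employment rate, and response pattern below are assumptions made up for illustration, not figures from the actual survey.

    import random

    # Toy illustration: estimate an employment rate from a random sample
    # versus a non-random sample in which some households respond more often.
    random.seed(0)

    # Hypothetical population: 94% of 1,000,000 households are "employed" (1).
    population = [1] * 940_000 + [0] * 60_000
    random.shuffle(population)

    # Random sample of 60,000 households: unbiased on average.
    random_sample = random.sample(population, 60_000)
    print(sum(random_sample) / len(random_sample))   # close to 0.94

    # Non-random sample: employed households are twice as likely to be included.
    weights = [2 if employed else 1 for employed in population]
    biased_sample = random.choices(population, weights=weights, k=60_000)
    print(sum(biased_sample) / len(biased_sample))   # noticeably above 0.94

Both samples contain exactly 60,000 households, yet only the first supports a reliable inference about the population.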
 
Suppose further that the 60,000 households are all located in a small city that has only 60,000 households. In other words, they represent the entire universe of households in the city. These data are potentially very useful. Depending on their content and relevance to the question of interest, the usefulness of the data may again range widely between two extremes. If the content is relevant and the quality is good, file size may then become an indicator of how useful the data are.
 
This simple line of reasoning shows that the original question is too incomplete for a direct, satisfactory answer. We must also consider, for example, how the sample was selected, how well it represents the population under study, and the relevance and quality of the data relative to the specific hypothesis being investigated.
 
The original question of data usefulness was seldom asked until the Big Data era began around 2000, when electronic data became widely available in massive amounts at relatively low cost. Before then, data were usually collected for a known, specific purpose, such as an exploration to conduct, a hypothesis to test, or a problem to resolve. Collecting data was costly; by the time they were collected, they were already considered potentially useful for the intended analysis.
 
For example, in the 1930s, when the nation was mired in the Great Depression, the U.S. government began to collect data from randomly selected households so that it could produce more reliable and timely statistics about unemployment. This practice continues to this day.
 
Statisticians initially considered data mining a bad practice. It was argued that, without a prior hypothesis, aimlessly “fishing,” “dredging,” or “snooping” through data inevitably leads to false or misleading identification of “significant” relationships and patterns. An analogy is over-interpreting the fact that a person won a lottery: the winner does not necessarily possess any special skill or knowledge about winning, but random chance dictates that someone must eventually win.
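A small simulation makes the point concrete; the data below are purely synthetic noise, generated only to show that comparing enough unrelated variables against an outcome will produce a predictable number of spurious “significant” findings.

    import random
    import statistics

    # Toy illustration of data dredging: correlate an outcome of pure noise
    # against many unrelated noise variables and count apparent "discoveries".
    random.seed(1)

    n_obs, n_variables = 100, 200
    outcome = [random.gauss(0, 1) for _ in range(n_obs)]

    def correlation(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        sx, sy = statistics.stdev(xs), statistics.stdev(ys)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

    # With 100 observations, |r| > 0.2 is roughly the 5% significance cutoff.
    false_hits = sum(
        1
        for _ in range(n_variables)
        if abs(correlation(outcome, [random.gauss(0, 1) for _ in range(n_obs)])) > 0.2
    )
    print(f"{false_hits} of {n_variables} unrelated variables look 'significant'")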
 
Although the argument about false identification remains valid today, it has been overtaken by the abundance of Big Data that are frequently collected without design or even structure. Dismissing the data-driven approach entirely forgoes the chance of uncovering hidden, meaningful relationships that have not been, or cannot be, established as a priori hypotheses. An analogy is the prediction of hereditary disease and the study of potential treatments: once data on the entire human genome are collected, they can be explored and compared for the systematic identification and treatment of specific hereditary diseases.
 
Not all data are created equal, nor are they equally useful.
 
Complete and structured data can create dynamic frames that describe an entire population in detail over time, providing valuable information that has never been available in previous statistical systems.  On the other hand, fragmented and unstructured data may not yield any meaningful analysis no matter how large the file size may be.
 
As problem solving rapidly expands from a hypothesis-driven paradigm to include a data-driven approach, the fundamental questions about the usefulness and quality of these data have also grown in importance. While the question of study interest may not be specified a priori, it must still be established after the data are collected and before any analysis is conducted. We cannot obtain a correct answer to non-existent questions.
 
How were the samples selected? How well does the sample represent the universe of inference? What are the relevance and quality of the data relative to the hypothesis of interest, even when it is specified after the fact? File size has little to no meaning if the usefulness of the data cannot even be established in the first place.
 
Ignoring these considerations may lead to the need to update a well-known quote: “Lies, Damned Lies, and Big Data.”
Tags: Data Structure, Lies, Random Sampling
May 30, 2014 Jeremy
