This article first explains, from the perspective of historical materialism, why the emergence of the "Steam Age" was both necessary and inevitable: the pace of scientific and technological development, and of social progress, is unstoppable.
The invention of the World Wide Web is a good example. Although its inventor, Tim Berners-Lee, is often cast as a lone hero, the essence of the story is still the coupling of an individual life with history: had Berners-Lee never been born, the World Wide Web would still have been invented at around the same time by some other historical figure, because every trend in the development of culture and technology was already pointing toward the era of the network. The Web's rise also rested on a self-fulfilling technological prediction of the same kind: Moore's law.
Moore's law, based on the observations of Gordon Moore, one of the co-founders of Intel, states that the number of transistors that can be placed on an integrated circuit doubles roughly every 24 months. In other words, processor performance roughly doubles every two years.
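The doubling rule can be written as a simple exponential. The sketch below is illustrative only (the function name and the Intel 4004 starting figure are this example's assumptions, not the article's):

```python
# Moore's law as a simple exponential model: the transistor count
# doubles every `doubling_period` years (24 months by default).
def transistor_count(initial: int, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years` years of doubling."""
    return initial * 2 ** (years / doubling_period)

# Starting from roughly 2,300 transistors (the Intel 4004, 1971),
# twenty years of doubling every two years gives 2**10 = 1024x growth:
print(transistor_count(2300, 20))  # 2355200.0
```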
Although Moore's law is called a "law," it is really an empirical observation, or, as this article calls it, a "mapping."
For a long time, Moore's law kept extending its influence into new fields. But when people later tried to use it to explain the social phenomena driven by the development of the semiconductor industry, they found that Moore's law determines the development of society and history only within a limited range: semiconductors boosted machine capability by leaps and bounds, but beyond that, the results of computation are limited by the data itself.
In the essay "The End of Theory," Chris Anderson argued that "we no longer need to find ways to build models" and that "in the era of big data, correlation is enough."
The problems of the data itself are mainly reflected in three aspects:
1. Authenticity of the data / academic fraud
2. Integrity of the data / the "big data fallacy"
3. Reliability of the data-processing method / p-hacking
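The third problem is easy to quantify. The sketch below is standard probability, not a method from this article: it shows why running many tests on pure noise almost guarantees a spurious "significant" result, which is exactly what p-hacking exploits.

```python
# Family-wise error rate: the probability of at least one false
# positive across n independent tests at significance level `alpha`,
# even when every null hypothesis is actually true.
def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** n_tests

print(round(familywise_error_rate(1), 3))   # 0.05
print(round(familywise_error_rate(20), 3))  # 0.642
```

With twenty tests on random noise, there is roughly a 64% chance of at least one result that looks significant at p < 0.05; reporting only that result is the essence of p-hacking.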
Google's translation algorithm can easily build a translation mapping system from correlations within big data; p-hacking in academic research, by contrast, exploits correlation in reverse, and is both contrary to academic ethics and harmful to the development of science.
A general problem in the scientific community is that, despite the explosive recent growth in research literature and in the volume of data mined in some research directions (combinatorial chemistry has sped up the synthesis of drug-like molecules roughly 800-fold, DNA sequencing is about a billion times faster than the first sequencing efforts, and protein databases have grown 300-fold over 25 years), the number of new drugs actually discovered has declined exponentially.
On the one hand, this is due to the "better than the Beatles" problem; on the other hand, the automated, systematic measurement of high-throughput screening is essentially just a matter of volume, and is not necessarily a better solution in terms of depth or efficiency.
This is a fascinating situation. On the one hand, these problems are so complicated that the human brain cannot fully grasp them, yet a computer can represent and manipulate them; on the other hand, this confronts us with fuzziness, unpredictability, and apparent paradox. That awareness is itself contradictory, because it exceeds our ability to consciously express it.
The way we think the world is shaped by the tools at our disposal. As the historians of science Albert van Helden and Thomas Hankins put it in 1994, "Because instruments determine what can be done, they also determine to some extent what can be thought."
Vast quantities of data are necessary to see the problems with vast quantities of data. What matters is how we respond to the evidence in front of us.