Chapter 2 traces the history of computation, the threats it poses, and the ways it shapes our lives and our ways of thinking.
At the very beginning of the chapter, the author quotes John Ruskin, who in 1871 warned of a 'storm-cloud of the nineteenth century' that was changing the world. Ruskin believed that light has a moral quality: it orders our intelligence and our vision, and in that sense shapes what we think and how we think.
To some degree, the light Ruskin described came into being in 1880, when Alexander Graham Bell first demonstrated a device called the photophone, which could transmit the human voice 'wirelessly' by bouncing a beam of light. The photophone had its limitations – atmospheric conditions affected the sound it produced, so that, as Bell put it, one 'heard the sun laugh and cough and sing' – but it was nevertheless a new way to carry complex information quickly. It showed that machines could connect the world: Ruskin's light was reified in the network.

However, as the chapter notes, thinking through machines predates the machines themselves. The story of computational thinking begins with the weather.
In 1916, the mathematician Lewis Fry Richardson was at work on the Western Front; as a pacifist, he had joined an ambulance unit. Over several months he worked out the first full calculation of atmospheric weather conditions – the first 'computerised' daily forecast – without a computer. His work had in fact begun earlier, at the Eskdalemuir Observatory, drawing on observations gathered in 1910 by hundreds of observers across Europe. Richardson believed it would be possible to predict the weather over successive hours, given a rich enough body of weather data and observations.
Although the completed forecast disagreed with the actual observed data, it proved the usefulness of the method: break the world into grid squares, and solve the weather equations for each square. The process, however, required technology that could match the scale and speed of the weather, which Richardson did not have.
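To make the grid-square idea concrete, here is a minimal sketch of my own, not Richardson's actual equations: a single toy field standing in for one weather variable is stepped forward on a grid of squares, each square updated from its neighbours. The `step` function and its diffusion-style rule are illustrative assumptions.

```python
# Toy illustration of the grid-square idea (not Richardson's real equations):
# one scalar field, standing in for a weather variable such as temperature,
# is advanced in time on a grid of squares, each square updated from its
# neighbours.
import numpy as np

def step(field, diffusivity=0.1):
    """Advance the field one time step with a simple neighbour-averaging rule."""
    up    = np.roll(field, -1, axis=0)
    down  = np.roll(field,  1, axis=0)
    left  = np.roll(field, -1, axis=1)
    right = np.roll(field,  1, axis=1)
    # Each square moves toward the mean of its four neighbours.
    return field + diffusivity * (up + down + left + right - 4 * field)

grid = np.zeros((6, 6))   # a 6x6 "map" of grid squares
grid[3, 3] = 10.0         # one warm square in the middle

for _ in range(5):        # five forecast steps
    grid = step(grid)

print(np.round(grid, 2))  # the warmth has spread to neighbouring squares
```

Even this toy version makes the scaling problem obvious: a real forecast needs far more squares, many variables per square, and many steps per hour, which is exactly the arithmetic Richardson could not supply by hand.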
In 1922, therefore, Richardson laid out a thought experiment. In it, the 'computers' are human beings filling a huge building, each working on the part of the map where he or she sits. Senior clerks collect the future weather as it is computed, carry the forecasts to a quiet room, and code and dispatch them. But, as Richardson admitted, 'that is a dream'.
With the Second World War came a boom in research funding. As the chapter describes, an overwhelming flow of information poured from a newly networked world, and the system of knowledge production expanded rapidly.
In 1945, the engineer Vannevar Bush proposed a machine called the 'memex' to address this problem. All of one's books, records, and communications could be stored in it. As the chapter describes it, the memex would allow anyone to bring discoveries from many disciplines together in a single machine, linking documents in multiple ways and creating associations between domains of knowledge. In Bush's words, it promised 'wholly new forms of encyclopedias'.

The next figure is John von Neumann, who worked on the Manhattan Project. Both von Neumann and his colleague Vladimir Zworykin had studied meteorology. They believed that all stable processes could be predicted: with sufficiently complex simulations of physical processes, real-world outcomes could be forecast. The key was to build calculating machines.
Von Neumann soon had such a machine to work with: ENIAC (the Electronic Numerical Integrator and Computer), completed in the mid-1940s. ENIAC was huge, filling a large room. It was utterly different from a computer today, but it could solve many of the problems the US Army faced. The author describes it as 'a legible machine': even a casual observer could watch the blinking lights pick out different operations as they progressed around the walls of the room.

Later, IBM built another machine, the SSEC (Selective Sequence Electronic Calculator), smaller than ENIAC. It had a sleek, modern appearance and was installed in full view of the public, behind thick plate glass, with a raised floor to hide the unsightly cabling. The SSEC sat in a room beside a ladies' shoe shop, calculating planetary positions – data later used by NASA – while passers-by had little idea what it was doing.
Computers were becoming more and more invisible.

Built to process streams of real-time data, including weather data, Whirlwind I appeared in 1951. It laid the groundwork for SAGE (Semi-Automatic Ground Environment), a vast military computer system for air defence. SAGE ran what was then the largest single computer program ever written. Yet it made a number of mistakes, such as confusing simulation data with an actual missile attack, or mistaking migrating birds for incoming Soviet bomber fleets.
The problem with computation is that it can fail to distinguish between simulation and reality, replacing the world with flawed models of itself.
So much for the history of computation. The question now is: can computation be relied upon?
The author's answer is no, for several reasons.
First, computation may not suit every problem. Computation turns problems into abstractions, and when it comes to broader questions, such as how to build an egalitarian society, it seems intractable. There is also a dilemma in applying computation to morally questionable areas. Was it right to use ENIAC to run simulations for the atomic bomb? Does it help people or kill people? It is hard to answer.
Second, the data computation presents may not be real. Flight data is a typical example. It is easy to see what looks like complete information on a simple website: every flight broadcasts an ADS-B signal, which anyone with a radio receiver can pick up and share online. But, as the author stresses, this God's-eye view is illusory. Records can be blocked or erased easily – private jets and politicians' flights, for example, often simply disappear.

Another example is GPS. The signal is available to civilians and seems unquestionable and easy for anyone to use, but it can be manipulated by those who control it, including the US government, and GPS devices can be fooled easily. A cheating Pokémon GO player can appear to be walking through the real world while sitting in a room.

Another problem arises as computation gains ever more authority over our lives. 'Code/space' is a term for a space that depends on computation to function. The problem is what happens when the computation stops working. An airport is a code/space through and through: the whole operation runs on computers, and without the system nothing can be done, not even checking a ticket.

Nowadays, code/space is not confined to particular buildings. We can hardly get through daily life without a phone, and e-books and other online content cannot be downloaded without the internet. Life and culture themselves have become code/space.

Computation also produces bias. We find it easier and easier to believe information from automated systems. In one experiment, Airbus crews followed most of the automated alerts they were given, even though only about a third of the alerts were meaningful; they seldom checked what had actually happened. Worse, we may reshape our behaviour to fit the automated system. Few people ignore the instructions of a GPS device. Sometimes the device directs a driver down the wrong road, or down a road unsuitable for driving, and drivers who keep following the instructions without looking around have ended up in lakes, or even dead.
The cause lies not in the computation but in human beings themselves. Humans tend towards the simpler choice, the shortcut, while automated machines can handle complex situations and issue instructions in real time. So people follow the results of computation without thinking. Our modes of thought are being shaped by computation.
Computation gathers data, builds models, and predicts the future. But the real world contains uncertainty and ambiguity that resist definite computation, and whatever cannot be computed is excluded from the field of possible futures. Computation insists that there is an answer, yet science – climate science, for example – is full of uncertainty.
To return to Ruskin: computation has become not only the storm-cloud that changes the world but also the light that shapes us. The question is whether we should believe in it. At the end of the chapter, the author mentions the 'coastline paradox': the more accurately we measure a coastline, the longer it gets, as ever smaller wiggles are taken into account, and a true result seems impossible to reach. As the last sentence puts it, the more obsessively we attempt to compute the world, the more unknowably complex it appears.
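As a rough numerical illustration of the paradox – my own sketch, using the Koch curve as a stand-in for a real coastline; the helper functions `koch` and `length` are illustrative names – measuring the 'same' coast with ever finer rulers produces an ever longer answer:

```python
# Toy illustration of the coastline paradox, using the Koch curve as a
# stand-in coastline: each refinement measures the same coast with a ruler
# one third the size, and the total measured length keeps growing.

def koch(points, depth):
    """Refine a polyline into a Koch curve `depth` times."""
    for _ in range(depth):
        refined = []
        for (x1, y1), (x2, y2) in zip(points, points[1:]):
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            # Split each segment into four: two straight thirds and a bump.
            peak = (x1 + 1.5 * dx - (3 ** 0.5 / 2) * dy,
                    y1 + 1.5 * dy + (3 ** 0.5 / 2) * dx)
            refined += [(x1, y1), (x1 + dx, y1 + dy), peak,
                        (x1 + 2 * dx, y1 + 2 * dy)]
        refined.append(points[-1])
        points = refined
    return points

def length(points):
    """Total length of a polyline."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

coast = [(0.0, 0.0), (1.0, 0.0)]   # the "coastline" at the coarsest scale
for depth in range(6):
    ruler = 3 ** -depth            # the measuring stick shrinks each time
    print(f"ruler = {ruler:.4f}  measured length = {length(koch(coast, depth)):.3f}")
# Every finer ruler multiplies the measured length by 4/3; it never settles
# on a "true" value.
```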