Academic English Unit 4


Task 1

The Dawn of the Age of Artificial Intelligence

Reasons to cheer the rise of the machines

Erik Brynjolfsson & Andrew McAfee

1The advances we’ve seen in the past few years – cars that drive themselves, useful humanoid robots, speech recognition and synthesis systems, 3D printers, Jeopardy!-champion computers—are not the crowning achievements of the computer era. They’re the warm-up acts. As we move deeper into the second machine age we’ll see more and more such wonders, and they’ll become more and more impressive.

2How can we be so sure? Because the exponential, digital, and recombinant powers of the second machine age have made it possible for humanity to create two of the most important one-time events in our history: the emergence of real, useful artificial intelligence (AI) and the connection of most of the people on the planet via a common digital network.

3Either of these advances alone would fundamentally change our growth prospects. When combined, they’re more important than anything since the Industrial Revolution, which forever transformed how physical work was done.

Thinking Machines, Available Now

4Digital machines have escaped their narrow confines and started to demonstrate broad abilities in pattern recognition, complex communication, and other domains that used to be exclusively human. We’ve recently seen great progress in natural language processing, machine learning (the ability of a computer to automatically refine its methods and improve its results as it gets more data), computer vision, simultaneous localization and mapping, and many other areas.
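
The machine learning described above – a system that refines its methods and improves its results as it gets more data – can be illustrated with a minimal sketch. The library (scikit-learn) and the small bundled dataset are our own choices for illustration, not anything mentioned in the passage:

```python
# A minimal illustration of "machine learning": the same algorithm,
# given progressively more data, produces progressively better results.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)        # small image-classification dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):     # grow the training set step by step
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])    # refine the model on n examples
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```

Run as-is, the printed accuracy typically climbs as the training set grows, which is the point the authors make in one sentence.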

5We’re going to see artificial intelligence do more and more, and as this happens costs will go down, outcomes will improve, and our lives will get better. Soon countless pieces of AI will be working on our behalf, often in the background. They’ll help us in areas ranging from trivial to substantive to life changing. Trivial uses of AI include recognizing our friends’ faces in photos and recommending products. More substantive ones include automatically driving cars on the road, guiding robots in warehouses, and better matching jobs and job seekers. But these remarkable advances pale against the life-changing potential of artificial intelligence.

6To take just one recent example, innovators at the Israeli company OrCam have combined a small but powerful computer, digital sensors, and excellent algorithms to give key aspects of sight to the visually impaired (a population numbering more than twenty million in the United States alone). A user of the OrCam system, which was introduced in 2013, clips onto her glasses a combination of a tiny digital camera and speaker that works by conducting sound waves through the bones of the head. If she points her finger at a source of text such as a billboard, package of food, or newspaper article, the computer immediately analyzes the images the camera sends to it, then reads the text to her via the speaker.
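
The pipeline described above – camera image in, spoken text out – can be sketched very roughly with off-the-shelf optical character recognition and text-to-speech libraries. This is only an illustration of the general idea, not OrCam's actual system; it assumes the Tesseract OCR engine plus the pytesseract and pyttsx3 packages are installed, and the image file name is hypothetical.

```python
# Rough sketch of a "point at text, hear it read aloud" pipeline:
# OCR the camera frame, then speak whatever text was recognized.
from PIL import Image
import pytesseract          # wrapper around the Tesseract OCR engine
import pyttsx3              # offline text-to-speech

def read_aloud(image_path: str) -> str:
    frame = Image.open(image_path)              # e.g. a photo of a billboard or food package
    text = pytesseract.image_to_string(frame)   # extract printed text from the image
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)                        # route the recognized text to the speaker
        engine.runAndWait()
    return text

if __name__ == "__main__":
    print(read_aloud("billboard.jpg"))          # hypothetical input image
```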

7Reading text ‘in the wild’ – in a variety of fonts, sizes, surfaces, and lighting conditions—has historically been yet another area where humans outpaced even the most advanced hardware and software. OrCam and similar innovations show that this is no longer the case, and that here again technology is racing ahead. As it does, it will help millions of people lead fuller lives. The OrCam costs about $2,500 – the price of a good hearing aid – and is certain to become cheaper over time.

8Digital technologies are also restoring hearing to the deaf via cochlear implants and will probably bring sight back to the fully blind; the FDA recently approved a first-generation retinal implant. AI’s benefits extend even to quadriplegics, since wheelchairs can now be controlled by thoughts. Considered objectively, these advances are something close to miracles – and they’re still in their infancy.

Billions of Innovators, Coming Soon

9In addition to powerful and useful AI, the other recent development that promises to further accelerate the second machine age is the digital interconnection of the planet’s people. There is no better resource for improving the world and bettering the state of humanity than the world’s humans – all 7.1 billion of us. Our good ideas and innovations will address the challenges that arise, improve the quality of our lives, allow us to live more lightly on the planet, and help us take better care of one another. It is a remarkable and unmistakable fact that, with the exception of climate change, virtually all environmental, social, and individual indicators of health have improved over time, even as human population has increased.

10This improvement is not a lucky coincidence; it is cause and effect. Things have gotten better because there are more people, who in total have more good ideas that improve our overall lot. The economist Julian Simon was one of the first to make this optimistic argument, and he advanced it repeatedly and forcefully throughout his career. He wrote, “It is your mind that matters economically, as much or more than your mouth or hands. In the long run, the most important economic effect of population size and growth is the contribution of additional people to our stock of useful knowledge. And this contribution is large enough in the long run to overcome all the costs of population growth.”

11We do have one quibble with Simon, however. He wrote that, “The main fuel to speed the world’s progress is our stock of knowledge, and the brake is our lack of imagination.” We agree about the fuel but disagree about the brake. The main impediment to progress has been that, until quite recently, a sizable portion of the world’s people had no effective way to access the world’s stock of knowledge or to add to it.

12In the industrialized West we have long been accustomed to having libraries, telephones, and computers at our disposal, but these have been unimaginable luxuries to the people of the developing world. That situation is rapidly changing. In 2000, for example, there were approximately seven hundred million mobile phone subscriptions in the world, fewer than 30 percent of which were in developing countries.

13By 2012 there were more than six billion subscriptions, over 75 percent of which were in the developing world. The World Bank estimates that three-quarters of the people on the planet now have access to a mobile phone, and that in some countries mobile telephony is more widespread than electricity or clean water.

14The first mobile phones bought and sold in the developing world were capable of little more than voice calls and text messages, yet even these simple devices could make a significant difference. Between 1997 and 2001 the economist Robert Jensen studied a set of coastal villages in Kerala, India, where fishing was the main industry. Jensen gathered data both before and after mobile phone service was introduced, and the changes he documented are remarkable. Fish prices stabilized immediately after phones were introduced, and even though these prices dropped on average, fishermen’s profits actually increased because they were able to eliminate the waste that occurred when they took their fish to markets that already had enough supply for the day. The overall economic well-being of both buyers and sellers improved, and Jensen was able to tie these gains directly to the phones themselves.

15Now, of course, even the most basic phones sold in the developing world are more powerful than the ones used by Kerala’s fishermen over a decade ago. And cheap mobile devices keep improving. Technology analysis firm IDC forecasts that smartphones will outsell feature phones in the near future, and will make up about two-thirds of all sales by 2017.

16This shift is due to continued simultaneous performance improvements and cost declines in both mobile phone devices and networks, and it has an important consequence: it will bring billions of people into the community of potential knowledge creators, problem solvers, and innovators.

‘Infinite Computing’ and Beyond

17Today, people with connected smartphones or tablets anywhere in the world have access to many (if not most) of the same communication resources and information that we do while sitting in our offices at MIT. They can search the Web and browse Wikipedia. They can follow online courses, some of them taught by the best in the academic world. They can share their insights on blogs, Facebook, Twitter, and many other services, most of which are free. They can even conduct sophisticated data analyses using cloud resources such as Amazon Web Services and R, an open source application for statistics. In short, they can be full contributors in the work of innovation and knowledge creation, taking advantage of what Autodesk CEO Carl Bass calls “infinite computing.”

18Until quite recently rapid communication, information acquisition, and knowledge sharing, especially over long distances, were essentially limited to the planet’s elite. Now they’re much more democratic and egalitarian, and getting more so all the time. The journalist A. J. Liebling famously remarked that, “Freedom of the press is limited to those who own one.” It is no exaggeration to say that billions of people will soon have a printing press, reference library, school, and computer all at their fingertips.

19We believe that this development will boost human progress. We can’t predict exactly what new insights, products, and solutions will arrive in the coming years, but we are fully confident that they’ll be impressive. The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world. It will make a mockery of all that came before.

1. Erik Brynjolfsson: He is an American academic and Schussel Family Professor of Management at the MIT Sloan School of Management, the Director of the MIT Center for Digital Business, and a Research Associate at the National Bureau of Economic Research, known for his contributions to the world of IT productivity research and his work on the economics of information more generally.

2. Andrew McAfee: He is the associate director of the Center for Digital Business at the MIT Sloan School of Management, studying the ways information technology (IT) affects businesses and business as a whole. His research investigates how IT changes the way companies perform, organize themselves, and compete, and at a higher level, how computerization affects competition, society, the economy, and the workforce. He was previously a professor at Harvard Business School and a fellow at Harvard’s Berkman Center for Internet and Society. He is the author of Enterprise 2.0, published in November 2009 by Harvard Business School Press, and co-author of Race Against the Machine with Erik Brynjolfsson. In 2014, this work was expanded into the book The Second Machine Age. He writes for publications including Harvard Business Review, The Economist, Forbes, The Wall Street Journal, and The New York Times. He speaks frequently to both academic and industry audiences, most notably at TED 2013 and on The Charlie Rose Show.

3. Julian Simon: An American professor of economics and business administration at the University of Illinois. His books and papers include The Economics of Population Growth (1977) and The Ultimate Resource (1981), in which he attacked the pessimistic current of Western thought holding that population growth and resource depletion were bringing the world to the brink of catastrophe. He insisted that energy, food, and other material resources are not finite in any meaningful sense, because human intelligence and creativity are the most fundamental resource, and that using resources now does not slow future economic growth. For this reason, some Western scholars regard Simon’s views as remarkable and call his population economics the theory of optimism.

4. IDC: Abbreviation for International Data Corporation, a leading global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and investors make fact-based technology purchasing decisions and business development strategies. IDC has more than 900 analysts worldwide, offering global, regional, and local expertise on technology trends and business opportunities in more than 90 countries. Over IDC’s more than 43 years of history, many corporate clients have drawn on its strategic analysis to succeed in key business decisions.

1. Jeffrey O. Kephart, Learning from Nature, [online], available from:

http://www.sciencemag.org/content/331/6018/682.full.pdf?sid=6078c5fe-fbf6-47fc-a254-f893a3a4395c

2. Daniel G. Bobrow, Mark J. Stefik, Perspectives on Artificial Intelligence Programming, [online], available from:

http://www.sciencemag.org/content/231/4741/951.abstract?sid=5512a63d-6bfc-4980-833c-508747963bc3

Task 2

Dusting Off the Turing Test

Robert M. French

1Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch together. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything? Of course you did. But could a computer without a body and without human experiences ever answer that question or a million others like it? And even if recent revolutionary advances in collecting, storing, retrieving, and analyzing data lead to such a computer, would this machine qualify as “intelligent”?

2Just over 60 years ago, Alan Turing published a paper on a simple, operational test for machine intelligence that became one of the most highly cited papers ever written. Turing, whose 100th birthday is celebrated this year, made seminal contributions to the mathematics of automated computing, helped the Allies win World War II by breaking top-secret German codes, and built a forerunner of the modern computer. His test, today called the Turing test, was the first operational definition of machine intelligence. It posits putting a computer and a human in separate rooms and connecting them by teletype to an interrogator, who may put any imaginable question to either entity. The computer’s aim is to fool the interrogator into believing that it is the human. If the interrogator cannot determine which is the real human, the computer is judged to be intelligent.
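
The teletype arrangement Turing proposed can be sketched as a small program: an interrogator types questions, two hidden respondents answer over the same text channel, and the interrogator must guess which one is the human. The sketch below only illustrates the protocol; the "machine" in it is deliberately trivial and is not a serious contestant.

```python
# Minimal sketch of the imitation-game protocol: the interrogator sees only
# text replies from respondents "A" and "B" and must decide which is human.
import random

def machine_reply(question: str) -> str:
    # A deliberately naive stand-in for the computer being tested.
    return "That is an interesting question. Could you say more?"

def human_reply(question: str) -> str:
    return input(f"[hidden human] {question}\n> ")

def run_session(num_questions: int = 3) -> None:
    labels = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:                      # hide which label is which
        labels = {"A": human_reply, "B": machine_reply}
    for _ in range(num_questions):
        question = input("Interrogator, ask a question: ")
        for label, respond in labels.items():
            print(f"{label}: {respond(question)}")
    guess = input("Which respondent is the human, A or B? ").strip().upper()
    actual = "A" if labels["A"] is human_reply else "B"
    print("Correct!" if guess == actual else "Fooled -- the machine passed this round.")

if __name__ == "__main__":
    run_session()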

3In the early days of artificial intelligence (AI), the Turing test was held up by many as the true litmus test for computational intelligence (3, 4). However, workers in AI gradually came to realize that human cognition emerges from a web of explicit, knowledge-based processes and automatic, intuitive, “subcognitive” processes, the latter deriving largely from humans’ direct interaction with the world. It was argued, therefore, that by tapping into this subcognitive substrate – something a disembodied computer did not have – a clever interrogator could unfailingly distinguish a computer from a person. By 1995, most serious researchers in AI had stopped talking about machines passing Turing’s original, teletype-based test, let alone harder versions involving testing visual, auditory, and object-manipulation abilities. The Turing test had been, as one researcher put it, “consigned to history”.

4However, two revolutionary advances in information technology may bring the Turing test out of retirement. The first is the ready availability of vast amounts of raw data – from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data. Two deep questions for AI arise from this new technology. The first is whether this wealth of data, appropriately processed, could be used by a machine to pass an unrestricted Turing test. The second question, first asked by Turing, is whether a machine that had passed the Turing test using this technology would necessarily be intelligent.

5Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible, along with similar data for hundreds of thousands, even millions, of other people. Ultimately, tactile and olfactory sensors could also be added to complete this record of sensory experience over time. Researchers at the cutting edge of today’s computer industry think that this kind of life-experience recording will become commonplace in the not-too-distant future. Recently, a home fully equipped with cameras and audio equipment continuously recorded the life of an infant from birth to age three, amounting to 200,000 hours of audio and video recordings, representing 85% of the child’s waking experience (11, 12).

6Assume also that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions that tap into facts derived from our embodiment or from our subcognitive associative networks, like the finger experiment that began this article or like asking native English speakers whether the neologism “Flugblogs” would be a better name for a start-up computer company or for air-filled bags that you tie on your feet for walking across swamps. Someone somewhere has almost certainly done the finger experiment and may well have posted their observations about it to the Internet – or will do so after reading this article – and this information would be accessible to a data-gathering Web crawler. By extension, if a complete record of the sensory input that produced your own subcognitive network over your lifetime were available to a machine, is it so far-fetched to think that the machine might be able to use that data to construct a cognitive and subcognitive network similar to your own? Similar enough, that is, to pass the Turing test.
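
The "data-gathering Web crawler" mentioned above can be sketched in a few lines: fetch a page, keep its visible text, and follow its links. The starting URL below is a placeholder, and a real crawler would also need politeness delays, de-duplication of content, and robots.txt handling.

```python
# Minimal breadth-first text crawler: collect page text that could later feed
# an associative network. Illustrative only -- no politeness or robots.txt here.
from collections import deque
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 10) -> dict:
    seen, texts = set(), {}
    queue = deque([start_url])
    while queue and len(texts) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        texts[url] = soup.get_text(" ", strip=True)     # the raw material for later analysis
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))    # follow outgoing links
    return texts

if __name__ == "__main__":
    pages = crawl("https://example.com")                # hypothetical starting point
    print(f"collected text from {len(pages)} pages")
```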

7Computers are already extremely good at collecting and analyzing data from 8 billion (and counting) Web pages, document databases, TV programs, Twitter feeds, and so on. In early 2011, IBM’s Watson, a 2,880-processor, 80-teraflop (i.e., 80 trillion operations/s) computing behemoth with 15 terabytes of RAM, won a Jeopardy! challenge against two of the best Jeopardy! players in history. Watson’s success was attributable, at least in part, to its meticulous study of Jeopardy!-like answers and questions, but its performance was nevertheless astounding. How much would be required to retool Watson for a no-holds-barred Turing test?

8The real challenge is not to store countless petabytes (1 million gigabytes) of information, but to selectively retrieve and analyze that information in real time. The human brain processes data in a highly efficient manner, requiring little energy and relying on a densely interconnected network of 100 billion relatively slow and imprecise neurons. It is still not known to what extent the mechanisms of neuronal firing and the patterns of neuronal interconnectivity are optimal for the analysis of the data stored in the brain. IBM is betting that they just might be. The company recently unveiled a new generation of experimental “neurosynaptic” computer chips, based on principles that underlie neurons, with which they hope to design cognitive computers that will “emulate the brain’s abilities for perception, action and cognition”.
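
The point about selective retrieval – finding the few relevant items without scanning everything – is the idea behind an inverted index, sketched below on a tiny in-memory collection. This is not how Watson or IBM's chips work; it only illustrates the basic retrieval principle.

```python
# Sketch of selective retrieval: an inverted index maps each word to the
# documents containing it, so a query touches only the relevant postings
# instead of scanning the whole collection.
from collections import defaultdict

documents = {                                   # toy stand-in for a huge collection
    1: "neurons fire in densely interconnected networks",
    2: "Watson studied Jeopardy questions and answers",
    3: "retrieving petabytes in real time is the hard part",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)                 # posting list: word -> documents

def search(query: str) -> set:
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(search("real time"))                      # -> {3}, without reading documents 1 or 2
```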

9Yes, you say, but data-crunching computers will never be able to think about their own thoughts, which in the final analysis is what makes us human. But there is nothing stopping the computer’s data-analysis processes, themselves, from also being data for the machine. Programs already exist that self-monitor their own data processing.
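
The claim that a program's own processing can itself become data for the program can be illustrated with a small self-monitoring wrapper; the function being monitored here is arbitrary and the whole sketch is only an illustration of the idea.

```python
# Sketch of self-monitoring: a decorator records statistics about its own
# execution, so the program's processing becomes data the program can inspect.
import time
from functools import wraps

call_log = []                                   # the program's record of its own behaviour

def monitored(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        call_log.append({"function": func.__name__,
                         "seconds": time.perf_counter() - start})
        return result
    return wrapper

@monitored
def analyze(data):                              # an arbitrary piece of "data processing"
    return sum(x * x for x in data)

analyze(range(10_000))
analyze(range(100_000))
print(call_log)                                 # the machine inspecting its own processing
```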

10All of this brings us squarely back to the question first posed by Turing at the dawn of the computer age, one that has generated a flood of philosophical and scientific commentary ever since. No one would argue that computer-simulated chess playing, regardless of how it is achieved, is not chess playing. Is there something fundamentally different about computer-simulated intelligence?

1. Robert M. French: He is a research director at the French National Centre for Scientific Research. He is currently at the University of Burgundy in Dijon. He holds a Ph.D. from the University of Michigan, where he worked with Douglas Hofstadter on the Tabletop computational cognitive model. He specializes in cognitive science and has made an extensive study of the process of analogy-making. French is the inventor of Tabletop, a computer program that forms analogies in a microdomain consisting of everyday objects placed on a table. He has done extensive research in artificial intelligence and written several articles about the Turing Test, which was proposed by Alan Turing in 1950 as a means of determining whether an advanced computer can be said to be intelligent. French was for a long time an outspoken critic of the test, which, he suggested, no computer might ever be able to meet. More recently, however, he has noted that artificial intelligence is advancing so quickly that a computer might soon be able to pass the test.

2. Notes from the passage:

1. A. Turing, Mind 59, 433 (1950).

2. A. Hodges, Science 336, 163 (2012).

3. H. Dreyfus, What Computers Still Can’t Do (MIT Press, Cambridge, MA, 1992).

4. J. Haugeland, Artificial Intelligence: The Very Idea (MIT Press, Cambridge, MA, 1985).

5. D. R. Hofstadter, Metamagical Themas (Basic Books, New York, 1985), pp. 631-665.

6. R. M. French, Mind 99, 53 (1990).

7. R. M. French, Trends Cogn. Sci. 4, 115 (2000).

8. S. Harnad, Minds Mach. 1, 43 (1991).

9. B. Whitby, in Machines and Thought: The Legacy of Alan Turing, P. Millican, A. Clark, Eds. (Oxford Univ. Press, Oxford, 1996), pp. 53-63.

10. G. Bell, J. Gemmell, Total Recall: How the E-Memory Revolution Will Change Everything (Dutton, New York, 2009).

11. D. Roy et al., in Proceedings of the 28th Annual Conference of the Cognitive Science Society, R. Sun, N. Miyake, Eds. (Erlbaum, Mahwah, NJ, 2006), pp. 2059-2064.

12. www.media.mit.edu/research/groups/1446/human-speechome-project

13. D. Talbot, “A social-media decoder,” Technol. Rev. (Nov./Dec. 2011); www.technologyreview.com/computing/38910/.

14. D. Ferrucci et al., AI Mag. 31, 59 (2010).

15. R. Kurzweil, “Why IBM’s Jeopardy victory matters,” PC Mag. (2011); www.pcmag.com/article2/0,2817,2376035,00.asp.

16. See www-03.ibm.com/press/us/en/pressrelease/35251.wss, posted on 18 August 2011.

17. J. Marshall, J. Exp. Theor. Artif. Intell. 18, 267 (2006).

Acknowledgments: This work was supported in part by ANR grant 10-065-GETPIMA. Thanks to D. Dennett, M. Weaver, and especially M. Mitchell for comments on an early draft of this article.

3. Jeopardy!: A well-known American TV quiz show. The contest uses a distinctive answer-and-question format, and its clues cover an extremely wide range of subjects, including history, literature, the arts, and popular culture.

1. Andrew Hunt and David Thomas, The Pragmatic Programmer: From Journeyman to Master, Addison Wesley, October 13, 1999.

2. Paul Graham, Hackers & Painters, O’Reilly Media, Inc., May 2004.

Task 3

TED Speech: Robots That Show Emotion

Speaker

David Hanson is the founder and CEO of Hanson Robotics – a company that aims to create robots as socially adept as any human being. Through his organization, he has seen the success of robotic facial hardware that establishes eye contact, recognizes faces and carries out natural spoken conversation. Hanson hopes these robotic faces prove useful to cognitive science and psychology, and to the entertainment industry.

A former Walt Disney Imagineer, this young entrepreneur and roboticist has been labelled a “genius” by both PC Magazine and WIRED, and has earned awards from NASA, NSF and Cooper Hewitt Design. If Hanson succeeds, he will create a socially intelligent robot that may even one day have a place in the human family.

Scripts

0:12 I’m Dr. David Hanson, and I build robots with character. And by that, I mean that I develop robots that are characters, but also robots that will eventually come to empathize with you. So we’re starting with a variety of technologies that have converged into these conversational character robots that can see faces, make eye contact with you, make a full range of facial expressions, understand speech and begin to model how you’re feeling and who you are, and build a relationship with you.

0:42 I developed a series of technologies that allowed the robots to make more realistic facial expressions than previously achieved, on lower power, which enabled the walking biped robots, the first androids. So, it’s a full range of facial expressions simulating all the major muscles in the human face, running on very small batteries, extremely lightweight.

1:01 The materials that allowed the battery-operated facial expressions is a material that we call Frubber, and it actually has three major innovations in the material that allow this to happen. One is hierarchical pores, and the other is a macro-molecular nanoscale porosity in the material.

1:16 There he’s starting to walk. This is at the Korean Advanced Institute of Science and Technology. I built the head. They built the body. So the goal here is to achieve sentience in machines, and not just sentience, but empathy.

1:33 We’re working with the Machine Perception Laboratory at the U.C. San Diego. They have this really remarkable facial expression technology that recognizes facial expressions, what facial expressions you’re making. It also recognizes where you’re looking, your head orientation. We’re emulating all the major facial expressions, and then controlling it with the software that we call the Character Engine. And here is a little bit of the technology that’s involved in that.

1:57 In fact, right now – plug it from here, and then plug it in here, and now let’s see if it gets my facial expressions. Okay. So I’m smiling. (Laughter) Now I’m frowning. And this is really heavily backlit. Okay, here we go. Oh, it’s so sad. Okay, so you smile, frowning. So his perception of your emotional states is very important for machines to effectively become empathetic.

2:34 Machines are becoming devastatingly capable of things like killing. Right? Those machines have no place for empathy. And there is billions of dollars being spent on that. Character robotics could plant the seed for robots that actually have empathy. So, if they achieve human level intelligence or, quite possibly, greater than human levels of intelligence, this could be the seeds of hope for our future.

2:58 So, we’ve made 20 robots in the last eight years, during the course of getting my Ph.D. And then I started Hanson Robotics, which has been developing these things for mass manufacturing. This is one of our robots that we showed at Wired NextFest a couple of years ago. And it sees multiple people in a scene, remembers where individual people are, and looks from person to person, remembering people.

3:21 So, we’re involving two things. One, the perception of people, and two, the natural interface, the natural form of the interface, so that it’s more intuitive for you to interact with the robot. You start to believe that it’s alive and aware.

3:37 So one of my favorite projects was bringing all this stuff together in an artistic display of an android portrait of science-fiction writer Philip K. Dick, who wrote great works like, “Do Androids Dream of Electric Sheep?” which was the basis of the movie “Blade Runner.” In these stories, robots often think that they’re human, and they sort of come to life. So we put his writings, letters, his interviews, correspondences, into a huge database of thousands of pages, and then used some natural language processing to allow you to actually have a conversation with him. And it was kind of spooky, because he would say these things that just sounded like they really understood you.
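
The conversational android Hanson describes here relied on natural language processing over a large database of Philip K. Dick's writings. One very crude way to get conversation-like behaviour from a text collection is retrieval: represent each stored passage as a vector and reply with the passage most similar to the user's utterance. The sketch below uses TF-IDF vectors from scikit-learn and a three-line placeholder corpus; it is nothing like the actual android's software.

```python
# Crude retrieval-style "conversation" over a text corpus: reply with the
# stored passage most similar to the user's utterance (TF-IDF + cosine).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [                                       # placeholder lines standing in for the database
    "Reality is that which, when you stop believing in it, does not go away.",
    "Do androids dream of electric sheep, he wondered.",
    "A letter about the nature of memory and identity.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(corpus)

def respond(utterance: str) -> str:
    query_vector = vectorizer.transform([utterance])
    scores = cosine_similarity(query_vector, corpus_vectors)[0]
    return corpus[int(scores.argmax())]          # most similar stored passage wins

print(respond("Do androids dream?"))
```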

4:12 And this is one of the most exciting projects that we’re developing, which is a little character that’s a spokesbot for friendly artificial intelligence, friendly machine intelligence. And we’re getting this mass-manufactured. We specked it out to actually be doable with a very, very low-cost bill of materials, so that it can become a childhood companion for kids. Interfacing with the Internet, it gets smarter over the years. As artificial intelligence evolves, so does his intelligence.

4:39 Chris Anderson: Thank you so much. That’s incredible. (Applause)

