Question Bank: Chongqing University

THE IRON BRIDGE

The Iron Bridge was the first of its kind in Europe and is universally recognized as a symbol of the Industrial Revolution.

A: The Iron Bridge crosses the River Severn in Coalbrookdale, in the west of England. It was the first cast-iron bridge to be successfully erected, and the first large cast-iron structure of the industrial age in Europe, although the Chinese were expert iron-casters many centuries earlier.

B: Rivers used to be the equivalent of today's motorways, in that they were extensively used for transportation. The River Severn, which starts its life on the Welsh mountains and eventually enters the sea between Cardiff and Bristol, is the longest navigable river in Britain. It was ideal for transportation purposes, and special boats were built to navigate the waters. By the middle of the eighteenth century, the Severn was one of the busiest rivers in Europe. Local goods, including coal, iron products, wool, grain and cider, were sent by river. Among the goods coming upstream were luxuries such as sugar, tea, coffee and wine. In places, the riverbanks were lined with wharves and the river was often crowded with boats loading or unloading.

C: In 1638, Basil Brooke patented a steel-making process and built a furnace at Coalbrookdale. This later became the property of Abraham Darby (referred to as Abraham Darby I to distinguish him from his son and grandson of the same name). After serving an apprenticeship in Birmingham, Darby had started a business in Bristol, but he moved to Coalbrookdale in 1710 with an idea that coke derived from coal could provide a more economical alternative to charcoal as a fuel for iron making. This led to cheaper, more efficient iron making from the abundant supplies of coal, iron and limestone in the area.

D: His son, Abraham Darby II, pioneered the manufacture of cast iron, and had the idea of building a bridge over the Severn, as ferrying stores of all kinds across the river, particularly the large quantities of fuel for the furnaces at Coalbrookdale and other surrounding ironworks, involved considerable expense and delay. However, it was his son Abraham Darby III (born in 1750) who, in 1775, organized a meeting to plan the building of a bridge. This was designed by a local architect, Thomas Pritchard, who had the idea of constructing it of iron.

E: Sections were cast during the winter of 1778-9 for a 7-metre-wide bridge with a span of 31 metres, 12 metres above the river. Construction took three months during the summer of 1779, and remarkably, nobody was injured during the construction process, a feat almost unheard of even in modern major civil engineering projects. Work on the approach roads continued for another two years, and the bridge was opened to traffic in 1781. Abraham Darby III funded the bridge by commissioning paintings and engravings, but he lost a lot on the project, which had cost nearly double the estimate, and he died leaving massive debts in 1789, aged only 39. The district did not flourish for much longer, and during the nineteenth and early twentieth centuries factories closed down. Since 1934 the bridge has been open only to pedestrians. Universally recognized as the symbol of the Industrial Revolution, the Iron Bridge now stands at the heart of the Ironbridge Gorge World Heritage Site.

F: It has always been a mystery how the bridge was built. Despite its pioneering technology, no eye-witness accounts are known which describe the iron bridge being erected, and certainly no plans have survived.
However, recent discoveries, research and experiments have shed new light on exactly how it was built, challenging the assumptions of recent decades. In 1997 a small watercolour sketch by Elias Martin came to light in the Swedish capital, Stockholm. Although there is a wealth of early views of the bridge by numerous artists, this is the only one which actually shows it under construction.

G: Up until recently it had been assumed that the bridge had been built from both banks, with the inner supports tilted across the river. This would have allowed river traffic to continue unimpeded during construction. But the picture clearly shows sections of the bridge being raised from a barge in the river. It contradicted everything historians had assumed about the bridge, and it was even considered that the picture could have been a fake, as no other had come to light. So in 2001 a half-scale model of the bridge was built, in order to see if it could have been constructed in the way depicted in the watercolour. Meanwhile, a detailed archaeological, historical and photographic survey was done by the Ironbridge Gorge Museum Trust, along with a 3D CAD (computer-aided design) model by English Heritage.

H: The results tell us a lot more about how the bridge was built. We now know that all the large castings were made individually, as they are all slightly different. The bridge wasn't welded or bolted together as metal bridges are these days. Instead it was fitted together using a complex system of joints normally used for wood, but this was the traditional way in which iron structures were joined at the time. The construction of the model proved that the painting shows a very realistic method of constructing the bridge that could work, and was in all probability the method used.

I: Now only one mystery remains in the Iron Bridge story. The Swedish watercolour sketch had apparently been torn from a book which would have contained similar sketches. It had been drawn by a Swedish artist who lived in London for 12 years and travelled Britain drawing what he saw. Nobody knows what has happened to the rest of the book, but perhaps the other sketches still exist somewhere. If they are ever found, they could provide further valuable evidence of how the Iron Bridge was constructed.

The text has nine paragraphs, A-I. Which paragraph contains the following information? Write the correct letter, A-I, on your answer sheet.
1. why a bridge was required across the River Severn (  )
2. a method used to raise money for the bridge (  )
3. why Coalbrookdale became attractive to iron makers (  )
4. how the sections of the bridge were connected to each other (  )


Questions (on the passage "THE IRON BRIDGE" above):
1. When was the furnace bought by Darby originally constructed?
2. When were the roads leading to the bridge completed?
3. When was the bridge closed to traffic?
4. When was a model of the bridge built?


Making Every Drop Count

A The history of human civilization is entwined with the history of the ways we have learned to manipulate water resources. As towns gradually expanded, water was brought from increasingly remote sources, leading to sophisticated engineering efforts such as dams and aqueducts. At the height of the Roman Empire, nine major systems, with an innovative layout of pipes and well-built sewers, supplied the occupants of Rome with as much water per person as is provided in many parts of the industrial world today.

B During the industrial revolution and population explosion of the 19th and 20th centuries, the demand for water rose dramatically. Unprecedented construction of tens of thousands of monumental engineering projects designed to control floods, protect clean water supplies, and provide water for irrigation and hydropower brought great benefits to hundreds of millions of people. Food production has kept pace with soaring populations mainly because of the expansion of artificial irrigation systems that make possible the growth of 40% of the world's food. Nearly one fifth of all the electricity generated worldwide is produced by turbines spun by the power of falling water.

C Yet there is a dark side to this picture: despite our progress, half of the world's population still suffers, with water services inferior to those available to the ancient Greeks and Romans. As the United Nations report on access to water reiterated in November 2001, more than one billion people lack access to clean drinking water, and some two and a half billion do not have adequate sanitation services. Preventable water-related diseases kill an estimated 10,000 to 20,000 children every day, and the latest evidence suggests that we are falling behind in efforts to solve these problems.

D The consequences of our water policies extend beyond jeopardizing human health. Tens of millions of people have been forced to move from their homes — often with little warning or compensation — to make way for the reservoirs behind dams. More than 20% of all freshwater fish species are now threatened or endangered because dams and water withdrawals have destroyed the free-flowing river ecosystems where they thrive. Certain irrigation practices degrade soil quality and reduce agricultural productivity. Groundwater aquifers are being pumped down faster than they are naturally replenished in parts of India, China, the USA and elsewhere. And disputes over shared water resources have led to violence and continue to raise local, national and even international tensions.

E At the outset of the new millennium, however, the way resource planners think about water is beginning to change. The focus is slowly shifting back to the provision of basic human and environmental needs as top priority, ensuring 'some for all' instead of 'more for some'. Some water experts are now demanding that existing infrastructure be used in smarter ways rather than building new facilities, which is increasingly considered the option of last, not first, resort. This shift in philosophy has not been universally accepted, and it comes with strong opposition from some established water organizations. Nevertheless, it may be the only way to address successfully the pressing problems of providing everyone with clean water to drink, adequate water to grow food and a life free from preventable water-related illness.

F Fortunately, and unexpectedly, the demand for water is not rising as rapidly as some predicted. As a result, the pressure to build new water infrastructures has diminished over the past two decades. Although population, industrial output and economic productivity have continued to soar in developed nations, the rate at which people withdraw water from aquifers, rivers and lakes has slowed. And in a few parts of the world, demand has actually fallen.

G What explains this remarkable turn of events? Two factors: people have figured out how to use water more efficiently, and communities are rethinking their priorities for water use. Throughout the first three-quarters of the 20th century, the quantity of freshwater consumed per person doubled on average; in the USA, water withdrawals increased tenfold while the population quadrupled. But since 1980, the amount of water consumed per person has actually decreased, thanks to a range of new technologies that help to conserve water in homes and industry. In 1965, for instance, Japan used approximately 13 million gallons of water to produce $1 million of commercial output; by 1989 this had dropped to 3.5 million gallons (even accounting for inflation), almost a quadrupling of water productivity. In the USA, water withdrawals have fallen by more than 20% from their peak in 1980.

H On the other hand, dams, aqueducts and other kinds of infrastructure will still have to be built, particularly in developing countries where basic human needs have not been met. But such projects must be built to higher specifications and with more accountability to local people and their environment than in the past. And even in regions where new projects seem warranted, we must find ways to meet demands with fewer resources, respecting ecological criteria, and on a smaller budget.

Questions:
1. Water use per person is higher in the industrial world than it was in Ancient Rome.
2. Feeding increasing populations is possible due primarily to improved irrigation systems.
3. Modern water systems imitate those of the ancient Greeks and Romans.
4. Industrial growth is increasing the overall demand for water.
5. Modern technologies have led to a reduction in domestic water consumption.



Section A
The role of governments in environmental management is difficult but inescapable. Sometimes, the state tries to manage the resources it owns, and does so badly. Often, however, governments act in an even more harmful way. They actually subsidize the exploitation and consumption of natural resources. A whole range of policies, from farm price support to protection for coal-mining, do environmental damage and (often) make no economic sense. Scrapping them offers a two-fold bonus: a cleaner environment and a more efficient economy. Growth and environmentalism can actually go hand in hand, if politicians have the courage to control the vested interest that subsidies create.

Section B
No activity affects more of the earth's surface than farming. It shapes a third of the planet's land area, not counting Antarctica, and the proportion is rising. World food output per head rose by 4 percent between the 1970s and 1980s, mainly as a result of increases in yields from land already in cultivation, but also because more land has been brought under the plough. Higher yields have been achieved by increased irrigation, better crop breeding, and a doubling in the use of pesticides and chemical fertilizers in the 1970s and 1980s.

Section C
All these activities may have damaging environmental impacts. For example, land clearing for agriculture is the largest single cause of deforestation; chemical fertilizers and pesticides may contaminate water supplies; more intensive farming and the abandonment of fallow periods tend to exacerbate soil erosion; and the spread of monoculture and use of high-yielding varieties of crops have been accompanied by the disappearance of old varieties of food plants which might have provided some insurance against pests or diseases in future. Soil erosion threatens the productivity of land in both rich and poor countries. The United States, where the most careful measurements have been done, discovered in 1982 that about one-fifth of its farmland was losing topsoil at a rate likely to diminish the soil's productivity. The country subsequently embarked upon a program to convert 11 percent of its cropped land to meadow or forest. Topsoil in India and China is vanishing much faster than in America.

Section D
Government policies have frequently compounded the environmental damage that farming can cause. In the rich countries, subsidies for growing crops and price supports for farm output drive up the price of land. The annual value of these subsidies is immense: about $250 billion, or more than all World Bank lending in the 1980s. To increase the output of crops per acre, a farmer's easiest option is to use more of the most readily available inputs: fertilizers and pesticides. Fertilizer use doubled in Denmark in the period 1960-1985 and increased in The Netherlands by 150 percent. The quantity of pesticides applied has risen too: by 69 percent in 1975-1984 in Denmark, for example, with a rise of 115 percent in the frequency of application in the three years from 1981.

In the late 1980s and early 1990s some efforts were made to reduce farm subsidies. The most dramatic example was that of New Zealand, which scrapped most farm support in 1984. A study of the environmental effects, conducted in 1993, found that the end of fertilizer subsidies had been followed by a fall in fertilizer use (a fall compounded by the decline in world commodity prices, which cut farm incomes). The removal of subsidies also stopped land-clearing and over-stocking, which in the past had been the principal causes of erosion. Farms began to diversify. The one kind of subsidy whose removal appeared to have been bad for the environment was the subsidy to manage soil erosion.

In less enlightened countries, and in the European Union, the trend has been to reduce rather than eliminate subsidies, and to introduce new payments to encourage farmers to treat their land in environmentally friendlier ways, or to leave it fallow. It may sound strange, but such payments need to be higher than the existing incentives for farmers to grow food crops. Farmers, however, dislike being paid to do nothing. In several countries, they have become interested in the possibility of using fuel produced from crop residues, either as a replacement for petrol (as ethanol) or as fuel for power stations (as biomass). Such fuels produce far less carbon dioxide than coal or oil, and absorb carbon dioxide as they grow. They are therefore less likely to contribute to the greenhouse effect. But they are rarely competitive with fossil fuels unless subsidized, and growing them does no less environmental harm than other crops.

Section E
In poor countries, governments aggravate other sorts of damage. Subsidies for pesticides and artificial fertilizers encourage farmers to use greater quantities than are needed to get the highest economic crop yield. A study by the International Rice Research Institute of pesticide use by farmers in South East Asia found that, with pest-resistant varieties of rice, even moderate applications of pesticide frequently cost farmers more than they saved. Such waste puts farmers on a chemical treadmill: bugs and weeds become resistant to poisons, so next year's poisons must be more lethal. One cost is to human health. Every year some 10,000 people die from pesticide poisoning, almost all of them in the developing countries, and another 400,000 become seriously ill. As for artificial fertilizers, their use worldwide increased by 40 percent per unit of farmed land between the mid 1970s and late 1980s, mostly in the developing countries. Overuse of fertilizers may cause farmers to stop rotating crops or leaving their land fallow. That, in turn, may make soil erosion worse.

Section F
A result of the Uruguay Round of world trade negotiations is likely to be a reduction of 36 percent in the average levels of farm subsidies paid by the rich countries in 1986-1990. Some of the world's food production will move from Western Europe to regions where subsidies are lower or non-existent, such as the former communist countries and parts of the developing world. Some environmentalists worry about this outcome. It will undoubtedly mean more pressure to convert natural habitat into farmland. But it will also have many desirable environmental effects. The intensity of farming in the rich world should decline, and the use of chemical inputs will diminish. Crops are more likely to be grown in the environments to which they are naturally suited. And more farmers in poor countries will have the money and incentive to manage their land in ways that are sustainable in the long run. That is important. To feed an increasingly hungry world, farmers need every incentive to use their soil and water effectively and efficiently.

From the list below choose the most suitable title for the reading passage above. Write the appropriate letter, A-E, in box 28 on the Answer Sheet. (  )


Questions (on the passage above). For each of the following questions or unfinished statements, there are four choices marked A, B, C and D. You should decide on the best choice and write the corresponding letter on the Answer Sheet.
1. Research completed in 1982 found that in the United States soil erosion (  ).
2. By the mid-1980s, farmers in Denmark (  ).
3. Which one of the following increased in New Zealand after 1984?
4. The writer refers to some rich countries as being "less enlightened than New Zealand" because (  ).
5. The writer believes that the Uruguay Round agreements on trade will (  ).



Questions 1-6 (each bracket corresponds to one section of the passage above):
1. (  ) Section A
2. (  ) Section B
3. (  ) Section C
4. (  ) Section D
5. (  ) Section E
6. (  ) Section F


Persistent bullying is one of the worst experiences a child can face. How can it be prevented? Peter Smith, Professor of Psychology at the University of Sheffield, directed the Sheffield Anti-Bullying Intervention Project, funded by the Department for Education. Here he reports on his findings.

Section A
Bullying can take a variety of forms, from the verbal (being taunted or called hurtful names) to the physical (being kicked or shoved), as well as indirect forms, such as being excluded from social groups. A survey I conducted with Irene Whitney found that in British primary schools up to a quarter of pupils reported experience of bullying, which in about one in ten cases was persistent. There was less bullying in secondary schools, with about one in twenty-five suffering persistent bullying, but these cases may be particularly recalcitrant.

Section B
Bullying is clearly unpleasant, and can make the child experiencing it feel unworthy and depressed. In extreme cases it can even lead to suicide, though this is thankfully rare. Victimized pupils are more likely to experience difficulties with interpersonal relationships as adults, while children who persistently bully are more likely to grow up to be physically violent, and convicted of anti-social offences.

Section C
Until recently, not much was known about the topic, and little help was available to teachers to deal with bullying. Perhaps as a consequence, schools would often deny the problem. "There is no bullying at this school" has been a common refrain, almost certainly untrue. Fortunately more schools are now saying: "There is not much bullying here, but when it occurs we have a clear policy for dealing with it."

Section D
Three factors are involved in this change. First is an awareness of the severity of the problem. Second, a number of resources to help tackle bullying have become available in Britain. For example, the Scottish Council for Research in Education produced a package of materials, Action Against Bullying, circulated to all schools in England and Wales as well as in Scotland in summer 1992, with a second pack, Supporting Schools Against Bullying, produced the following year. In Ireland, Guidelines on Countering Bullying Behaviour in Post-Primary Schools was published in 1993. Third, there is evidence that these materials work, and that schools can achieve something. This comes from carefully conducted "before and after" evaluations of interventions in schools, monitored by a research team. In Norway, after an intervention campaign was introduced nationally, an evaluation of forty-two schools suggested that, over a two-year period, bullying was halved. The Sheffield investigation, which involved sixteen primary schools and seven secondary schools, found that most schools succeeded in reducing bullying.

Section E
Evidence suggests that a key step is to develop a policy on bullying, saying clearly what is meant by bullying, and giving explicit guidelines on what will be done if it occurs, what records will be kept, who will be informed, and what sanctions will be employed. The policy should be developed through consultation, over a period of time, not just imposed from the head teacher's office! Pupils, parents and staff should feel they have been involved in the policy, which needs to be disseminated and implemented effectively.

Other actions can be taken to back up the policy. There are ways of dealing with the topic through the curriculum, using video, drama and literature. These are useful for raising awareness, and can best be tied in to early phases of development, while the school is starting to discuss the issue of bullying. They are also useful in renewing the policy for new pupils, or revising it in the light of experience. But curriculum work alone may only have short-term effects; it should be an addition to policy work, not a substitute.

There are also ways of working with individual pupils, or in small groups. Assertiveness training for pupils who are liable to be victims is worthwhile, and certain approaches to group bullying, such as "no blame", can be useful in changing the behaviour of bullying pupils without confronting them directly, although other sanctions may be needed for those who continue with persistent bullying.

Work in the playground is important, too. One helpful step is to train lunchtime supervisors to distinguish bullying from playful fighting, and help them break up conflicts. Another possibility is to improve the playground environment, so that pupils are less likely to be led into bullying from boredom or frustration.

Section F
With these developments, schools can expect that at least the most serious kinds of bullying can largely be prevented. The more effort put in and the wider the whole-school involvement, the more substantial the results are likely to be. The reduction in bullying, and the consequent improvement in pupil happiness, is surely a worthwhile objective.

From the list below choose the most suitable title for the reading passage above. Write the appropriate letter, A-D, in box 27 on the Answer Sheet.


Persistent bullying is one of the worst experiences a child can face. How can it be prevented? Peter Smith, Professor of Psychology at the University of Sheffield, directed the Sheffield Anti-Bullying Intervention Project, funded by the Department for Education.Here the reports on his findings.Section ABullying can take a variety of forms, from the verbal—being taunted or called hurtful names—to the physical—being kicked or shoved—as well as indirect forms, such as being excluded from social groups. A survey I conducted with Irene Whitney found that in British primary schools up to a quarter of pupils reported experience of bullying, which in about one in ten cases was persistent. There was less bullying in secondary schools, with about one in twenty-five suffering persistent bullying, but these cases may be particularly recalcitrant.Section BBullying is clearly unpleasant, and can make the child experiencing it feel unworthy and depressed. In extreme cases it can even lead to suicide, though this is thankfully rare. Victimized pupils are more likely to experience difficulties with interpersonal relationships as adults, while children who persistently bully are more likely to grow up to be physically violent, and convicted of anti-social offences.Section CUntil recently, not much was known about the topic, and little help was available to teachers to deal with bullying. Perhaps as a consequence, schools would often deny the problem. “There is no bullying at this school” has been a common refrain, almost certainly untrue. Fortunately more schools are now saying: “There is not much bullying here, but when it occurs we have a clear policy for dealing with it.”Section DThree factors are involved in this change. First is an awareness of the severity of the problem. Second, a number of resources to help tackle bullying have become available in Britain. For example, the Scottish Council for Research in Education produced a package of materials, Action Against Bullying, circulated to all schools in England and Wales as well as in Scotland in summer 1992, with a second pack, Supporting Schools Against Bullying, produced the following year. In Ireland, Guidelines on Countering Bullying Behaviour in Post-Primary Schools was published in 1993. Third, there is evidence that these materials work, and that schools can achieve something. This comes from carefully conducted “before and after” evaluations of interventions in schools, monitored by a research team. In Norway, after an intervention campaign was introduced nationally, an evaluation of forty-two schools suggested that, over a two year period, bullying was halved. The Sheffield investigation, which involved sixteen primary schools and seven secondary schools, found that most schools succeeded in reducing bullying.Section EEvidence suggests that a key step is to develop a policy on bullying, saying clearly what is meant by bullying, and giving explicit guidelines on what will be done if it occurs, what records will be kept, who will be informed, what sanctions will be employed. The policy should be developed through consultation, over a period of time — not just imposed from the head teacher’s office! Pupils, parents and staff should feel they have been involved in the policy, which needs to be disseminated and implemented effectively.Other actions can be taken to back up the policy. There are ways of dealing with the topic through the curriculum, using video, drama and literature. 
Complete the summary below. Choose NO MORE THAN TWO WORDS from the passage for each answer. Write your answers in boxes 22-26 on your Answer Sheet.

What steps should schools take to reduce bullying?

The most important step is for the school authorities to produce a (1) which makes the school’s attitude towards bullying quite clear. It should include detailed (2) as to how the school and its staff will react if bullying occurs.

In addition, action can be taken through the (3). This is particularly useful in the early part of the process, as a way of raising awareness and encouraging discussion. On its own, however, it is insufficient to bring about a permanent solution.

Effective work can also be done with individual pupils and small groups. For example, potential (4) of bullying can be trained to be more self-confident. Or again, in dealing with group bullying, a “no blame” approach, which avoids confronting the offender too directly, is often effective.

Playground supervision will be more effective if members of staff are trained to recognize the difference between bullying and mere (5).


1. A recent survey found that in British secondary schools ( ).
2. Children who are bullied ( ).
3. The writer thinks that the declaration “There is no bullying at this school” ( ).
4. What were the findings of research carried out in Norway?


The passage has six sections, A-F. Choose the correct heading for sections A-D from the list of headings below. Write the correct number, I-VII, in boxes 14-17 on your Answer Sheet.


Section A
Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community’s total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective.

Section B
What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were “limits to growth”. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the “invisible hand” of economic progress would provide.

Section C
However, at exactly the same time as this new realization of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy.

Section D
Although the language of “rights” sometimes leads to confusion, by the late 1970s it was recognized in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognized as a “public good”, rather than a “private good” that one is expected to buy for oneself.
As the 1976 declaration of the World Health Organization put it: “The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition.” As has just been remarked, in a liberal society basic health-care is seen as one of the indispensable conditions for the exercise of personal autonomy.

Section E
Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP.)

As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources.

1. Personal liberty and independence have never been regarded as directly linked to health-care.
2. Health-care came to be seen as a right at about the same time that the limits of health-care resources became evident.
3. In OECD countries population changes have had an impact on health-care costs in recent years.
4. OECD governments have consistently underestimated the level of health-care provision needed.
5. In most economically developed countries the elderly will have to make special provision for their health-care in the future.


1. the realization that the resources of the national health systems were limited
2. a sharp rise in the cost of health-care
3. a belief that all the health-care resources the community needed would be produced by economic growth
4. an acceptance of the role of the state in guaranteeing the provision of health-care




Today, most countries in the world have canals. Some canals, such as the Suez or the Panama, (1) ships weeks of time by making their voyage a thousand miles shorter. Other canals permit boats to reach cities that are not located on the coast, (2) other canals drain lands where there is too much water, help or (3) fields where there is not enough water.

The (4) of a canal depends on the kind of boats going through it. The canal must be wide enough to permit two of the (5) boats using it to (6) each other easily. It must be deep enough to leave about two feet of water (7) the keel of the largest boat using the canal.

Some canals have sloping sides, while others have sides that are nearly (8). Canals that are cut through rock can have nearly vertical sides. (9) canals with earth banks may (10) if the angle of their sides is too steep. Some canals are lined with brick, stone, or concrete to keep the water (11) soaking into the mud. This also permits ships to go at (12) speeds, since they cannot make the banks fall in by (13) up the water. In small canals with mud banks, ships and barges (大平底船) must (14) their speed.

When the canal goes (15) different levels of water, the ships must be (16) or lowered from one level to the other. This is generally done by (17) of locks. If a ship wants to go up to higher water, the lower end of the lock opens to let the boat in. Then this gate closes, and the water is let into the lock chamber from the upper level. This raises the level of the water in the lock (18) it is the same as the upper level of water. Now the upper gates can be opened to (19) the ship into the higher water. Sometimes a canal contains a series of locks when the (20) in levels is very great.

