New blog address
The Climate and Environment at Imperial blog has moved. Visit our new blog.
By Clea Kolster, PhD student, Science and Solutions for a Changing Planet
The term ‘sustainable development’ was first coined in 1987 in the UN’s World Commission on Environment and Development report, Our Common Future. Almost 30 years later, the concept of sustainable development is more relevant than ever.
The definition given in the report is, to this date, the most widely accepted modern definition of the term: ‘Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.’ Climate and society, energy, water, ecosystem health and monitoring, global health, poverty, urbanization, natural disasters, food, ecology and nutrition – these are some of the main problems that need to be tackled when discussing the possibility of sustainable development. They are all complex problems that require an interdisciplinary and analytical approach. Earlier this year, I joined a group of people doing just that.
On Friday April 3rd 2015, I entered the land of Ivy League elites at Columbia University to take part in the 5th Annual Interdisciplinary PhD Workshop on Sustainable Development. Having spent a year abroad at Columbia University during my undergraduate degree, I knew the spot pretty well and was thrilled about getting the chance to come back as a more mature and informed PhD student, ready and eager to present my work.
Arriving at the workshop, I struck up a conversation with some of the students around me. I quickly understood that a number of us had made the cross-Atlantic trip, with participants from Denmark, Italy, Sweden, France and even Australia. Students also came from Canada and Mexico, with a large majority attending world-renowned US universities, including Harvard, MIT, Yale, Princeton, UC Berkeley and of course Columbia.
The highlight of the first day was a keynote speech by American economist Jeffrey Sachs, head of the Earth Institute at Columbia University. Sachs has an incredible track record: he is Quetelet Professor of Sustainable Development (an honorary distinction awarded to only four Columbia University professors since 1963) and special advisor to Ban Ki-moon, was the youngest economics professor at Harvard University (at age 28), and is the author of three New York Times bestsellers – and the list goes on.
One of the things that engrossed me most was his emphasis on planetary boundaries and the current ideological conflict between growth (mostly economic) and environmental sustainability. Sachs definitely got the whole room thinking about whether or not sustainable development is actually feasible and, for those like myself who desperately want that answer to be positive, what one can do to bring us closer to that goal: a world with sustainable economic, social and environmental objectives.
The rest of the afternoon featured sessions on a variety of topics from natural disasters – including the Venetian example of floods – to urban planning in China and development in India. After a long afternoon of presentations, I got the chance to network and socialize with the students. I met some very interesting individuals, most of whom, contrary to myself, feel as though they are economists before anything else, in spite of an earlier education in engineering.
On the second day, I was due to give my presentation as part of the Energy session. In a small room filled with 10-15 other PhD students, all of whom were senior to me, and a few professors, I sat nervously waiting for my turn, beginning to realize that my presentation was clearly going to be one of the most “engineeringy” and technical of all.
I finally gave my 20-minute talk on the “Techno-Economic Analysis of the Link between Above Ground CO2 Capture, Transport, Usage for Enhanced Oil Recovery (EOR) and Storage”. I was happy to take some interesting questions at the end of it (which I hoped meant that the audience was actually interested in my topic) and later on, at the coffee reception, engaged in some stimulating discussions with some of my peers. It was clear that in spite of our dissimilar approaches, we were all contributing to answering the question of sustainable development and its feasibility.
Did you know that 1.4 billion people currently live in a state of extreme poverty, on less than $1.25 a day? In fact, it would take a four- to five-fold increase in total global output by 2050 for poor countries to reach the $40,000 per capita income of rich countries today. With figures like these, it isn’t surprising that large groups of individuals around the world dedicate their time to assessing and analyzing the best ways of achieving sustainable development encompassing economic, social and environmental goals.
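The four- to five-fold figure can be sanity-checked with some back-of-envelope arithmetic. The population and income splits below are illustrative assumptions of my own, not numbers from the workshop:

```python
# Rough sanity check of the "4-5x global output by 2050" claim.
# Population and income figures are illustrative assumptions only.

rich_pop_bn, poor_pop_bn = 1.5, 5.5        # assumed populations (billions)
rich_income, poor_income = 40_000, 5_000   # assumed per-capita incomes (USD)
pop_2050_bn = 9.5                          # assumed world population in 2050

# Output in (billions of people x USD) = billions of USD
output_today = rich_pop_bn * rich_income + poor_pop_bn * poor_income
output_2050 = pop_2050_bn * rich_income   # everyone at today's rich-country income

multiple = output_2050 / output_today
print(f"Global output would need to grow roughly {multiple:.1f}x")  # ~4.3x
```

With these assumptions the required multiple lands squarely in the quoted 4-5x range; more pessimistic population or income assumptions push it toward the top of that range.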
In my view, sustainable development is feasible: we can tackle climate change, we can reduce our exploitation of natural capital while promoting economic growth, and we can bridge the gap between poor and rich countries. The problem – as Jeff Sachs pointed out – is a lack of trust. A lack of trust leads to social and political instability, and these will always impede sustainable development around the world.
References
World Commission on Environment and Development (Brundtland Commission). 1987. Our Common Future. Oxford: Oxford University Press.
Find out more about Clea’s research
Professor Jo Haigh, Co-director of the Grantham Institute reports back from the Climate Parliament meeting in Lucerne, 12 June 2015.
I have just found a seat on the train from Lucerne to Zurich airport. It is absolutely packed, I suppose, with people going away for the weekend. Staring through the window at the snow-capped mountains, and having spent the day at an inspirational conference set by the beautiful lake, I am wondering quite why anyone would want to leave.
I have been at a meeting of the Climate Parliament. I only learned of this organisation recently but it is rather splendid – a group of legislators from across the world who are concerned about climate change and looking to influence governments to act. Seventy attendees came together to discuss the potential for a Global Energy Internet: an international electricity grid based on regional renewable energy sources.
Attendees were diverse – including MPs from four continents, industrialists, academics and representatives from energy and climate agencies. Simultaneous translation was provided between French, Spanish, Chinese and English, although that can’t have covered all the first languages represented. Fortunately my talk was first up so I could then concentrate on the rest.
For me a highlight was Li Junfeng, Director General of the Chinese National Centre for Climate Change Strategy and International Cooperation, who spoke on China’s energy demands, provision and ambitions for decarbonisation. He spoke of a “solar Silk Road for the 21st century” and noted the need for revolutions in energy production and for technical innovation with international cooperation on R&D in renewables: “Helping China is helping the World” he said.
Lei Xianzhang, of the State Grid Corporation of China, was up next discussing some of the technical issues and describing the construction of new wind and solar plants (grown by factors of 37 and 350 respectively since 2006) as well as very long (thousands of kilometres) high voltage DC transmission cables.
The political and technical challenges which have faced attempts to construct an energy grid across the Middle East and North Africa were elaborated by Tareq Emtairah, Director of the Centre for Renewable Energy and Energy Efficiency in Cairo. In the absence of an electricity market there has been very little coordinated exchange to date, but the delegates from Jordan, Morocco and Tunisia remained upbeat.
National energy security concerns were foremost in the minds of colleagues from Senegal and Peru. Efforts to connect to international grids were seen as a second priority. I could see their point, but perhaps they were reassured that the proposed networks could be established to the advantage of all.
Damien Ernst, an energy economist from the University of Liège, delivered an inspirational message: his calculations suggest that large-scale solar and wind facilities, with long transnational interconnections, can provide energy more cheaply than fossil fuels. Construct the grid first, he declares, and the incentive to develop renewables will follow.
It is a shame I have to miss tomorrow’s discussions, which will focus on finances and start to develop an action plan, but I have to attend my father’s 90th birthday party. If I live to 90, or 80, or even 75(?), perhaps I will see an international energy internet established. It is a fantastic vision.
by Roan du Feu, PhD student, Science and Solutions for a Changing Planet
The marine renewable energy sector is poised and ready, waiting to harness the power of tides and waves with underwater tidal turbines and floating wave energy converters. A shift to renewable energy sources is essential to reducing global carbon emissions, but what are the consequences of these new technologies? Are we prepared for the effects of filling our already fragile oceans with rows of large, moving structures? Will we cause irreparable damage? Or might there even be some positive effects?
These are all questions relevant to my thesis and so, for two weeks, I attended a course on Marine Renewables and the Environment held at SAMS (the Scottish Association of Marine Science). The aim of this course was to explain the interactions between the abundant and varied ecology of the seas and the devices that will inevitably be placed in the midst of them.
SAMS is one of the oldest oceanographic organisations in the world and one of Europe’s leading research centres for marine science. It takes a great interest in marine prosperity and sustainability, climate change and, crucially, renewable energy. SAMS is located next to Dunstaffnage castle on a wild and exposed spit of land just north of Oban, Scotland, where the sinuous Loch Etive breaches into the grand, island-studded Firth of Lorn.
Over the course of two weeks we studied all forms of marine life. The flora ranged from the tiny, sometimes unicellular, phytoplankton and drifting algae, to the larger seaweeds. The fauna, a rather more extensive group, covered zooplankton, sponges, cnidarians (including jellyfish and corals), segmented worms, molluscs, crustaceans, echinoderms (starfish and urchins, for example), bryozoans (moss animals), ascidians (sea squirts), fish, and of course the many marine mammals and seabirds that live and feed in the marine environment.
Something that struck me was the vast disparity between the variety of marine flora and fauna. The dependence of plants on light to photosynthesise is at least partially responsible: the turbidity of British seas means that larger plants can only survive to a depth of about 25m, and even phytoplankton only descend to about 50m. Marine fauna, on the other hand, have no such limit, and can thrive many kilometres below the surface.
Marine renewable devices can have both negative and positive impacts on these many types of life. But what was made very clear was the great uncertainty in the extent and degree of almost all of these effects.
The most commonly discussed consequence of tidal turbines is blade strike. This represents a potential issue for marine mammals, large fish and sharks, and various types of diving bird, many of which dive well below the depth at which you might expect to find a turbine (guillemots, for example, have demonstrated the almost unbelievable ability to dive up to 180m below the surface in their search for fish). The extent of this problem is completely unknown: it is possible that such animals will be adept at avoiding turbine blades, but it is equally possible that they will not.
Then there is the noise produced during both the construction and the operation of such devices. This could potentially scare off or disorientate some marine mammals (or, during pile driving, even deafen them) while possibly attracting other, more curious species. In this case mitigation techniques, such as using bubble shields to dampen sound during construction, could limit the negative effects.
Another interaction that was heavily discussed was that of the artificial reef effect. Whatever the marine renewable device in question, one immediate change to the local environment will be the addition of large, sometimes complex, structures which provide a new, different habitat type upon which life can and will grow, often significantly increasing the biomass of the area.
This is not necessarily a good thing, as structures in the water have been known to provide stepping stones for invasive species that are otherwise unable to cross wide channels. This influx of both floral and faunal growth will attract fish, which in turn will attract marine mammals and seabirds into a potentially dangerous area. In fact, considering that fish also like to hide in the shelter given by these structures, and that such areas will likely become no-fishing zones, it seems probable that a marine renewable development could become a veritable haven for fish. Whether the knock-on effects of this – on other marine life, or indeed on fishermen – are positive or negative is unknown.
The main thing I am now aware of is the high levels of uncertainty in expert opinion. I had assumed that putting devices in the sea would be an inherently bad thing for the local environment (although good for the world at large), but is it? Possibly, sometimes, but we don’t yet know. Then I was told of all the life that would actually be drawn to such devices, surely that’s good? Possibly, sometimes, but we don’t yet know.
The introduction of marine renewables to the seas is inevitable. It is one of many things that has to happen if we hope to combat the inescapable advance of man-made climate change. But I have learned that it has to be done with care, it has to be monitored well, and the information gathered has to be put to use. It seems possible that such technology could be implemented without contributing to the decline of the oceans, but only as long as it is done in an appropriate and measured manner. For what is the point in trying to save the planet if we end up destroying it in the process?
Read more about Roan’s PhD project
by Neil Hirst, Senior Policy Fellow, Grantham Institute
China’s Energy Research Institute (ERI) releases an interesting analysis of the prospects for China’s energy production and consumption and CO2 emissions to 2050
Last November’s joint announcement of national climate targets by President Barack Obama and President of China Xi Jinping has framed the preparations for this December’s crucial Paris summit. The US is aiming to reduce its emissions by 26-28% below the 2005 level in 2025. China intends that its CO2 emissions will peak around 2030 and will use best efforts to bring that date forward. Plainly China’s energy and emissions outlook is highly relevant to the global effort to mitigate climate change and to the Paris negotiations.
The Grantham Institute, Imperial College London, is working with China’s Energy Research Institute and the NDRC on a joint project about China’s role in world energy governance. Improving international cooperation on energy policy is a first order requirement for climate mitigation.
Prof Yang Yufeng, who co-leads this project with me, is also lead author of China Energy Outlook and he has just released, through this presentation given a few days ago in Australia, the preliminary findings of its 2015 edition. (He also introduced their Outlook analysis methodology, the China 2050 Energy Calculator, which was developed independently by his team after studying the analysis platforms of other major international institutions, especially including contributions from DECC’s Energy Calculator).
These findings are of exceptional interest for the light that they shed on the questions, opportunities, and difficulties that China faces in trying to bring forward the peaking of its emissions.
The Outlook suggests that by 2050 China could represent 18.5% of world GDP. By that time China’s industrial structure will have been transformed, with the most energy intensive primary and secondary industries, which dominate today, giving way to tertiary industries at a higher end of the value chain. By then renewables should compete with coal in the mix of power generation, mainly in the form of onshore wind, photovoltaics, gasified biomass, and city waste. But it is a hard task for China to free itself from dependence on coal. Today coal represents 66% of energy supply but, according to the Outlook, even in 2050 it may still have more than a one-third share.
Most interestingly, the Outlook also offers alternative “high”, “medium”, and “low” scenarios for when the critical quantities – energy, coal, oil, and gas consumption, and CO2 emissions – will peak. Coal consumption peaks in 2020 in the low case, 2025 in medium, but not until 2030 in high. CO2 emissions peak in 2025 in low, 2030 in medium (which would meet President Xi Jinping’s minimum target), and 2035 in high. We must all hope that China can find the pathway that brings this peak forward to 2025, or even before. It is tantalising to see that in the low cases coal consumption is close to flat from today, and CO2 emissions from 2020.
GDP per person in 2050 ranges from $25,000 to nearly $40,000 – a level that would give Chinese citizens incomes commensurate with current levels in the developed countries. The critical issue in China, as in other parts of the world, will be to implement the low carbon options that also support the high end of the range for living standards.
The Energy Research Institute is a research body and its study does not represent Chinese government policy. Nevertheless, it throws very interesting light on the options for energy policy as perceived from within China. The full China Energy Outlook 2015, when it is published in the next half year or so, will no doubt go into much greater depth. Under the close relations between the ERI and the Grantham Institute, we plan to exchange details of the China energy models, and to work together on further refinements.
by Bhopal Pandeya, Research Associate (ESPA Fellowship), Grantham Institute
Mountains are often referred to as ‘water towers’ as they provide fresh water to people and biodiversity. The Himalayan region is one of the few hotspots where several big rivers originate and supply water to hundreds of millions of people across the mountains and further downstream. However, higher up in the mountains, especially in the trans-Himalayan region, there is very little accessible water for local communities. The region receives very low rainfall and thus water supply is largely dependent on the timely occurrence of snowfall and ice melt in the upper mountains. The Upper Kaligandaki Basin (located in Nepal) is one such area where water scarcity is very high. Upland communities constantly face serious water shortages, which particularly affect their agricultural land.
In the Upper Kaligandaki Basin, croplands are located along the river valleys, which act as oases in the Himalayas. Traditionally, local people practice an intensive cropping system, growing different crops and vegetables to sustain their lives, and agriculture remains the main source of local livelihoods. But local people are experiencing increasing difficulty with farming, largely due to the unpredictable nature of water supply in local streams. They are now concerned by the changing pattern of snowfall in upper mountain areas and its impact on water flow in the lower regions. People are trying to cope with this situation by adopting various measures, such as introducing more resilient crops like apple and walnut, using water harvesting systems and equitably sharing available water. This demonstrates local people’s extraordinary adaptive skills in managing their resources sustainably. To some extent, these measures are helpful in coping with these uncertainties.
Recent developments in the region, especially the construction of roads and the expansion of human settlements, are proving unsustainable and are making already scarce agricultural lands even more vulnerable. These activities lack proper consideration of how to maintain key ecosystem services provided by water and soil resources. Agricultural land and traditional water supply systems are particularly threatened by constant encroachment and land degradation (erosion and landslides) resulting from these activities. As a result, local communities’ main sources of livelihood are in great danger. At the same time, the whole region is passing through a socio-cultural and demographic transformation, which is also challenging, especially considering the lack of enthusiasm of younger generations for farming.
In this situation, an innovative approach can build a better understanding of these major ecosystem services and integrate them into local policy and decision making. As one elderly local firmly put it, “our farmlands are highly productive, no need to go abroad for earning… we can earn better here. We produce highly priced crops, fruits and vegetables. But, there are some big problems… water supply is becoming more disruptive, soil loss is extensive and there is also less and less participation of the younger generation in farming practices. We need to address these problems immediately, so we can improve the agricultural production and increase our household incomes”. Clearly there is a great need for a locally suited ecosystem services approach (guided by scientific, socio-political and economic understandings) to improve local livelihoods.
Find out more about the Mountain-EVO project
This post was originally published on the ESPA blog. View original post.
A summary of global temperature for 2014 from NASA and NOAA has just been published, showing that the average global temperature for 2014 was 0.69°C above the average for the 20th century. The small margin of uncertainty in calculating average global temperature means that the exact ranking of 2014 cannot be distinguished from the previous record years of 2005 and 2010, but it is nominally the warmest year on record. The ten warmest years have all occurred since 1998.
Professor Jo Haigh, Co-Director of the Grantham Institute, commented on the report saying that: “This and other indicators are all pointing in the same direction of continued global warming, reflecting the overall upward trend in average global temperatures”.
A large amount of warming was seen in the oceans, with globally-averaged sea surface temperature 0.57°C above the 20th century average. This is consistent with recent studies suggesting that much of the extra energy in the Earth system is going into the oceans. You can read more about the significance of ocean heat uptake in our blog post.
An update from the Met Office on global temperatures is expected later this month and we look forward to seeing the further detail that this will add.
See the full report on the NOAA website.
By Dr Flora Whitmarsh, Grantham Institute
This blog forms part of a series addressing some of the criticisms often levelled against efforts to mitigate climate change.
The Twentieth Session of the Conference of the Parties (COP 20) – the latest in a series of meetings of the decision-making body of the UN Framework Convention on Climate Change – began in Lima this week. Many in the media are quick to point to the difficulty of obtaining international agreement on greenhouse gas emissions reductions, and to denounce COP 15, which took place in Copenhagen in 2009, as a failure. Far from being a failure, the Copenhagen meeting paved the way for future climate change action. World leaders agreed ‘that climate change is one of the greatest challenges of our time’ and emphasised their ‘strong political will to urgently combat climate change in accordance with the principle of common but differentiated responsibilities and respective capabilities’, and it was agreed that ‘deep cuts in global emissions are required’. The Copenhagen accord also said that a new Copenhagen Green Climate Fund would be established to support developing countries to limit or reduce carbon dioxide emissions and to adapt to the effects of climate change.
The last objective is in progress: the green climate fund was set up at COP 16, held in Cancun, Mexico in 2010, and several major countries have pledged money. Japan has pledged $1.5 billion, the US has pledged $3 billion, Germany and France have pledged $1 billion each, the UK pledged $1.13 billion and Sweden pledged over $500m. This brings us close to the informal target of raising $10 billion by the end of the year. The goal is to increase funding to $100 billion a year by 2020. There have also been several smaller donations. This is a key step in tackling climate change, because the gap between developed and developing countries in their ability to respond to climate change and their level of responsibility for causing the problem must be addressed.
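Tallying the pledges listed above shows how close the informal target is. Sweden's "over $500m" is taken here as a lower bound:

```python
# Sum of the major Green Climate Fund pledges mentioned above (USD billions).
pledges = {
    "Japan": 1.5,
    "US": 3.0,
    "Germany": 1.0,
    "France": 1.0,
    "UK": 1.13,
    "Sweden": 0.5,   # "over $500m" -- treated as a lower bound
}

total = sum(pledges.values())
print(f"Major pledges total at least ${total:.2f}bn of the informal $10bn target")
```

The smaller donations mentioned above make up much of the remaining gap to $10 billion.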
Obtaining international agreement to reduce emissions is a real challenge. It is not surprising that it is difficult to reach consensus on a course of action between a large range of different countries at different stages of development who bear differing levels of responsibility for greenhouse gas emissions to date: the UN Framework Convention on Climate Change has 196 Parties. However, there has been significant progress towards global emissions reductions, led by the EU, China and the US.
Prior to the Copenhagen COP, the UK Climate Change Act was passed in 2008, and contains a legally binding commitment to reduce UK emissions by at least 80% on 1990 levels by 2050. In addition, the UK Committee on Climate Change has recommended an emissions reduction of 50% on 1990 levels by 2025 in order to meet the longer term target. Some have argued that by taking unilateral action, the UK put itself at risk of losing out economically to countries that had not made such pledges. Competitiveness concerns have been evaluated by the Committee on Climate Change, the body set up as part of the Climate Change Act to advise the UK government on emissions targets. The committee found that ‘costs and competitiveness risks associated with measures to reduce direct emissions (i.e. related to burning of fossil fuels) in currently legislated carbon budgets are manageable.’ Continued support from the EU emissions trading scheme may be needed in the 2020s, but this depends on progress towards a global deal.
By making this commitment the UK has been able to enter into negotiations with other countries from a position of strength. The UK is one of the leading historic emitters of carbon dioxide – it is, of course, the sum total of our emissions beginning in the industrial revolution that will, to a good approximation, determine humanity’s impact on the climate, not the emissions in any given year – and therefore it is right that the UK took the lead by making this commitment. Had we not made such a pledge, it would have put us in a more difficult position when negotiating with other countries, particularly those still on the path to development.
The UK is not now acting alone – other major countries have recently made significant emissions reduction pledges. The recent European Council agreement that the EU should cut emissions by 40% on 1990 levels by 2030 represents a step forward. It was decided that all member states should participate, ‘balancing considerations of fairness and solidarity.’ It was also decided that 27% of energy consumed in the EU should be from renewable sources by 2030, and a more interconnected European energy market should be developed to help deal with the intermittency of renewable sources of energy.
The EU target is still not quite as ambitious as the UK target. However, this latest EU agreement is a significant step in the right direction and demonstrates that international cooperation on a large scale is possible, albeit within a body like the EU with pre-existing economic ties. In addition, it generally costs more to cut emissions the faster the cuts are implemented. If the world is genuine in its commitment to tackling climate change, very significant emissions reductions are ultimately required, and delaying action means having to cut emissions more quickly at a later date – at a higher cost. In addition, the Committee on Climate Change found that despite short term increases in electricity prices, early action means that UK electricity prices are projected to be lower in the medium term compared to a fossil fuel intensive pathway, assuming there is an increase in the carbon price in the future.
A recent development is the bilateral agreement between China and the US. China stated that its emissions would peak by 2030, by which time the country aims to get 20% of its energy from non-fossil fuel sources, and the US pledged to reduce its emissions by 26-28% on 2005 levels by 2025. Some have suggested that the agreement does not go far enough because China’s emissions will continue to rise until 2030 under the deal, and the US target is not as stringent as the EU or UK targets. However, these pledges, coming in the lead-up to Lima from the two largest emitters globally, are hugely significant, and pave the way for further progress. China has already made significant progress in reducing the energy intensity (energy per unit of GDP) of its economy: the 11th Five Year Plan, covering the period 2006-2010, aimed to reduce energy intensity by 20%, and achieved a reduction of 19.1%. Despite some disruption to the energy supply, this success in meeting the target demonstrates the Chinese government’s track record of achieving its objectives on green growth. The current five year plan aims to cut energy intensity and carbon intensity (carbon emissions per unit of GDP) by a further 16% and 17% respectively on 2010 levels by 2015. It is right that developing countries should be able to grow their economies – China’s per capita GDP is still relatively low – and this has to be balanced with climate change targets.
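Five-year intensity targets like these are easier to compare once converted to an equivalent constant annual rate of reduction. A quick sketch of that conversion (the helper function is mine, not from any of the plans):

```python
# Annualising a five-year intensity target: a total cut of 20% over
# five years implies a constant annual reduction r with (1 - r)**5 = 0.80.

def annual_rate(total_cut: float, years: int = 5) -> float:
    """Constant yearly reduction equivalent to `total_cut` over `years`."""
    return 1 - (1 - total_cut) ** (1 / years)

print(f"20% energy-intensity target:  {annual_rate(0.20):.2%}/yr")  # ~4.36%/yr
print(f"19.1% achieved (2006-2010):   {annual_rate(0.191):.2%}/yr")
print(f"17% carbon-intensity target:  {annual_rate(0.17):.2%}/yr")
```

Seen this way, the 11th Five Year Plan sustained an intensity reduction of over 4% per year, which is demanding by any country's standards.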
The EU, China and the US together accounted for just over half of total global carbon dioxide emissions in 2013. Their pledges demonstrate that smaller groups of countries made up of the major emitters can make a difference without waiting for far-reaching international agreement on emissions reductions. Their willingness to act also has the potential to spur other industrialised countries into reducing their own emissions. More action is still needed, but there has been significant progress since the Copenhagen conference, which should pave the way for more ambitious pledges.
by Ajay Gambhir, Grantham Institute
This blog forms part of a series addressing some of the criticisms often levelled against efforts to mitigate climate change.
It is often claimed that intermittent renewable sources of electricity (mainly wind and solar photovoltaics) are too expensive, inefficient and unreliable, and that we shouldn’t subsidise them.
What are the facts?
Last year, governments spent about $550 billion of public money on subsidies for fossil fuels, almost twice as much as in 2009 and about five times as much as they spent subsidising renewables (IEA, World Energy Outlook 2014). This despite a G20 pledge in 2009 to “phase out and rationalize over the medium term inefficient fossil fuel subsidies” that “encourage wasteful consumption, reduce our energy security, impede investment in clean energy sources and undermine efforts to deal with the threat of climate change”.
There is a key reason why it makes sense to subsidise the deployment of renewable energy technologies instead of fossil fuels. Renewables are currently more expensive than established fossil fuel sources of power generation such as coal- and gas-fired power stations, because the scale of the industries that produce them is smaller and because further innovations in their manufacture and deployment are still in the pipeline. As such, there needs to be a period of translating laboratory-stage innovations to the field, as well as learning and scaling-up in manufacture, all of which should bring significant cost reductions. This is only likely to be possible with either a strong, credible carbon price applied to fossil fuel generation, or subsidies targeted at low-carbon technologies.
Unfortunately, there is unlikely to be a long-term, credible and significant (“long, loud and legal”) carbon price anytime soon, given the immense political lobbying against action to tackle climate change, and the lack of global coordinated emissions reduction actions, which means any region with a higher carbon price than others puts itself at risk of higher energy prices and lost competitiveness. Whilst subsidies are also likely to raise energy prices, their targeting at specific technologies (often under some fiscal control such as the UK’s levy control framework) means they should have less overall impact on prices. In addition, subsidies have helped to put some technologies on the energy map faster than a weak carbon price would have done and have given a voice to new energy industries to counter that of the CO2-intensive incumbents.
Nevertheless, subsidies should not remain in place for long periods of time, or at fiscally unsustainable levels. Unfortunately some countries, such as Spain, have fallen into that trap: unexpectedly high deployment of solar in particular led to a backlash as fiscal costs escalated, followed by rapid subsidy cuts and the stranding of many businesses engaged in developing these technologies. Germany’s subsidy framework for solar, with its longer-term rules on “dynamic degression” (subsidy levels that fall over time depending on the capacity deployed in previous years), has proven a better example of balancing the incentive to produce and deploy new technologies with the need to manage fiscal resources carefully (Grantham Institute, 2014).
Fortunately, the price of solar and onshore wind has fallen so much (through the scale-up and learning in manufacturing and deployment that the subsidies were designed to drive) that they are now approaching, or have achieved, “grid parity” in several regions – i.e. the same cost of generated electricity as from existing fossil fuel electricity sources. Analysis by Germany’s Fraunhofer Institute shows that solar PV, even in its more expensive form on houses’ rooftops, will approach the same level of electricity generation cost as (hard) coal and gas power stations in Germany within the next decade or so, with onshore wind already in the same cost range as these fossil fuel sources. Subsidies should be phased out as the economics of renewables become favourable with just a carbon price (which should be set at a level appropriate to reducing emissions in line with internationally agreed action to avoid dangerous levels of climate change).
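Grid parity comparisons like Fraunhofer’s rest on the levelised cost of electricity (LCOE): discounted lifetime costs divided by discounted lifetime generation. A minimal sketch of the calculation, with entirely illustrative inputs of my own rather than Fraunhofer’s data:

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelised cost of electricity: discounted lifetime costs divided by
    discounted lifetime generation (currency units per MWh)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    output = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / output

# Illustrative inputs only, per MW of capacity: a rooftop PV system
# (no fuel costs, lower output) vs a hard-coal plant (fuel in the opex).
pv = lcoe(capex=900_000, annual_opex=15_000, annual_mwh=1_000,
          lifetime_years=25, discount_rate=0.05)
coal = lcoe(capex=1_500_000, annual_opex=180_000, annual_mwh=4_500,
            lifetime_years=40, discount_rate=0.05)
print(round(pv), round(coal))  # EUR/MWh
```

Note how the comparison hinges on the discount rate and assumed lifetimes as much as on the headline capital costs, which is why published LCOE figures vary so widely.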
It’s important to note that grid parity of electricity generation costs does not account for the very different nature of intermittent renewables compared to fossil fuel power stations, which can respond very quickly to electricity demand peaks and troughs and help ensure that electricity is available as required. For example, one common contention is that for every unit of solar capacity in northern latitudes, significant back-up fossil fuel generation (most often gas turbines, which are quick to ramp up) is required to meet dark winter peak demand in the evenings. Indeed, analysis by the US Brookings Institution based on this principle (given much publicity in The Economist in July 2014) suggested this would make solar PV and wind much more expensive than nuclear, gas and hydro power.
Unfortunately, and as reflected in the published responses to the Economist article, this analysis has proven too simplistic. It fails to account for the fact that wind and solar are complementary, since the wind often blows when the sun is not shining; that electricity grids can span vast distances (with high-voltage direct current lines), effectively utilising wind and sunlight in different regions at different times; that there is a great deal of R&D into making electricity storage much cheaper; that electricity networks are going to become “smarter”, meaning they can balance demand and supply variations automatically; and that the costs of these renewable technologies are falling so fast that (particularly in the case of solar) their economics might soon be favourable even with significant back-up from gas generation.
In summary, the world is changing, electricity systems are not what they once were, and there is a very sound economic case for meeting the challenge of climate change by deploying low-carbon renewable electricity sources. It is encouraging to see that there has been a rapid rise in the deployment of these technologies over the past decade, but more needs to be done to ensure that the low-carbon world is as low-cost as possible. This means supporting and therefore continuing to subsidise these critical technologies to at least some extent.
References
International Energy Agency (2014) World Energy Outlook 2014
Statement from the G20 in Pittsburgh, 2009, available at: https://www.g20.org/sites/default/files/g20_resources/library/Pittsburgh_Declaration.pdf
Grantham Institute, Imperial College London (2014) Solar power for CO2 mitigation, Briefing Paper 11, available at: https://workspace.imperial.ac.uk/climatechange/Public/pdfs/Briefing%20Papers/Solar%20power%20for%20CO2%20mitigation%20-%20Grantham%20BP%2011.pdf
Fraunhofer Institute (2013) Levelized cost of Electricity: Renewable Energy Technologies, available at: http://www.ise.fraunhofer.de/en/publications/veroeffentlichungen-pdf-dateien-en/studien-und-konzeptpapiere/study-levelized-cost-of-electricity-renewable-energies.pdf
The Economist (2014a) Sun, Wind and Drain, Jul 26th 2014, available at: http://www.economist.com/news/finance-and-economics/21608646-wind-and-solar-power-are-even-more-expensive-commonly-thought-sun-wind-and
The Economist (2014b) Letters to the editor, Aug 16th 2014, available at: http://www.economist.com/news/letters/21612125-letters-editor
by Professor Martin Siegert, Co-director, Grantham Institute
On 27th October I convened a meeting at the Royal Society of London to discuss the results of a recent 20-year research horizon scanning exercise for Antarctic Science (Kennicutt et al. 2014). Part of the discussion focused on the research needed to better quantify Antarctica’s likely contribution to sea level rise in the coming decades and beyond, as published in the new Intergovernmental Panel on Climate Change (IPCC) Synthesis Report.
The report states that, ‘Global mean sea level rise will continue during the 21st century, very likely at a faster rate than observed from 1971 to 2010, and will likely be in the ranges of 0.26 to 0.55 m [in the lowest emissions scenario] … and … 0.45 to 0.82 m [in the highest emissions scenario – the closest to “business as usual”]’. It also states that, ‘Based on current understanding, only the collapse of marine-based sectors of the Antarctic ice sheet, if initiated, could cause global mean sea level to rise substantially above the likely range during the 21st century.’ There is medium confidence that any additional sea level rise would be no more than tens of centimetres.
One of the speakers at the event, Prof. David Vaughan, the Director of Research at the British Antarctic Survey, supported the IPCC’s position by remarking that he knew of no glaciologist who would strongly advocate a different position to this, given the evidence at hand. As a glaciologist myself, I can easily accept Prof. Vaughan’s comment and I don’t believe it is controversial among the community. I was, however, provoked by it to consider the relevant issues a little further, given the uncertainties noted by the IPCC, and to take the opportunity to discuss it with colleagues at the meeting.
Historically, ice sheet responses to global warming have been responsible for sea level changes of a metre or more per century. As the glaciers retreated after the last ice age, sea levels rose by an average of over a metre per century between 20,000 years ago and 10,000 years ago – a total of 120 m. Records also show that the rate of sea level rise can exceed this, however. During the so-called ‘meltwater pulse 1a’ (MWP1a) episode around 15,000 years ago, an increase of around 7 m per century took place. The cause of MWP1a remains uncertain, with some pointing to the rapid decay of the North American ice sheet, whereas others link the change to Antarctica. It may be that both ice sheets were involved to some degree, and the details of the issue remain hotly debated. The point to note is that changes in the cryosphere are certainly capable of causing global sea level to rise at a higher rate than the IPCC suggests.
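The rates quoted above are easy to check with back-of-envelope arithmetic; all the figures in the sketch below are the article’s own, not new data:

```python
# Post-glacial sea level rise: ~120 m between ~20,000 and ~10,000 years ago
total_rise_m = 120.0
centuries = 100                    # 10,000 years expressed in centuries
average_rate = total_rise_m / centuries  # m per century, on average

mwp1a_rate = 7.0                   # meltwater pulse 1a, m per century
ipcc_high_21c = 0.82               # top of the IPCC likely range for 2100, metres

print(average_rate)                # ~1.2 m per century on average
print(mwp1a_rate / ipcc_high_21c)  # MWP1a ran at several times the IPCC high end
```

Even the post-glacial average exceeds the top of the IPCC’s likely range for this century, and MWP1a exceeded it roughly eightfold.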
It is worth considering whether we can rule out the possibility of a new meltwater pulse being locked in somewhere in Antarctica or Greenland, ready to be released to the ocean once some threshold has been reached. As the IPCC notes, several regions of the West Antarctic ice sheet (in particular) and East Antarctic ice sheet appear close to or at a physical threshold of change, where grounded ice retreat into deeper (below sea level) terrain leads to further accelerated loss of ice to the sea (often referred to as marine ice sheet instability). Papers earlier this year by Joughin et al. (2014) and Rignot et al. (2014) point to such irreversible change having already begun in the Amundsen Sea region of West Antarctica. According to Joughin et al. (2014) the full effects of such change may take several hundred years, in line with the IPCC’s position. Evidence from the other side of West Antarctica demonstrates a region the size of Wales being highly sensitive to future ocean warming (Ross et al. 2012), and that such warmth may be delivered within a few decades (Hellmer et al. 2012). Across the continent in East Antarctica, the structure of the underlying bedrock reveals evidence of major ice recession in the past (Young et al. 2011), hinting that the ice sheet response to warming is not necessarily restricted to West Antarctica. Indeed, while West Antarctica may be losing mass more quickly than anywhere else on the planet, the greatest potential for sea level change lies in East Antarctica, which is about ten times greater in volume.
So, after considering Prof. Vaughan’s point that no glaciologist would differ markedly from the IPCC on Antarctic ice sheet collapse, I returned a question to him and those gathered: how can we be sure that the Antarctic ice sheet won’t respond to ocean warming more quickly than expected in certain regions? The answer is we can’t be certain even though, like Joughin et al. (2014), we may consider it unlikely. While I did not dispute Prof. Vaughan’s point, in the light of both recent findings and more established figures on how ice sheets can change during episodes of global warming, there is surely a non-zero risk of much greater sea level rise over the coming decades than the IPCC alludes to.
Quantifying this risk is difficult – maybe impossible at present – and as a consequence likely to be highly controversial, which is why the IPCC does not tackle it. The trouble is that quantifying a non-zero risk of global sea level rise of over 1 m in the next 100 years is a far more challenging problem – for both scientists and decision makers – than restricting the debate to what we consider most likely. Maintaining this restriction on the debate is neither safe nor sensible, however.
Glaciologists will point to the research needed on the Antarctic ice sheet’s sensitivity to ocean warming to advance the debate. In 20 years as a glaciologist, I have been surprised on numerous occasions by what we discover about the flow and form of past and present ice sheets. I am utterly certain that amazing new discoveries lie ahead. For this reason, an appropriately sceptical scientific attitude is to accept that our knowledge of Antarctica remains woefully inadequate to be certain about future sea level rise, and to always challenge the consensus constructively.
The solution lies in our ability to model the ice-ocean system in a way that allows confident predictions of the ice sheet response to ocean warming. To do this we need two things. The first is better input data, by way of high-precision mapping of the landscape beneath the ice sheet in the regions most sensitive to change, and in areas where no data have been collected (there are several completely unexplored parts of the continent). The data collected would also allow us to better understand the process of ice flow in key regions of potential change. The second is the coupling of ice-sheet and ocean models. Both are challenging, but well within our abilities. Indeed, the horizon scanning exercise discussed last week made such investigations a priority.
By Professor Colin Prentice, AXA Chair in Biosphere and Climate Impacts
Further to previous posts on this blog regarding Owen Paterson’s recent speech to the Global Warming Policy Foundation, I would like to take this opportunity to correct his dismissive statement about biomass energy as a potential contribution to decarbonized energy production in the UK. This is what the former Environment Secretary said:
“Biomass is not zero carbon. It generates more CO2 per unit of energy even than coal. Even DECC admits that importing wood pellets from North America to turn into hugely expensive electricity here makes no sense if only because a good proportion of those pellets are coming from whole trees.
The fact that trees can regrow is of little relevance: they take decades to replace the carbon released in their combustion, and then they are supposed to be cut down again. If you want to fix carbon by planting trees, then plant trees! Don’t cut them down as well. We are spending ten times as much to cut down North American forests as we are to stop the cutting down of tropical forests.
Meanwhile, more than 90 percent of the renewable heat incentive (RHI) funds are going to biomass. That is to say, we are paying people to stop using gas and burn wood instead. Wood produces twice as much carbon dioxide than gas.”
There are two misconceptions here.
(1) It is extremely relevant that ‘trees can regrow’ – this is the whole reason why biomass energy is commonly accounted as being carbon neutral! To be genuinely carbon neutral, of course, every tonne of biomass that is burnt (plus any additional greenhouse gas emissions associated with its production and delivery to the point of use) has to be replaced by a tonne of new biomass growing somewhere else. This is possible so long as the biomass is obtained from a sustainable rotation system – that is, a system in which the rate of harvest is at least equalled by the rate of regrowth, when averaged over the whole supply region.
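The condition in that last sentence – harvest matched by regrowth, averaged over the supply region – amounts to a simple mass balance on the standing carbon stock. The toy model below (all numbers invented for illustration) shows how the stock stays level under a balanced rotation, and is drawn down when harvest outpaces regrowth:

```python
def stock_after(years, stock, harvest_per_year, regrowth_per_year):
    """Evolve the total biomass carbon stock (tonnes) of a supply region:
    each year, harvested biomass is removed and regrowth is added."""
    for _ in range(years):
        stock = stock - harvest_per_year + regrowth_per_year
    return stock

initial = 1_000_000  # tonnes of biomass carbon in the supply region

# Balanced rotation: regrowth replaces every tonne burnt, so over the
# rotation the combustion emissions are recaptured (carbon neutral).
balanced = stock_after(30, initial, harvest_per_year=20_000,
                       regrowth_per_year=20_000)

# Over-harvesting: the stock is drawn down, so part of the emissions is
# never replaced (not carbon neutral).
depleted = stock_after(30, initial, harvest_per_year=20_000,
                       regrowth_per_year=10_000)

print(balanced, depleted)
```

The same accounting logic underlies the Searchinger and Haberl critiques discussed below: carbon neutrality is a property of the whole supply region over the whole rotation, not of an individual stand of trees.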
Now it has been pointed out several times in the literature (e.g. Searchinger et al., 2009; Haberl et al., 2012) that if biomass is burnt for energy and not replenished (for example, if trees are cut down and the land is then converted to other uses), then it is not carbon neutral. Indeed, the carbon intensity of this form of energy production is at least as high as that of coal. Paterson may have been influenced by a report on this topic (RSPB, Friends of the Earth and Greenpeace, 2012) which drew attention to the “accounting error” by which energy derived from biomass might be classed as carbon neutral while actually being highly polluting. But this refers to an extreme scenario, whereby increased demand for forest products leads to no increase in the area covered by forests. In this scenario, biomass energy demand would have to be met from the existing (global) forest estate, drawing down the carbon stocks of forests and forcing builders to substitute concrete and other materials for wood. This would certainly be undesirable from the point of view of the land carbon balance; and carbon accounting rules should recognize the fact.
Nonetheless, this extreme scenario is implausible. It assumes that the value of biomass as fuel would be comparable to that of timber (highly unlikely) and, more generally, that there would be no supply response to increased demand. In more economically plausible scenarios, the increased demand for biomass fuel is met by an increase in the use of by-products of timber production (which today are commonly left to decay, or burnt without producing any energy), and by an increase in the amount of agriculturally marginal land under biomass production – including non-tree energy crops such as Miscanthus, as well as trees.
Paterson’s blanket dismissal of the potential for biomass production to reduce CO2 emissions is therefore not scientifically defensible. Sustainable biomass energy production is entirely possible, already providing (for example) nearly a third of Sweden’s electricity today. It could represent an important contribution to decarbonized energy production in the UK and elsewhere.
(2) It might seem to be common sense that planting trees (and never cutting them down) would bring greater benefits in extracting CO2 from the atmosphere than planting trees for harvest and combustion. All the same, it is wrong. The point is that just planting trees produces no energy, whereas planting trees for biomass energy production provides a substitute for the use of fossil fuels. There is an enormous difference. Indeed, it has been known for a long time that even an absurdly optimistic reforestation scenario (converting all the land that people have ever deforested back into forests) would reduce atmospheric CO2 concentration by a trivial amount relative to the projected increases due to burning fossil fuel (House et al., 2002; Mackey et al. 2013).
I thank Jeremy Woods (Imperial College) and Jonathan Scurlock (National Farmers Union) for their helpful advice on this topic, and suggestions to improve the text.
Haberl, H. et al. (2012) Correcting a fundamental error in greenhouse gas accounting related to bioenergy. Energy Policy 45: 18-23.
House, J.I., I.C. Prentice and C. Le Quéré (2002). Maximum impacts of future reforestation or deforestation on atmospheric CO2. Global Change Biology 8: 1047-1052.
Mackey, B. et al. (2013) Untangling the confusion around land carbon science and climate change mitigation policy. Nature Climate Change 3: 552-557.
RSPB, Friends of the Earth and Greenpeace (2012) Dirtier than coal? Why Government plans to subsidise burning trees are bad news for the planet. http://www.rspb.org.uk/Images/biomass_report_tcm9-326672.pdf
Searchinger, T. et al. (2009) Fixing a critical climate accounting error. Science 326: 527-528.
By Dr Simon Buckle, Grantham Institute
Owen Paterson’s remarks on the UK response to climate change miss the point. I do not disagree with him that the UK decarbonisation strategy should be improved. In particular, there is a need for a more effective strategy on energy demand. However, my preferred policy and technology mix would be very different to his, and would include the acceleration and expansion of the CCS commercial demonstration programme in order to reduce the energy penalty and overall costs of CCS. And without CCS, there is no way to use responsibly the shale gas he wants the UK to produce in the coming decades – or any other fossil fuel – for electricity generation or in industrial processes.
However, these are second order issues compared to his call for scrapping the 2050 targets and the suspension of the UK Climate Change Committee. On current trends, by the end of the century, the surface temperature of our planet is as likely as not to have increased by 4°C relative to pre-industrial conditions. The present pause in the rise of the global mean surface temperature does not mean we do not need to be concerned. We are fundamentally changing the climate system, raising the likelihood of severe, pervasive, and irreversible impacts on society and the natural systems on which we all depend.
A cost-effective policy to limit these very real climate risks must be based on concerted, co-ordinated and broad-based mitigation action. This is needed to deliver a substantial and sustained reduction in global greenhouse gas emissions, which continue on a sharply rising trajectory. The best way to create the conditions for such action by all the major emitting economies – developed and developing, in different measure – is through the UN negotiation process, supplemented by bodies such as (but not confined to) the Major Economies Forum. The focus of this process is now on achieving a deal covering emissions beyond 2020, due to be finalised at the Paris summit at the end of next year.
There are encouraging signs of progress, e.g. in both the US and China, and the EU is due to agree its own 2030 targets at the end of this month. But the process is difficult and protracted. I agree with Paterson that 2050 is not the be all and end all. I have argued here that the Paris talks should focus on how the next climate agreement can help us collectively to achieve a global peak in emissions before 2030, the first necessary step to any stringent mitigation target, rather than trying to negotiate a deal covering the whole period to 2050.
If Paris is a success, we might then re-assess whether or not the UK’s current mitigation targets are adequate or not. But we are rapidly running out of time to achieve what the world’s governments profess to be their aim of limiting global warming to at most 2 degrees Celsius above pre-industrial levels. The longer we delay mitigation action, the more difficult that challenge will be and the more expensive. At some point soon it will become impossible in practical terms.
Given its leadership on this issue over many decades, UK action to scrap the Climate Change Act and/or suspend or abolish the Climate Change Committee would be severely damaging. Seeking short-term domestic political advantage – which is what this move appears to be – through recommendations that would undermine national, European and international efforts to limit climate risks is irresponsible. Sadly, this seems to be what the so-called political “debate” in the UK has been reduced to.
By Ajay Gambhir, Research fellow on mitigation policy at the Grantham Institute
The United Nations Climate Summit 2014, to be held in New York on 23rd September, comes at an important point in the calendar for discussions on how to address climate change. Next year will see nations submit pledges on their future greenhouse gas emissions levels, as part of the United Nations process culminating in the 21st Conference of the Parties (COP) in Paris at the end of 2015, the ambition of which is to secure a global agreement to tackle climate change.
There is now a rich body of evidence on the implications of mitigation at the global, regional and national levels. This note presents some of the evidence, revealed by research in the Grantham Institute over recent years, which supports the view that mitigation remains feasible and affordable.
Technologies and costs of a global low-carbon pathway
The Grantham Institute, in partnership with Imperial College’s Energy Futures Laboratory (EFL), produced a relatively simple, transparent analysis of the relative costs of a low-carbon versus a carbon-intensive global energy system in 2050. The report concluded that mitigation in line with a 2 degrees Celsius limit to global warming would cost less than 1% of global GDP by 2050 (excluding any potentially significant co-benefits from improved air quality and enhanced energy security).
Joint Grantham and EFL report: Halving global CO2 by 2050: Technologies and Costs
The importance of India and China
The two most populous nations, India and China, have undergone rapid economic growth in recent decades, resulting in significantly increased demand for fossil fuels, with associated increases in their CO2 emissions. Mapping pathways towards a low-carbon future for both regions presents challenges in terms of technology choices, affordability and the interplay with land, water and other resources. The Grantham Institute, in partnership with other research groups (including IIASA and UCL), has produced long-term visions of both regions using energy technology modelling and detailed technology and resource assessments, to set out pathways to very low-carbon economies which can be achieved at relatively modest costs. In addition, the Institute has undertaken assessments of the feasibility and cost of achieving the regions’ near-term (2020) Cancun pledges.
Grantham Report 1: An assessment of China’s 2020 carbon intensity target
Grantham Report 2: China’s energy technologies to 2050
Grantham Report 4: An assessment of India’s 2020 carbon intensity target
Grantham Report 5: India’s CO2 emissions pathways to 2050
Key sectors and technologies
Reports have been produced on a number of key technologies across all economic sectors and on the role that these can play in a low-carbon world: electric and other low-carbon vehicles in the transport sector; low-carbon residential heating technologies; other building efficiency and low-carbon options; and a range of technologies and measures to reduce emissions from industrial manufacturing.
The successful development and deployment of a range of low-carbon power sector technologies will be central to decarbonising the power generation sector over the coming decades, thereby providing the basis for low-carbon electrification in the building, transport and industrial sectors. The Institute has produced briefing papers on the technological status, economics and policies to promote solar photovoltaics and carbon capture and storage (including with bioenergy to produce net negative emissions).
Grantham briefing paper 2: Road transport technology and climate change mitigation
Grantham briefing paper 3: Carbon capture technology: future fossil fuel use and mitigating climate change
Grantham briefing paper 4: Carbon dioxide storage
Grantham briefing paper 6: Low carbon residential heating
Grantham briefing paper 7: Reducing CO2 emissions from heavy industry: a review of technologies and considerations for policy makers
Grantham briefing paper 8: Negative emissions technologies
Grantham briefing paper 10: Shale gas and climate change
Grantham briefing paper 11: Solar Power for CO2 mitigation
Grantham Report 3: Reduction of carbon dioxide emissions in the global building sector to 2050
Competitiveness
A critical consideration in any nation or region’s mitigation strategy is the degree to which a low-carbon transition might put its industries at risk of losing competitiveness against rivals in regions with less stringent mitigation action. In a landmark study using responses from hundreds of manufacturing industries across the European Union, researchers at the Institute, in partnership with the Imperial College Business School and Universidad Carlos III de Madrid, have produced robust evidence to support the contention that the EU’s Emissions Trading System has not produced any significant competitiveness impacts or industry relocation risks.
On the empirical content of carbon leakage criteria in the EU Emissions Trading Scheme – Ecological Economics (2014)
Industry Compensation under Relocation Risk: A Firm-Level Analysis of the EU Emissions Trading Scheme – American Economic Review (2014)
Global energy governance reform
The energy policies of governments around the world will, to a large extent, determine global greenhouse gas emissions. Western governments cooperate on their energy policies through the International Energy Agency (IEA), which is a powerful advocate and analyst of low-carbon energy strategies. Unfortunately the IEA excludes developing nations such as China, India, Brazil and Indonesia from its membership. The Grantham Institute is working with China’s Energy Research Institute (ERI) to advise the Chinese government on China’s options for greater engagement in international energy cooperation, including closer association with the IEA. China’s participation is important for world energy security and affordability – the other main objectives of energy policy – as well as for climate mitigation. A consultation draft report published by this ERI/Grantham project is at Global energy governance reform and China’s participation. An earlier report by the Grantham Institute with Chatham House is at Global energy governance reform.
By C. Chambon, Research Postgraduate, Department of Chemistry
As part of a group of six Imperial students who visited California, I travelled to San Francisco to work on two projects: the New Climate Economy project, and a research collaboration with the Joint BioEnergy Institute.
The New Climate Economy project is a government-commissioned project looking at how economic goals can be achieved in a way that also addresses climate change. The Innovation stream, led by Stanford University and the Grantham Institute at Imperial, is focused on the potential economic and environmental impact of disruptive technologies. Beginning in January, a group of six Imperial students each focused on a different technology for the project, researching and preparing case studies for our weekly teleconferences. The topics researched were as varied as solar PV, nanomaterials, customer segmentation and the smart grid. My focus was on carbon capture and storage or utilisation (CCUS) technologies, and the policies needed to support them.
In Palo Alto, we worked together with Stanford students to construct a business model for each of our technology clusters. Our research findings were presented to NRG Energy’s newly formed Station A, a kind of skunkworks for energy resilience within NRG, a wholesale power company. The collaboration was a successful and productive one, and several of us will continue to work with the New Climate Economy project to publish our research. The work will contribute to the UNFCCC COP negotiations in Paris in 2015.
During the latter half of the trip, I combined visits to Stanford with research for my PhD at Lawrence Berkeley National Lab across the bay. The San Francisco Bay Area is renowned as a bioscience and biotech hub, and is home to over 200 bioscience companies, start-ups and research institutes. One of these is the Joint BioEnergy Institute (JBEI), a branch of Lawrence Berkeley National Lab in the Berkeley hills. JBEI is a U.S. Department of Energy bioenergy research center dedicated to developing second-generation biofuels: advanced liquid fuels derived from the solar energy stored in plant biomass. The cellulosic biomass of non-food plants and agricultural waste can be converted to petrol, diesel and jet fuel, whilst the non-cellulosic part is a promising candidate to replace aromatic chemicals.
My project at JBEI looked at the upgrading of lignin, extracted from the non-cellulosic part of woody biomass, into aromatic building-blocks. This experience was a valuable addition to my PhD project, which looks at the valorisation of lignin from pine wood to improve the economics of the biorefinery. A highlight of my stay was a visit to the scaled-up biorefining facilities at LBNL, where a one-of-a-kind reactor is used to convert biofeedstocks into fuels. It was a very inspiring glance into the future of biorefining and I look forward to working closely with LBNL researchers and others working in the field of bioenergy.
By Hannah Nissan, Research Assistant in Regional Climate Modelling, Physics
In April 2009 the UK Met Office issued their now infamous forecast: “odds-on for a BBQ summer”. By the end of August, total precipitation since June had climbed to 42% above the 1971–2000 average (UKMO, 2014). Why is it so challenging to provide seasonal forecasts several months ahead?
A question which arises often in conversations about climate change is “how can we predict the climate in 50 years when we can’t even get the weather right next week?” While we have no skill in forecasting the weather on a particular day in 2060, we can make much more confident projections about the average conditions at that time. Some explanation of this comes from mathematician and meteorologist Ed Lorenz, who made an accidental discovery while rerunning some weather calculations on his computer (Lorenz, 1963). Taking a seemingly harmless shortcut, he rounded a number to a couple of decimal places, and the result was a completely different weather forecast. His finding came to be known as the “butterfly effect”, which invokes a powerful analogy: tomorrow’s weather forecast depends so strongly on getting today’s weather right (a significant challenge) that forgetting to account for the effect of a butterfly’s flapping wings is enough to derail it. In contrast, climate is an average of the weather over a few decades, and doesn’t suffer as strongly from this debilitating “initial conditions” problem.
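Lorenz’s discovery is easy to reproduce. The short Python sketch below (not from the original post) integrates his classic 1963 three-variable system with his standard parameter values, using a deliberately simple Euler scheme; the exact starting point, step size and perturbation size are illustrative choices, not anything from the blog.

```python
import numpy as np

# Lorenz's classic parameter choices; forward Euler is crude but fine for a demo.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

def run(state, n_steps):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # a "butterfly-sized" error in today's weather

# Shortly after the start, the two forecasts still agree closely...
print(np.abs(run(a, 100) - run(b, 100)).max())
# ...but run out far enough, they bear no resemblance to each other.
print(np.abs(run(a, 6000) - run(b, 6000)).max())
```

The tiny rounding-sized error barely matters at first but grows exponentially, which is exactly why a long-range forecast of a particular day is hopeless while statistics of the trajectory remain well behaved.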
The distinction becomes rather blurred when we turn to seasonal prediction. When looking at longer term forecasts, say for this coming summer or next winter, correctly characterising the initial conditions of parts of the climate system still matters. This is particularly true for slowly varying fields like soil moisture and the upper ocean, and to a lesser extent for the atmosphere. However, on these timescales other challenges become increasingly important. As we move from forecasting days ahead, to weeks, months, seasons and decades, the number and complexity of physical processes that must be well described by the computer models used to simulate the weather increases. Delivering better forecasts requires improvements on both these fronts, which compete for limited computer resources.
The benefits of developing more sophisticated prediction models must be balanced against the critical need for an indication of the uncertainty related to a forecast. To do this, meteorologists create not just one weather forecast but a whole ensemble of predictions, each starting from slightly different initial conditions. Further ensembles are also run to capture the uncertainty in the physics of the model itself. The spread of the ensemble tells us something about the range of possible outcomes and their likelihoods. We can never have perfect knowledge of the current state of the weather, nor a perfect model, so running lots of ensemble members to develop a probabilistic weather forecast in this way is really important.
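The ensemble idea can be sketched in a few lines of Python (again using the Lorenz system as a stand-in for a weather model; the ensemble size, perturbation scale and lead time here are arbitrary illustrative choices, not real forecasting practice):

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_run(state, n_steps, dt=0.005):
    """Integrate the Lorenz (1963) system with forward Euler; return final x."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = state
    for _ in range(n_steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    return x

# Our best estimate of "today's weather", plus 50 equally plausible
# analyses perturbed within an assumed observation error of ~0.01.
best_guess = np.array([1.0, 1.0, 1.0])
members = [best_guess + 0.01 * rng.standard_normal(3) for _ in range(50)]

forecasts = np.array([lorenz_run(m, 4000) for m in members])

# The ensemble spread is our uncertainty estimate, and the fraction of
# members beyond a threshold gives a probability forecast.
print("ensemble mean  :", forecasts.mean())
print("ensemble spread:", forecasts.std())
print("P(x > 0)       :", (forecasts > 0).mean())
```

A single run would hand the user one number with no health warning; the ensemble turns the same model into a probability statement, which is what the “odds-on” in “odds-on for a BBQ summer” was trying to express.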
To illustrate the trade-offs involved between running ensembles and improving model complexity, consider an example from the UK Met Office. The wetness and storminess of UK winters are strongly linked with the North Atlantic Oscillation (NAO), a pattern of sea-level pressure differences between Iceland and the Azores. By investing in research to uncover some of the important physical drivers for the NAO and including these in their model, the Met Office have recently achieved a significant improvement in NAO forecasts (Scaife, 2014a,b). At the European Centre for Medium Range Weather Forecasting (ECMWF), improving the way convection[1] is modelled delivered better tropical precipitation forecasts and greater skill in predicting seasonal European weather (Bechtold, 2013).
Predictability itself is not a fixed quantity, but varies with each situation. Some weather systems are strongly influenced by larger scale phenomena happening over longer periods of time for which forecasts can be quite reliable. For example, weather in much of the tropics depends on the El Niño Southern Oscillation (ENSO), a pattern of sea surface temperature and atmospheric pressure in the tropical Pacific which persists for several months. Others may be dominated by brief, local processes that are much harder to predict, like convection. In general, models are able to simulate the cascade of energy downwards, from large movements like the jet stream down to small waves and turbulence. The reverse case is not so easy: it is a major challenge to represent the effects of processes occurring at small physical scales and short time periods, like turbulence and convection, on the wider weather. Consistent with the butterfly effect, some of the effects of small-scale processes are inherently unpredictable and must be represented by random noise.
A good test for the usefulness of a seasonal forecast is whether it offers an improvement over simply looking at the average conditions. In other words, can we do a better job if we simply say that summer temperatures will be the same this year as they have been on average over the last 30 years? Weather prediction models beat statistical forecasts in the tropics, where the influence of ENSO is strong and fairly predictable. This has not in general been the case over Europe and in other higher latitude regions, where lots of different phenomena interact (Buizza, 2014). However, the latest forecast systems are starting to show some skill even here (Scaife, 2014b).
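That comparison against the long-term average can be made quantitative with a simple skill score: one minus the ratio of the forecast’s mean squared error to the climatology’s. The sketch below uses entirely synthetic, made-up numbers (a 20 °C climatological mean, a toy forecast that partly tracks the observations) purely to show the arithmetic; it is not real data or an operational verification method from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" summer temperatures: a climatological mean of 20 degrees C
# plus interannual variability (illustrative numbers, not real data).
n_years = 200
observed = 20.0 + 2.0 * rng.standard_normal(n_years)

# A toy forecast with some genuine skill: it partly tracks the observations,
# partly adds noise. The climatology forecast just issues a 30-year mean.
forecast = 20.0 + 0.6 * (observed - 20.0) + rng.standard_normal(n_years)
climatology = np.full(n_years, observed[:30].mean())

def mse(pred):
    return np.mean((pred - observed) ** 2)

# MSE skill score: 1 = perfect, 0 = no better than climatology, < 0 = worse.
skill = 1.0 - mse(forecast) / mse(climatology)
print(f"forecast MSE   : {mse(forecast):.2f}")
print(f"climatology MSE: {mse(climatology):.2f}")
print(f"skill score    : {skill:.2f}")
```

A positive score means the model adds value over simply quoting the last 30 years; a score at or below zero means users would do just as well with the climatological average, which is the situation that has long prevailed for seasonal forecasts over Europe.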
Temperature forecasts several months ahead are often better than looking at long-term data. Predictive skill for precipitation, however, is much worse. This is because rainfall is driven partly by local processes, right down to how individual raindrops are formed. Temperature, on the other hand, tends to be controlled by larger, more predictable features (Buizza, 2014). That said, the disastrous floods in Pakistan in 2012 were well forecast a week ahead by ECMWF because, in that particular situation, the rainfall was controlled by large air movements that were relatively well understood (Hoskins, 2012).
The challenging reality is that predictability varies from case to case according to the physical factors controlling each phenomenon. Extracting predictive information across both space and time scales can allow us to unpick these convoluted problems and make real improvements in seasonal prediction (Hoskins, 2012).
With thanks to Brian Hoskins for his helpful review comments.
References
Lorenz, 1963. Deterministic non-periodic flow. JAS 20:130-141.
UKMO, 2014. Summer 2009. http://www.metoffice.gov.uk/climate/uk/2009/summer.html. Accessed 20/03/2014.
Bechtold, P., N. Semane, P. Lopez, J.-P. Chaboureau, A. Beljaars, N. Bormann, 2013: Representing equilibrium and non-equilibrium convection in large-scale models. ECMWF RD TM 705, available at http://www.ecmwf.int/publications/library/ecpublications/_pdf/tm/701-800/tm705.pdf
Buizza, R., 2014. Coupled prediction: opportunities and challenges. Seminar, Imperial College London, 18th March.
Scaife, A., 2014a. Forecasting European winters: present capability and potential improvements. Willis Research Network Seminar: Forecasting, Flood & Fortitude: An Afternoon with Willis, 18th March.
Scaife, A., 2014b. Skilful long range prediction of European and North American winters. GRL, in press.
Hoskins, B., 2012. The potential for skill across the range of the seamless weather-climate prediction problem: a stimulus for our science. QJRMS 139(672):573-584.
[1] Convection is when local heating causes air to rise