Project Zero is not intended to be available for use, purchase, or access by U.S. persons, including U.S. citizens, residents, or persons in the United States of America, or companies incorporated, located, or resident in the United States of America, or who have a registered agent in the United States of America.
Grids are the world’s largest machines. It’s difficult to exaggerate just how central they are to our lives and prosperity today – from when your phone is charged in the morning to when you switch off the lights at night. Only the internet and the water network are of comparable importance to our day-to-day lives, and they both depend on electricity delivered by the grid: the internet uses electricity constantly, from your home’s wifi hub to web servers in data centres, and water networks use it to pump, treat, and pressurise water.
So far in modern history, an electrical grid – a monopolised network of power lines, transformers, and interconnectors which supplies electricity across a large area – has been essential to the most basic steps in economic development. Underdeveloped countries tend to have agricultural economies and need to industrialise to get richer. Building factories to make goods that can be exported requires electricity – for lighting, heating, and machines. If an economy develops further and becomes more services-based, electricity only grows in importance. Take air conditioning: Lee Kuan Yew, the founder of Singapore, claimed that it was essential for increasing efficiency in his country. Naturally, it requires electricity too.
Around the world, however, the grid is becoming a blocker to a new priority: net zero. We need to replace fossil fuel power plants, but in many countries, even if you successfully get permission to build renewable generation, you face a long queue before you can actually connect to the grid and supply consumers. In the UK, these queues can reach up to 10 years. Hundreds of gigawatts of renewables are on hold in continental Europe due to grid issues. Even the United States, which is normally good at building new physical infrastructure, is finding it tough to upgrade its grid: the capacity waiting in its connection queue is almost twice the country’s existing generation capacity.
The consequences of these delays matter to us all. The construction of new homes is being put on hold. High energy costs are already bankrupting businesses. Emissions targets may not be met. Data centres for training new AI models could end up being built in countries less friendly to our interests. How did we get into this mess?
The first grid was switched on in Lower Manhattan in 1882, created by Thomas Edison. It allowed four hundred lamps to be lit in homes and businesses – no longer did these people have to rely on candles or gas lamps to work, read, or do household tasks.
These early grids only supplied one town or city, something we would call a microgrid today. They were privately owned, built by the same companies that were building power stations – in other words, vertically integrated electricity companies, powered by fossil fuels.
As the twentieth century progressed, countries developed economically and electricity became more central to daily life. The grid turned out to be a natural monopoly (a market where it makes sense for there to be a single provider) for a few reasons. Having a single grid means that electricity wires don’t have to be duplicated by competing companies, which saves money.
Fossil fuel power plants have economies of scale, so if they can plug into a national grid and serve a whole country or region, they can be as big as possible. A single grid allows electricity to be transported easily, so when one of those large power plants needs to go offline for maintenance, users can simply get power from other plants across the country. As we close these plants in favour of intermittent renewables (which do not see the same economies of scale) and learn how to coordinate demand, this rationale for the grid makes less sense – we don’t need such a large grid to ensure that supply can match demand.
The sheer scale of the grid, as well as historical circumstances, have led to a variety of approaches to grid organisation. Some countries have a single unified grid, and particularly large states have multiple regional grids, which typically share connections between each other. The United States has three and China has two, for example. Others even share grid infrastructure or management across countries, like the Baltic countries or continental Europe.
Although each grid is largely physically separate from others, and managed separately, there are places where they join – interconnectors. Allowing electricity to flow between grids is a great opportunity for generators and consumers (or companies acting on their behalf) to sell or buy the electricity they need.
European countries have been exchanging electricity for over a century already. And it’s not just continental Europe: Britain’s grid has had an undersea interconnector with France since as early as 1961. It has since added a series of interconnectors with Ireland and several other countries. This means that electricity generated by, say, French nuclear reactors can power the British grid at times (more on this in a future post on physical energy markets). This also works the other way round: when France has maintenance problems with its reactors and becomes a net importer of electricity, prices in Britain increase as France buys electricity there. Interconnectors are also a key part of the Baltic countries’ strategy to wean themselves off Russian electricity – integrating with the European grid instead. Concerningly, however, the lead time for interconnectors can be as long as nine years.
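The basic economics of an interconnector can be sketched in a few lines: power flows from the cheaper market towards the more expensive one, up to the cable’s capacity. This is a deliberately simplified toy model – the prices, capacity, and the all-or-nothing flow rule are illustrative assumptions, not how real cross-border auctions work.

```python
# Toy model of interconnector flows: electricity moves from the cheaper
# market to the more expensive one, up to the cable's capacity.
# Prices (in currency per MWh) and capacity are illustrative only.

def interconnector_flow(price_a: float, price_b: float, capacity_mw: float) -> float:
    """Return flow in MW from market A to market B.

    Positive = A exports to B (A is cheaper); negative = B exports to A.
    Real markets use auctions and ramping limits; this only captures direction.
    """
    if price_a < price_b:
        return capacity_mw    # A is cheaper: export at full capacity
    elif price_b < price_a:
        return -capacity_mw   # B is cheaper: flow reverses
    return 0.0                # equal prices: no arbitrage, no flow

# Cheap French nuclear output flows towards a pricier British market:
print(interconnector_flow(price_a=40.0, price_b=95.0, capacity_mw=2000.0))   # 2000.0
# French reactors offline, French prices spike: the flow reverses:
print(interconnector_flow(price_a=180.0, price_b=95.0, capacity_mw=2000.0))  # -2000.0
```

This is why reactor outages in France show up in British prices: the same cable that usually imports cheap power becomes an export route when the price gap flips.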
Imagine a swimming pool, full of water. At one end of the pool, people are withdrawing water. At the other end, people are pouring water in. This is a rough analogy of how the grid works: consumers and businesses are using electricity and generators are creating it.
It’s the job of grid operators to maintain a flow of electricity across the network at all times – keeping the water level stable – so that everyone has the electricity they want whenever they want it.
What few people realise is that this process happens instantaneously. That’s where the swimming pool analogy breaks down: there is no ‘pool’ where electricity is stored. Instead, electrical energy propagates through the network at close to the speed of light (the electrons themselves drift slowly; it is the electromagnetic energy that races along the wires). Demand and supply have to be matched in precisely the right amount, at precisely the right time. (A complex system of price signals is used to achieve this. We’ll cover these physical energy markets in a future post.)
Grids work as follows: generators, typically fossil fuel power stations, burn fuel to produce electricity. At this point, the electricity generated is at a relatively low voltage, which is inefficient for travelling long distances.
This brings us to an important point about distance and current. Rewind the clock to Edison’s first microgrid: at this point in history, it wasn’t clear what the final form of the electricity industry would be. In particular, there was a huge dispute over what kind of current to use. Direct current, supported by Edison, runs in a single direction and was useful for powering incandescent lighting in homes. George Westinghouse, using patents he purchased from Nikola Tesla, developed an alternating current (AC) system in which electricity switches direction many times a second. He first deployed it in 1886, setting up a fierce rivalry with Edison. Each campaigned to have their type of current deemed the standard, and Edison even resorted to publicly electrocuting stray animals with AC in order to tarnish its reputation.
As grids tried to scale, distance became an increasingly important factor. Electricity is more efficient to transport at higher voltage levels. DC’s voltage levels were much harder to change than AC’s, giving the latter an inherent advantage at scale. This advantage eventually led to AC’s victory, and it became the standard type of current for grids around the world.
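The arithmetic behind voltage’s advantage is simple: for a fixed power P carried over a line of resistance R, the current is I = P / V, and the resistive loss is I²R – so stepping the voltage up tenfold cuts the loss a hundredfold. The power and resistance figures below are illustrative, not taken from a real line.

```python
# Why higher voltage wins over distance: for fixed power P and line
# resistance R, current I = P / V and resistive loss = I**2 * R.
# A 10x voltage increase means 10x less current and 100x less loss.
# All numbers here are illustrative.

def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current = power_w / voltage_v       # higher voltage -> lower current
    return current ** 2 * resistance_ohm

P = 10_000_000   # 10 MW to be delivered
R = 5.0          # assumed line resistance in ohms

print(line_loss_watts(P, 11_000, R))   # at 11 kV: ~4.13 MW lost to heat
print(line_loss_watts(P, 110_000, R))  # at 110 kV: ~41 kW lost, 100x less
```

This is the whole case for transformers in one calculation: the grid can only span a country because voltage can be stepped up for the journey and back down at the destination.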
AC is therefore used in modern grids, and transformers ‘step up’ the voltage of electricity from generators to make it more efficient to transport. They can cost millions of pounds and weigh hundreds of tonnes, requiring a convoy to transport them to site. Only a handful of companies globally make transformers, and procurement lead times can stretch to four years in some cases. Siemens estimates that a quarter of renewable projects globally are jeopardised by these delays. It’s not just permitting or planning permission that slows down grid timelines, but equipment procurement.
The electricity then flows onto the grid and travels through cables, usually suspended in the air on pylons. Elevating the cables like this is the most cost-effective way of keeping them away from people – burying them is extremely expensive, though it is sometimes done. This part of the grid is called the transmission network, and it is often the source of bottlenecks in grids around the world.
Closer to the final destination, another transformer steps the voltage back down to make it suitable for consumer appliances. This part of the grid is called the distribution network, and there are often several per country. Distribution lines, sometimes buried underground, take electricity to homes and businesses who use it.
It’s worth noting that AC’s victory over DC has not been comprehensive. High Voltage Direct Current (HVDC) transmission is often used to transport electricity between countries, with less power loss than AC over long distances. China has built a 3,324 kilometre HVDC line between Changji and Guquan, for example. HVDC is growing in popularity because it can help integrate renewables and gives operators precise control over how much electricity flows, and in which direction.
The grid’s importance is only increasing as we electrify more of the economy. One in five cars sold around the world is now electric. The UK plans to install millions of heat pumps. The training runs for the next generation of AI models are driving huge demand for clean electricity – some estimate that they will require ~20% of total US electricity production as soon as 2030.
Future generation, not just use, will be different too: we’re retiring fossil fuel power stations and moving to a new, renewables-based system, in which generation will be increasingly decentralised.
These trends in generation and use of electricity are continuing, even accelerating – and we’re not ready.
The world’s twentieth century grids aren’t fit for our twenty-first century needs. Legacy grids are monopolies, built for a small number of large generators, and their physical infrastructure served that purpose – large pylons transmitting electricity from specific, centralised sites to consumers.
The numbers already involved in fixing the grid are staggering: £58 billion in the UK alone, €584 billion in Europe, and $2.5 trillion in the United States. By some estimates, we need to double the size of the grid around the world (already 80 million kilometres long) to reach net zero. That’s the scale of the change that’s coming – the biggest since Edison and Westinghouse built the first microgrids in the 1880s.
In fact, microgrids are the answer now, too. As we’ll explore in future posts, distributed energy resources (which can be aggregated into virtual power plants) are a means of getting electricity where it’s needed. Microgrids recently proved their worth in North Carolina after Hurricane Helene, helping restore electricity after hundreds of substations were destroyed and thousands were left without power. This new paradigm can offer reliable power in countries or places which don’t have a functioning grid, allowing them to leapfrog in their economic development. Microgrids can also scale electricity provision in developed countries where the old monopoly grid is just too slow.
The centralised grid monopoly cannot – must not – last in its current state if we’re to reach sustainability as a planet. We need to decentralise our energy networks. Project Zero will make that happen.