Parliamentary Select Committees: Are elected chairs the key to their success?

Dr Mark Goodwin, University of Birmingham, Dr Stephen Bates, University of Birmingham, and Professor Steve McKay, University of Lincoln, explore the role of elected chairs in parliamentary Select Committees.

In the past two months, two of Britain’s richest men have been forced by Parliament to admit to, and apologise for, serious failings in their business practices that could end up costing them millions in compensation. Sports Direct owner Mike Ashley admitted to the Business, Innovation and Skills Select Committee that, despite being Britain’s 22nd richest person with an estimated fortune of £3.5bn, he had not been paying staff in the company’s main warehouse the minimum wage. A few weeks later, the same committee witnessed what many saw as a bizarre performance from another British billionaire, Sir Philip Green, as his failings in the sale of British Home Stores were exposed in between complaints about excessive staring from the committee members. These are just the latest in a string of high profile inquiries by parliamentary select committees over the past six years that have also seen Rupert Murdoch attacked with a custard pie, Michael Gove alleging a ‘Trot conspiracy’ in English schools and a vice president of Google being informed that “you do evil”.

Parliament’s House of Commons select committee system, which allows groups of backbench MPs to scrutinise the work of government departments and to initiate their own inquiries in areas related to the work of those departments, has existed in its present form since 1979. But in recent years, select committees have gained an unprecedented public profile, with ever more media attention focused on the committee corridor rather than the main chamber of the Commons. One explanation for this shift is the set of reforms to the select committee system introduced in 2010 on the recommendation of the Committee for Reform of the House of Commons chaired by MP Tony Wright, and collectively known as ‘the Wright reforms’. The Wright reforms, as they relate to select committees, provided for the direct election of select committee chairs by MPs, and of committee members by party caucuses, replacing the previous system of patronage by party whips. Many commentators and parliamentarians have credited these reforms with revitalising the committee system, by allowing more independent-minded parliamentarians to take control of committee scrutiny and hold government, and increasingly, those outside government such as Ashley and Green, to account.

Our current research project seeks to identify what difference these reforms have made to the way Parliament works, and to establish whether they have improved the operation of select committees. Our initial findings suggest that, despite the ‘universal praise’ select committees enjoy, the reforms have done little to improve rates of turnover, attendance or gender balance. One further area of interest is how far the Wright reforms have improved the system by altering the character of committee personnel, for example, by allowing the kind of independent, unbiddable parliamentarians previously excluded from committees by party whips to serve on, or even more importantly, to chair, select committees.

Since 2010, 47 MPs have been elected to chair select committees subject to the Wright reforms. These include departmental committees, such as those for Business, Innovation and Skills (which carried out the questioning of Ashley and Green) or Culture, Media and Sport (which questioned Rupert Murdoch and others over phone hacking), as well as cross-cutting committees such as Public Accounts (the source of the Google inquiry on tax avoidance) and Science and Technology. Chairs are elected through a secret ballot of all Members of Parliament using the Alternative Vote system. At first sight, it seems hard to defend the idea that the mechanism of electoral competition is a key driver in producing higher-quality committee chairs. The pool of candidates for any committee chair is restricted by two factors. Firstly, since select committees are parliamentary institutions that seek to scrutinise and hold government to account, the chair must be a backbencher. With the expansion of the payroll vote in recent years, this reduces the pool of candidates from 650 to around 410. Secondly, committee chair positions are divided up among the parliamentary parties in rough proportion to their levels of electoral support, with the government having a large say in which committee chairs it retains. For Labour- or Conservative-chaired committees, therefore, the pool of candidates is around 150-200 for each of 27 posts, meaning that competition is rather less fierce than it might initially appear. Of the 57 positions filled using the Wright system to date (26 in 2010, 27 in 2015 and 4 by-elections, with some MPs winning more than once), 20 were elected unopposed as the only candidate. Thirteen of the 47 elected chairs had also previously served as select committee chairs under the old, unelected system. It is difficult to imagine how this alone could produce a transformative impact on the operation of committees.
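
For readers unfamiliar with the mechanics, the short Python sketch below walks through an Alternative Vote count: MPs rank the candidates, the weakest candidate is eliminated round by round, and ballots transfer to their next preference until someone passes 50% of the votes still in play. The ballots and candidate names are invented purely for illustration; they are not drawn from any real chair election.

```python
from collections import Counter

def alternative_vote(ballots):
    """Count ranked ballots under the Alternative Vote (instant runoff)."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Credit each ballot to its highest-ranked remaining candidate.
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):   # strictly more than half
            return leader
        # No majority yet: eliminate the weakest candidate and recount.
        candidates.discard(min(tally, key=tally.get))

# Invented example: "A" leads on first preferences, but "C"'s transfers
# carry "B" past 50% once "C" is eliminated.
ballots = [["A", "B"]] * 4 + [["B", "A"]] * 3 + [["C", "B"]] * 2
print(alternative_vote(ballots))  # -> B
```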

The evidence on whether the Wright system has produced committee chairs who are more independent of their party and of government is mixed. Eleven of the 47 elected chairs were previously, or subsequently, members of the Cabinet or shadow Cabinet, which suggests they may not be quite the maverick outsiders that some analyses of the select committee system have suggested. When looking at individual rebellion rates, however, there is some evidence that select committee chairs are more independent of the instructions of their party managers than other comparable MPs. For all MPs in the current parliament, the mean rebellion rate (the proportion of votes where the MP voted against the majority of their own party) was 0.54%. For elected select committee chairs, the mean rebellion rate was 1.3%. Since MPs can expect to vote in well over 1,000 divisions during the course of a session, the numbers of votes involved may be significant, even if the proportions are small. Using a better comparison group – all backbench MPs excluding frontbenchers from all parties – shows a statistically significant difference in the average rebellion rate of Wright committee chairs in the current parliament (1.3%) compared to other backbenchers (0.65%). The average lifetime rebellion rate for Wright committee chairs is 1.5%, with 10 elected chairs having a rebellion rate over 2%, a rate which, if compared to the current parliament, would put them comfortably in the top 50 most rebellious MPs. On this measure, it seems there is some evidence that Wright committee chairs tend to be more rebellious or more independent of their parties than other comparable MPs. However, due to the pattern of rebellion, this is most likely caused by a few serial rebels among the committee chairs pushing the average up (for example, the Work and Pensions committee chair, Frank Field, has a lifetime rebellion rate of 5.6%).
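
The point about a few serial rebels dragging the average upwards can be seen with a toy calculation. The snippet below uses entirely invented rebellion rates, not the real voting records, to show how a couple of outliers lift a group’s mean well above its median.

```python
import statistics

# Hypothetical lifetime rebellion rates (%) for ten chairs: most rarely rebel,
# two rebel often. The numbers are invented for illustration only.
chair_rates = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 4.8, 5.6]

print(f"mean:   {statistics.mean(chair_rates):.2f}%")    # 1.56%, pulled up by the two outliers
print(f"median: {statistics.median(chair_rates):.2f}%")  # 0.75%, closer to the typical chair
```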

Much has been made of the fact that the Wright system might allow an injection of new blood, with MPs less socialised and institutionalised into the parliamentary system taking committee chair positions previously only available to parliamentary lifers. There are a number of cases where this narrative makes sense – for instance, Health chair Sarah Wollaston and Defence chair Rory Stewart were elected with less than one full parliamentary term under their respective belts, and are generally regarded as among the more independent and effective chairs. Yet looking at the group as a whole suggests that it is the ‘old stagers’ rather than the ‘new brooms’ that dominate. The average length of service in Parliament before election to a select committee chair is 16 years. Eleven of the 47 chairs had been in Parliament for over 20 years before election to a chair, and three for over 30 years. In the first ever elections under the Wright system following the 2010 general election, 26 MPs were elected to chair select committees. Of these 26, 10 did not seek re-election to Parliament in 2015 and one was defeated in the general election. One of the initial hopes of the Wright committee was that the new system might produce a parliamentary career path that offered an alternative to seeking to climb the ministerial ladder. There is little evidence that this has materialised so far, with many elected chairs standing down after serving one parliament – cynics might say to boost their pension with the additional pay that comes with chairing a select committee. More charitably, it seems to have served as an alternative avenue for leadership for those coming to the end of their parliamentary careers and with no prospect of ministerial office.

Is there any evidence, then, that the election of select committee chairs has brought in a different kind of parliamentarian – younger, less biddable, more rebellious, representing Parliament rather than government, and focused on scrutiny rather than climbing the ministerial ladder? So far, after two rounds of elections and several by-elections, it seems that the answer is, on the whole, no. If there has been an improvement in the performance of select committees, the new system of electing chairs does not seem to be the primary cause. If select committees are to entrench their growing significance in future, they should avoid the complacent assumption that the Wright reforms are a sufficiently powerful mechanism to drive improvement.

 

This article was originally published by the Political Studies Association Specialist Group on Parliaments and Legislatures on 5th July 2016.

 


The Next Cold War has Already Begun – In Cyberspace

Conor Deane-McKenna is a Doctoral Researcher in Cyberwarfare at the University of Birmingham.

The world is fighting a hidden war thanks to a massive shift in the technologies countries can use to attack each other. Much like the Cold War, the conflict is being fought indirectly rather than through open declarations of hostility. It has so far been fought without casualties but has the potential to cause suffering similar to that of any bomb blast. It is the Cyber War.

When we think of cyber attacks, we often think of terrorists or criminals hacking their way into our bank accounts or damaging government websites. But they have now been joined by agents of different governments that are launching cyber attacks against one another.

They aren’t officially at war, but the tension between the US and Russia – and to a lesser degree China – remains high over a number of disputed decisions. Cyber attacks allow these countries to exert their power against each other in an often anonymous way. They can secretly make small gains but a wrong move could spell disaster, much like the operations of nuclear submarines during the Cold War.

There are numerous forms of cyber attacks that can be used. Malware, typically in the form of a Trojan horse or a worm, installs itself on a computer and takes control, often without the knowledge of the victim. Other attacks can disrupt computer systems through brute force. For example, distributed denial of service (DDoS) attacks involve flooding a system with so many requests to access a website that it crashes the site’s server.

Countries are also trying to build up their cyber defences. Many infrastructural systems connected to power plants, for example, have been physically disconnected or “air-gapped” from the internet. Other defences such as firewalls and security programs are in place in all government systems to prevent hacking by outside sources.

Just as dangerous as “real war”

Some argue that the idea of cyber warfare has been overhyped because cyber attacks don’t have the physical consequences that “real” wars do. But the cyber weapons being used and developed could cause a large degree of economic as well as infrastructural damage – and this could endanger property and even human life. In 2007, scientists at the Idaho National Laboratory in the US were able to show how a cyber attack on an electricity generator could cause an explosion. This shows the real danger that cyber attacks can pose, not simply to national security infrastructure but also to hospitals, schools and homes.

The year 2007 was actually crucial in the history of cyber warfare, marking the point when several major states began putting cyber weapons to use in a well-documented way. After Estonia attempted to relocate a Soviet war memorial, Russia was accused of launching a series of DDoS attacks on Estonian websites including government and banking sites. Such action was not just embarrassing but damaging to both the power of the Estonian state and the economic activity of the country.

Although it wasn’t discovered until 2010, the Stuxnet worm was the first prominent cyber weapon to be used by the US, and was originally deployed against Iran in 2007. The worm, part of the wider “Operation Olympic Games”, was designed to prevent Iran from producing uranium that could be used in nuclear weapons. The software was hidden on a USB stick and uploaded to the control systems of the enrichment plant, causing its centrifuges to operate outside of safe parameters and leading to a series of breakdowns.

Cyber attacks can make real-world attacks possible. (Image: EPA)

The Israeli cyber section, Unit 8200, which had a hand in the Stuxnet design, was also involved in blacking out air radar during Operation Orchard, the 2007 attack on nuclear facilities in Syria. Shutting down the ageing Soviet-era radar through a mixture of cyber attacks allowed Israeli jets to bomb the site in the Deir-ez-Zor region of Syria.

The Israeli example shows how cyber attacks will start to become part of standard military operations. Both the US and Chinese cyber warfare divisions are parts of the countries’ conventional military structures. And both states have made it clear that they will not rule out using cyber attacks for the sake of maintaining national security interests.

Acting with impunity

These capabilities pose a danger to everyone, not just governments, and not just because they could lead to infrastructure being blown up. Stuxnet was discovered because the worm found its way onto the global internet and caused problems for tens of thousands of PCs across the world. It’s not hard to imagine the widespread economic and personal damage that could be done with an even more malicious program. Stuxnet also shows why simply keeping critical infrastructure disconnected from the internet is not enough to protect it.

The other particularly worrying aspect of cyber warfare is that it allows states to act with relative impunity. Advanced encryption technologies make it almost impossible to prove exactly who is responsible for a specific cyber attack. As a result, states can now act unilaterally with little fear of open retaliation. For example, despite a bilateral agreement between the US and China to refrain from hacking for economic benefit, Chinese hackers have continued to infiltrate secure systems in the United States. There are few real consequences for this outright breach of sovereignty.

On the positive side, some have argued that cyber attacks allow states to pursue their foreign policy goals without using conventional military action, and could even dissuade superpowers from doing so. Disabling Iran’s nuclear programme, for example, reduced the short-term likelihood the US would feel the need to make a military attack on the country. With tensions between superpowers high, but the risk of full-scale world war still relatively low, cyber attacks are likely to become an increasingly common way for countries to gain at their competitors’ expense.

This article was originally published by The Conversation on 7th April 2016.


Protectionist parties and antidumping in the US

This week Tommaso Aquilante, a lecturer in Managerial Economics at the Birmingham Business School, examines an issue which is likely to be of increasing importance in inter-country relationships. The full article is available as a pdf, published by the Birmingham Business School.

So what is antidumping?

Antidumping (AD) is the most popular import restriction among industrialised economies. In the United States, key decisions on AD are delegated to the International Trade Commission (ITC), an independent agency composed of six non-elected commissioners. The article examines their voting behaviour over three decades (1980–2010). Commissioners’ AD decisions crucially depend on which party appointed them and on the trade policy interests of key senators in that party: whether Democratic- (or Republican-) appointed commissioners vote in favour of AD depends crucially on whether the petitioning industry is a key employer in the states represented by leading Democratic (or Republican) senators. This casts further doubt on a protectionist measure that increasingly looks like an industrial policy tool rather than an instrument for restoring fairness in commercial exchanges.



What is a Contested Convention – and What Would One Mean for the GOP?

Adam Quinn is a Senior Lecturer in International Politics at the University of Birmingham.

The possibility that the Republican primary race could end in a contested convention is a journalist’s dream and one that the media has speculated on during every electoral cycle in recent memory.

It was entertaining for those who cover politics as sport to imagine that the fight between Barack Obama and Hillary Clinton might still be unresolved by the time Democrats arrived in Denver in 2008, or that the 2012 Republican candidate, Mitt Romney, might be swamped on the Tampa convention floor by his multiple lesser challengers. In truth, it was never likely in either case.

But 2016 may be the year that the dream is finally realised at the Republican Party convention in Cleveland, Ohio, in July.

There is no precedent for this in the modern era. Since candidates began to be chosen through voter participation in primaries in the 1970s, the process of nominating presidential candidates at the formal convention has almost always been straightforward. The presumptive nominee arrives with a majority of delegates already pledged thanks to victories in the state-by-state contests. The first ballot among delegates then confirms their nomination. In recent times, losing candidates have even released their delegates to vote for the winner, adding to the sense of party unity.

Smoke-filled rooms

The last time a major party convention began without certainty as to the nominee was the Republican Party gathering in Kansas City in 1976, when incumbent president Gerald Ford, despite having received more delegates and votes in the primaries, still needed to wrangle uncommitted delegates to his side to narrowly see off his challenger Ronald Reagan.

The Republican Party’s contested convention in 1976, when Gerald Ford beat Ronald Reagan to the nomination. (Image: William Fitz-Patrick via Wikimedia Commons)

Eight years before, in Chicago in 1968, the Democratic Party set the high-water mark for rancorous convention argument in modern times, when the pro-Vietnam-War candidate Hubert Humphrey secured the nomination over anti-war Eugene McCarthy despite not having competed in any primaries, while police clashed violently with demonstrators in the streets outside the venue.

That took place, however, while most convention delegates were still effectively selected by local party leaders rather than by popular vote. Indeed, dissatisfaction among Democrats with the events of 1968 was what prompted rapid reform towards the current primary system.

Nightmare scenario

In both 1968 and 1976, however, although there was some manoeuvring in advance of the first ballot to ensure a winning total for the nominee, a single ballot was all that was required to identify a winner. The Republican contest of 2016, on the other hand, promises something not seen since the Democratic convention of 1952 when incumbent president Harry S Truman declined to run for a second term: voting for the nominee that goes beyond the first ballot.

This would see candidates fighting for the nomination over a sequence of ballots, working between each vote to win over new delegates to their cause.

Under ordinary circumstances, the chances of a small shortfall in delegates displacing a very clear frontrunner would be small. Even if the leading candidate fell short of the 1,237 pledged delegates required for a certain win on the first ballot, it would be likely that they could negotiate with less successful candidates for support, or appeal to the small number of unpledged delegates, and put themselves over the top before voting started.

But 2016 is an exceptional year, because the Republican frontrunner is Donald Trump. An abrasive businessman and reality TV star, Trump has no history of elected office or role in the Republican Party. He is regarded with horror by both establishment moderates and ideological conservatives because of his changeable policy views and inflammatory vulgarity. He also has unprecedentedly high disapproval ratings for a national candidate among the general public. Many Republicans justifiably fear for the party’s prospects across all elections this cycle if he were to lead the ticket.

Bad dream for GOP. (Image: Gage Skidmore, CC BY)

For these reasons, if Trump fails to secure enough delegates to win the nomination on the first ballot, he is thought far less likely to receive increased support on subsequent ballots. This is the strategy of his remaining primary opponents, Texas’s junior senator, Ted Cruz, and the governor of Ohio, John Kasich (explicitly in the latter case): to continue to compete precisely with the aim of denying Trump that first-ballot majority, and then to peel off delegates at the convention to defeat him in subsequent ballots.

This is possible because, once they have done their duty on the first ballot, all bound delegates are released to vote as they see fit, if not immediately in round two then swiftly thereafter.

To see why this strategy might work, it is important to understand one key factor: the actual delegates themselves. The people who attend the convention are not necessarily personal supporters of the candidate they are bound to represent in that first ballot. They are usually committed, longtime party activists selected by the states and districts.

Since Trump is an outsider to the party, who appeals primarily to voters who feel disgruntled and excluded by the political process, he is unlikely to have many natural supporters among such delegates. This has led several informed analysts, such as Nate Silver, to conclude that Donald Trump needs a first-ballot victory to prevail at the convention.

There may be trouble ahead

If the convention is contested, the most likely beneficiary is the candidate in second place: Ted Cruz.

Cruz is precisely the sort of conservative figure, well-networked among a wide tranche of the Republican base, who is likely to be favoured by newly unbound delegates. But there is also an outside possibility that, once the contest has moved into later ballots, an entirely new figure could be nominated as a “compromise” candidate, such as the speaker of the House of Representatives, Paul Ryan. But this would risk considerable discontent among Republican voters, who would be asked to accept a candidate imposed by the party who had not been presented to them during the year-long multi-candidate primary process.

Whether the eventual nominee turned out to be Cruz or anyone else, the dreaded uncertainty hanging over a convention that went in this direction is how Trump and his supporters – who have cultivated a febrile and sporadically violent atmosphere at campaign rallies over recent weeks – would react to his being denied the nomination even after winning the largest single share of votes and delegates.

If they reacted with anger – and if Trump were disinclined to play peacemaker – then the Cleveland convention could be the most volatile since Chicago ’68.

This article was originally published by The Conversation on 8th April 2016.


Book Review: The Fence and the Bridge: Geopolitics and Identity Along the Canada-US Border by Heather N. Nicol

Iván Farías Pelcastre is a Postdoctoral Scholar and Visiting Fellow at the University of Southern California and holds a PhD in Political Science and International Studies from the University of Birmingham. He is particularly interested in the analysis of policy interdependence and political integration between Canada, Mexico and the US resulting from the operation of the North American Free Trade Agreement, its side and parallel accords and their corresponding institutions.

In The Fence and The Bridge: Geopolitics and Identity Along the Canada-US Border, Heather N. Nicol examines the ideas, narratives and perceptions through which these two North American countries see the boundary that both unites and divides them. The book contributes to the analysis and discussion of the historical transformation of the concept of borders, and the ideas that surround their management and maintenance. Specifically, Nicol examines and prompts readers to question the basis of current political ideas on US and Canadian borders and their significant impact on the design and implementation of their corresponding domestic and foreign policies. Nicol is particularly interested in and concerned about the rise of a securitisation agenda in North America after the events of 9/11, and how this has influenced and impacted Canada’s perception and management of its own borders.

The innovative aspect of Nicol’s work lies in highlighting the centuries-long struggle that underpins current understandings of, and policies on, US-Canadian border management. Through the analysis and discussion of various historical episodes and cultural expressions surrounding bilateral agreements on US-Canadian borders, she demonstrates that behind the practical commitments and bilateral policies established between the US and Canada on border control lies a number of ideological differences that are unlikely to be reconciled in the near (or even distant) future. Nicol argues that such differences result from perpetual attempts by the US to impose its hegemonic practices, ideas and culture onto other countries, including Canada, and the unceasing determination of Canadians to construct and maintain an identity of their own, i.e. not ‘American’. It is in the context of this discussion that Nicol introduces the idea of ‘reflexive transnationalism’, to understand and explain the ways and means through which Canadian and US nationalisms have been constructed, divided and even connected by their shared borders.

Nicol uses representations, metaphors, stereotypes and other popular culture references, such as cartoons, to identify and illustrate US and Canadian attitudes towards their common borders. She argues that these images and texts are just as important as political speeches or diplomatic documents in constructing national and transnational identities and hence the ideas embedded in current border security arrangements. Most of these images and texts convey a hegemonic ideological and economic discourse from the US, which is portrayed as a continental giant ever determined to pressure, manipulate and even absorb an always ‘resistant Canada’ focused on nation-building.

For instance, Nicol discusses the content of newspaper articles written in the 1900s, which are said to represent the then allegedly pacific-but-imperialist narrative underlying US policies towards Canada. While some stated that ‘Canada today owes its national existence to the forbearance and to the pacific policy of the United States’, others straightforwardly confirmed a ‘national motive’ for the US invasion of Canada, which could only benefit from the protection of Uncle Sam – whether it wants it or not (71,76). Nicol argues that the rise and dissemination of these attitudes and ideas among people and policymakers on both sides of the North American borders progressively transformed them from mere geographical pointers into sites of securitisation. Through the analysis of these and other texts, Nicol concludes that the US and Canadian borders have never been solely markers of the confines of different socio-political communities, but have been built on the rationale of protection against hostility – be it from one’s neighbour’s hegemony or the threats of a post-9/11 world.

While Nicol’s analysis is noteworthy, the book is strongly oriented towards a Canadian audience and presents a partial view of Canada as an ever-pacifist country which has been subjected to US dominance. Depicting Canada as the sole target of US hegemony in the American continent is not only inadequate, but also inaccurate. For instance, Nicol incorrectly describes the ‘Manifest Destiny’ as ‘the belief that [the United States of] America had a divine role to play as a ruler of the North American continent’. She argues that this belief led the US to regard its border with Canada as nothing but an inconvenient line which needed to be erased.

Such an assertion, however, not only inaccurately describes the Manifest Destiny, but also underrates its impact on other countries, which were substantially more affected by the prevalence of this belief in the nineteenth and twentieth centuries than Canada has ever been. The Manifest Destiny was actually the belief that the United States were destined to expand throughout and dominate the whole American continent, i.e. not just Canada. Although Canada might have endured diplomatic and ideological pressure from the US resulting indirectly from the prevalence of the Manifest Destiny, Mexico, Cuba and the rest of Latin America endured direct military interventions, invasions and territorial losses directly resulting from this belief. By overlooking such substantial differences, Nicol compromises the potential use of the book in comparative political or international relations studies beyond Canada.

Moreover, the book’s significant emphasis on the emergence of a US-Canadian transnational identity, emerging from US hegemony and Canadian ‘nation-building’, inaccurately captures the expanding cooperation and ideological alignment that has occurred – either to the right or left – between various administrations, for instance those of President Jimmy Carter and Prime Minister Pierre Trudeau or Prime Minister Stephen Harper and President George W. Bush. The simplification of US and Canadian positions on border cooperation, cross-border trade and security initiatives seems to be aimed at reaffirming Canadian identity as ‘not American’, rather than analysing the basis of the unprecedented harmonisation of North American border, security and immigration policies.

In conclusion, Nicol makes a worthy contribution to US-Canadian border studies by providing a Canadian perspective on the increased hardening of what is ultimately an imaginary line dividing two imagined communities. Her work invites us to consider that ‘the meaning of borders is open to contestation’ (259), and that while their current management is a reflection of past popular understandings and nationalist ideas, these do not necessarily always mean the same thing to everyone. Indeed, a more cosmopolitan understanding of borders might emerge as peoples and countries uncouple the concepts of states and nations, through better and increased communication and understanding of each other’s societies. Nicol could increase her contribution to prompting such change by de-emphasising the differences between Captain Canuck and Uncle Sam, and examining in a more nuanced way the similarities between them.

This article was originally published by LSE Review of Books on 18th March 2016.


Is the Snooper’s Charter as Bad as You Think?

Security services lack the resources to randomly spy on people, argues Gavin E L Hall, a doctoral researcher at the University of Birmingham.

On March 1, UK Home Secretary Theresa May announced that the redrafting of the Investigatory Powers Bill was complete, following a call for written evidence on the original draft in November 2015.

The purpose of the bill is to clarify what activity the British security services and law enforcement can engage in online, either by restating existing practice or by introducing new powers. The ability to seek a year’s worth of Internet browsing history has attracted a good deal of publicity and given rise to public concern. The issue is pressing as existing legislation was ruled unlawful, following a challenge from Members of Parliament Tom Watson and David Davis, and is set to expire on March 31, 2016.

David Anderson QC, the independent reviewer of terrorism legislation, considers that the so-called “Snooper’s Charter”—the bill’s unofficial name—strikes a balance between freedom and security. Such a positive statement does not appease the critics of the bill who argue, to paraphrase Benjamin Franklin, that liberty is being surrendered in search of temporary security.

Is the public’s fear justified as we enter a new Orwellian age, or do the security services have valid reasons for seeking access to data?

Cyber Reality

The rule of law has developed in the Western world over centuries. The rights of the individual and the demands of the state have been in conflict during this period of legal evolution, and the present Investigatory Powers Bill can be viewed as part of this history—not just a knee-jerk reaction. Placing the bill in this historical context is important, as it enables us to establish that the existing laws of the land have been developed over a prolonged period as a series of compromises, while generally maintaining the balance between individual freedom and state security. In other words, what invasions of privacy are permissible in the name of security is well-established, in the real world at least.

Should it be permissible to engage in the same activity in the cyber environment as in the real world, or does this new technology warrant different standards of freedom and privacy? For example, if two known terrorists are meeting at a private house, should a bug be planted? If they speak by phone, should it be tapped? Or if they speak via WhatsApp, should it be hacked? Is there a difference between these examples, or are they fundamentally the same?

The state, in the form of the security services, does not randomly spy on people—it lacks the resources. The security services spend their time keeping citizens safe, and mass surveillance requires too much time, both in terms of collation and analysis, which would leave them unable to adequately perform their primary security function. While it may technically be possible under the bill to impinge on individual freedom, John Bull has little to fear.

The mass collection of data enables the security services to keep us safe by utilizing snippets of information from a variety of sources to develop situational awareness and identify possible terrorists, with seven attacks having been prevented in the United Kingdom in the last six months. This is not only a matter of the presence of information, but also of its absence—identifying negative patterns is just as important as identifying positive ones. For example, if a known terrorist network goes quiet and stops communicating, it can be an indication that they are about to launch an attack.
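
As a purely illustrative aside, this ‘going quiet’ signal can be expressed as a simple rolling-baseline check. The sketch below uses invented message counts and an arbitrary threshold; it is only meant to show how the absence of activity can itself be turned into an indicator, not how any agency actually does it.

```python
from statistics import mean

def gone_quiet(daily_counts, window=7, threshold=0.25):
    """Flag when the latest day's traffic drops far below the recent baseline.

    `window` and `threshold` are arbitrary illustrative choices.
    """
    if len(daily_counts) <= window:
        return False
    baseline = mean(daily_counts[-window - 1:-1])   # average of the previous `window` days
    return daily_counts[-1] < threshold * baseline

# Invented data: steady chatter, then sudden silence on the final day.
print(gone_quiet([40, 38, 45, 42, 39, 41, 44, 3]))  # -> True
```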

Three areas are key to analyzing an individual: centrality, betweenness and degree. Centrality refers to how important the individual is; betweenness relates to a person’s access to others; and degree is the number of people one interacts with. These core principles of intelligence gathering have not changed with the advent of new technology. The sole difference of the 2016 world is that an environment outside the real world exists in which information and data can be stored and transmitted.
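
To make those three measures concrete, here is a minimal sketch using the networkx library on a small invented contact network (the names and edges are illustrative only). Degree counts direct contacts, betweenness captures how often someone sits on the paths linking others, and eigenvector centrality is used here as one common way of scoring overall importance.

```python
import networkx as nx

# Invented contact network: edges represent observed communications.
contacts = [
    ("A", "B"), ("A", "C"), ("A", "D"),   # A is in touch with many people
    ("B", "C"),
    ("D", "E"),                           # D is the only bridge to E's cluster
    ("E", "F"), ("E", "G"),
]
G = nx.Graph(contacts)

degree = dict(G.degree())                   # number of people each node interacts with
betweenness = nx.betweenness_centrality(G)  # how often a node lies on paths between others
importance = nx.eigenvector_centrality(G)   # one common overall "importance" score

for node in sorted(G):
    print(node, degree[node], round(betweenness[node], 2), round(importance[node], 2))
```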

The Investigatory Powers Bill seeks to ensure that the same powers that exist within the real world are also present in the cyber environment by avoiding potential arguments that certain powers are only legally relevant to the real world.

White Noise

Of 225 individuals charged with terrorism offenses in the United States between 9/11 and January 2014, only 17 were the result of mass surveillance techniques used by the National Security Agency. Indeed, the issue of white noise—too much information cluttering the relevant information being sought—is considered to be a substantial challenge to security services.

Furthermore, the likelihood of this clause being useful against terrorists is minimal, as most are likely to be using TOR or similar privacy-ensuring tools. TOR—the onion router—is a way of connecting to the Internet that masks your IP address by bouncing the connection across a series of proxies.

While the Internet service provider (ISP) record would reveal a connection to TOR, it would not show which sites had been visited. Seeking to explore what an individual has been doing online—a space where social interaction takes place—is akin to seeking information on an individual’s habits in the real world. As such, asking an ISP to provide data is no more or less of an intrusion into individual privacy than asking the proprietor of a business to share customer information, either general or specific.

Companies have been resisting for commercial reasons. They perceive that their customers will not be happy if they agree to data being made available to the government. In a Reuters poll, 46.3% of respondents agreed with Apple’s stance of refusing to unlock the smartphone of an actual terrorist, with 35.5% disagreeing.

The potential for backdoors to be built into software has recently been the subject of sustained dispute between the Federal Bureau of Investigation (FBI) and Apple, and attracts the most public attention. If more attention were paid to educating the public as to why the state is seeking such powers, and to the benefits of formally codifying—in legal practice—the powers of agencies of state security, the public could be more favorable to the UK bill.

This article was originally published by Fair Observer on 10th March 2016.


What Is Going on in Ukraine Now?

Lance Spencer Davies, a Doctoral Researcher in the Department of Politics and International Studies at the University of Birmingham, provides an update on recent developments in the ongoing conflict in Ukraine.

On the face of it, the conflict in Ukraine seems to have stabilised somewhat. Sporadic shelling aside, the last few months of 2015 saw the “hot” phase of the conflict in eastern Ukraine wind down to a relative calm.

Both parties’ forces have been slowly withdrawing in accordance with the latest ceasefire agreement, and while there were some isolated clashes between the opposing parties over the Christmas period, they haven’t derailed the current plans. Indeed, German Chancellor Angela Merkel remains optimistic about achieving progress in the negotiations.

Meanwhile, Russia’s military attention seems now to be mostly devoted to its military intervention in Syria and the tensions that’s caused with other countries, particularly Turkey.

But it’s easy to forget how serious the situation still is. After all, this conflict has killed more than 9,000 people – and the tension between Ukraine and Russia remains palpable.

Ukraine has blamed Russia for cyber attacks against a key Ukrainian power company late last month. This comes amid an ongoing feud between Moscow and Kiev over Ukraine’s outstanding $3 billion Eurobond, which Kiev insists cannot be repaid in full.

Western capitals have continued to place the blame solely on Moscow’s doorstep for destabilising eastern Ukraine in order to alter the landscape of Europe. Supreme Allied Commander for Europe, General Breedlove, has called this “a 21st-century offensive employing 21st-century tools for strategic deception and calculated ambiguity to achieve Moscow’s political goals”.

Russia’s Foreign Minister, Sergey Lavrov, has declared that “We do not pursue any evil intentions and we are still open for honest conversation”. In comparison, President Putin has been accused of remarking that Russia could take Kiev in two weeks if it wanted to.

For many observers these contradictions point towards the inherent duplicity in Russia’s behaviour, as it seeks to deliver mixed messages in order to facilitate Moscow’s malevolent aims.

Yet with Moscow’s support for the opposition forces coinciding with its engagement in the peace effort, has this signalled a complexity underpinning Russia’s behaviour which has not readily been accounted for by Western capitals?

A paradoxical policy

The long-running crisis has fallen into a cycle of escalation and de-escalation, and Moscow’s role in it is as complicated as ever.

On the one hand, Moscow has provided support to fighters in the eastern regions since July 2014, whether they be on the offensive or at the brink of defeat. That decision to support the use of force in violation of Ukraine’s sovereignty is fundamentally politically motivated, and to the extent Russia has acknowledged its role, there has been little pretence otherwise.

This is a new low even by the standards of Putin’s past behaviour. In the 2008 intervention in South Ossetia, for instance, the Kremlin at least tried to legitimise its policy with reference to international law, instead of sticking to brazen denial and obfuscation – although Putin has recently admitted that military specialists have been involved in eastern Ukraine.

Russia’s response has also been linked to its reservations about the spread of instability. While it may be little more than a useful smokescreen for Russia’s more obstructionist behaviour, the Kremlin has made at least some effort to help mitigate the humanitarian crisis.

Russia has continued to participate in the framework of the OSCE’s Special Monitoring Mission to Ukraine (SMM); it’s been a central actor (albeit controversially) in the delivery of humanitarian aid in accordance with the ICRC (International Committee of the Red Cross), has accommodated mass refugee flows, and has agreed alongside other powers in the settlement process to promote post-conflict reconstruction efforts in Ukraine’s eastern regions.

Ukraine’s Petro Poroshenko remains vigilant. (Image: Reuters)

However, there are limits to this acceptance. Moscow continues to refuse to accept the expansion of the SMM’s capacity of observation along the Russian-Ukrainian border. It also consistently opposes the establishment of an international peacekeeping mission, which would inevitably interfere with its military support for the opposition.

Diplomatic steps

Throughout the diplomatic processes in Geneva and Minsk, Moscow doggedly tried to gain as much leverage as it could for the opposition forces. And despite the West’s worry that the Kremlin may obstruct a new settlement to preserve its influence over the region, Moscow has not ruled one out – but Russia clearly intends to be a central actor in any such agreement.

Moscow wants a settlement in Ukraine that takes its own interests into consideration. Its preference is evidently for a federalised Ukraine, reached via a compromise solution between Kiev and the eastern regions.

Moscow’s behaviour will always be guided by a complex array of legitimate security interests, guaranteeing involvement in political or security negotiations concerning its immediate regional space. What’s certain is that the Kremlin doesn’t want instability along its border, particularly as Ukraine is a potentially huge economic market for Russia (and not least a customer for Gazprom).

Indeed, Russia has remained alert to the consequences of pushing Ukraine into a new spiral of violence. But this doesn’t mean that the potential for future conflict has been eradicated – or that Ukraine is any less wary of Russia’s intentions.

This article was originally published on 12th January 2016 by The Conversation.
